Sample records for unsigned error aue

  1. Optimization of parameters for semiempirical methods V: Modification of NDDO approximations and application to 70 elements

    PubMed Central

    2007-01-01

    Several modifications that have been made to the NDDO core-core interaction term and to the method of parameter optimization are described. These changes have resulted in a more complete parameter optimization, called PM6, which has, in turn, allowed 70 elements to be parameterized. The average unsigned error (AUE) between calculated and reference heats of formation for 4,492 species was 8.0 kcal mol−1. For the subset of 1,373 compounds involving only the elements H, C, N, O, F, P, S, Cl, and Br, the PM6 AUE was 4.4 kcal mol−1. The equivalent AUEs for other methods were: RM1: 5.0, B3LYP 6-31G*: 5.2, PM5: 5.7, PM3: 6.3, HF 6-31G*: 7.4, and AM1: 10.0 kcal mol−1. Several long-standing faults in AM1 and PM3 have been corrected and significant improvements have been made in the prediction of geometries. (Figure: calculated structure of the complex ion [Ta6Cl12]2+, with reference values in parentheses.) The online version of this article (doi:10.1007/s00894-007-0233-4) contains supplementary material, which is available to authorized users. PMID:17828561
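
    For reference, the AUE quoted throughout these records is simply the mean absolute deviation between calculated and reference values. A minimal sketch in Python, using hypothetical numbers rather than the paper's data:

    ```python
    import numpy as np

    def average_unsigned_error(calc, ref):
        """AUE: mean absolute deviation between calculated and reference values."""
        return np.mean(np.abs(np.asarray(calc) - np.asarray(ref)))

    # Hypothetical heats of formation (kcal/mol), for illustration only.
    calc = [12.3, -45.1, 8.7]
    ref = [11.0, -44.0, 10.2]
    print(average_unsigned_error(calc, ref))  # 1.3 kcal/mol
    ```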

  2. Application of Molecular Dynamics Simulations in Molecular Property Prediction II: Diffusion Coefficient

    PubMed Central

    Wang, Junmei; Hou, Tingjun

    2011-01-01

    In this work, we have evaluated how well the General AMBER force field (GAFF) performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, 5 organic compounds in aqueous solutions, 4 proteins in aqueous solutions, and 9 organic compounds in non-aqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned error (AUE) and the root-mean-square error (RMSE) are 0.137 and 0.171 × 10−5 cm2 s−1, respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for 8 organic solvents with experimental data (R2 = 0.784), 4 proteins in aqueous solutions (R2 = 0.996), and 9 organic compounds in non-aqueous solutions (R2 = 0.834). The temperature-dependent behaviors of three solvents, namely TIP3P water, dimethyl sulfoxide (DMSO), and cyclohexane, have been studied. The major MD settings, such as the size of the simulation box and whether the coordinates of MD snapshots are wrapped into the primary simulation box, have been explored. We conclude that our sampling strategy of averaging the mean square displacement (MSD) collected from multiple short MD simulations is efficient in predicting the diffusion coefficients of solutes at infinite dilution. PMID:21953689
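
    The sampling strategy summarized above, averaging the MSD curves from multiple short runs and applying the three-dimensional Einstein relation MSD(t) = 6Dt, can be sketched as follows; the data are synthetic and for illustration only:

    ```python
    import numpy as np

    def diffusion_coefficient(times, msd_runs):
        """Average the MSD curves from several short runs, then fit the
        Einstein relation MSD(t) = 6*D*t by least squares through the origin."""
        msd_mean = np.mean(msd_runs, axis=0)
        slope = np.sum(times * msd_mean) / np.sum(times ** 2)
        return slope / 6.0

    # Synthetic MSD data: times in ps, MSD in Angstrom^2.
    t = np.linspace(1.0, 100.0, 100)
    runs = [6 * 0.23 * t + np.random.normal(0.0, 0.5, t.size) for _ in range(10)]
    D = diffusion_coefficient(t, runs)  # ~0.23 A^2/ps, i.e. ~2.3e-5 cm^2/s
    ```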

  3. Surprise beyond prediction error

    PubMed Central

    Chumbley, Justin R; Burke, Christopher J; Stephan, Klaas E; Friston, Karl J; Tobler, Philippe N; Fehr, Ernst

    2014-01-01

    Surprise drives learning. Various neural “prediction error” signals are believed to underpin surprise-based reinforcement learning. Here, we report a surprise signal that reflects reinforcement learning but is neither un/signed reward prediction error (RPE) nor un/signed state prediction error (SPE). To exclude these alternatives, we measured surprise responses in the absence of RPE and accounted for a host of potential SPE confounds. This new surprise signal was evident in ventral striatum, primary sensory cortex, frontal poles, and amygdala. We interpret these findings via a normative model of surprise. PMID:24700400

  4. Application of Molecular Dynamics Simulations in Molecular Property Prediction I: Density and Heat of Vaporization

    PubMed Central

    Wang, Junmei; Tingjun, Hou

    2011-01-01

    Molecular mechanical force field (FF) methods are useful in studying condensed phase properties. They are complementary to experiment and can often go beyond experiment in atomic detail. Even if a FF is designed specifically for studying the structures, dynamics, and functions of biomolecules, it is still important for the FF to accurately reproduce the experimental liquid properties of the small molecules that represent the chemical moieties of biomolecules. Otherwise, the force field may not describe the structures and energies of macromolecules in aqueous solutions properly. In this work, we have carried out a systematic study to evaluate the General AMBER Force Field (GAFF) in studying densities and heats of vaporization for a large set of organic molecules that covers the most common chemical functional groups. The latest techniques, such as particle mesh Ewald (PME) for calculating electrostatic energies and Langevin dynamics for temperature regulation, have been applied in the molecular dynamics (MD) simulations. For density, the average percent error (APE) of 71 organic compounds is 4.43% when compared to the experimental values. More encouragingly, the APE drops to 3.43% after the exclusion of two outliers and four other compounds for which the experimental densities were measured at pressures higher than 1.0 atm. For heat of vaporization, several protocols have been investigated, and the best one, P4/ntt0, achieves an average unsigned error (AUE) and a root-mean-square error (RMSE) of 0.93 and 1.20 kcal/mol, respectively. How to reduce the prediction errors through proper van der Waals (vdW) parameterization is discussed. An encouraging finding of the vdW parameterization is that both densities and heats of vaporization approach their "ideal" values in a synchronous fashion when the vdW parameters are tuned. A subsequent hydration free energy calculation using thermodynamic integration further justifies the vdW refinement. We conclude that simple vdW parameterization can significantly reduce the prediction errors. We believe that GAFF can greatly improve its performance in predicting liquid properties of organic molecules after a systematic vdW parameterization, which will be reported in a separate paper. PMID:21857814

  5. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    NASA Astrophysics Data System (ADS)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users about the expected amplitude of the prediction errors attached to these methods. We show that, because the distributions of model errors are generally neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. These statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error of all benchmarking statistics depends on the size of the reference dataset; systematic publication of these standard errors would be very helpful for assessing the statistical reliability of benchmarking conclusions.
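
    The two statistics advocated in this record are direct reads off the empirical cumulative distribution function (ECDF) of absolute errors. A hedged sketch with a synthetic, deliberately non-zero-centered error sample:

    ```python
    import numpy as np

    def p_below(errors, eta):
        """Statistic (1): probability that a new calculation has |error| < eta."""
        return np.mean(np.abs(errors) < eta)

    def max_error_at_confidence(errors, p=0.95):
        """Statistic (2): error amplitude not exceeded with confidence p,
        i.e. the p-quantile of the ECDF of absolute errors."""
        return np.quantile(np.abs(errors), p)

    # Synthetic benchmark errors (kcal/mol): biased, so neither a normal
    # nor a zero-centered assumption would describe them.
    rng = np.random.default_rng(0)
    errors = rng.normal(loc=1.0, scale=2.0, size=500)
    print(p_below(errors, 1.0))             # P(|error| < 1 kcal/mol)
    print(max_error_at_confidence(errors))  # 95%-confidence error bound
    ```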

  6. The 6-31B(d) basis set and the BMC-QCISD and BMC-CCSD multicoefficient correlation methods.

    PubMed

    Lynch, Benjamin J; Zhao, Yan; Truhlar, Donald G

    2005-03-03

    Three new multicoefficient correlation methods (MCCMs) called BMC-QCISD, BMC-CCSD, and BMC-CCSD-C are optimized against 274 data points that include atomization energies, electron affinities, ionization potentials, and reaction barrier heights. A new basis set called 6-31B(d) is developed and used as part of the new methods. BMC-QCISD has mean unsigned errors in calculating atomization energies per bond and barrier heights of 0.49 and 0.80 kcal/mol, respectively. BMC-CCSD has mean unsigned errors of 0.42 and 0.71 kcal/mol for the same two quantities. BMC-CCSD-C is an equally effective variant of BMC-CCSD that employs Cartesian rather than spherical harmonic basis sets. The mean unsigned error of BMC-CCSD or BMC-CCSD-C for atomization energies, barrier heights, ionization potentials, and electron affinities is 22% lower than that of G3SX(MP2) at an order of magnitude less cost for gradients for molecules with 9-13 atoms, and it scales better (N6 vs. N7, where N is the number of atoms) as the size of the molecule is increased.

  7. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    PubMed

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

    We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals. We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0.2 D in dipole moments, and ∼4 kcal/mol in Zn-ligand bond energies) that cannot be neglected for accurate modeling, but the same density functionals that do well in all-electron nonrelativistic calculations do well with relativistic effective core potentials. Although most tests are carried out with augmented polarized triple-ζ basis sets, we also carried out some tests with an augmented polarized double-ζ basis set and found, on average, that with the smaller basis set DFT loses no accuracy for dipole moments and only ∼10% accuracy for bond lengths.

  8. Explicitly Representing the Solvation Shell in Continuum Solvent Calculations

    PubMed Central

    Svendsen, Hallvard F.; Merz, Kenneth M.

    2009-01-01

    A method is presented to explicitly represent the first solvation shell in continuum solvation calculations. Initial solvation shell geometries were generated with classical molecular dynamics simulations. Clusters consisting of the solute and 5 solvent molecules were fully relaxed in quantum mechanical calculations. The free energy of solvation of the solute was calculated from the free energy of formation of the cluster and the solvation free energy of the cluster, the latter calculated with continuum solvation models. The method has been implemented with two continuum solvation models, a Poisson-Boltzmann model and the IEF-PCM model. Calculations were carried out for a set of 60 ionic species. Implemented with the Poisson-Boltzmann model, the method gave an unsigned average error of 2.1 kcal/mol and an RMSD of 2.6 kcal/mol for anions; for cations, the unsigned average error was 2.8 kcal/mol and the RMSD 3.9 kcal/mol. Similar results were obtained with the IEF-PCM model. PMID:19425558

  9. Hydrogen bonding and pi-stacking: how reliable are force fields? A critical evaluation of force field descriptions of nonbonded interactions.

    PubMed

    Paton, Robert S; Goodman, Jonathan M

    2009-04-01

    We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate the stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol(-1) for all 165 complexes studied and outperformed DFT calculations employing very large basis sets for the S22 complexes. The magnitude of hydrogen bonding interactions is severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes that are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol(-1). For added clarity, web-based interactive displays of the results have been developed, which allow comparisons of force field and ab initio geometries to be performed and the structures to be viewed and rotated in three dimensions.

  10. Analytic Energy Gradients for Variational Two-Electron Reduced-Density-Matrix-Driven Complete Active Space Self-Consistent Field Theory.

    PubMed

    Maradzike, Elvis; Gidofalvi, Gergely; Turney, Justin M; Schaefer, Henry F; DePrince, A Eugene

    2017-09-12

    Analytic energy gradients are presented for a variational two-electron reduced-density-matrix (2-RDM)-driven complete active space self-consistent field (CASSCF) method. The active-space 2-RDM is determined using a semidefinite programming (SDP) algorithm built upon an augmented Lagrangian formalism. Expressions for analytic gradients are simplified by the fact that the Lagrangian is stationary with respect to variations in both the primal and the dual solutions to the SDP problem. Orbital response contributions to the gradient are identical to those that arise in conventional CASSCF methods in which the electronic structure of the active space is described by a full configuration interaction (CI) wave function. We explore the relative performance of variational 2-RDM (v2RDM)- and CI-driven CASSCF for the equilibrium geometries of 20 small molecules. When enforcing two-particle N-representability conditions, full-valence v2RDM-CASSCF-optimized bond lengths display a mean unsigned error of 0.0060 Å and a maximum unsigned error of 0.0265 Å, relative to those obtained from full-valence CI-CASSCF. When enforcing partial three-particle N-representability conditions, the mean and maximum unsigned errors are reduced to only 0.0006 and 0.0054 Å, respectively. For these same molecules, full-valence v2RDM-CASSCF bond lengths computed in the cc-pVQZ basis set deviate from experimentally determined ones on average by 0.017 and 0.011 Å when enforcing two- and three-particle conditions, respectively, whereas CI-CASSCF displays an average deviation of 0.010 Å. The v2RDM-CASSCF approach with two-particle conditions is also applied to the equilibrium geometry of pentacene; optimized bond lengths deviate from those derived from experiment, on average, by 0.015 Å when using a cc-pVDZ basis set and a (22e,22o) active space.

  11. A swarm of autonomous miniature underwater robot drifters for exploring submesoscale ocean dynamics.

    PubMed

    Jaffe, Jules S; Franks, Peter J S; Roberts, Paul L D; Mirza, Diba; Schurgers, Curt; Kastner, Ryan; Boch, Adrien

    2017-01-24

    Measuring the ever-changing 3-dimensional (3D) motions of the ocean requires simultaneous sampling at multiple locations. In particular, sampling the complex, nonlinear dynamics associated with submesoscales (<1-10 km) requires new technologies and approaches. Here we introduce the Mini-Autonomous Underwater Explorer (M-AUE), deployed as a swarm of 16 independent vehicles whose 3D trajectories are measured near-continuously, underwater. As the vehicles drift with the ambient flow or execute preprogrammed vertical behaviours, the simultaneous measurements at multiple, known locations resolve the details of the flow within the swarm. We describe the design, construction, control and underwater navigation of the M-AUE. A field programme in the coastal ocean using a swarm of these robots programmed with a depth-holding behaviour provides a unique test of a physical-biological interaction leading to plankton patch formation in internal waves. The performance of the M-AUE vehicles illustrates their novel capability for measuring submesoscale dynamics.

  12. A swarm of autonomous miniature underwater robot drifters for exploring submesoscale ocean dynamics

    NASA Astrophysics Data System (ADS)

    Jaffe, Jules S.; Franks, Peter J. S.; Roberts, Paul L. D.; Mirza, Diba; Schurgers, Curt; Kastner, Ryan; Boch, Adrien

    2017-01-01

    Measuring the ever-changing 3-dimensional (3D) motions of the ocean requires simultaneous sampling at multiple locations. In particular, sampling the complex, nonlinear dynamics associated with submesoscales (<1-10 km) requires new technologies and approaches. Here we introduce the Mini-Autonomous Underwater Explorer (M-AUE), deployed as a swarm of 16 independent vehicles whose 3D trajectories are measured near-continuously, underwater. As the vehicles drift with the ambient flow or execute preprogrammed vertical behaviours, the simultaneous measurements at multiple, known locations resolve the details of the flow within the swarm. We describe the design, construction, control and underwater navigation of the M-AUE. A field programme in the coastal ocean using a swarm of these robots programmed with a depth-holding behaviour provides a unique test of a physical-biological interaction leading to plankton patch formation in internal waves. The performance of the M-AUE vehicles illustrates their novel capability for measuring submesoscale dynamics.

  13. Multi-Site λ-dynamics for simulated Structure-Activity Relationship studies

    PubMed Central

    Knight, Jennifer L.; Brooks, Charles L.

    2011-01-01

    Multi-Site λ-dynamics (MSλD) is a new free energy simulation method that is based on λ-dynamics. It has been developed to enable multiple substituents at multiple sites on a common ligand core to be modeled simultaneously and their free energies assessed. The efficacy of MSλD for estimating relative hydration free energies and relative binding affinities is demonstrated using three test systems. Model compounds representing multiple identical benzene, dihydroxybenzene, and dimethoxybenzene molecules show that total combined MSλD trajectory lengths of ~1.5 ns are sufficient to reliably achieve relative hydration free energy estimates within 0.2 kcal/mol, and that the estimates are relatively insensitive to the number of trajectories used to generate them, for hybrid ligands that contain up to ten substituents modeled at a single site or five substituents modeled at each of two sites. Relative hydration free energies among six benzene derivatives calculated from MSλD simulations are in very good agreement with those from alchemical free energy simulations (with average unsigned differences of 0.23 kcal/mol and R2=0.991) and experiment (with average unsigned errors of 1.8 kcal/mol and R2=0.959). Estimates of the relative binding affinities among 14 inhibitors of HIV-1 reverse transcriptase obtained from MSλD simulations are in reasonable agreement with those from traditional free energy simulations and experiment (average unsigned errors of 0.9 kcal/mol and R2=0.402). For the same level of accuracy and precision, MSλD simulations run ~20–50 times faster than traditional free energy simulations and thus, with reliable force field parameters, can be used effectively to screen tens to hundreds of compounds in structure-based drug design applications. PMID:22125476

  14. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile.

    PubMed

    Zanith, Caroline C; Pliego, Josefredo R

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data for neutral solutes in methanol were used in the parametrization, while only a few solvation free energy values for solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from the literature, we have generated solvation free energy, enthalpy, and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide, and 19 solutes in acetonitrile. Both the SMD and SM8 models present good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol(-1) in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol(-1), respectively. Our results indicate that the SMx family of models needs to be improved for the dimethyl sulfoxide solvent.

  15. LOGISMOS-B for primates: primate cortical surface reconstruction and thickness measurement

    NASA Astrophysics Data System (ADS)

    Oguz, Ipek; Styner, Martin; Sanchez, Mar; Shi, Yundi; Sonka, Milan

    2015-03-01

    Cortical thickness and surface area are important morphological measures with implications for many psychiatric and neurological conditions. Automated segmentation and reconstruction of the cortical surface from 3D MRI scans is challenging due to the variable anatomy of the cortex and its highly complex geometry. While many methods exist for this task in the context of the human brain, these methods are typically not readily applicable to the primate brain. We propose an innovative approach based on our recently proposed human cortical reconstruction algorithm, LOGISMOS-B, and the Laplace-based thickness measurement method. Quantitative evaluation of our approach was performed based on a dataset of T1- and T2-weighted MRI scans from 12-month-old macaques, where labeling by our anatomical experts was used as the independent standard. In this dataset, LOGISMOS-B has an average signed surface error of 0.01 +/- 0.03 mm and an unsigned surface error of 0.42 +/- 0.03 mm over the whole brain. Excluding the rather problematic temporal pole region further improves the unsigned surface distance to 0.34 +/- 0.03 mm. This high level of accuracy reached by our algorithm even in this challenging developmental dataset illustrates its robustness and its potential for primate brain studies.

  16. Performance of the SMD and SM8 models for predicting solvation free energy of neutral solutes in methanol, dimethyl sulfoxide and acetonitrile

    NASA Astrophysics Data System (ADS)

    Zanith, Caroline C.; Pliego, Josefredo R.

    2015-03-01

    The continuum solvation models SMD and SM8 were developed using 2,346 solvation free energy values for 318 neutral molecules in 91 solvents as reference. However, no solvation data for neutral solutes in methanol were used in the parametrization, while only a few solvation free energy values for solutes in dimethyl sulfoxide and acetonitrile were used. In this report, we have tested the performance of the models for these important solvents. Taking data from the literature, we have generated solvation free energy, enthalpy, and entropy values for 37 solutes in methanol, 21 solutes in dimethyl sulfoxide, and 19 solutes in acetonitrile. Both the SMD and SM8 models present good performance in methanol and acetonitrile, with mean unsigned errors equal to or less than 0.66 and 0.55 kcal mol-1 in methanol and acetonitrile, respectively. However, the correlation is worse in dimethyl sulfoxide, where the SMD and SM8 methods present mean unsigned errors of 1.02 and 0.95 kcal mol-1, respectively. Our results indicate that the SMx family of models needs to be improved for the dimethyl sulfoxide solvent.

  17. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task.

    PubMed

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-03-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior. © The Author(s) 2016.

  18. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task

    PubMed Central

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-01-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior. PMID:26755547

  19. Automated contour detection in X-ray left ventricular angiograms using multiview active appearance models and dynamic programming.

    PubMed

    Oost, Elco; Koning, Gerhard; Sonka, Milan; Oemrawsingh, Pranobe V; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2006-09-01

    This paper describes a new approach to the automated segmentation of X-ray left ventricular (LV) angiograms, based on active appearance models (AAMs) and dynamic programming. A coupling of shape and texture information between the end-diastolic (ED) and end-systolic (ES) frame was achieved by constructing a multiview AAM. Over-constraining of the model was compensated for by employing dynamic programming, integrating both intensity and motion features in the cost function. Two applications are compared: a semi-automatic method with manual model initialization, and a fully automatic algorithm. The first proved to be highly robust and accurate, demonstrating high clinical relevance. Based on experiments involving 70 patient data sets, the algorithm's success rate was 100% for ED and 99% for ES, with average unsigned border positioning errors of 0.68 mm for ED and 1.45 mm for ES. Calculated volumes were accurate and unbiased. The fully automatic algorithm, with intrinsically less user interaction was less robust, but showed a high potential, mostly due to a controlled gradient descent in updating the model parameters. The success rate of the fully automatic method was 91% for ED and 83% for ES, with average unsigned border positioning errors of 0.79 mm for ED and 1.55 mm for ES.

  20. Non-neutralized Electric Currents in Solar Active Regions and Flare Productivity

    NASA Astrophysics Data System (ADS)

    Kontogiannis, Ioannis; Georgoulis, Manolis K.; Park, Sung-Hong; Guerra, Jordan A.

    2017-11-01

    We explore the association of non-neutralized currents with solar flare occurrence in a sizable sample of observations, aiming to show the potential of such currents in solar flare prediction. We used the high-quality vector magnetograms that are regularly produced by the Helioseismic and Magnetic Imager (HMI), and more specifically the Space-weather HMI Active Region Patches (SHARPs). Through a newly established method that incorporates detailed error analysis, we calculated the non-neutralized currents contained in active regions (ARs). Two predictors were produced, namely the total and the maximum unsigned non-neutralized current. Both were tested on AR time series and on a representative sample of point-in-time observations during the interval 2012-2016. The average values of non-neutralized currents in flaring active regions are higher by more than an order of magnitude than in non-flaring regions and correlate very well with the corresponding flare index. The temporal evolution of these parameters appears to be connected to physical processes, such as flux emergence and/or magnetic polarity inversion line formation, that are associated with increased solar flare activity. Using Bayesian inference of flaring probabilities, we show that the total unsigned non-neutralized current significantly outperforms the total unsigned magnetic flux and other well-established current-related predictors. It therefore shows good prospects for inclusion in an operational flare-forecasting service. We plan to use the new predictor in the framework of the FLARECAST project, along with other highly performing predictors.

  1. Unsignaled Delay of Reinforcement, Relative Time, and Resistance to Change

    ERIC Educational Resources Information Center

    Shahan, Timothy A.; Lattal, Kennon A.

    2005-01-01

    Two experiments with pigeons examined the effects of unsignaled, nonresetting delays of reinforcement on responding maintained by different reinforcement rates. In Experiment 1, 3-s unsignaled delays were introduced into each component of a multiple variable-interval (VI) 15-s VI 90-s VI 540-s schedule. When considered as a proportion of the…

  2. Algorithms for sorting unsigned linear genomes by the DCJ operations.

    PubMed

    Jiang, Haitao; Zhu, Binhai; Zhu, Daming

    2011-02-01

    The double cut and join operation (abbreviated as DCJ) has been extensively used for genomic rearrangement. Although the DCJ distance between signed genomes with both linear and circular (uni- and multi-) chromosomes is well studied, the only known result for the NP-complete unsigned DCJ distance problem is an approximation algorithm for unsigned linear unichromosomal genomes. In this article, we study the problem of computing the DCJ distance on two unsigned linear multichromosomal genomes (abbreviated as UDCJ). We devise a 1.5-approximation algorithm for UDCJ by exploiting the distance formula for signed genomes. In addition, we show that UDCJ admits a weak kernel of size 2k and hence an FPT algorithm running in O(2^(2k)n) time.
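
    For context, the distance formula for signed genomes that the 1.5-approximation exploits is commonly stated, in the adjacency-graph formulation of Bergeron, Mixtacki, and Stoye, as

        d_DCJ(A, B) = N - (C + I/2),

    where N is the number of common genes, C is the number of cycles, and I is the number of odd-length paths in the adjacency graph of genomes A and B.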

  3. Using polarizable POSSIM force field and fuzzy-border continuum solvent model to calculate pK(a) shifts of protein residues.

    PubMed

    Sharma, Ity; Kaminski, George A

    2017-01-15

    Our Fuzzy-Border (FB) continuum solvent model has been extended and modified to produce hydration parameters for small molecules using the POlarizable Simulations Second-order Interaction Model (POSSIM) framework, with an average error of 0.136 kcal/mol. It was then used to compute pKa shifts for carboxylic and basic residues of the turkey ovomucoid third domain (OMTKY3) protein. The average unsigned errors in the acid and base pKa values were 0.37 and 0.4 pH units, respectively, versus 0.58 and 0.7 pH units as calculated with a previous version of the polarizable protein force field and Poisson-Boltzmann continuum solvent. This POSSIM/FB result is produced with explicit refitting of the hydration parameters to the pKa values of the carboxylic and basic residues of the OMTKY3 protein; thus, the values of the acidity constants can be viewed as additional fitting target data. In addition to calculating pKa shifts for the OMTKY3 residues, we have studied aspartic acid residues of RNase Sa. This was done without any further refitting of the parameters, and agreement with the experimental pKa values is within an average unsigned error of 0.65 pH units. This result included the Asp79 residue, which is buried and thus has a high experimental pKa value of 7.37 units. Thus, the presented model is capable of reproducing pKa results for residues in an environment that is significantly different from the solvated protein surface used in the fitting. Therefore, the POSSIM force field and the FB continuum solvent parameters have been demonstrated to be sufficiently robust and transferable. © 2016 Wiley Periodicals, Inc.

  4. Area utilization efficiency of a sloping heliostat system for solar concentration.

    PubMed

    Wei, L Y

    1983-02-15

    Area utilization efficiency (AUE) is formulated for a sloping heliostat system facing any direction. The effects of slope shading, the incidence factor, sun shading, and tower blocking by the mirrors are all taken into account. Our results show that annually averaged AUEs calculated for heliostat systems (1) increase with tower height at low slope angles but less rapidly at high slopes, (2) increase monotonically with slope angle and saturate at large slopes for systems facing due south, (3) reach a maximum at a certain slope for systems facing directions other than due south, and (4) drop sharply at slopes greater than a certain value for systems facing due east or west, owing to the slope shading effect. The results are useful for solar energy collection on nonflat terrains.

  5. Multiscale Architectures and Parallel Algorithms for Video Object Tracking

    DTIC Science & Technology

    2011-10-01

  6. On-the-fly Numerical Surface Integration for Finite-Difference Poisson-Boltzmann Methods.

    PubMed

    Cai, Qin; Ye, Xiang; Wang, Jun; Luo, Ray

    2011-11-01

    Most implicit solvation models require the definition of a molecular surface as the interface that separates the solute in atomic detail from the solvent approximated as a continuous medium. Commonly used surface definitions include the solvent accessible surface (SAS), the solvent excluded surface (SES), and the van der Waals surface. In this study, we present an efficient numerical algorithm to compute the SES and SAS areas to facilitate the applications of finite-difference Poisson-Boltzmann methods in biomolecular simulations. Different from previous numerical approaches, our algorithm is physics-inspired and intimately coupled to the finite-difference Poisson-Boltzmann methods to fully take advantage of their existing data structures. Our analysis shows that the algorithm can achieve very good agreement with the analytical method in the calculation of the SES and SAS areas. Specifically, in our comprehensive test of 1,555 molecules, the average unsigned relative error is 0.27% in the SES area calculations and 1.05% in the SAS area calculations at a grid spacing of 1/2 Å. In addition, a systematic correction analysis can be used to improve the accuracy for the coarse-grid SES area calculations, with the average unsigned relative error in the SES areas reduced to 0.13%. These validation studies indicate that the proposed algorithm can be applied to biomolecules over a broad range of sizes and structures. Finally, the numerical algorithm can also be adapted to evaluate the surface integral of either a vector field or a scalar field defined on the molecular surface for additional solvation energetics and force calculations.

  7. Safety and Guidelines for Marked and Unmarked Pedestrian Crosswalks at Unsignalized Intersections in Nevada

    DOT National Transportation Integrated Search

    2012-09-01

    This report examines two aspects of marked and unmarked crosswalks at unsignalized intersections. Firstly, the report assesses the safety performance of marked/unmarked crosswalks in Nevada by comparing pedestrian-related crash rates. In which, ...

  8. 10. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, THIRD FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  9. 11. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    11. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, FOURTH FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  10. 9. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, SECOND FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  11. 8. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. Historic American Buildings Survey Photocopy of Unsigned, Undated Drawing, Courtesy, The Cosmos Club PROPOSED FLOOR PLANS, 2121 MASSACHUSETTS AVENUE, N.W., BLUEPRINT #11, FIRST-FLOOR PLAN - Townsend House, 2121 Massachusetts Avenue Northwest, Washington, District of Columbia, DC

  12. Safety effects of unsignalized superstreets in North Carolina.

    PubMed

    Ott, Sarah E; Haley, Rebecca L; Hummer, Joseph E; Foyle, Robert S; Cunningham, Christopher M

    2012-03-01

    Arterials across the United States are experiencing far too many collisions. Agencies tasked with improving these arterials have few available effective solutions. Superstreets, called restricted crossing u-turns by the Federal Highway Administration (FHWA), are part of a menu of unconventional arterial intersection designs that may provide a promising solution. Up to this point, there is little valid information available on the safety effects of superstreets, as study results have been from basic analyses that only account for traffic volume changes. The purpose of this research was to determine the safety effects of the unsignalized superstreet countermeasure on existing arterials in North Carolina. The safety study involved traffic flow adjustment, comparison-group, and Empirical Bayes analyses of 13 unsignalized superstreet intersections in North Carolina. The superstreets have been installed in the last few years across the state as opportunities presented themselves, but not necessarily at the most hazardous sites. The unsignalized superstreet countermeasure showed a significant reduction in total, angle and right turn, and left turn collisions in all analyses. Analyses also showed a significant reduction in fatal and injury collisions. The authors recommend that future analysts use a crash modification factor of 46 percent when considering the conversion of a typical unsignalized arterial intersection into a superstreet. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Performance of Frozen Density Embedding for Modeling Hole Transfer Reactions.

    PubMed

    Ramos, Pablo; Papadakis, Markos; Pavanello, Michele

    2015-06-18

    We have carried out a thorough benchmark of the frozen density-embedding (FDE) method for calculating hole transfer couplings. We have considered 10 exchange-correlation functionals, 3 nonadditive kinetic energy functionals, and 3 basis sets. Overall, we conclude that with a 7% mean relative unsigned error, the PBE and PW91 functionals coupled with the PW91k nonadditive kinetic energy functional and a TZP basis set constitute the most stable and accurate levels of theory for hole transfer coupling calculations. The FDE-ET method is found to be an excellent tool for computing diabatic couplings for hole transfer reactions.

  14. Developing a short range vehicle to infrastructure communication system to enhance the safety at STOP sign intersections : final report.

    DOT National Transportation Integrated Search

    2016-05-01

    Stop sign controlled unsignalized intersections raise a public safety concern. Even though various strategies, such as engineering, education, and policy, have been applied in practice, a number of fatal crashes have occurred at unsignalized inter...

  15. Contingency Tracking during Unsignaled Delayed Reinforcement

    ERIC Educational Resources Information Center

    Keely, Josue; Feola, Tyler; Lattal, Kennon A.

    2007-01-01

    Three experiments were conducted with rats in which responses on one lever (labeled the functional lever) produced reinforcers after an unsignaled delay period that reset with each response during the delay. Responses on a second, nonfunctional, lever did not initiate delays, but, in the first and third experiments, such responses during the last…

  16. Comparing alchemical and physical pathway methods for computing the absolute binding free energy of charged ligands.

    PubMed

    Deng, Nanjie; Cui, Di; Zhang, Bin W; Xia, Junchao; Cruz, Jeffrey; Levy, Ronald

    2018-06-13

    Accurately predicting absolute binding free energies of protein-ligand complexes is important as a fundamental problem in both computational biophysics and pharmaceutical discovery. Calculating binding free energies for charged ligands is generally considered to be challenging because of the strong electrostatic interactions between the ligand and its environment in aqueous solution. In this work, we compare the performance of the potential of mean force (PMF) method and the double decoupling method (DDM) for computing absolute binding free energies for charged ligands. We first clarify an unresolved issue concerning the explicit use of the binding site volume to define the complexed state in DDM together with the use of harmonic restraints. We also provide an alternative derivation for the formula for absolute binding free energy using the PMF approach. We use these formulas to compute the binding free energy of charged ligands at an allosteric site of HIV-1 integrase, which has emerged in recent years as a promising target for developing antiviral therapy. As compared with the experimental results, the absolute binding free energies obtained by using the PMF approach show unsigned errors of 1.5-3.4 kcal mol-1, which are somewhat better than the results from DDM (unsigned errors of 1.6-4.3 kcal mol-1) using the same amount of CPU time. According to the DDM decomposition of the binding free energy, the ligand binding appears to be dominated by nonpolar interactions despite the presence of very large and favorable intermolecular ligand-receptor electrostatic interactions, which are almost completely cancelled out by the equally large free energy cost of desolvation of the charged moiety of the ligands in solution. We discuss the relative strengths of computing absolute binding free energies using the alchemical and physical pathway methods.

  17. Toward polarizable AMOEBA thermodynamics at fixed charge efficiency using a dual force field approach: application to organic crystals.

    PubMed

    Nessler, Ian J; Litman, Jacob M; Schnieders, Michael J

    2016-11-09

    First principles prediction of the structure, thermodynamics and solubility of organic molecular crystals, which play a central role in chemical, material, pharmaceutical and engineering sciences, challenges both potential energy functions and sampling methodologies. Here we calculate absolute crystal deposition thermodynamics using a novel dual force field approach whose goal is to maintain the accuracy of advanced multipole force fields (e.g. the polarizable AMOEBA model) while performing more than 95% of the sampling in an inexpensive fixed charge (FC) force field (e.g. OPLS-AA). Absolute crystal sublimation/deposition phase transition free energies were determined using an alchemical path that grows the crystalline state from a vapor reference state based on sampling with the OPLS-AA force field, followed by dual force field thermodynamic corrections to change between FC and AMOEBA resolutions at both end states (we denote the three-step path as AMOEBA/FC). Importantly, whereas the phase transition requires on the order of 200 ns of sampling per compound, only 5 ns of sampling was needed for the dual force field thermodynamic corrections to reach a mean statistical uncertainty of 0.05 kcal mol-1. For five organic compounds, the mean unsigned error between direct use of AMOEBA and the AMOEBA/FC dual force field path was only 0.2 kcal mol-1 and not statistically significant. Compared to experimental deposition thermodynamics, the mean unsigned error for AMOEBA/FC (1.4 kcal mol-1) was more than a factor of two smaller than that of uncorrected OPLS-AA (3.2 kcal mol-1). Overall, the dual force field thermodynamic corrections reduced condensed phase sampling in the expensive force field by a factor of 40, and may prove useful for protein stability or binding thermodynamics in the future.
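
    The three-step AMOEBA/FC path described above closes a thermodynamic cycle, vapor(AMOEBA) -> vapor(FC) -> crystal(FC) -> crystal(AMOEBA). A minimal sketch of the bookkeeping, with hypothetical leg values rather than the paper's numbers:

    ```python
    # Hypothetical free energy legs (kcal/mol), for illustration only.
    dG_dep_FC = -20.0        # vapor -> crystal, sampled in the cheap FC force field
    ddG_vap_FC_to_AM = 1.2   # FC -> AMOEBA correction at the vapor end state
    ddG_cry_FC_to_AM = 0.4   # FC -> AMOEBA correction at the crystal end state

    # Close the cycle: vapor(AMOEBA) -> vapor(FC) -> crystal(FC) -> crystal(AMOEBA)
    dG_dep_AMOEBA = -ddG_vap_FC_to_AM + dG_dep_FC + ddG_cry_FC_to_AM  # -20.8
    ```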

  18. Resistance to Change of Responding Maintained by Unsignaled Delays to Reinforcement: A Response-Bout Analysis

    ERIC Educational Resources Information Center

    Podlesnik, Christopher A.; Jimenez-Gomez, Corina; Ward, Ryan D.; Shahan, Timothy A.

    2006-01-01

    Previous experiments have shown that unsignaled delayed reinforcement decreases response rates and resistance to change. However, the effects of different delays to reinforcement on underlying response structure have not been investigated in conjunction with tests of resistance to change. In the present experiment, pigeons responded on a…

  19. Signaled and Unsignaled Terminal Links in Concurrent Chains I: Effects of Reinforcer Probability and Immediacy

    ERIC Educational Resources Information Center

    Mattson, Karla M.; Hucks, Andrew; Grace, Randolph C.; McLean, Anthony P.

    2010-01-01

    Eight pigeons responded in a three-component concurrent-chains procedure, with either independent or dependent initial links. Relative probability and immediacy of reinforcement in the terminal links were both varied, and outcomes on individual trials (reinforcement or nonreinforcement) were either signaled or unsignaled. Terminal-link fixed-time…

  20. Context Blocking in Rat Autoshaping: Sign-Tracking versus Goal-Tracking

    ERIC Educational Resources Information Center

    Costa, Daniel S. J.; Boakes, Robert A.

    2009-01-01

    Prior experience of unsignaled food can interfere with subsequent acquisition by birds of autoshaped key-pecking at a signal light. This has been understood to indicate that unsignaled food results in context conditioning, which blocks subsequent learning about the keylight-food relationship. In the present experiment with rats lever insertion as…

  1. Analysis of Crossing Speed of the Pedestrians in Marked and Unmarked Crosswalks in the Signalized and Un-Signalized Intersections (Case Study: Rasht city)

    NASA Astrophysics Data System (ADS)

    Behbahani, Hamid; Najafi Moghaddam Gilani, Vahid; Jahangir Samet, Mehdi; Salehfard, Reza

    2017-10-01

    Pedestrians affect the traffic in signalized and un-signalized intersections. Therefore, identifying the behavioural features of pedestrians is of great importance and may result in better design of facilities for them. In this study, by filming four intersections in Rasht for 15 hours and surveying 4,568 pedestrians, the crossing speed of pedestrians in marked and unmarked crosswalks was evaluated and analysed. Results showed that pedestrians' crossing speed in marked crosswalks is higher than their crossing speed in unmarked crosswalks in both signalized and un-signalized intersections. Moreover, in unmarked crosswalks at signalized intersections, the 15th percentile speeds of male pedestrians, female pedestrians, and groups of pedestrians decrease by 6.4%, 5.4%, and 12.2%, respectively, compared with the 15th percentile speeds in marked crosswalks. The corresponding values in unmarked crosswalks at un-signalized intersections for male pedestrians, female pedestrians, and groups of pedestrians are decreases of 1.2%, 3.8%, and 1.4%, respectively.

  2. A Generalized DIF Effect Variance Estimator for Measuring Unsigned Differential Test Functioning in Mixed Format Tests

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Algina, James

    2006-01-01

    One approach to measuring unsigned differential test functioning is to estimate the variance of the differential item functioning (DIF) effect across the items of the test. This article proposes two estimators of the DIF effect variance for tests containing dichotomous and polytomous items. The proposed estimators are direct extensions of the…

  3. The charge transfer electronic coupling in diabatic perspective: A multi-state density functional theory study

    NASA Astrophysics Data System (ADS)

    Guo, Xinwei; Qu, Zexing; Gao, Jiali

    2018-01-01

    The multi-state density functional theory (MSDFT) provides a convenient way to estimate electronic coupling of charge transfer processes based on a diabatic representation. Its performance has been benchmarked against the HAB11 database with a mean unsigned error (MUE) of 17 meV between MSDFT and ab initio methods. The small difference may be attributed to different representations, diabatic from MSDFT and adiabatic from ab initio calculations. In this discussion, we conclude that MSDFT provides a general and efficient way to estimate the electronic coupling for charge-transfer rate calculations based on the Marcus-Hush model.

  4. Evaluation of fixed momentary dro schedules under signaled and unsignaled arrangements.

    PubMed

    Hammond, Jennifer L; Iwata, Brian A; Fritz, Jennifer N; Dempsey, Carrie M

    2011-01-01

    Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response requirement of the schedule per se or (b) discrimination of the contingency made more salient by the signal. To separate these two potential influences, we compared the effects of signaled versus unsignaled FM DRO with 4 individuals with developmental disabilities whose problem behavior was maintained by social-positive reinforcement. During signaled FM DRO, the experimenter presented a visual stimulus 3 s prior to the end of the DRO interval and delivered reinforcement contingent on the absence of problem behavior at the second the interval elapsed. Unsignaled DRO was identical except that interval termination was not signaled. Results indicated that signaled FM DRO was effective in decreasing 2 subjects' problem behavior, whereas an unsignaled schedule was required for the remaining 2 subjects. These results suggest that the response requirement per se of FM DRO may not be problematic if it is not easily discriminated.

  5. Toward an Episodic Context Account of Retrieval-Based Learning: Dissociating Retrieval Practice and Elaboration

    ERIC Educational Resources Information Center

    Lehman, Melissa; Smith, Megan A.; Karpicke, Jeffrey D.

    2014-01-01

    We tested the predictions of 2 explanations for retrieval-based learning: while the elaborative retrieval hypothesis assumes that the retrieval of studied information promotes the generation of semantically related information, which aids in later retrieval (Carpenter, 2009), the episodic context account proposed by Karpicke, Lehman, and Aue (in…

  6. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action.

    PubMed

    Bissonette, Gregory B; Roesch, Matthew R

    2016-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.

  7. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action

    PubMed Central

    Roesch, Matthew R.

    2017-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala, signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into the striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036

  8. Accelerating Chemical Discovery with Machine Learning: Simulated Evolution of Spin Crossover Complexes with an Artificial Neural Network.

    PubMed

    Janet, Jon Paul; Chan, Lydia; Kulik, Heather J

    2018-03-01

    Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.
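
    The "error-aware" gate described above lends itself to a compact sketch: before trusting the ANN's predicted fitness, the GA checks the candidate's distance to the nearest training point. The representation, cutoff, and function names below are hypothetical, not taken from the paper.

```python
import numpy as np

def gated_fitness(candidate, train_X, predict_splitting, max_dist=1.0):
    """Score a GA candidate only if it lies near the ANN's training data.

    candidate: descriptor vector for a complex (hypothetical featurization)
    train_X:   array of training-set descriptor vectors
    predict_splitting: trained ANN callable returning spin splitting (kcal/mol)
    max_dist:  illustrative cutoff on distance to the nearest training point
    """
    d = np.min(np.linalg.norm(train_X - candidate, axis=1))
    if d > max_dist:
        return -np.inf              # out of domain: discard before trusting the ANN
    # Spin-crossover candidates have near-degenerate spin states, so fitness
    # here rewards small |splitting|; the paper's exact objective may differ.
    return -abs(predict_splitting(candidate))
```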

  9. Second Language Learners' Attitudes towards English Varieties

    ERIC Educational Resources Information Center

    Zhang, Weimin; Hu, Guiling

    2008-01-01

    This pilot project investigates second language (L2) learners' attitudes towards three varieties of English: American (AmE), British (BrE) and Australian (AuE). A 69-word passage spoken by a female speaker of each variety was used. Participants were 30 Chinese students pursuing Masters or Doctoral degrees in the United States, who listened to each…

  10. An extended car-following model at un-signalized intersections under V2V communication environment

    PubMed Central

    Wang, Tao; Li, Peng

    2018-01-01

    An extended car-following model is proposed in this paper to analyze the impacts of V2V (vehicle-to-vehicle) communication on micro driving behavior at un-signalized intersections. A four-leg un-signalized intersection with twelve streams (left-turn, through movement, and right turn from each leg) is used. The effect of the guidance strategy on the reduction of the rate of stops and total delay is explored by comparing the proposed model with the traditional FVD car-following model. The numerical results illustrate that potential conflicts between vehicles can be predicted and some stops can be avoided by decelerating in advance. Driving comfort and traffic efficiency can be improved accordingly. More benefits could be obtained under long communication ranges, low-to-medium traffic density, and simple traffic patterns. PMID:29425243

  11. Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Data for Dissociation Energies of Bonds to 3d Transition Metals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng

    2015-05-12

    Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than those of most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation MUD(20) of 4.7 kcal/mol when one correlates only the valence, 3p, and 3s electrons of transition metals and only valence electrons of ligands, or 4.6 kcal/mol when one correlates all core electrons except the 1s shells of transition metals, S, and Cl; that is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster calculations and KS calculations, the T1 diagnostics correlate with the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.
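
    The mean unsigned deviation used throughout this record (elsewhere in these records called AUE or MUE) is just the average absolute difference between computed and reference values; a minimal sketch:

```python
import numpy as np

def mean_unsigned_deviation(computed, reference):
    """Mean unsigned deviation (a.k.a. AUE or MUE): mean |computed - reference|.

    For MUD(20) above, `computed` would hold the 20 calculated bond energies
    and `reference` the corresponding experimental values, in kcal/mol.
    """
    computed = np.asarray(computed, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs(computed - reference)))

# The signed mean, np.mean(computed - reference), exposes systematic bias;
# the note that CC and DFT "have different signs of the error" refers to it.
```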

  12. A comparative study of the structures and electronic properties of graphene fragments: A DFT and MP2 survey

    NASA Astrophysics Data System (ADS)

    de Carvalho, E. F. V.; Lopez-Castillo, A.; Roberto-Neto, O.

    2018-01-01

    Graphene can be viewed as a sheet of benzene rings fused together, forming a variety of structures including the trioxotriangulenes (TOTs), a class of organic molecules with electro-active properties. In order to clarify such properties, the structures and electronic properties of the graphene fragments phenalenyl, triangulene, 6-oxophenalenoxyl, and X3TOT (X = H, F, Cl) are computed. Validation of the methodologies is carried out using the density functionals B3LYP, M06-2X, and B2PLYP-D and MP2 theory, which give equilibrium geometries of benzene, naphthalene, and anthracene with mean unsigned errors (MUE) of only 0.003, 0.007, 0.004, and 0.007 Å, respectively, relative to experiment.

  13. Frontal Theta Reflects Uncertainty and Unexpectedness during Exploration and Exploitation

    PubMed Central

    Figueroa, Christina M.; Cohen, Michael X; Frank, Michael J.

    2012-01-01

    In order to understand the exploitation/exploration trade-off in reinforcement learning, previous theoretical and empirical accounts have suggested that increased uncertainty may precede the decision to explore an alternative option. To date, the neural mechanisms that support the strategic application of uncertainty-driven exploration remain underspecified. In this study, electroencephalography (EEG) was used to assess trial-to-trial dynamics relevant to exploration and exploitation. Theta-band activities over middle and lateral frontal areas have previously been implicated in EEG studies of reinforcement learning and strategic control. It was hypothesized that these areas may interact during top-down strategic behavioral control involved in exploratory choices. Here, we used a dynamic reward–learning task and an associated mathematical model that predicted individual response times. This reinforcement-learning model generated value-based prediction errors and trial-by-trial estimates of exploration as a function of uncertainty. Mid-frontal theta power correlated with unsigned prediction error, although negative prediction errors had greater power overall. Trial-to-trial variations in response-locked frontal theta were linearly related to relative uncertainty and were larger in individuals who used uncertainty to guide exploration. This finding suggests that theta-band activities reflect prefrontal-directed strategic control during exploratory choices. PMID:22120491
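
    For the distinction the abstract relies on: a signed prediction error preserves direction (better or worse than expected), while the unsigned error keeps only the magnitude of surprise. A minimal delta-rule sketch (illustrative, not the paper's response-time model):

```python
def delta_rule_update(value, reward, learning_rate=0.1):
    """One step of a simple delta-rule value update.

    Returns the updated value plus the signed and unsigned prediction
    errors often used as model-based regressors in EEG/fMRI analyses.
    """
    signed_pe = reward - value        # direction and size of the surprise
    unsigned_pe = abs(signed_pe)      # size only (cf. mid-frontal theta)
    return value + learning_rate * signed_pe, signed_pe, unsigned_pe

v = 0.5
for r in (1.0, 0.0, 0.0, 1.0):        # a short win/loss sequence
    v, spe, upe = delta_rule_update(v, r)
```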

  14. Assessing the Accuracy of Density Functional and Semiempirical Wave Function Methods for Water Nanoparticles: Comparing Binding and Relative Energies of (H2O)16 and (H2O)17 to CCSD(T) Results.

    PubMed

    Leverentz, Hannah R; Qi, Helena W; Truhlar, Donald G

    2013-02-12

    The binding energies and relative conformational energies of five configurations of the water 16-mer are computed using 61 levels of density functional (DF) theory, 12 methods combining DF theory with molecular mechanics damped dispersion (DF-MM), seven semiempirical-wave function (SWF) methods, and five methods combining SWF theory with molecular mechanics damped dispersion (SWF-MM). The accuracies of the computed energies are assessed by comparing them to recent high-level ab initio results; this assessment is more relevant to bulk water than previous tests on small clusters because a 16-mer is large enough to have water molecules that participate in more than three hydrogen bonds. We find that for water 16-mer binding energies the best DF, DF-MM, SWF, and SWF-MM methods (and their mean unsigned errors in kcal/mol) are respectively M06-2X (1.6), ωB97X-D (2.3), SCC-DFTB-γ(h) (35.2), and PM3-D (3.2). We also mention the good performance of CAM-B3LYP (1.8), M05-2X (1.9), and TPSSLYP (3.0). In contrast, for relative energies of various water nanoparticle 16-mer structures, the best methods (and mean unsigned errors in kcal/mol), in the same order of classes of methods, are SOGGA11-X (0.3), ωB97X-D (0.2), PM6 (0.4), and PMOv1 (0.6). We also mention the good performance of LC-ωPBE-D3 (0.3) and ωB97X (0.4). When both relative and binding energies are taken into consideration, the best methods overall (out of the 85 tested) are M05-2X without molecular mechanics and ωB97X-D when molecular mechanics corrections are included; with considerably higher average errors and considerably lower cost, the best SWF or SWF-MM method is PMOv1. We use six of the best methods for binding energies of the water 16-mers to calculate the binding energies of water hexamers and water 17-mers to test whether these methods are also reliable for binding energy calculations on other types of water clusters.

  15. Noise and Sonic Boom Impact Technology. Initial Development of an Assessment System for Aircraft Noise (ASAN). Volume 3. Technical Description

    DTIC Science & Technology

    1989-06-01

    [Indexed text excerpt, printf-style format specification: a converted value shorter than the field width is padded on the left (or on the right, if the left-adjustment indicator has been given) to make up the field width. The padding character is normally a blank, or zero when the field width is specified with a leading zero (this zero does not imply an octal field width). The o conversion prints an argument in unsigned octal notation (without a leading zero), x in unsigned hexadecimal notation (without a leading 0x), and u in unsigned decimal notation.]
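
    As a quick illustration of those unsigned conversions, here in Python's format mini-language rather than C's printf:

```python
n = 255
print(f"{n:o}")    # '377'      unsigned octal, no leading zero
print(f"{n:x}")    # 'ff'       unsigned hexadecimal, no leading 0x
print(f"{n:d}")    # '255'      decimal (C's %u for non-negative values)
print(f"{n:08d}")  # '00000255' zero padding, as with a leading-zero field width
```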

  16. Determination of geochemical affinities of granitic rocks from the Aue-Schwarzenberg zone (Erzgebirge, Germany) by multivariate statistics

    USGS Publications Warehouse

    Forster, H.-J.; Davis, J.C.

    2000-01-01

    Variscan granites of the Erzgebirge region can be effectively classified into five genetically distinct major groups by canonical analysis of geochemical variables. The same classification procedure, when applied to small plutons in the Aue-Schwarzenberg granite zone (ASGZ), shows that all ASGZ granites have compositional affinities to low-F biotite or low-F two-mica granite groups. This suggests that the ASGZ granites were emplaced during the first, late-collisional stage of silicic magmatism in the region, which occurred between about 325 and 318 Ma. The numerous biotite granite bodies in the zone are geochemically distinct from both the neighboring Kirchberg granite pluton and the spatially displaced Niederbobritzsch biotite granite massif. Instead, these bodies seem to constitute a third sub-group within the low-F biotite granite class. The ASGZ biotite granites represent three or more genetically distinct bodies, thus highlighting the enormous compositional variability within this group of granites. Least evolved samples of two-mica granites from the ASGZ apparently reflect the assimilation of low-grade metamorphic country rocks during emplacement, altering the original composition of the melts by enhancing primary Al content. The same genesis is implied for the rare "cordierite granite" facies of the Bergen massif, the type pluton for the low-F two-mica granite group in the Erzgebirge.

  17. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method.

    PubMed

    Sinha, Debalina; Pavanello, Michele

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  18. Exact kinetic energy enables accurate evaluation of weak interactions by the FDE-vdW method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinha, Debalina; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu

    2015-08-28

    The correlation energy of interaction is an elusive and sought-after interaction between molecular systems. By partitioning the response function of the system into subsystem contributions, the Frozen Density Embedding (FDE)-vdW method provides a computationally amenable nonlocal correlation functional based on the adiabatic connection fluctuation dissipation theorem applied to subsystem density functional theory. In reproducing potential energy surfaces of weakly interacting dimers, we show that FDE-vdW, either employing semilocal or exact nonadditive kinetic energy functionals, is in quantitative agreement with high-accuracy coupled cluster calculations (overall mean unsigned error of 0.5 kcal/mol). When employing the exact kinetic energy (which we term the Kohn-Sham (KS)-vdW method), the binding energies are generally closer to the benchmark, and the energy surfaces are also smoother.

  19. SecureQEMU: Emulation-Based Software Protection Providing Encrypted Code Execution and Page Granularity Code Signing

    DTIC Science & Technology

    2008-12-01

    [Indexed source-code excerpt (OCR-garbled C/C++): adds a ".SigStub" section to a PE file, sizes a buffer from per-page (address, size) pairs times SHA256_DIGEST_LENGTH for page-granularity code signing, and declares an AES_KEY with unsigned char IV/salt buffers for encrypted code execution.]

  20. Pavlovian contingencies and temporal information.

    PubMed

    Balsam, Peter D; Fairhurst, Stephen; Gallistel, Charles R

    2006-07-01

    The effects of altering the contingency between the conditioned stimulus (CS) and the unconditioned stimulus (US) on the acquisition of autoshaped responding were investigated by changing the frequency of unsignaled USs during the intertrial interval. The addition of the unsignaled USs had an effect on acquisition speed comparable with that of massing trials. The effects of these manipulations can be understood in terms of their effect on the amount of information (number of bits) that the average CS conveys to the subject about the timing of the next US. The number of reinforced CSs prior to acquisition is inversely related to the information content of the CS.
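
    A worked form of that information measure, stated as an assumption from rate-estimation theory rather than taken from this abstract: with C the average US-US interval and T the CS-US interval, the information the CS conveys about the timing of the next US is

\[
I \;=\; \log_2 \frac{C}{T}\ \text{bits},
\]

    so adding unsignaled USs during the intertrial interval shortens the effective C, lowers I, and on this account raises the number of reinforced CSs needed before acquisition.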

  1. Signal functions in delayed reinforcement

    PubMed Central

    Lattal, Kennon A.

    1984-01-01

    Three experiments were conducted with pigeons to examine the role of the signal in delay-of-reinforcement procedures. In the first, a blackout accompanying a period of nonreinforcement increased key-peck response rates maintained by immediate reinforcement. The effects of dissociating the blackout from the delay interval were examined in the second experiment. In three conditions, blackouts and unsignaled delays were negatively correlated or occurred randomly with respect to one another. A signaled delay and an unsignaled delay that omitted the blackouts were studied in two other conditions. All delay-of-reinforcement conditions generally produced response rates lower than those produced by immediate reinforcement. Signaled delays maintained higher response rates than did any of the various unsignaled-delay conditions, with or without dissociated blackouts. The effects of these latter conditions did not differ systematically from one another. The final experiment showed that response rates varied as a function of the frequency with which a blackout accompanied delay intervals. By eliminating a number of methodological difficulties present in previous delay-of-reinforcement experiments, these results suggest the importance of the signal in maintaining responding during delay-of-reinforcement procedures and, conversely, the importance of the delay interval in decreasing responding. PMID:16812387

  2. A Feasibility Study for Measuring Accurate Chest Compression Depth and Rate on Soft Surfaces Using Two Accelerometers and Spectral Analysis

    PubMed Central

    Gutiérrez, J. J.; Russell, James K.

    2016-01-01

    Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm on the foam and 1.7 mm on the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
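
    The frequency-domain double integration at the heart of such systems can be sketched compactly; the window length, cutoff, and function names below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def displacement_from_acceleration(acc, fs, f_lo=0.5):
    """Estimate displacement by double integration in the frequency domain.

    acc:  acceleration samples (m/s^2) over a ~2 s analysis window
    fs:   sampling rate (Hz)
    f_lo: illustrative high-pass cutoff to suppress DC drift (Hz)
    """
    n = len(acc)
    A = np.fft.rfft(acc)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    X = np.zeros_like(A)
    keep = f > f_lo
    # Each spectral component obeys a(t) = -(2*pi*f)**2 * x(t),
    # so dividing by -(2*pi*f)**2 integrates twice.
    X[keep] = -A[keep] / (2 * np.pi * f[keep]) ** 2
    return np.fft.irfft(X, n)

# Subtracting the back (mattress) displacement from the chest displacement
# would then give the sternal-spinal compression depth described above.
```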

  3. Statistical Analysis in Dental Research Papers.

    DTIC Science & Technology

    1983-08-08

    [OCR residue of a DTIC report cover page: "Statistical Analysis in Dental Research Papers", Lewis Lorton, U.S. Army Institute of Dental Research, Washington DC, August 1983; report type: submission of papers, Jan-Aug 1983.]

  4. Proceedings of the Annual Conference of Psychologists in the Army Medical Service (4th) Held in New York, New York on August 30, 1961

    DTIC Science & Technology

    1986-02-01

    [OCR residue of the title and administrative pages of the proceedings. The one legible passage concerns directing personnel toward personal satisfaction in the job, summed up briefly as effective management and effective communication.]

  5. Shared-hole graph search with adaptive constraints for 3D optic nerve head optical coherence tomography image segmentation

    PubMed Central

    Yu, Kai; Shi, Fei; Gao, Enting; Zhu, Weifang; Chen, Haoyu; Chen, Xinjian

    2018-01-01

    Optic nerve head (ONH) is a crucial region for glaucoma detection and tracking based on spectral domain optical coherence tomography (SD-OCT) images. In this region, the existence of a “hole” structure makes retinal layer segmentation and analysis very challenging. To improve retinal layer segmentation, we propose a 3D method for ONH centered SD-OCT image segmentation, which is based on a modified graph search algorithm with a shared-hole and locally adaptive constraints. With the proposed method, both the optic disc boundary and nine retinal surfaces can be accurately segmented in SD-OCT images. An overall mean unsigned border positioning error of 7.27 ± 5.40 µm was achieved for layer segmentation, and a mean Dice coefficient of 0.925 ± 0.03 was achieved for optic disc region detection. PMID:29541497

  6. Using non-empirically tuned range-separated functionals with simulated emission bands to model fluorescence lifetimes.

    PubMed

    Wong, Z C; Fan, W Y; Chwee, T S; Sullivan, Michael B

    2017-08-09

    Fluorescence lifetimes were evaluated using TD-DFT under different approximations for the emitting molecule and various exchange-correlation functionals, such as B3LYP, BMK, CAM-B3LYP, LC-BLYP, M06, M06-2X, M11, PBE0, ωB97, ωB97X, LC-BLYP*, and ωB97X* where the range-separation parameters in the last two functionals were tuned in a non-empirical fashion. Changes in the optimised molecular geometries between the ground and electronically excited states were found to affect the quality of the calculated lifetimes significantly, while the inclusion of vibronic features led to further improvements over the assumption of a vertical electronic transition. The LC-BLYP* functional was found to return the most accurate fluorescence lifetimes with unsigned errors that are mostly within 1.5 ns of experimental values.

  7. Electric field theory based approach to search-direction line definition in image segmentation: application to optimal femur-tibia cartilage segmentation in knee-joint 3-D MR

    NASA Astrophysics Data System (ADS)

    Yin, Y.; Sonka, M.

    2010-03-01

    A novel method is presented for definition of search lines in a variety of surface segmentation approaches. The method is inspired by properties of electric field direction lines and is applicable to general-purpose n-D shape-based image segmentation tasks. Its utility is demonstrated in graph construction and optimal segmentation of multiple mutually interacting objects. The properties of the electric field-based graph construction guarantee that inter-object graph connecting lines are non-intersecting and inherently cover the entire object-interaction space. When applied to inter-object cross-surface mapping, our approach generates one-to-one and all-to-all vertex correspondent pairs between the regions of mutual interaction. We demonstrate the benefits of the electric field approach in several examples ranging from relatively simple single-surface segmentation to complex multi-object multi-surface segmentation of femur-tibia cartilage. The performance of our approach is demonstrated in 60 MR images from the Osteoarthritis Initiative (OAI), in which it achieved very good performance as judged by surface positioning errors (average of 0.29 and 0.59 mm for signed and unsigned cartilage positioning errors, respectively).

  8. Signals, resistance to change, and conditioned reinforcement in a multiple schedule.

    PubMed

    Bell, Matthew C; Gomez, Belen E; Kessler, Kira

    2008-06-01

    The effect of signals on resistance to change was evaluated using pigeons responding on a three-component multiple schedule. Each component contained a variable-interval initial link followed by a fixed-time terminal link. One component was an unsignaled-delay schedule, and two were equivalent signaled-delay schedules. After baseline training, resistance to change was assessed through (a) extinction and (b) adding free food to the intercomponent interval. During these tests, the signal stimulus from one of the signaled-delay components (SIG-T) was replaced with the initial-link stimulus from that component, converting it to an unsignaled-delay schedule. That signal stimulus was added to the delay period of the unsignaled-delay component (UNS), converting it to a signaled-delay schedule. The remaining signaled component remained unchanged (SIG-C). Resistance-to-change tests showed that removing the signal had a minimal effect on resistance to change in the SIG-T component compared to the unchanged SIG-C component, except for one block during free-food testing. Adding the signal to the UNS component significantly increased response rates, suggesting that component had low response strength. Interestingly, the effect was in the opposite direction from what is typically observed. Results are consistent with the conclusion that the signal functioned as a conditioned reinforcer and inconsistent with a generalization-decrement explanation.

  9. Multipole moments in the effective fragment potential method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.

    In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA–generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.

  10. Multipole moments in the effective fragment potential method

    DOE PAGES

    Bertoni, Colleen; Slipchenko, Lyudmila V.; Misquitta, Alston J.; ...

    2017-02-17

    In the effective fragment potential (EFP) method the Coulomb potential is represented using a set of multipole moments generated by the distributed multipole analysis (DMA) method. Misquitta, Stone, and Fazeli recently developed a basis space-iterated stockholder atom (BS-ISA) method to generate multipole moments. This study assesses the accuracy of the EFP interaction energies using sets of multipole moments generated from the BS-ISA method, and from several versions of the DMA method (such as analytic and numeric grid-based), with varying basis sets. Both methods lead to reasonable results, although using certain implementations of the DMA method can result in large errors. With respect to the CCSD(T)/CBS interaction energies, the mean unsigned error (MUE) of the EFP method for the S22 data set using BS-ISA–generated multipole moments and DMA-generated multipole moments (using a small basis set and the analytic DMA procedure) is 0.78 and 0.72 kcal/mol, respectively. Here, the MUE accuracy is on the same order as MP2 and SCS-MP2. The MUEs are lower than in a previous study benchmarking the EFP method without the EFP charge transfer term, demonstrating that the charge transfer term increases the accuracy of the EFP method. Regardless of the multipole moment method used, it is likely that much of the error is due to an insufficient short-range electrostatic term (i.e., charge penetration term), as shown by comparisons with symmetry-adapted perturbation theory.

  11. Automatic short axis orientation of the left ventricle in 3D ultrasound recordings

    NASA Astrophysics Data System (ADS)

    Pedrosa, João.; Heyde, Brecht; Heeren, Laurens; Engvall, Jan; Zamorano, Jose; Papachristidis, Alexandros; Edvardsen, Thor; Claus, Piet; D'hooge, Jan

    2016-04-01

    The recent advent of three-dimensional echocardiography has led to an increased interest from the scientific community in left ventricle segmentation frameworks for cardiac volume and function assessment. An automatic orientation of the segmented left ventricular mesh is an important step to obtain a point-to-point correspondence between the mesh and the cardiac anatomy. Furthermore, this would allow for an automatic division of the left ventricle into the standard 17 segments and, thus, fully automatic per-segment analysis, e.g. regional strain assessment. In this work, a method for fully automatic short axis orientation of the segmented left ventricle is presented. The proposed framework aims at detecting the inferior right ventricular insertion point. 211 three-dimensional echocardiographic images were used to validate this framework by comparison to manual annotation of the inferior right ventricular insertion point. A mean unsigned error of 8.05° ± 18.50° was found, whereas the mean signed error was 1.09°. Large deviations between the manual and automatic annotations (> 30°) only occurred in 3.79% of cases. The average computation time was 666 ms in a non-optimized MATLAB environment, which makes real-time application feasible. In conclusion, a successful automatic real-time method for orientation of the segmented left ventricle is proposed.

  12. Curvature correction of retinal OCTs using graph-based geometry detection

    NASA Astrophysics Data System (ADS)

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-05-01

    In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps, the first of which is a denoising algorithm with wavelet diffusion based on a circular symmetric Laplacian model; the second can be described in terms of graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm showed an improvement of contrast-to-noise ratio from 0.89 to 1.49 and an increase of signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method for estimation of the interpolated curve using a fully automatic method, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were obtained for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. The important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast compared to the relatively low speed of similar methods.

  13. Eb&D: A new clustering approach for signed social networks based on both edge-betweenness centrality and density of subgraphs

    NASA Astrophysics Data System (ADS)

    Qi, Xingqin; Song, Huimin; Wu, Jianliang; Fuller, Edgar; Luo, Rong; Zhang, Cun-Quan

    2017-09-01

    Clustering algorithms for unsigned social networks, which have only positive edges, have been studied intensively. However, when a network has like/dislike, love/hate, respect/disrespect, or trust/distrust relationships, unsigned social networks with only positive edges are inadequate. Thus we model such networks as signed networks, which can have both negative and positive edges. Detecting the cluster structures of signed networks is much harder than for unsigned networks, because it requires not only that positive edges within clusters be as many as possible, but also that negative edges between clusters be as many as possible. Currently, few clustering algorithms exist for signed networks, and most of them require the number of final clusters as an input, which is hard to predict beforehand. In this paper, we propose a novel clustering algorithm called Eb&D for signed networks, in which both the betweenness of edges and the density of subgraphs are used to detect cluster structures. A hierarchically nested system is constructed to illustrate the inclusion relationships of clusters. To show the validity and efficiency of Eb&D, we test it on several classical social networks and hundreds of synthetic data sets, obtaining better results than other methods. The biggest advantage of Eb&D over other methods is that the number of clusters does not need to be known in advance.
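
    Eb&D itself is not reproduced in this record, but the role edge betweenness plays in divisive clustering can be sketched in a Girvan-Newman style adapted to signed graphs; the density criterion and stopping rule of the actual algorithm are omitted here:

```python
import networkx as nx

def divisive_signed_clustering(G, max_clusters=4):
    """Girvan-Newman-style divisive clustering sketch for a signed graph.

    G: networkx Graph whose edges carry a 'sign' attribute (+1 or -1).
    Clusters are held together by positive edges only, so negative edges
    automatically fall between clusters; the positive edge with the highest
    betweenness is removed repeatedly.
    """
    pos = nx.Graph((u, v) for u, v, d in G.edges(data=True) if d.get("sign", 1) > 0)
    pos.add_nodes_from(G.nodes)
    while nx.number_connected_components(pos) < max_clusters and pos.number_of_edges() > 0:
        betweenness = nx.edge_betweenness_centrality(pos)
        u, v = max(betweenness, key=betweenness.get)
        pos.remove_edge(u, v)
    return [set(c) for c in nx.connected_components(pos)]
```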

  14. Speed of CMEs and the Magnetic Non-Potentiality of Their Source ARs

    NASA Technical Reports Server (NTRS)

    Tiwari, Sanjiv K.; Falconer, David A.; Moore, Ronald L.; Venkatakrishnan, P.

    2014-01-01

    Most fast coronal mass ejections (CMEs) originate from solar active regions (ARs). The non-potentiality of ARs is expected to determine the speed and size of CMEs in the outer corona, and several other unexplored parameters might be important as well. To find the correlation between the initial speed of CMEs and the non-potentiality of source ARs, we associated over a hundred CMEs with source ARs via their co-produced flares. The speeds of the CMEs were collected from the SOHO LASCO CME catalog. We have used vector magnetograms obtained mainly with HMI/SDO, and with Hinode (SOT/SP) when available within an hour of a CME occurrence, to evaluate various magnetic non-potentiality parameters, e.g. magnetic free-energy proxies, computed magnetic free energy, twist, shear angle, and signed shear angle. We have also included several other parameters, e.g. total unsigned flux, net current, magnetic area of ARs, and area of sunspots, to investigate their correlation, if any, with the initial speeds of CMEs. Our preliminary results show that ARs with larger non-potentiality and area mostly produce fast CMEs, but they can also produce slower ones. ARs with lesser non-potentiality and area generally produce only slower CMEs, with a few exceptions. The total unsigned flux correlates with the non-potentiality parameters and area of ARs, but some ARs with large unsigned flux are also found to be least non-potential. A more detailed analysis is underway.

  15. Application of Free Energy Perturbation for the Design of BACE1 Inhibitors.

    PubMed

    Ciordia, Myriam; Pérez-Benito, Laura; Delgado, Francisca; Trabanco, Andrés A; Tresadern, Gary

    2016-09-26

    Novel spiroaminodihydropyrroles probing for optimized interactions at the P3 pocket of β-secretase 1 (BACE1) were designed with the use of free energy perturbation (FEP) calculations. The resulting molecules showed pIC50 potencies in enzymatic BACE1 inhibition assays ranging from approximately 5 to 7. Good correlation was observed between the predicted activity from the FEP calculations and experimental activity. Simulations run with a default 5 ns approach delivered a mean unsigned error (MUE) between prediction and experiment of 0.58 and 0.91 kcal/mol for retrospective and prospective applications, respectively. With longer simulations of 10 and 20 ns, the MUE was in both cases 0.57 kcal/mol for the retrospective application, and 0.69 and 0.59 kcal/mol for the prospective application. Other considerations that impact the quality of the calculations are discussed. This work provides an example of the value of FEP as a computational tool for drug discovery.
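
    To relate the kcal/mol errors quoted for FEP to the enzymatic pIC50 range, one can use the standard conversion of roughly 1.364 kcal/mol per log unit at 298 K, under the assumption that IC50 ratios track binding free energies:

```python
import math

R = 1.987204e-3   # gas constant in kcal/(mol*K)
T = 298.15        # assumed temperature in K

def ddg_from_pic50(pic50_ref, pic50_new):
    """Relative binding free energy (kcal/mol) implied by a pIC50 difference.

    Assumes IC50 ratios track binding-constant ratios, so
    ddG = -RT * ln(10) * (pIC50_new - pIC50_ref); more potent => more negative.
    """
    return -R * T * math.log(10) * (pic50_new - pic50_ref)

# The assay's pIC50 range of ~5 to ~7 spans about 2.7 kcal/mol, so an FEP
# MUE near 0.6 kcal/mol resolves roughly half a log unit of potency.
print(ddg_from_pic50(5.0, 7.0))   # ~ -2.73
```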

  16. Accurate pKa calculation of the conjugate acids of alkanolamines, alkaloids and nucleotide bases by quantum chemical methods.

    PubMed

    Gangarapu, Satesh; Marcelis, Antonius T M; Zuilhof, Han

    2013-04-02

    The pKa values of the conjugate acids of alkanolamines, neurotransmitters, alkaloid drugs, and nucleotide bases are calculated with density functional methods (B3LYP, M08-HX and M11-L) and ab initio methods (SCS-MP2, G3). Implicit solvent effects are included with a conductor-like polarizable continuum model (CPCM) and universal solvation models (SMD, SM8). G3, SCS-MP2 and M11-L methods coupled with SMD and SM8 solvation models perform well for alkanolamines, with mean unsigned errors below 0.20 pKa units in all cases. Extending this method to the pKa calculation of 35 nitrogen-containing compounds spanning 12 pKa units showed an excellent correlation between experimental and computational pKa values with the computationally low-cost SM8/M11-L density functional approach.
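
    The thermodynamic relation behind such calculations links the aqueous deprotonation free energy to the acidity constant,

\[
\mathrm{p}K_a \;=\; \frac{\Delta G^{\circ}_{\mathrm{deprot,aq}}}{RT\ln 10},
\]

    and at 298 K, RT ln 10 is about 1.364 kcal/mol, so the 0.20 pKa-unit errors quoted above correspond to roughly 0.27 kcal/mol in the computed free energy.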

  17. Simulation of Near-Edge X-ray Absorption Fine Structure with Time-Dependent Equation-of-Motion Coupled-Cluster Theory.

    PubMed

    Nascimento, Daniel R; DePrince, A Eugene

    2017-07-06

    An explicitly time-dependent (TD) approach to equation-of-motion (EOM) coupled-cluster theory with single and double excitations (CCSD) is implemented for simulating near-edge X-ray absorption fine structure in molecular systems. The TD-EOM-CCSD absorption line shape function is given by the Fourier transform of the CCSD dipole autocorrelation function. We represent this transform by its Padé approximant, which provides converged spectra in much shorter simulation times than are required by the Fourier form. The result is a powerful framework for the blackbox simulation of broadband absorption spectra. K-edge X-ray absorption spectra for carbon, nitrogen, and oxygen in several small molecules are obtained from the real part of the absorption line shape function and are compared with experiment. The computed and experimentally obtained spectra are in good agreement; the mean unsigned error in the predicted peak positions is only 1.2 eV. We also explore the spectral signatures of protonation in these molecules.
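
    The Fourier route that the Padé approximant accelerates can be sketched directly; the damping constant and units below are illustrative assumptions, and the Padé step itself is replaced by a plain FFT:

```python
import numpy as np

def lineshape_from_autocorrelation(C, dt, damping=1e-3):
    """Absorption line shape as the Fourier transform of a dipole
    autocorrelation function C(t) sampled every dt (atomic units).

    A small exponential damping broadens the discrete lines into peaks;
    the paper's Pade acceleration is replaced here by a plain FFT.
    """
    t = np.arange(len(C)) * dt
    damped = np.asarray(C) * np.exp(-damping * t)
    sigma = np.fft.fft(damped) * dt
    omega = 2 * np.pi * np.fft.fftfreq(len(C), d=dt)
    order = np.argsort(omega)
    return omega[order], sigma.real[order]   # real part carries the absorption
```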

  18. Standard Operating Procedures and Field Methods Used for Conducting Ecological Risk Assessment Case Studies. Naval Construction Battalion Center, Davisville, Rhode Island, and Naval Shipyard, Portsmouth, Kittery, Maine.

    DTIC Science & Technology

    1992-05-01

    [OCR residue of a methods section: studies by Craig et al. have shown significant evaporative losses of TBT with a chloride extraction/Grignard derivatization method; the remainder is citation fragments (Craig; Aue and Flinn, 1977; Keith et al., 1983; Environmental Research Laboratory, Narragansett, RI).]

  19. Transactions of the Conference of Army Mathematicians (28th) Held at Bethesda, Maryland on 28-30 June 1982.

    DTIC Science & Technology

    1983-02-01

    [OCR residue of conference proceedings. Recoverable fragments: a response function whose real part is the Hilbert transform of its imaginary part (eq. 5.1 of that paper), and the observation that for self-adjoint linear operators A in a Hilbert space H, eigenvalues are critical values of the Rayleigh quotient R(y) = (Ay, y)/(y, y), y ≠ 0. The remainder is an attendee list.]

  20. Left-Turn Bays

    DOT National Transportation Integrated Search

    1996-05-01

    The topic of left-turn bays (left-turn lanes) involves three issues: (1) Warrants, (2) Bay Length, and (3) Design Details. This discussion paper deals with warrants and bay length -- including queue storage at signalized and unsignalized...

  1. Intersection collision warning system

    DOT National Transportation Integrated Search

    1999-04-01

    Safety at unsignalized intersections is a major concern. Intersection collisions are one of the most common types of crash, and in the United States, they account for nearly 2 million accidents and 6,700 fatalities every year. However, a fully signal...

  2. Driver eye-scanning behavior at intersections at night.

    DOT National Transportation Integrated Search

    2009-10-01

    This research project analyzed drivers' eye-scanning behavior at night when approaching signalized and unsignalized intersections, using data from a head-mounted eye-tracking system during open-road driving on a prescribed route. During the ...

  3. Left-turn lanes at unsignalized median openings.

    DOT National Transportation Integrated Search

    2014-03-01

    Due to the frequent presence of median openings in urban arterial settings, the requirements for the deceleration and storage of turning vehicles (e.g. AASHTO Green Book) often exceed the available length between two adjacent openings which leave...

  4. The effects of mobile phone use on pedestrian crossing behaviour at signalized and unsignalized intersections.

    PubMed

    Hatfield, Julie; Murphy, Susanne

    2007-01-01

    Research amongst drivers suggests that pedestrians using mobile telephones may behave riskily while crossing the road, and casual observation suggests concerning levels of pedestrian mobile-use. An observational field survey of 270 females and 276 males was conducted to compare the safety of crossing behaviours for pedestrians using, versus not using, a mobile phone. Amongst females, pedestrians who crossed while talking on a mobile phone crossed more slowly, and were less likely to look at traffic before starting to cross, to wait for traffic to stop, or to look at traffic while crossing, compared to matched controls. For males, pedestrians who crossed while talking on a mobile phone crossed more slowly at unsignalized crossings. These effects suggest that talking on a mobile phone is associated with cognitive distraction that may undermine pedestrian safety. Messages explicitly suggesting techniques for avoiding mobile-use while road crossing may benefit pedestrian safety.

  5. Fracture process zone in granite

    USGS Publications Warehouse

    Zang, A.; Wagner, F.C.; Stanchits, S.; Janssen, C.; Dresen, G.

    2000-01-01

    In uniaxial compression tests performed on Aue granite cores (diameter 50 mm, length 100 mm), a steel loading plate was used to induce the formation of a discrete shear fracture. A zone of distributed microcracks surrounds the tip of the propagating fracture. This process zone is imaged by locating acoustic emission events using 12 piezoceramic sensors attached to the samples. Propagation velocity of the process zone is varied by using the rate of acoustic emissions to control the applied axial force. The resulting velocities range from 2 mm/s in displacement-controlled tests to 2 μm/s in tests controlled by acoustic emission rate. Wave velocities and amplitudes are monitored during fault formation. P waves transmitted through the approaching process zone show a drop in amplitude of 26 dB, and ultrasonic velocities are reduced by 10%. The width of the process zone is ≈9 times the grain diameter inferred from acoustic data but is only 2 times the grain size from optical crack inspection. The process zone of fast propagating fractures is wider than for slow ones. The density of microcracks and acoustic emissions increases approaching the main fracture. Shear displacement scales linearly with fracture length. Fault plane solutions from acoustic events show similar orientation of nodal planes on both sides of the shear fracture. The ratio of the process zone width to the fault length in Aue granite ranges from 0.01 to 0.1, as inferred from crack data and acoustic emissions, respectively. The fracture surface energy is estimated from microstructure analysis to be ≈2 J. A lower bound estimate for the energy dissipated by acoustic events is 0.1 J.

  6. Higher-order aberrations and best-corrected visual acuity in Native American children with a high prevalence of astigmatism

    PubMed Central

    Miller, Joseph M.; Harvey, Erin M.; Schwiegerling, Jim

    2016-01-01

    Purpose To determine whether higher-order aberrations (HOAs) in children from a highly astigmatic population differ from population norms and whether HOAs are associated with astigmatism and reduced best-corrected visual acuity. Methods Subjects were 218 Tohono O’odham Native American children 5–9 years of age. Noncycloplegic HOA measurements were obtained with a handheld Shack-Hartmann sensor (SHS). Signed (z06s to z14s) and unsigned (z06u to z14u) wavefront aberration Zernike coefficients Z(3,−3) to Z(4,4) were rescaled for a 4 mm diameter pupil and compared to adult population norms. Cycloplegic refraction and best-corrected logMAR letter visual acuity (BCVA) were also measured. Regression analyses assessed the contribution of astigmatism (J0) and HOAs to BCVA. Results The mean root-mean-square (RMS) HOA of 0.191 ± 0.072 μm was significantly greater than population norms (0.100 ± 0.044 μm). All unsigned HOA coefficients (z06u to z14u) and all signed coefficients except z09s, z10s, and z11s were significantly larger than population norms. Decreased BCVA was associated with astigmatism (J0) and spherical aberration (z12u) but not RMS coma, with the effect of J0 about 4 times as great as z12u. Conclusions Tohono O’odham children show elevated HOAs compared to population norms. Astigmatism and unsigned spherical aberration are associated with decreased acuity, but the effects of spherical aberration are minimal and not clinically significant. PMID:26239206
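
    The RMS figures quoted here follow from the orthonormality of the Zernike expansion: total RMS wavefront error is the Euclidean norm of the coefficient vector. A minimal sketch (coefficient ordering hypothetical):

```python
import numpy as np

def rms_hoa(zernike_coeffs_um):
    """Total RMS higher-order aberration from Zernike coefficients (microns).

    With an orthonormal Zernike basis, the RMS wavefront error is just the
    Euclidean norm of the coefficient vector, e.g. z6..z14 here, rescaled
    to a 4 mm pupil as in the study.
    """
    c = np.asarray(zernike_coeffs_um, dtype=float)
    return float(np.sqrt(np.sum(c ** 2)))

# Nine third- and fourth-order coefficients of ~0.064 um each would give
# rms_hoa(...) ~ 0.19 um, the mean reported above.
```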

  7. The role of 'jackpot' stimuli in maladaptive decision-making: dissociable effects of D1/D2 receptor agonists and antagonists.

    PubMed

    Smith, Aaron P; Hofford, Rebecca S; Zentall, Thomas R; Beckmann, Joshua S

    2018-05-01

    Laboratory experiments often model risk through a choice between a large, uncertain (LU) reward and a small, certain (SC) reward as an index of an individual's risk tolerance. An important factor generally lacking from these procedures is reward-associated cues that may modulate risk preferences. We tested whether the addition of cues signaling 'jackpot' wins to LU choices would modulate risk preferences, and whether these cue effects were mediated by dopaminergic signaling. Three groups of rats chose between LU and SC rewards for which the LU probability of reward decreased across blocks. The unsignaled group received a non-informative stimulus of trial outcome. The signaled group received a jackpot signal prior to reward delivery and a blackout on losses. The signaled-light group received a similar jackpot for wins, but a salient loss signal distinct from the win signal. Presenting win signals decreased the discounting of LU value for both signaled groups regardless of loss signal, while the unsignaled group showed discounting similar to previous research without cues. Pharmacological challenges with D1/D2 agonists and antagonists revealed that D1 antagonism increased and decreased sensitivities to the relative probability of reward for unsignaled and signaled groups, respectively, while D2 agonists decreased sensitivities to the relative magnitude of reward. The results highlight how signals predictive of wins can promote maladaptive risk taking, while loss signals have a reduced effect. Additionally, the presence of reward-predictive cues may change the underlying neurobehavioral mechanisms mediating decision-making under risk.

  8. Traffic operational evaluation of traffic impact analysis (TIA) case sites.

    DOT National Transportation Integrated Search

    2010-09-22

    This report summarizes traffic operational evaluation of six select traffic impact analysis (TIA) case sites and the effectiveness of forecasting methods used in TIA studies. Six TIA case sites comprising 15 signalized intersections and 2 unsignalize...

  9. Empirically-based performance assessment & simulation of pedestrian behavior at unsignalized crossings.

    DOT National Transportation Integrated Search

    2014-09-01

    The objective of this research was to provide an improved understanding of pedestrian-vehicle interaction at mid-block pedestrian crossings and develop methods that can be used in traffic operational analysis and microsimulation packages. Models ...

  10. DNS Rebinding Attacks

    DTIC Science & Technology

    2009-09-01

    active scripting, file downloads, installation of desktop items, signed and unsigned ActiveX controls, Java permissions, launching applications and...files in an IFRAME, running ActiveX controls and plug-ins, and scripting of Java applets [49]. This security measure is very effective against DNS

  11. Calculating pKa values for substituted phenols and hydration energies for other compounds with the first-order Fuzzy-Border continuum solvation model

    PubMed Central

    Sharma, Ity; Kaminski, George A.

    2012-01-01

    We have computed pKa values for eleven substituted phenol compounds using the continuum Fuzzy-Border (FB) solvation model. Hydration energies for 40 other compounds, including alkanes, alkenes, alkynes, ketones, amines, alcohols, ethers, aromatics, amides, heterocycles, thiols, sulfides and acids have been calculated. The overall average unsigned error in the calculated acidity constant values was equal to 0.41 pH units and the average error in the solvation energies was 0.076 kcal/mol. We have also reproduced pKa values of propanoic and butanoic acids within ca. 0.1 pH units from the experimental values by fitting the solvation parameters for carboxylate ion carbon and oxygen atoms. The FB model combines two distinguishing features. First, it limits the amount of noise which is common in numerical treatment of continuum solvation models by using fixed-position grid points. Second, it employs either second- or first-order approximation for the solvent polarization, depending on a particular implementation. These approximations are similar to those used for solute and explicit solvent fast polarization treatment which we developed previously. This article describes results of employing the first-order technique. This approximation places the presented methodology between the Generalized Born and Poisson-Boltzmann continuum solvation models with respect to their accuracy of reproducing the many-body effects in modeling a continuum solvent. PMID:22815192

  12. Roadway lighting and safety : phase II--monitoring quality, durability and efficiency.

    DOT National Transportation Integrated Search

    2011-10-01

    This Phase II project follows a previous project titled Strategies to Address Nighttime Crashes at Rural, Unsignalized Intersections. Based on the results of the previous study, the Iowa Highway Research Board (IHRB) indicated interest in pursuing fu...

  13. Public roads : a journal of highway research. Vol. 25, No. 10

    DOT National Transportation Integrated Search

    1949-10-01

    In this issue of Public Roads appears the first portion of an important work on highway capacity and its practical applications. The second half of the report, dealing with intersections at grade, weaving sections and unsignalized cross movements, ra...

  14. Safety, operational, and energy impacts of in-vehicle adaptive stop displays using connected vehicle technology.

    DOT National Transportation Integrated Search

    2015-07-01

    Un-signalized intersections create multiple opportunities for missed or misunderstood information. Stop sign-controlled intersections have also been shown to be a source of delay and emissions due to their frequent, often inappropriate use. By us...

  15. Effects of a Single Intra-Articular Injection of a Microsphere Formulation of Triamcinolone Acetonide on Knee Osteoarthritis Pain: A Double-Blinded, Randomized, Placebo-Controlled, Multinational Study.

    PubMed

    Conaghan, Philip G; Hunter, David J; Cohen, Stanley B; Kraus, Virginia B; Berenbaum, Francis; Lieberman, Jay R; Jones, Deryk G; Spitzer, Andrew I; Jevsevar, David S; Katz, Nathaniel P; Burgess, Diane J; Lufkin, Joelle; Johnson, James R; Bodick, Neil

    2018-04-18

    Intra-articular corticosteroids relieve osteoarthritis pain, but rapid systemic absorption limits efficacy. FX006, a novel, microsphere-based, extended-release triamcinolone acetonide (TA) formulation, prolongs TA joint residence and reduces systemic exposure compared with standard TA crystalline suspension (TAcs). We assessed symptomatic benefits and safety of FX006 compared with saline-solution placebo and TAcs. In this Phase-3, multicenter, double-blinded, 24-week study, adults ≥40 years of age with knee osteoarthritis (Kellgren-Lawrence grade 2 or 3) and average-daily-pain (ADP)-intensity scores of ≥5 and ≤9 (0 to 10 numeric rating scale) were centrally randomized (1:1:1) to a single intra-articular injection of FX006 (32 mg), saline-solution placebo, or TAcs (40 mg). The primary end point was change from baseline to week 12 in weekly mean ADP-intensity scores for FX006 compared with saline-solution placebo. Secondary end points were area-under-effect (AUE) curves of the change in weekly mean ADP-intensity scores from baseline to week 12 for FX006 compared with saline-solution placebo, AUE curves of the change in weekly mean ADP-intensity scores from baseline to week 12 for FX006 compared with TAcs, change in weekly mean ADP-intensity scores from baseline to week 12 for FX006 compared with TAcs, and AUE curves of the change in weekly mean ADP-intensity scores from baseline to week 24 for FX006 compared with saline-solution placebo. Exploratory end points included week-12 changes in Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Knee Injury and Osteoarthritis Outcome Score Quality of Life (KOOS-QOL) subscale scores for FX006 compared with saline-solution placebo and TAcs. Adverse events were elicited at each inpatient visit. The primary end point was met. Among 484 treated patients (n = 161 for FX006, n = 162 for saline-solution placebo, and n = 161 for TAcs), FX006 provided significant week-12 improvement in ADP intensity compared with that observed for saline-solution placebo (least-squares mean change from baseline: -3.12 versus -2.14; p < 0.0001) indicating ∼50% improvement. FX006 afforded improvements over saline-solution placebo for all secondary and exploratory end points (p < 0.05). Improvements in osteoarthritis pain were not significant for FX006 compared with TAcs using the ADP-based secondary measures. Exploratory analyses of WOMAC-A, B, and C and KOOS-QOL subscales favored FX006 (p ≤ 0.05). Adverse events were generally mild, occurring at similar frequencies across treatments. FX006 provided significant, clinically meaningful pain reduction compared with saline-solution placebo at week 12 (primary end point). Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence.

  16. Effects of a Single Intra-Articular Injection of a Microsphere Formulation of Triamcinolone Acetonide on Knee Osteoarthritis Pain

    PubMed Central

    Conaghan, Philip G.; Hunter, David J.; Cohen, Stanley B.; Kraus, Virginia B.; Berenbaum, Francis; Lieberman, Jay R.; Jones, Deryk G.; Spitzer, Andrew I.; Jevsevar, David S.; Katz, Nathaniel P.; Burgess, Diane J.; Lufkin, Joelle; Johnson, James R.; Bodick, Neil

    2018-01-01

    Background: Intra-articular corticosteroids relieve osteoarthritis pain, but rapid systemic absorption limits efficacy. FX006, a novel, microsphere-based, extended-release triamcinolone acetonide (TA) formulation, prolongs TA joint residence and reduces systemic exposure compared with standard TA crystalline suspension (TAcs). We assessed symptomatic benefits and safety of FX006 compared with saline-solution placebo and TAcs. Methods: In this Phase-3, multicenter, double-blinded, 24-week study, adults ≥40 years of age with knee osteoarthritis (Kellgren-Lawrence grade 2 or 3) and average-daily-pain (ADP)-intensity scores of ≥5 and ≤9 (0 to 10 numeric rating scale) were centrally randomized (1:1:1) to a single intra-articular injection of FX006 (32 mg), saline-solution placebo, or TAcs (40 mg). The primary end point was change from baseline to week 12 in weekly mean ADP-intensity scores for FX006 compared with saline-solution placebo. Secondary end points were area-under-effect (AUE) curves of the change in weekly mean ADP-intensity scores from baseline to week 12 for FX006 compared with saline-solution placebo, AUE curves of the change in weekly mean ADP-intensity scores from baseline to week 12 for FX006 compared with TAcs, change in weekly mean ADP-intensity scores from baseline to week 12 for FX006 compared with TAcs, and AUE curves of the change in weekly mean ADP-intensity scores from baseline to week 24 for FX006 compared with saline-solution placebo. Exploratory end points included week-12 changes in Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) and Knee Injury and Osteoarthritis Outcome Score Quality of Life (KOOS-QOL) subscale scores for FX006 compared with saline-solution placebo and TAcs. Adverse events were elicited at each inpatient visit. Results: The primary end point was met. Among 484 treated patients (n = 161 for FX006, n = 162 for saline-solution placebo, and n = 161 for TAcs), FX006 provided significant week-12 improvement in ADP intensity compared with that observed for saline-solution placebo (least-squares mean change from baseline: −3.12 versus −2.14; p < 0.0001) indicating ∼50% improvement. FX006 afforded improvements over saline-solution placebo for all secondary and exploratory end points (p < 0.05). Improvements in osteoarthritis pain were not significant for FX006 compared with TAcs using the ADP-based secondary measures. Exploratory analyses of WOMAC-A, B, and C and KOOS-QOL subscales favored FX006 (p ≤ 0.05). Adverse events were generally mild, occurring at similar frequencies across treatments. Conclusions: FX006 provided significant, clinically meaningful pain reduction compared with saline-solution placebo at week 12 (primary end point). Level of Evidence: Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence. PMID:29664853

  17. PRODUCTIVITY OF SOLAR FLARES AND MAGNETIC HELICITY INJECTION IN ACTIVE REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sung-hong; Wang Haimin; Chae, Jongchul, E-mail: sp295@njit.ed

    The main objective of this study is to better understand how magnetic helicity injection in an active region (AR) is related to the occurrence and intensity of solar flares. We therefore investigate the magnetic helicity injection rate and unsigned magnetic flux, as a reference. In total, 378 ARs are analyzed using SOHO/MDI magnetograms. The 24 hr averaged helicity injection rate and unsigned magnetic flux are compared with the flare index and the flare-productive probability in the next 24 hr following a measurement. In addition, we study the variation of helicity over a span of several days around the times of the 19 flares above M5.0 which occurred in selected strong flare-productive ARs. The major findings of this study are as follows: (1) for a sub-sample of 91 large ARs with unsigned magnetic fluxes in the range of (3-5) × 10²² Mx, there is a difference in the magnetic helicity injection rate between flaring ARs and non-flaring ARs by a factor of 2; (2) the GOES C-flare-productive probability as a function of helicity injection displays a sharp boundary between flare-productive ARs and flare-quiet ones; (3) the history of helicity injection before all the 19 major flares displayed a common characteristic: a significant helicity accumulation of (3-45) × 10⁴² Mx² during a phase of monotonically increasing helicity over 0.5-2 days. Our results support the notion that helicity injection is important in flares, but it is not effective to use it alone for the purpose of flare forecast. It is necessary to find a way to better characterize the time history of helicity injection as well as its spatial distribution inside ARs.

  18. Signed weighted gene co-expression network analysis of transcriptional regulation in murine embryonic stem cells

    PubMed Central

    Mason, Mike J; Fan, Guoping; Plath, Kathrin; Zhou, Qing; Horvath, Steve

    2009-01-01

    Background Recent work has revealed that a core group of transcription factors (TFs) regulates the key characteristics of embryonic stem (ES) cells: pluripotency and self-renewal. Current efforts focus on identifying genes that play important roles in maintaining pluripotency and self-renewal in ES cells and aim to understand the interactions among these genes. To that end, we investigated the use of unsigned and signed network analysis to identify pluripotency and differentiation related genes. Results We show that signed networks provide a better systems level understanding of the regulatory mechanisms of ES cells than unsigned networks, using two independent murine ES cell expression data sets. Specifically, using signed weighted gene co-expression network analysis (WGCNA), we found a pluripotency module and a differentiation module, which are not identified in unsigned networks. We confirmed the importance of these modules by incorporating genome-wide TF binding data for key ES cell regulators. Interestingly, we find that the pluripotency module is enriched with genes related to DNA damage repair and mitochondrial function in addition to transcriptional regulation. Using a connectivity measure of module membership, we not only identify known regulators of ES cells but also show that Mrpl15, Msh6, Nrf1, Nup133, Ppif, Rbpj, Sh3gl2, and Zfp39, among other genes, have important roles in maintaining ES cell pluripotency and self-renewal. We also report highly significant relationships between module membership and epigenetic modifications (histone modifications and promoter CpG methylation status), which are known to play a role in controlling gene expression during ES cell self-renewal and differentiation. Conclusion Our systems biologic re-analysis of gene expression, transcription factor binding, epigenetic and gene ontology data provides a novel integrative view of ES cell biology. PMID:19619308
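
    The signed/unsigned distinction above comes down to how gene-gene correlations are mapped to edge weights. A minimal sketch of the two standard WGCNA adjacency transformations (the soft-threshold power beta = 6 is a common default, not a value taken from this study):

        import numpy as np

        def wgcna_adjacency(expr, beta=6, signed=True):
            # expr: samples x genes expression matrix.
            # Signed networks map correlation r to ((1 + r) / 2) ** beta,
            # pushing negative correlations toward zero weight; unsigned
            # networks use |r| ** beta, which conflates positive and
            # negative co-expression.
            r = np.corrcoef(expr, rowvar=False)
            a = ((1 + r) / 2) ** beta if signed else np.abs(r) ** beta
            np.fill_diagonal(a, 0)  # no self-connections
            return a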

  19. Productivity of Solar Flares and Magnetic Helicity Injection in Active Regions

    NASA Astrophysics Data System (ADS)

    Park, Sung-hong; Chae, Jongchul; Wang, Haimin

    2010-07-01

    The main objective of this study is to better understand how magnetic helicity injection in an active region (AR) is related to the occurrence and intensity of solar flares. We therefore investigate the magnetic helicity injection rate and unsigned magnetic flux, as a reference. In total, 378 ARs are analyzed using SOHO/MDI magnetograms. The 24 hr averaged helicity injection rate and unsigned magnetic flux are compared with the flare index and the flare-productive probability in the next 24 hr following a measurement. In addition, we study the variation of helicity over a span of several days around the times of the 19 flares above M5.0 which occurred in selected strong flare-productive ARs. The major findings of this study are as follows: (1) for a sub-sample of 91 large ARs with unsigned magnetic fluxes in the range of (3-5) × 10²² Mx, there is a difference in the magnetic helicity injection rate between flaring ARs and non-flaring ARs by a factor of 2; (2) the GOES C-flare-productive probability as a function of helicity injection displays a sharp boundary between flare-productive ARs and flare-quiet ones; (3) the history of helicity injection before all the 19 major flares displayed a common characteristic: a significant helicity accumulation of (3-45) × 10⁴² Mx² during a phase of monotonically increasing helicity over 0.5-2 days. Our results support the notion that helicity injection is important in flares, but it is not effective to use it alone for the purpose of flare forecast. It is necessary to find a way to better characterize the time history of helicity injection as well as its spatial distribution inside ARs.

  20. Evaluating Parametrization Protocols for Hydration Free Energy Calculations with the AMOEBA Polarizable Force Field.

    PubMed

    Bradshaw, Richard T; Essex, Jonathan W

    2016-08-09

    Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
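
    Signed and unsigned error metrics of the kind used to compare the optimized AMOEBA results against GAFF are straightforward to compute; a minimal sketch:

        import numpy as np

        def error_metrics(calc, ref):
            # The mean signed error exposes systematic bias; the average
            # unsigned error (AUE) and RMSE summarize overall accuracy.
            d = np.asarray(calc) - np.asarray(ref)
            return {"mean signed error": d.mean(),
                    "average unsigned error": np.abs(d).mean(),
                    "rmse": np.sqrt((d ** 2).mean())}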

  1. Atomic Charge Parameters for the Finite Difference Poisson-Boltzmann Method Using Electronegativity Neutralization.

    PubMed

    Yang, Qingyi; Sharp, Kim A

    2006-07-01

    An optimization of Rappe and Goddard's charge equilibration (QEq) method of assigning atomic partial charges is described. This optimization is designed for fast and accurate calculation of solvation free energies using the finite difference Poisson-Boltzmann (FDPB) method. The optimization is performed against experimental small molecule solvation free energies using the FDPB method and adjusting Rappe and Goddard's atomic electronegativity values. Using a test set of compounds for which experimental solvation energies are available and a rather small number of parameters, very good agreement was obtained with experiment, with a mean unsigned error of about 0.5 kcal/mol. The QEq atomic partial charge assignment method can reflect the effects of the conformational changes and solvent induction on charge distribution in molecules. In the second section of the paper we examined this feature with a study of the alanine dipeptide conformations in water solvent. The different contributions to the energy surface of the dipeptide were examined and compared with the results from fixed CHARMm charge potential, which is widely used for molecular dynamics studies.
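
    At its core, charge equilibration solves a small linear system: all atoms are brought to a common chemical potential subject to total-charge conservation. A minimal sketch of that linearized core (full QEq also uses shielded Coulomb integrals and a charge-dependent hydrogen term, omitted here):

        import numpy as np

        def qeq_charges(chi, J, total_charge=0.0):
            # chi: per-atom electronegativities; J: hardness/Coulomb matrix.
            # Solve chi_i + sum_j J_ij q_j = mu for every atom i, with
            # sum_i q_i = Q, as one linear system in (q, mu).
            n = len(chi)
            A = np.zeros((n + 1, n + 1))
            A[:n, :n] = J
            A[:n, n] = -1.0   # equal-chemical-potential column
            A[n, :n] = 1.0    # charge-conservation row
            b = np.concatenate([-np.asarray(chi, dtype=float), [total_charge]])
            sol = np.linalg.solve(A, b)
            return sol[:n]    # partial charges; sol[n] is the potential mu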

  2. Differential Item Functioning Detection Across Two Methods of Defining Group Comparisons

    PubMed Central

    Sari, Halil Ibrahim

    2014-01-01

    This study compares two methods of defining groups for the detection of differential item functioning (DIF): (a) pairwise comparisons and (b) composite group comparisons. We aim to emphasize and empirically support the notion that the choice of pairwise versus composite group definitions in DIF is a reflection of how one defines fairness in DIF studies. In this study, a simulation was conducted based on data from a 60-item ACT Mathematics test (ACT; Hanson & Béguin). The unsigned area measure method (Raju) was used as the DIF detection method. An application to operational data was also completed in the study, as well as a comparison of observed Type I error rates and false discovery rates across the two methods of defining groups. Results indicate that the amount of flagged DIF or interpretations about DIF in all conditions were not the same across the two methods, and there may be some benefits to using composite group approaches. The results are discussed in connection to differing definitions of fairness. Recommendations for practice are made. PMID:29795837
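
    Raju's area measures quantify DIF as the signed or unsigned area between the reference- and focal-group item characteristic curves. Closed forms exist for some models; a minimal numerical sketch on a theta grid (the 3PL parameterization, grid limits, and item parameters below are illustrative assumptions):

        import numpy as np

        def icc(theta, a, b, c=0.0):
            # Three-parameter logistic item characteristic curve.
            return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

        def raju_areas(ref, focal, lo=-6.0, hi=6.0, n=2001):
            # Signed area lets differences cancel across theta;
            # unsigned area accumulates them regardless of direction.
            theta = np.linspace(lo, hi, n)
            diff = icc(theta, *ref) - icc(theta, *focal)
            return np.trapz(diff, theta), np.trapz(np.abs(diff), theta)

        signed, unsigned = raju_areas((1.2, 0.0), (1.2, 0.5))
        print(f"signed = {signed:.3f}, unsigned = {unsigned:.3f}")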

  3. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model.

    PubMed

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M; Phifer, Jeremy R; Paluch, Andrew S

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.

  4. Resistance to Change and Relapse of Observing

    ERIC Educational Resources Information Center

    Thrailkill, Eric A.; Shahan, Timothy A.

    2012-01-01

    Four experiments examined relapse of extinguished observing behavior of pigeons using a two-component multiple schedule of observing-response procedures. In both components, unsignaled periods of variable-interval (VI) food reinforcement alternated with extinction and observing responses produced stimuli associated with the availability of the VI…

  5. Pedestrian crosswalk case studies : Sacramento, California; Richmond, Virginia; Buffalo, New York; Stillwater, Minnesota

    DOT National Transportation Integrated Search

    2001-08-01

    The objective of this research was to determine the effect of crosswalk markings on driver and pedestrian behavior at unsignalized intersections. A before/after evaluation of crosswalk markings was conducted at 11 locations in 4 U.S. cities. Behavior...

  6. Development of left-turn lane guidelines for signalized and unsignalized intersections.

    DOT National Transportation Integrated Search

    2004-01-01

    It is generally accepted that the level of service (LOS) at intersections significantly affects the overall LOS of the road system. It is also known that the LOS at an intersection can be adversely affected by frequently allowing left-turning vehicle...

  7. L&D Manual Turn Lane Storage Validation/Update

    DOT National Transportation Integrated Search

    2012-08-01

    Queuing occurs at intersections mostly due to overflow or inadequacy of turn bays. The ODOT L&D Manual Volume 1 has storage requirements for both signalized and unsignalized intersections. Figures 401-9E and 401-10E of the L&D Manual provide the ...

  8. Signed-negabinary-arithmetic-based optical computing by use of a single liquid-crystal-display panel.

    PubMed

    Datta, Asit K; Munshi, Soumika

    2002-03-10

    Based on the negabinary number representation, parallel one-step arithmetic operations (that is, addition and subtraction), logical operations, and matrix-vector multiplication on data have been optically implemented, by use of a two-dimensional spatial-encoding technique. For addition and subtraction, one of the operands in decimal form is converted into the unsigned negabinary form, whereas the other decimal number is represented in the signed negabinary form. The result of operation is obtained in the mixed negabinary form and is converted back into decimal. Matrix-vector multiplication for unsigned negabinary numbers is achieved through the convolution technique. Both of the operands for logical operation are converted to their signed negabinary forms. All operations are implemented by use of a unique optical architecture. The use of a single liquid-crystal-display panel to spatially encode the input data, operational kernels, and decoding masks have simplified the architecture as well as reduced the cost and complexity.
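
    For background on the encoding: base -2 (negabinary) represents every integer, positive or negative, with the unsigned digits 0 and 1. A minimal conversion sketch:

        def to_negabinary(n):
            # Repeated division by -2, forcing each remainder into {0, 1};
            # digit k carries weight (-2) ** k.
            if n == 0:
                return "0"
            digits = []
            while n != 0:
                n, r = divmod(n, -2)
                if r < 0:
                    n += 1
                    r += 2
                digits.append(str(r))
            return "".join(reversed(digits))

        # 6 -> "11010" (16 - 8 - 2) and -6 -> "1110" (-8 + 4 - 2)
        print(to_negabinary(6), to_negabinary(-6))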

  9. Generalization of Clustering Coefficients to Signed Correlation Networks

    PubMed Central

    Costantini, Giulio; Perugini, Marco

    2014-01-01

    The recent interest in network analysis applications in personality psychology and psychopathology has put forward new methodological challenges. Personality and psychopathology networks are typically based on correlation matrices and therefore include both positive and negative edge signs. However, some applications of network analysis disregard negative edges, such as computing clustering coefficients. In this contribution, we illustrate the importance of the distinction between positive and negative edges in networks based on correlation matrices. The clustering coefficient is generalized to signed correlation networks: three new indices are introduced that take edge signs into account, each derived from an existing and widely used formula. The performances of the new indices are illustrated and compared with the performances of the unsigned indices, both on a signed simulated network and on a signed network based on actual personality psychology data. The results show that the new indices are more resistant to sample variations in correlation networks and therefore have higher convergence compared with the unsigned indices both in simulated networks and with real data. PMID:24586367
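
    One way such a generalization can look, sketched under the assumption of a Zhang-Horvath-style weighted coefficient: triangle products keep their signs in the numerator while the denominator uses absolute weights, so balanced triangles raise and unbalanced triangles lower a node's coefficient (an illustration of the idea, not the exact indices from the paper):

        import numpy as np

        def signed_clustering(W):
            # W: signed weighted adjacency, e.g. a correlation matrix.
            W = np.array(W, dtype=float)
            np.fill_diagonal(W, 0)
            A = np.abs(W)
            num = np.diagonal(W @ W @ W)                     # signed triangles
            den = A.sum(axis=1) ** 2 - (A ** 2).sum(axis=1)  # weighted pairs
            return np.where(den > 0, num / den, 0.0)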

  10. Estimation of Critical Gap Based on Raff's Definition

    PubMed Central

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, vehicle arrivals in the major stream and the minor stream are assumed to be independent, and the headways of the major stream follow the M3 distribution. Based on Raff's definition of critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model; both use the total rejected coefficient. The models are compared by simulation and found to be valid. The conclusion reveals that the M3 definition model is simple and valid. The revised Raff's model strictly obeys the definition of Raff's critical gap, has a wider field of application than the original Raff's model, and gives a more accurate result. The M3 definition model and the revised Raff's model yield consistent results. PMID:25574160
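
    A minimal sketch of Raff's graphical definition: the critical gap is the value of t where the cumulative share of accepted gaps shorter than t crosses the share of rejected gaps longer than t (the field observations below are hypothetical):

        import numpy as np

        def raff_critical_gap(accepted, rejected, n_grid=1000):
            acc, rej = np.sort(accepted), np.sort(rejected)
            grid = np.linspace(0.0, max(acc[-1], rej[-1]), n_grid)
            f_acc = np.searchsorted(acc, grid) / acc.size      # P(accepted < t)
            g_rej = 1 - np.searchsorted(rej, grid) / rej.size  # P(rejected > t)
            return grid[np.argmin(np.abs(f_acc - g_rej))]

        acc = np.array([4.1, 4.8, 5.2, 5.9, 6.5, 7.3, 8.0])  # seconds
        rej = np.array([1.2, 1.9, 2.5, 3.1, 3.8, 4.4, 5.0])
        print(f"critical gap ~ {raff_critical_gap(acc, rej):.2f} s")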

  11. Estimation of critical gap based on Raff's definition.

    PubMed

    Guo, Rui-jun; Wang, Xiao-jing; Wang, Wan-xiang

    2014-01-01

    Critical gap is an important parameter used to calculate the capacity and delay of the minor road in the gap acceptance theory of unsignalized intersections. At an unsignalized intersection with two one-way traffic flows, vehicle arrivals in the major stream and the minor stream are assumed to be independent, and the headways of the major stream follow the M3 distribution. Based on Raff's definition of critical gap, two calculation models are derived, named the M3 definition model and the revised Raff's model; both use the total rejected coefficient. The models are compared by simulation and found to be valid. The conclusion reveals that the M3 definition model is simple and valid. The revised Raff's model strictly obeys the definition of Raff's critical gap, has a wider field of application than the original Raff's model, and gives a more accurate result. The M3 definition model and the revised Raff's model yield consistent results.

  12. Accuracy of free energies of hydration using CM1 and CM3 atomic charges.

    PubMed

    Udier-Blagović, Marina; Morales De Tirado, Patricia; Pearlman, Shoshannah A; Jorgensen, William L

    2004-08-01

    Absolute free energies of hydration (ΔGhyd) have been computed for 25 diverse organic molecules using partial atomic charges derived from AM1 and PM3 wave functions via the CM1 and CM3 procedures of Cramer, Truhlar, and coworkers. Comparisons are made with results using charges fit to the electrostatic potential surface (EPS) from ab initio 6-31G* wave functions and from the OPLS-AA force field. OPLS Lennard-Jones parameters for the organic molecules were used together with the TIP4P water model in Monte Carlo simulations with free energy perturbation theory. Absolute free energies of hydration were computed for OPLS united-atom and all-atom methane by annihilating the solutes in water and in the gas phase, and absolute ΔGhyd values for all other molecules were computed via transformation to one of these references. Optimal charge scaling factors were determined by minimizing the unsigned average error between experimental and calculated hydration free energies. The PM3-based charge models do not lead to lower average errors than obtained with the EPS charges for the subset of 13 molecules in the original study. However, improvement is obtained by scaling the CM1A partial charges by 1.14 and the CM3A charges by 1.15, which leads to average errors of 1.0 and 1.1 kcal/mol for the full set of 25 molecules. The scaled CM1A charges also yield the best results for the hydration of amides including the E/Z free-energy difference for N-methylacetamide in water. Copyright 2004 Wiley Periodicals, Inc.
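
    Finding an optimal charge scaling factor is a one-dimensional minimization of the unsigned error. A schematic sketch that assumes, as a rough linear-response shortcut, that the electrostatic component of each hydration free energy scales quadratically with a uniform charge scale; the actual protocol reruns the free energy perturbation simulations for each candidate scaling:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def optimal_scale(dg_elec, dg_nonpolar, dg_exp):
            # Approximate model: DG_hyd(s) ~ s**2 * DG_elec + DG_nonpolar.
            dg_elec, dg_nonpolar, dg_exp = map(
                np.asarray, (dg_elec, dg_nonpolar, dg_exp))
            aue = lambda s: np.abs(s ** 2 * dg_elec + dg_nonpolar - dg_exp).mean()
            res = minimize_scalar(aue, bounds=(0.8, 1.5), method="bounded")
            return res.x, res.fun  # best scale factor and its unsigned error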

  13. Conditioned Reinforcement Value and Resistance to Change

    ERIC Educational Resources Information Center

    Shahan, Timothy A.; Podlesnik, Christopher A.

    2008-01-01

    Three experiments examined the effects of conditioned reinforcement value and primary reinforcement rate on resistance to change using a multiple schedule of observing-response procedures with pigeons. In the absence of observing responses in both components, unsignaled periods of variable-interval (VI) schedule food reinforcement alternated with…

  14. The Fail-Proof Student

    ERIC Educational Resources Information Center

    Fiamengo, Janice

    2013-01-01

    In this article, the author comments on an unsigned newspaper piece titled "Helping Talent Rise to the Top," printed in Canada's "Globe and Mail" about a new measure to enhance student well-being at Queen's University in Kingston, Ontario. The "Globe" piece lauds Queen's, a top-ranked Canadian undergraduate school,…

  15. PIPSI/Navy - Rapid Evaluation of Propulsion System Effects for the Navy Gas Turbine Engine Code, NEPCOMP

    DTIC Science & Technology

    1979-10-11

    FILE NAME / FILE CONTENTS: BOEING1, Dry Turbofan Engine Uninstalled Data; BOEING2, Afterburning Turbofan Engine Uninstalled Data; BONGTJD, Dry Turbojet Engine Uninstalled Data; BONGTJW, Afterburning Turbojet Engine Uninstalled Data; NADC7, Source Deck.

  16. Localized orbital corrections applied to thermochemical errors in density functional theory: The role of basis set and application to molecular reactions

    NASA Astrophysics Data System (ADS)

    Goldfeld, Dahlia A.; Bochevarov, Arteum D.; Friesner, Richard A.

    2008-12-01

    This paper is a logical continuation of the 22 parameter, localized orbital correction (LOC) methodology that we developed in previous papers [R. A. Friesner et al., J. Chem. Phys. 125, 124107 (2006); E. H. Knoll and R. A. Friesner, J. Phys. Chem. B 110, 18787 (2006)]. This methodology allows one to redress systematic density functional theory (DFT) errors, rooted in DFT's inherent inability to accurately describe nondynamical correlation. Variants of the LOC scheme, in conjunction with B3LYP (denoted as B3LYP-LOC), were previously applied to enthalpies of formation, ionization potentials, and electron affinities and showed impressive reduction in the errors. In this paper, we demonstrate for the first time that the B3LYP-LOC scheme is robust across different basis sets [6-31G*, 6-311++G(3df,3pd), cc-pVTZ, and aug-cc-pVTZ] and reaction types (atomization reactions and molecular reactions). For example, for a test set of 70 molecular reactions, the LOC scheme reduces their mean unsigned error from 4.7 kcal/mol [obtained with B3LYP/6-311++G(3df,3pd)] to 0.8 kcal/mol. We also verified whether the LOC methodology would be equally successful if applied to the promising M05-2X functional. We conclude that although M05-2X produces better reaction enthalpies than B3LYP, the LOC scheme does not combine nearly as successfully with M05-2X as with B3LYP. A brief analysis of another functional, M06-2X, reveals that it is more accurate than M05-2X but its combination with LOC still cannot compete in accuracy with B3LYP-LOC. Indeed, B3LYP-LOC remains the best method of computing reaction enthalpies.
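
    The LOC correction itself is a cheap additive post-processing step. A schematic sketch (the environment labels and parameter values are placeholders, not the published 22-parameter set):

        def loc_corrected_energy(e_dft, counts, params):
            # Add one empirical correction per localized orbital/bond type,
            # weighted by how many times that environment occurs.
            return e_dft + sum(counts[k] * params[k] for k in counts)

        # Hypothetical molecule with 4 C-H and 1 C=O environments,
        # corrections in kcal/mol
        print(loc_corrected_energy(-57.8, {"C-H": 4, "C=O": 1},
                                   {"C-H": 0.8, "C=O": -1.5}))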

  17. Evaluating the Effect of Advance Yield Markings and Symbolic Signs on Vehicle-Pedestrian Conflicts at Marked Midblock Crosswalks across Multilane Roads

    DOT National Transportation Integrated Search

    2016-02-01

    The Commonwealth of Massachusetts has made walkable communities a priority. Pedestrian safety is key to the success of this objective. Pedestrians are at high risk when traversing unsignalized, marked crosswalks located either midblock or at T inters...

  18. Evaluating the effect of advance yield markings and symbolic signs on vehicle-pedestrian conflicts at marked midblock crosswalks across multilane roads.

    DOT National Transportation Integrated Search

    2016-02-01

    The Commonwealth of Massachusetts has made walkable communities a priority. Pedestrian safety is key to the success of : this objective. Pedestrians are at high risk when traversing unsignalized, marked crosswalks located either midblock or at Tinter...

  19. On Double-Entry Bookkeeping: The Mathematical Treatment

    ERIC Educational Resources Information Center

    Ellerman, David

    2014-01-01

    Double-entry bookkeeping (DEB) implicitly uses a specific mathematical construction, the group of differences using pairs of unsigned numbers ("T-accounts"). That construction was only formulated abstractly in mathematics in the nineteenth century, even though DEB had been used in the business world for over five centuries. Yet the…

  20. Signed vs. Unsigned Report of Depression and Self-Esteem.

    ERIC Educational Resources Information Center

    Nolan, R. F; And Others

    1994-01-01

    One hundred thirty-five adolescents were administered the Children's Depression Inventory (CDI) and the Coopersmith Self-Esteem Inventory (CSEI). On the CDI, male adolescents responded more severely on an item involving fighting with others when they could be identified. There were no significant differences among responses on CSEI items.…

  1. Do Conditional Reinforcers Count?

    ERIC Educational Resources Information Center

    Davison, Michael; Baum, William M.

    2006-01-01

    Six pigeons were trained on a procedure in which seven components arranged different food-delivery ratios on concurrent variable-interval schedules each session. The components were unsignaled, lasted for 10 food deliveries, and occurred in random order with a 60-s blackout between components. The schedules were arranged using a switching-key…

  2. Electronic couplings for molecular charge transfer: Benchmarking CDFT, FODFT, and FODFTB against high-level ab initio calculations

    NASA Astrophysics Data System (ADS)

    Kubas, Adam; Hoffmann, Felix; Heck, Alexander; Oberhofer, Harald; Elstner, Marcus; Blumberger, Jochen

    2014-03-01

    We introduce a database (HAB11) of electronic coupling matrix elements (Hab) for electron transfer in 11 π-conjugated organic homo-dimer cations. High-level ab initio calculations at the multireference configuration interaction MRCI+Q level of theory, n-electron valence state perturbation theory NEVPT2, and (spin-component scaled) approximate coupled cluster model (SCS)-CC2 are reported for this database to assess the performance of three DFT methods of decreasing computational cost, including constrained density functional theory (CDFT), fragment-orbital DFT (FODFT), and self-consistent charge density functional tight-binding (FODFTB). We find that the CDFT approach in combination with a modified PBE functional containing 50% Hartree-Fock exchange gives best results for absolute Hab values (mean relative unsigned error = 5.3%) and exponential distance decay constants β (4.3%). CDFT in combination with pure PBE overestimates couplings by 38.7% due to a too diffuse excess charge distribution, whereas the economic FODFT and highly cost-effective FODFTB methods underestimate couplings by 37.6% and 42.4%, respectively, due to neglect of interaction between donor and acceptor. The errors are systematic, however, and can be significantly reduced by applying a uniform scaling factor for each method. Applications to dimers outside the database, specifically rotated thiophene dimers and larger acenes up to pentacene, suggest that the same scaling procedure significantly improves the FODFT and FODFTB results for larger π-conjugated systems relevant to organic semiconductors and DNA.
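
    The exponential distance decay constant beta can be extracted from a log-linear fit of couplings against donor-acceptor separation. A minimal sketch, assuming the convention |Hab(r)| = A exp(-beta * r / 2) (conventions differ by the factor of 2, so check the one used by the reference data):

        import numpy as np

        def decay_constant(r, hab):
            # Least-squares fit of log|Hab| = log(A) - (beta / 2) * r.
            slope, intercept = np.polyfit(r, np.log(np.abs(hab)), 1)
            return -2.0 * slope, np.exp(intercept)  # beta and prefactor A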

  3. Do Solvated Electrons (e(aq)⁻) Reduce DNA Bases? A Gaussian 4 and Density Functional Theory-Molecular Dynamics Study.

    PubMed

    Kumar, Anil; Adhikary, Amitava; Shamoun, Lance; Sevilla, Michael D

    2016-03-10

    The solvated electron (e(aq)⁻) is a primary intermediate after an ionization event that produces reductive DNA damage. Accurate standard redox potentials (E(o)) of nucleobases and of e(aq)⁻ determine the extent of reaction of e(aq)⁻ with nucleobases. In this work, E(o) values of e(aq)⁻ and of nucleobases have been calculated employing the accurate ab initio Gaussian 4 theory including the polarizable continuum model (PCM). The Gaussian 4-calculated E(o) of e(aq)⁻ (-2.86 V) is in excellent agreement with the experimental one (-2.87 V). The Gaussian 4-calculated E(o) of nucleobases in dimethylformamide (DMF) lie in the range (-2.36 V to -2.86 V); they are in reasonable agreement with the experimental E(o) in DMF and have a mean unsigned error (MUE) = 0.22 V. However, inclusion of specific water molecules reduces this error significantly (MUE = 0.07). With the use of a model of e(aq)⁻ nucleobase complex with six water molecules, the reaction of e(aq)⁻ with the adjacent nucleobase is investigated using approximate ab initio molecular dynamics (MD) simulations including PCM. Our MD simulations show that e(aq)⁻ transfers to uracil, thymine, cytosine, and adenine, within 10 to 120 fs and e(aq)⁻ reacts with guanine only when a water molecule forms a hydrogen bond to O6 of guanine which stabilizes the anion radical.

  4. Effect Size Measures for Differential Item Functioning in a Multidimensional IRT Model

    ERIC Educational Resources Information Center

    Suh, Youngsuk

    2016-01-01

    This study adapted an effect size measure used for studying differential item functioning (DIF) in unidimensional tests and extended the measure to multidimensional tests. Two effect size measures were considered in a multidimensional item response theory model: signed weighted P-difference and unsigned weighted P-difference. The performance of…
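
    A minimal sketch of the two effect sizes as weighted expected-score differences over an ability grid; weighting by the focal-group density is an assumption of this illustration:

        import numpy as np

        def weighted_p_difference(p_ref, p_focal, focal_density):
            # Signed differences can cancel across the ability grid;
            # unsigned differences accumulate DIF regardless of direction.
            w = np.asarray(focal_density, dtype=float)
            w /= w.sum()
            diff = np.asarray(p_ref) - np.asarray(p_focal)
            return (w * diff).sum(), (w * np.abs(diff)).sum()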

  5. 17 CFR 270.8b-11 - Number of copies; signatures; binding.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 17 Commodity and Securities Exchanges (CONTINUED), Rules and Regulations, Investment Company Act of 1940, § 270.8b-11 Number of copies; signatures; binding. ... manner prescribed by the appropriate form. Unsigned copies shall be conformed. If the signature of any...

  6. Kids and Chemistry: Large Event Guide.

    ERIC Educational Resources Information Center

    Tinnesand, Michael

    This guide is intended to provide Kids and Chemistry (K&C) with a variety of age-appropriate, fun, and safe demonstrations. It features information on planning a large event and includes safety guidelines. Several activities are included under each major topic. Topics include: (1) Acids and Bases; (2) Unsigned; (3) Kool Tie-Dye; (4) Secret…

  7. 20 CFR 418.1270 - What modified adjusted gross income evidence will we not accept?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits, Social Security Administration, § 418.1270 What modified adjusted gross income evidence will we not accept? ... letter from IRS acknowledging the change. We will also not accept illegible or unsigned copies of income...

  8. A Comparison of Lord's Chi Square and Raju's Area Measures in Detection of DIF.

    ERIC Educational Resources Information Center

    Cohen, Allan S.; Kim, Seock-Ho

    1993-01-01

    The effectiveness of two statistical tests of the area between item response functions (exact signed area and exact unsigned area) estimated in different samples, a measure of differential item functioning (DIF), was compared with Lord's chi square. Lord's chi square was found the most effective in determining DIF. (SLD)

  9. Farseer-NMR: automatic treatment, analysis and plotting of large, multi-variable NMR data.

    PubMed

    Teixeira, João M C; Skinner, Simon P; Arbesú, Miguel; Breeze, Alexander L; Pons, Miquel

    2018-05-11

    We present Farseer-NMR ( https://git.io/vAueU ), a software package to treat, evaluate and combine NMR spectroscopic data from sets of protein-derived peaklists covering a range of experimental conditions. The combined advances in NMR and molecular biology enable the study of complex biomolecular systems such as flexible proteins or large multibody complexes, which display a strong and functionally relevant response to their environmental conditions, e.g. the presence of ligands, site-directed mutations, post translational modifications, molecular crowders or the chemical composition of the solution. These advances have created a growing need to analyse those systems' responses to multiple variables. The combined analysis of NMR peaklists from large and multivariable datasets has become a new bottleneck in the NMR analysis pipeline, whereby information-rich NMR-derived parameters have to be manually generated, which can be tedious, repetitive and prone to human error, or even unfeasible for very large datasets. There is a persistent gap in the development and distribution of software focused on peaklist treatment, analysis and representation, and specifically able to handle large multivariable datasets, which are becoming more commonplace. In this regard, Farseer-NMR aims to close this longstanding gap in the automated NMR user pipeline and, altogether, reduce the time burden of analysis of large sets of peaklists from days/weeks to seconds/minutes. We have implemented some of the most common, as well as new, routines for calculation of NMR parameters and several publication-quality plotting templates to improve NMR data representation. Farseer-NMR has been written entirely in Python and its modular code base enables facile extension.

  10. Landscaping of highway medians and roadway safety at unsignalized intersections.

    PubMed

    Chen, Hongyun; Fabregas, Aldo; Lin, Pei-Sung

    2016-05-01

    Well-planted and maintained landscaping can help reduce driving stress, provide better visual quality, and decrease speeding, thus improving roadway safety. Florida Department of Transportation (FDOT) Standard Index (SI-546) is one of the more demanding standards in the U.S. for landscaping design criteria at highway medians near intersections. The purposes of this study were to (1) empirically evaluate the safety results of SI-546 at unsignalized intersections and (2) quantify the impacts of geometrics, traffic, and landscaping design features on total crashes and injury plus fatal crashes. The studied unsignalized intersections were divided into (1) those without median trees near intersections, (2) those with median trees near intersections that were compliant with SI-546, and (3) those with median trees near intersections that were non-compliant with SI-546. A total of 72 intersections were selected, for which five-year crash data from 2006-2010 were collected. The sites that were compliant with SI-546 showed the best safety performance in terms of the lowest crash counts and crash rates. Four crash predictive models (two for total crashes and two for injury crashes) were developed. The results indicated that improperly planted and maintained median trees near highway intersections can increase the total number of crashes and injury plus fatal crashes at a 90% confidence level; no significant difference could be found in crash rates between sites that were compliant with SI-546 and sites without trees. All other conditions remaining the same, an intersection with non-compliant trees had 63% more crashes and almost double the injury plus fatal crashes of an intersection without trees. The study indicates that appropriate landscaping in highway medians near intersections can be an engineering approach that not only improves roadway environmental quality but also maintains intersection safety. Copyright © 2016. Published by Elsevier Ltd.

  11. An Update on the Role of Serotonin and its Interplay with Dopamine for Reward.

    PubMed

    Fischer, Adrian G; Ullsperger, Markus

    2017-01-01

    The specific role of serotonin and its interplay with dopamine (DA) in adaptive, reward guided behavior, as well as in drug dependence, still remains elusive. Recently, novel methods have allowed cell-type-specific anatomical, functional and interventional analyses of serotonergic and dopaminergic circuits, promising significant advancement in understanding their functional roles. Furthermore, it is increasingly recognized that co-release of neurotransmitters is functionally relevant, understanding of which is required in order to interpret results of pharmacological studies and their relationship to neural recordings. Here, we review recent animal studies employing such techniques with the aim to connect their results to effects observed in human pharmacological studies and subjective effects of drugs. It appears that the additive effect of serotonin and DA conveys significant reward related information and is subjectively highly euphorizing. Neither DA nor serotonin alone has such an effect. This coincides with optogenetically targeted recordings in mice, where the dopaminergic system codes reward prediction errors (PE), and the serotonergic system mainly unsigned PE. Overall, this pattern of results indicates that joint activity between both systems carries essential reward information and invites parallel investigation of both neurotransmitter systems.

  12. Multiconfiguration Pair-Density Functional Theory and Complete Active Space Second Order Perturbation Theory. Bond Dissociation Energies of FeC, NiC, FeS, NiS, FeSe, and NiSe.

    PubMed

    Sharkas, Kamal; Gagliardi, Laura; Truhlar, Donald G

    2017-12-07

    We investigate the performance of multiconfiguration pair-density functional theory (MC-PDFT) and complete active space second-order perturbation theory for computing the bond dissociation energies of the diatomic molecules FeC, NiC, FeS, NiS, FeSe, and NiSe, for which accurate experimental data have become recently available [Matthew, D. J.; Tieu, E.; Morse, M. D. J. Chem. Phys. 2017, 146, 144310-144320]. We use three correlated participating orbital (CPO) schemes (nominal, moderate, and extended) to define the active spaces, and we consider both the complete active space (CAS) and the separated-pair (SP) schemes to specify the configurations included for a given active space. We found that the moderate SP-PDFT scheme with the tPBE on-top density functional has the smallest mean unsigned error (MUE) of the methods considered. This level of theory provides a balanced treatment of the static and dynamic correlation energies for the studied systems. This is encouraging because the method is low in cost even for much more complicated systems.

  13. Multiconfiguration Pair-Density Functional Theory Spectral Calculations Are Stable to Adding Diffuse Basis Functions.

    PubMed

    Hoyer, Chad E; Gagliardi, Laura; Truhlar, Donald G

    2015-11-05

    Time-dependent Kohn-Sham density functional theory (TD-KS-DFT) is useful for calculating electronic excitation spectra of large systems, but the low-energy spectra are often complicated by artificially lowered higher-energy states. This affects even the lowest energy excited states. Here, by calculating the lowest energy spin-conserving excited state for atoms from H to K and for formaldehyde, we show that this problem does not occur in multiconfiguration pair-density functional theory (MC-PDFT). We use the tPBE on-top density functional, which is a translation of the PBE exchange-correlation functional. We compare to a robust multireference method, namely, complete active space second-order perturbation theory (CASPT2), and to TD-KS-DFT with two popular exchange-correlation functionals, PBE and PBE0. We find for atoms that the mean unsigned error (MUE) of MC-PDFT with the tPBE functional improves from 0.42 to 0.40 eV with a double set of diffuse functions, whereas the MUEs for PBE and PBE0 drastically increase from 0.74 to 2.49 eV and from 0.45 to 1.47 eV, respectively.

  14. Predicting cyclohexane/water distribution coefficients for the SAMPL5 challenge using MOSCED and the SMD solvation model

    NASA Astrophysics Data System (ADS)

    Diaz-Rodriguez, Sebastian; Bozada, Samantha M.; Phifer, Jeremy R.; Paluch, Andrew S.

    2016-11-01

    We present blind predictions using the solubility parameter based method MOSCED submitted for the SAMPL5 challenge on calculating cyclohexane/water distribution coefficients at 298 K. Reference data to parameterize MOSCED was generated with knowledge only of chemical structure by performing solvation free energy calculations using electronic structure calculations in the SMD continuum solvent. To maintain simplicity and use only a single method, we approximate the distribution coefficient with the partition coefficient of the neutral species. Over the final SAMPL5 set of 53 compounds, we achieved an average unsigned error of 2.2 ± 0.2 log units (ranking 15 out of 62 entries), the correlation coefficient (R) was 0.6 ± 0.1 (ranking 35), and 72 ± 6% of the predictions had the correct sign (ranking 30). While used here to predict cyclohexane/water distribution coefficients at 298 K, MOSCED is broadly applicable, allowing one to predict temperature dependent infinite dilution activity coefficients in any solvent for which parameters exist, and provides a means by which an excess Gibbs free energy model may be parameterized to predict composition dependent phase-equilibrium.

  15. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    PubMed

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
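
    A minimal sketch of the linear stacking step: standardize each method's per-edge scores, then fit combination weights against ground-truth connectivity by least squares (the standardization and fitting details are assumptions of this sketch):

        import numpy as np

        def fit_stacking_weights(scores, truth):
            # scores: edges x methods matrix of connectivity scores;
            # truth: 1 if the edge exists in the ground-truth network.
            z = (scores - scores.mean(0)) / scores.std(0)
            w, *_ = np.linalg.lstsq(z, truth, rcond=None)
            return w

        def ensemble_scores(scores, w):
            z = (scores - scores.mean(0)) / scores.std(0)
            return z @ w  # higher score: more likely synaptic connection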

  16. Stimulus Effects on Local Preference: Stimulus-Response Contingencies, Stimulus-Food Pairing, and Stimulus-Food Correlation

    ERIC Educational Resources Information Center

    Davison, Michael; Baum, William M.

    2010-01-01

    Four pigeons were trained in a procedure in which concurrent-schedule food ratios changed unpredictably across seven unsignaled components after 10 food deliveries. Additional green-key stimulus presentations also occurred on the two alternatives, sometimes in the same ratio as the component food ratio, and sometimes in the inverse ratio. In eight…

  17. Evaluation of Fixed Momentary DRO Schedules under Signaled and Unsignaled Arrangements

    ERIC Educational Resources Information Center

    Hammond, Jennifer L.; Iwata, Brian A.; Fritz, Jennifer N.; Dempsey, Carrie M.

    2011-01-01

    Fixed momentary schedules of differential reinforcement of other behavior (FM DRO) generally have been ineffective as treatment for problem behavior. Because most early research on FM DRO included presentation of a signal at the end of the DRO interval, it is unclear whether the limited effects of FM DRO were due to (a) the momentary response…

  18. Relapse of Extinguished Fear after Exposure to a Dangerous Context Is Mitigated by Testing in a Safe Context

    ERIC Educational Resources Information Center

    Goode, Travis D.; Kim, Janice J.; Maren, Stephen

    2015-01-01

    Aversive events can trigger relapse of extinguished fear memories, presenting a major challenge to the long-term efficacy of therapeutic interventions. Here, we examined factors regulating the relapse of extinguished fear after exposure of rats to a dangerous context. Rats received unsignaled shock in a distinct context ("dangerous"…

  19. Automated segmentation of intraretinal layers from macular optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Haeker, Mona; Sonka, Milan; Kardon, Randy; Shah, Vinay A.; Wu, Xiaodong; Abràmoff, Michael D.

    2007-03-01

    Commercially-available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated series (up to six, when available) were acquired for each eye and were first registered and averaged together, resulting in a composite image for each angular location. The six surfaces defining the five layers were then found on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost closed set in a geometric graph constructed from edge/regional information and a priori-determined surface smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries were independently defined by two human experts on one raw scan of each eye. Using the average of the experts' tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 ± 4.0 μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two observers (p < 0.05, pixel size of 50 × 2 μm).

  20. Fragment-orbital tunneling currents and electronic couplings for analysis of molecular charge-transfer systems.

    PubMed

    Hwang, Sang-Yeon; Kim, Jaewook; Kim, Woo Youn

    2018-04-04

    In theoretical charge-transfer research, calculation of the electronic coupling element is crucial for examining the degree of the electronic donor-acceptor interaction. The tunneling current (TC), representing the magnitudes and directions of electron flow, provides a way of evaluating electronic couplings, along with the ability of visualizing how electrons flow in systems. Here, we applied the TC theory to π-conjugated organic dimer systems, in the form of our fragment-orbital tunneling current (FOTC) method, which uses the frontier molecular-orbitals of system fragments as diabatic states. For a comprehensive test of FOTC, we assessed how reasonable the computed electronic couplings and the corresponding TC densities are for the hole- and electron-transfer databases HAB11 and HAB7. FOTC gave 12.5% mean relative unsigned error with regard to the high-level ab initio reference. The shown performance is comparable with that of fragment-orbital density functional theory, which gave the same error by 20.6% or 13.9% depending on the formulation. In the test of a set of nucleobase π stacks, we showed that the original TC expression is also applicable to nondegenerate cases under the condition that the overlap between the charge distributions of diabatic states is small enough to offset the energy difference. Lastly, we carried out visual analysis on the FOTC densities of thiophene dimers with different intermolecular alignments. The result depicts an intimate topological connection between the system geometry and electron flow. Our work provides quantitative and qualitative grounds for FOTC, showing it to be a versatile tool in characterization of molecular charge-transfer systems.

  1. Application of molecular dynamics simulations in molecular property prediction II: diffusion coefficient.

    PubMed

    Wang, Junmei; Hou, Tingjun

    2011-12-01

    In this work, we have evaluated how well the general assisted model building with energy refinement (AMBER) force field performs in studying the dynamic properties of liquids. Diffusion coefficients (D) have been predicted for 17 solvents, five organic compounds in aqueous solutions, four proteins in aqueous solutions, and nine organic compounds in nonaqueous solutions. An efficient sampling strategy has been proposed and tested in the calculation of the diffusion coefficients of solutes in solutions. There are two major findings of this study. First of all, the diffusion coefficients of organic solutes in aqueous solution can be well predicted: the average unsigned errors and the root mean square errors are 0.137 and 0.171 × 10(-5) cm(2) s(-1), respectively. Second, although the absolute values of D cannot be predicted, good correlations have been achieved for eight organic solvents with experimental data (R(2) = 0.784), four proteins in aqueous solutions (R(2) = 0.996), and nine organic compounds in nonaqueous solutions (R(2) = 0.834). The temperature dependent behaviors of three solvents, namely, TIP3P water, dimethyl sulfoxide, and cyclohexane have been studied. The major molecular dynamics (MD) settings, such as the sizes of simulation boxes and with/without wrapping the coordinates of MD snapshots into the primary simulation boxes, have been explored. We have concluded that our sampling strategy of averaging the mean square displacement collected in multiple short MD simulations is efficient in predicting diffusion coefficients of solutes at infinite dilution. Copyright © 2011 Wiley Periodicals, Inc.
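
    A minimal sketch of the Einstein-relation estimate behind this sampling strategy: average the MSD curves from multiple short runs, then take D as one sixth of the slope of MSD versus time (the replica MSD curves are assumed to have been computed from the MD snapshots already):

        import numpy as np

        def diffusion_coefficient(msd_runs, dt):
            # msd_runs: runs x frames array of mean square displacements;
            # dt: time between frames. D = slope / 6 for 3-D diffusion.
            msd = np.mean(msd_runs, axis=0)      # average over replicas
            t = np.arange(msd.size) * dt
            slope, _ = np.polyfit(t, msd, 1)     # linear diffusive regime
            return slope / 6.0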

  2. Let's talk it over : interagency cooperation facilities success : a case study : the New York, New Jersey, Connecticut metropolitan area TRANSMIT operational test : ensuring integration of intelligent transportation systems products and services

    DOT National Transportation Integrated Search

    1989-01-01

    This manual provides basic background information and step-by-step procedures for conducting traffic conflict surveys at signalized and unsignalized intersections. The manual was prepared as a training aid and reference source for persons who are ass...

  3. NAVAIR Portable Source Initiative (NPSI) Standard for Reusable Source Dataset Metadata (RSDM) V2.4

    DTIC Science & Technology

    2012-09-26

    defining a raster file format: <RasterFileFormat> <FormatName>TIFF</FormatName> <Order>BIP</Order> <DataType>8-BIT_UNSIGNED</DataType> ... Order values include band interleaved by line (BIL) and band interleaved by pixel (BIP). The RasterFileFormatType/DataType element is a restriction of xsd:string.

  4. Pole-strength of the earth from Magsat and magnetic determination of the core radius

    NASA Technical Reports Server (NTRS)

    Voorhies, G. V.; Benton, E. R.

    1982-01-01

    A model based on two days of Magsat data is used to numerically evaluate the unsigned magnetic flux linking the earth's surface, and a comparison of the 16.054 GWb value calculated with values from earlier geomagnetic field models reveals a smooth, monotonic, and recently-accelerating decrease in the earth's pole strength at a 50-year average rate of 8.3 MWb/year, or 0.052%/year. Hide's (1978) magnetic technique for determining the radius of the earth's electrically-conducting core is tested by (1) extrapolating main field models for 1960 and 1965 downward through the nearly-insulating mantle and then separately comparing them to equivalent, extrapolated models of Magsat data; the two unsigned fluxes are found to be equal at a radius within 2% of the core radius; and (2) using the 1960 main field and secular variation and acceleration coefficients to derive models of 1930, 1940 and 1950; the same core magnetic radius value, within 2% of the seismic value, is obtained. It is concluded that the mantle is a nearly-perfect insulator, while the core is a perfect conductor, on the decade time scale.
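
    A minimal sketch of evaluating an unsigned flux: integrate |Br| over a sphere on a latitude-longitude quadrature grid (br_func stands in for a field model's radial component and is a hypothetical interface):

        import numpy as np

        def unsigned_flux(br_func, radius, n_theta=180, n_phi=360):
            # Unsigned flux = surface integral of |B_r|, midpoint rule
            # in colatitude theta and longitude phi.
            theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
            phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
            T, P = np.meshgrid(theta, phi, indexing="ij")
            dA = radius ** 2 * np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)
            return np.sum(np.abs(br_func(T, P)) * dA)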

  5. Impact of repeated intravenous cocaine administration on incentive motivation depends on mode of drug delivery.

    PubMed

    LeBlanc, Kimberly H; Maidment, Nigel T; Ostlund, Sean B

    2014-11-01

    The incentive sensitization theory of addiction posits that repeated exposure to drugs of abuse, like cocaine, can lead to long-term adaptations in the neural circuits that support motivated behavior, providing an account of pathological drug-seeking behavior. Although pre-clinical findings provide strong support for this theory, much remains unknown about the conditions that support incentive sensitization. The current study examined whether the mode of cocaine administration is an important factor governing that drug's long-term impact on behavior. Separate groups of rats were allowed either to self-administer intravenous cocaine or were given an equivalent number and distribution of unsignaled cocaine or saline infusions. During the subsequent test of incentive motivation (Pavlovian-to-instrumental transfer), we found that rats with a history of cocaine self-administration showed strong cue-evoked food seeking, in contrast to rats given unsignaled cocaine or saline. This finding indicates that the manner in which cocaine is administered can determine its lasting behavioral effects, suggesting that subjective experiences during drug use play a critical role in the addiction process. Our findings may therefore have important implications for the study and treatment of compulsive drug seeking. © 2013 Society for the Study of Addiction.

  6. GAL-021, a new intravenous BKCa-channel blocker, is well tolerated and stimulates ventilation in healthy volunteers.

    PubMed

    McLeod, J F; Leempoels, J M; Peng, S X; Dax, S L; Myers, L J; Golder, F J

    2014-11-01

    Potassium-channels in the carotid body and the brainstem are important regulators of ventilation. The BKCa-channel contains response elements for CO, O2, and CO2. Its block increases carotid body signalling, phrenic nerve activity, and respiratory drive. GAL-021, a new BKCa-channel blocker, increases minute ventilation in rats and non-human primates. This study assessed the single-dose safety, tolerability, pharmacokinetics (PKs), and pharmacodynamics (PDs) of GAL-021 in healthy volunteers. Thirty subjects participated in a nine-period, randomized, double-blinded, placebo-controlled, crossover, ascending dose, first-in-human study with i.v. infusions of 0.1-0.96 mg kg(-1) h(-1) for 1 h and intermediate doses up to 4 h. Adverse event rates were generally similar among dose levels and between placebo- and GAL-021-treated subjects. At higher GAL-021 doses, a mild/moderate burning sensation at the infusion site occurred during the infusion. No clinically significant changes in vital signs or clinical chemistries were noted. Minute ventilation increased (AUE0-1 h ≈ 16%, P<0.05) and end-tidal carbon dioxide ([Formula: see text]) decreased (AUE0-1 h ≈ 6%, P<0.05) during the first hour at 0.96 mg kg(-1) h(-1) with 1/2-maximal [Formula: see text] and [Formula: see text]-change occurring by 7.5 min. Drug concentration rose rapidly during the infusion and decreased rapidly initially (distribution t1/2 of 30 min) and then more slowly (terminal t1/2 of 5.6 h). GAL-021 was safe and generally well tolerated with adverse events comparable with placebo except for an infusion site burning sensation. GAL-021 stimulated ventilation at the highest doses suggesting that greater infusion rates may be required for maximum PD effects. GAL-021 had PK characteristics consistent with an acute care medication. © The Author 2014. Published by Oxford University Press on behalf of the British Journal of Anaesthesia. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. Analysis of fast boundary-integral approximations for modeling electrostatic contributions of molecular binding

    PubMed Central

    Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.

    2013-01-01

    We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561

  8. Prediction of the translocon-mediated membrane insertion free energies of protein sequences.

    PubMed

    Park, Yungki; Helms, Volkhard

    2008-05-15

    Helical membrane proteins (HMPs) play crucial roles in a variety of cellular processes. Unlike water-soluble proteins, HMPs need not only to fold but also to be inserted into the membrane to be fully functional. This process of membrane insertion is mediated by the translocon complex. Thus, it is of great interest to develop computational methods for predicting the translocon-mediated membrane insertion free energies of protein sequences. We have developed Membrane Insertion (MINS), a novel sequence-based computational method for predicting the membrane insertion free energies of protein sequences. A benchmark test gives a correlation coefficient of 0.74 between predicted and observed free energies for 357 known cases, which corresponds to a mean unsigned error of 0.41 kcal/mol. These results are significantly better than those obtained by traditional hydropathy analysis. Moreover, the ability of MINS to reasonably predict membrane insertion free energies of protein sequences allows for effective identification of transmembrane (TM) segments. Subsequently, MINS was applied to predict the membrane insertion free energies of 316 TM segments found in known structures. An in-depth analysis of the predicted free energies reveals a number of interesting findings about the biogenesis and structural stability of HMPs. A web server for MINS is available at http://service.bioinformatik.uni-saarland.de/mins

  9. Surveying implicit solvent models for estimating small molecule absolute hydration free energies

    PubMed Central

    Knight, Jennifer L.

    2011-01-01

    Implicit solvent models are powerful tools in accounting for the aqueous environment at a fraction of the computational expense of explicit solvent representations. Here, we compare the ability of common implicit solvent models (TC, OBC, OBC2, GBMV, GBMV2, GBSW, GBSW/MS, GBSW/MS2 and FACTS) to reproduce experimental absolute hydration free energies for a series of 499 small neutral molecules that are modeled using AMBER/GAFF parameters and AM1-BCC charges. Given optimized surface tension coefficients for scaling the surface area term in the nonpolar contribution, most implicit solvent models demonstrate reasonable agreement with extensive explicit solvent simulations (average difference 1.0-1.7 kcal/mol and R2=0.81-0.91) and with experimental hydration free energies (average unsigned errors=1.1-1.4 kcal/mol and R2=0.66-0.81). Chemical classes of compounds are identified that need further optimization of their ligand force field parameters and others that require improvement in the physical parameters of the implicit solvent models themselves. More sophisticated nonpolar models are also likely necessary to more effectively represent the underlying physics of solvation and take the quality of hydration free energies estimated from implicit solvent models to the next level. PMID:21735452

  10. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 ± 2.5 voxels (0.10 ± 0.07 mm).

  11. Determination of partial molar volumes from free energy perturbation theory†

    PubMed Central

    Vilseck, Jonah Z.; Tirado-Rives, Julian

    2016-01-01

    Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood–Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm3 mol−1. The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute–solvent interactions. PMID:25589343

  12. Determination of partial molar volumes from free energy perturbation theory.

    PubMed

    Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L

    2015-04-07

    Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood-Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm(3) mol(-1). The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute-solvent interactions.
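
    As a concrete illustration of the pressure-derivative route described in both versions of this work, the short Python sketch below estimates V° as the finite-difference slope of FEP solvation free energies computed at a few elevated pressures. All numbers are hypothetical stand-ins chosen to give a benzene-like magnitude, not values from the paper.

        import numpy as np

        # Hypothetical FEP outputs: solvation free energies (kcal/mol) at several
        # pressures (atm); illustrative values only.
        pressures = np.array([1.0, 500.0, 1000.0, 1500.0])      # atm
        dG_solv   = np.array([-0.77, 0.23, 1.24, 2.25])         # kcal/mol

        # V0 = (d dG_solv / dP) at constant T and infinite dilution;
        # np.gradient takes central differences on the nonuniform grid.
        slope = np.gradient(dG_solv, pressures)                 # kcal mol^-1 atm^-1

        # 1 kcal = 41293 cm^3 atm, so the slope converts directly to cm^3 mol^-1.
        print(slope * 41293.0)                                  # ~83 cm^3/mol here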

  13. Accurate prediction of polarised high order electrostatic interactions for hydrogen bonded complexes using the machine learning method kriging.

    PubMed

    Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A

    2015-02-05

    As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at B3LYP and M06-2X level generally outperformed those at HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
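
    The kriging machinery itself is standard Gaussian-process interpolation. A minimal numpy sketch of simple kriging with a squared-exponential kernel follows; the features, targets, and kernel width are hypothetical stand-ins, not the paper's atomic-multipole models.

        import numpy as np

        def rbf(X1, X2, theta=1.0):
            # Gaussian (squared-exponential) correlation, the usual kriging kernel.
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            return np.exp(-theta * d2)

        def kriging_fit_predict(X_train, y_train, X_test, theta=1.0, nugget=1e-8):
            # Simple kriging with a constant mean, after centring the targets.
            mu = y_train.mean()
            K = rbf(X_train, X_train, theta) + nugget * np.eye(len(X_train))
            w = np.linalg.solve(K, y_train - mu)
            return mu + rbf(X_test, X_train, theta) @ w

        # Hypothetical demo: features standing in for nuclear coordinates, targets
        # standing in for atomic electrostatic interaction energies (kJ/mol).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 9))            # e.g. 3 atoms x 3 coordinates
        y = np.sin(X[:, 0]) + 0.1 * X[:, 1]      # stand-in for a true energy surface
        pred = kriging_fit_predict(X[:150], y[:150], X[150:])
        print(np.abs(pred - y[150:]).mean())     # mean unsigned prediction error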

  14. Optimal Multiple Surface Segmentation With Shape and Context Priors

    PubMed Central

    Bai, Junjie; Garvin, Mona K.; Sonka, Milan; Buatti, John M.; Wu, Xiaodong

    2014-01-01

    Segmentation of multiple surfaces in medical images is a challenging problem, further complicated by the frequent presence of weak boundary evidence, large object deformations, and mutual influence between adjacent objects. This paper reports a novel approach to multi-object segmentation that incorporates both shape and context prior knowledge in a 3-D graph-theoretic framework to help overcome the stated challenges. We employ an arc-based graph representation to incorporate a wide spectrum of prior information through pair-wise energy terms. In particular, a shape-prior term is used to penalize local shape changes and a context-prior term is used to penalize local surface-distance changes from a model of the expected shape and surface distances, respectively. The globally optimal solution for multiple surfaces is obtained by computing a maximum flow in low-order polynomial time. The proposed method was validated on intraretinal layer segmentation of optical coherence tomography images and demonstrated statistically significant improvement of segmentation accuracy compared to our earlier graph-search method that did not utilize shape and context priors. The mean unsigned surface positioning error obtained by the conventional graph-search approach (6.30 ± 1.58 μm) was improved to 5.14 ± 0.99 μm when employing our new method with shape and context priors. PMID:23193309

  15. Predicting the photoinduced electron transfer thermodynamics in polyfluorinated 1,3,5-triarylpyrazolines based on multiple linear free energy relationships†

    PubMed Central

    Verma, Manjusha; Chaudhry, Aneese F.; Fahrni, Christoph J.

    2010-01-01

    The photophysical properties of 1,3,5-triarylpyrazolines are strongly influenced by the nature and position of substituents attached to the aryl-rings, rendering this fluorophore platform well suited for the design of fluorescent probes utilizing a photoinduced electron transfer (PET) switching mechanism. To explore the tunability of two key parameters that govern the PET thermodynamics, the excited state energy ΔE00 and acceptor potential E(A/A−), a library of polyfluoro-substituted 1,3-diaryl-5-phenyl-pyrazolines was synthesized and characterized. The observed trends for the PET parameters were effectively captured through multiple Hammett linear free energy relationships (LFER) using a set of independent substituent constants for each of the two aryl rings. Given the lack of experimental Hammett constants for polyfluoro substituted aromatics, theoretically derived constants based on the electrostatic potential at the nucleus (EPN) of carbon atoms were employed as quantum chemical descriptors. The performance of the LFER was evaluated with a set of compounds that were not included in the training set, yielding a mean unsigned error of 0.05 eV for the prediction of the combined PET parameters. The outlined LFER approach should be well suited to design and optimize the performance of cation-responsive 1,3,5-triarylpyrazolines. PMID:19343239
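
    A multiple Hammett LFER of the kind described here is an ordinary least-squares fit in two substituent-constant descriptors, one per aryl ring. The sketch below shows the idea with hypothetical sigma values and PET parameters; rho1, rho3, and the intercept are the fitted sensitivity constants.

        import numpy as np

        # Hypothetical training data: EPN-derived Hammett-type constants for the
        # two aryl rings, and a measured PET parameter in eV (illustrative only).
        sigma_1 = np.array([0.00, 0.12, 0.23, 0.34, 0.45])
        sigma_3 = np.array([0.00, 0.20, 0.10, 0.40, 0.30])
        y       = np.array([3.10, 3.02, 2.97, 2.85, 2.81])

        # Multiple LFER: y = rho1*sigma1 + rho3*sigma3 + c, solved by least squares.
        A = np.column_stack([sigma_1, sigma_3, np.ones_like(sigma_1)])
        (rho1, rho3, c), *_ = np.linalg.lstsq(A, y, rcond=None)

        y_fit = A @ np.array([rho1, rho3, c])
        print(rho1, rho3, c, np.abs(y_fit - y).mean())  # mean unsigned error of fit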

  16. The Linear Interaction Energy Method for the Prediction of Protein Stability Changes Upon Mutation

    PubMed Central

    Wickstrom, Lauren; Gallicchio, Emilio; Levy, Ronald M.

    2011-01-01

    The coupling of protein energetics and sequence changes is a critical aspect of computational protein design, as well as for the understanding of protein evolution, human disease, and drug resistance. In order to study the molecular basis for this coupling, computational tools must be sufficiently accurate and computationally inexpensive enough to handle large amounts of sequence data. We have developed a computational approach based on the linear interaction energy (LIE) approximation to predict the changes in the free energy of the native state induced by a single mutation. This approach was applied to a set of 822 mutations in 10 proteins, which resulted in an average unsigned error of 0.82 kcal/mol and a correlation coefficient of 0.72 between the calculated and experimental ΔΔG values. The method is able to accurately identify destabilizing hot spot mutations; however, it has difficulty in distinguishing between stabilizing and destabilizing mutations due to the distribution of stability changes for the set of mutations used to parameterize the model. In addition, the model also performs quite well in initial tests on a small set of double mutations. Based on these promising results, we can begin to examine the relationship between protein stability and fitness, correlated mutations, and drug resistance. PMID:22038697
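
    In the LIE approximation, the free-energy change is a linear combination of ensemble-averaged interaction-energy differences. The sketch below illustrates the functional form with generic placeholder coefficients and synthetic per-snapshot energies; the paper fits its own coefficients against the 822-mutation set.

        import numpy as np

        def lie_ddG(dU_vdw, dU_elec, alpha=0.18, beta=0.5, gamma=0.0):
            # Linear interaction energy estimate from ensemble-average energy gaps.
            # alpha/beta/gamma are generic placeholders, not the paper's fitted values.
            return alpha * np.mean(dU_vdw) + beta * np.mean(dU_elec) + gamma

        # Hypothetical per-snapshot energy differences (mutant - wild type, kcal/mol).
        rng = np.random.default_rng(1)
        dU_vdw  = rng.normal(2.0, 0.5, 500)
        dU_elec = rng.normal(1.0, 0.8, 500)
        print(lie_ddG(dU_vdw, dU_elec))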

  17. Differential effects of galvanic vestibular stimulation on arm position sense in right- vs. left-handers.

    PubMed

    Schmidt, Lena; Artinger, Frank; Stumpf, Oliver; Kerkhoff, Georg

    2013-04-01

    The human brain is organized asymmetrically in two hemispheres with different functional specializations. Left- and right-handers differ in many functional capacities and their anatomical representations. Right-handers often show a stronger functional lateralization than left-handers, the latter showing a more bilateral, symmetrical brain organization. Recent functional imaging evidence shows a different lateralization of the cortical vestibular system towards the side of the preferred hand in left- vs. right-handers as well. Since the vestibular system is involved in somatosensory processing and the coding of body position, vestibular stimulation should affect such capacities differentially in left- vs. right-handers. In the present sham-stimulation-controlled study, we explored this hypothesis by studying the effects of galvanic vestibular stimulation (GVS) on proprioception in both forearms in left- and right-handers. First, horizontal arm position sense (APS) was measured with an opto-electronic device. Second, the polarity-specific online- and after-effects of subsensory, bipolar GVS on APS were investigated in different sessions separately for both forearms. At baseline, both groups did not differ in their unsigned errors for both arms. However, right-handers showed significant directional errors in APS of both arms towards their own body. Right-cathodal/left-anodal GVS, resulting in right vestibular cortex activation, significantly deteriorated left APS in right-handers, but had no detectable effect on APS in left-handers in either arm. These findings are compatible with a right-hemisphere dominance for vestibular functions in right-handers and a differential vestibular organization in left-handers that compensates for the disturbing effects of GVS on APS. Moreover, our results show superior arm proprioception in left-handers in both forearms. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Resolving Transition Metal Chemical Space: Feature Selection for Machine Learning and Structure-Property Relationships.

    PubMed

    Janet, Jon Paul; Kulik, Heather J

    2017-11-22

    Machine learning (ML) of quantum mechanical properties shows promise for accelerating chemical discovery. For transition metal chemistry where accurate calculations are computationally costly and available training data sets are small, the molecular representation becomes a critical ingredient in ML model predictive accuracy. We introduce a series of revised autocorrelation functions (RACs) that encode relationships of the heuristic atomic properties (e.g., size, connectivity, and electronegativity) on a molecular graph. We alter the starting point, scope, and nature of the quantities evaluated in standard ACs to make these RACs amenable to inorganic chemistry. On an organic molecule set, we first demonstrate superior standard AC performance to other presently available topological descriptors for ML model training, with mean unsigned errors (MUEs) for atomization energies on set-aside test molecules as low as 6 kcal/mol. For inorganic chemistry, our RACs yield 1 kcal/mol ML MUEs on set-aside test molecules in spin-state splitting in comparison to 15-20× higher errors for feature sets that encode whole-molecule structural information. Systematic feature selection methods including univariate filtering, recursive feature elimination, and direct optimization (e.g., random forest and LASSO) are compared. Random-forest- or LASSO-selected subsets 4-5× smaller than the full RAC set produce sub- to 1 kcal/mol spin-splitting MUEs, with good transferability to metal-ligand bond length prediction (0.004-5 Å MUE) and redox potential on a smaller data set (0.2-0.3 eV MUE). Evaluation of feature selection results across property sets reveals the relative importance of local, electronic descriptors (e.g., electronegativity, atomic number) in spin-splitting and distal, steric effects in redox potential and bond lengths.
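
    The underlying autocorrelation construction is easy to state: for each bond-path depth d, sum the products of an atomic property over (ordered) atom pairs separated by d bonds on the molecular graph. A small sketch with a toy three-atom graph and Pauling electronegativities follows; the paper's RACs additionally modify the starting point, scope, and nature of these sums.

        import numpy as np
        from collections import deque

        def graph_distances(adj):
            # All-pairs shortest bond-path lengths by BFS on the molecular graph.
            n = len(adj); D = np.full((n, n), -1, int)
            for s in range(n):
                D[s, s], q = 0, deque([s])
                while q:
                    u = q.popleft()
                    for v in adj[u]:
                        if D[s, v] < 0:
                            D[s, v] = D[s, u] + 1
                            q.append(v)
            return D

        def autocorrelation(adj, prop, depth):
            # Standard product AC: sum of P_i * P_j over ordered atom pairs at depth d.
            D = graph_distances(adj)
            return [float((np.outer(prop, prop) * (D == d)).sum()) for d in range(depth + 1)]

        # Toy example: a water-like 3-atom graph with electronegativity as the property.
        adj = {0: [1, 2], 1: [0], 2: [0]}     # O bonded to two H
        chi = np.array([3.44, 2.20, 2.20])    # Pauling electronegativities
        print(autocorrelation(adj, chi, depth=2))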

  19. Relapse of extinguished fear after exposure to a dangerous context is mitigated by testing in a safe context

    PubMed Central

    Goode, Travis D.; Kim, Janice J.

    2015-01-01

    Aversive events can trigger relapse of extinguished fear memories, presenting a major challenge to the long-term efficacy of therapeutic interventions. Here, we examined factors regulating the relapse of extinguished fear after exposure of rats to a dangerous context. Rats received unsignaled shock in a distinct context (“dangerous” context) 24 h prior to auditory fear conditioning in another context. Fear to the auditory conditioned stimulus (CS) was subsequently extinguished either in the conditioning context (“ambiguous” context) or in a third novel context (“safe” context). Exposure to the dangerous context 30 min before a CS retention test caused relapse to the CS in the ambiguous and safe test contexts relative to nonextinguished controls. When rats were tested 24 h later (with or without short-term testing), rats tested in the ambiguous context continued to exhibit relapse, whereas rats tested in the safe context did not. Additionally, exposure of rats to the conditioning context—in place of the unsignaled shock context—did not result in relapse of fear to the CS in the safe testing context. Our work highlights the vulnerabilities of extinction recall to interference, and demonstrates the importance of context associations in the relapse of fear after extinction. PMID:25691517

  20. Correlation between solar flare productivity and photospheric vector magnetic fields

    NASA Astrophysics Data System (ADS)

    Cui, Yanmei; Wang, Huaning

    2008-11-01

    Studying the statistical correlation between solar flare productivity and photospheric magnetic fields is important and necessary: it helps in setting up a practical flare forecast model based on magnetic properties and improves the physical understanding of solar flare eruptions. In a previous study ([Cui, Y.M., Li, R., Zhang, L.Y., He, Y.L., Wang, H.N. Correlation between solar flare productivity and photospheric magnetic field properties 1. Maximum horizontal gradient, length of neutral line, number of singular points. Sol. Phys. 237, 45-59, 2006]; hereafter 'Paper I'), three measures (the maximum horizontal gradient, the length of the neutral line, and the number of singular points) were computed from 23990 SOHO/MDI longitudinal magnetograms, and the statistical relationship between solar flare productivity and these three measures was well fitted with sigmoid functions. In the current work, three further measures (the length of the strong-shear neutral line, the total unsigned current, and the total unsigned current helicity) are computed from 1353 vector magnetograms observed at Huairou Solar Observing Station. The relationship between solar flare productivity and these three measures can also be well fitted with sigmoid functions. These results are expected to benefit future operational flare forecasting models.
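
    Fitting a sigmoid relationship between a magnetic measure and flare productivity can be sketched with scipy's curve_fit, as below. The binned data here are synthetic, not the Huairou measurements; x0 and k are the fitted midpoint and steepness.

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(x, x0, k):
            # Logistic curve for flare productivity vs. a normalized magnetic measure.
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        # Hypothetical binned data: normalized total unsigned current vs. observed
        # flare productivity per bin (illustrative numbers only).
        x = np.linspace(0.0, 1.0, 12)
        p = sigmoid(x, 0.45, 9.0) + np.random.default_rng(2).normal(0, 0.02, x.size)

        popt, pcov = curve_fit(sigmoid, x, p, p0=[0.5, 5.0])
        print(popt)   # fitted midpoint and steepness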

  1. Relapse of extinguished fear after exposure to a dangerous context is mitigated by testing in a safe context.

    PubMed

    Goode, Travis D; Kim, Janice J; Maren, Stephen

    2015-03-01

    Aversive events can trigger relapse of extinguished fear memories, presenting a major challenge to the long-term efficacy of therapeutic interventions. Here, we examined factors regulating the relapse of extinguished fear after exposure of rats to a dangerous context. Rats received unsignaled shock in a distinct context ("dangerous" context) 24 h prior to auditory fear conditioning in another context. Fear to the auditory conditioned stimulus (CS) was subsequently extinguished either in the conditioning context ("ambiguous" context) or in a third novel context ("safe" context). Exposure to the dangerous context 30 min before a CS retention test caused relapse to the CS in the ambiguous and safe test contexts relative to nonextinguished controls. When rats were tested 24 h later (with or without short-term testing), rats tested in the ambiguous context continued to exhibit relapse, whereas rats tested in the safe context did not. Additionally, exposure of rats to the conditioning context--in place of the unsignaled shock context--did not result in relapse of fear to the CS in the safe testing context. Our work highlights the vulnerabilities of extinction recall to interference, and demonstrates the importance of context associations in the relapse of fear after extinction. © 2015 Goode et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Universal solvation model based on solute electron density and on a continuum model of the solvent defined by the bulk dielectric constant and atomic surface tensions.

    PubMed

    Marenich, Aleksandr V; Cramer, Christopher J; Truhlar, Donald G

    2009-05-07

    We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the "D" stands for "density" to denote that the full solute electron density is used without defining partial atomic charges. "Continuum" denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where "universal" denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous organic solvents and water), and 143 transfer free energies for 93 neutral solutes between water and 15 organic solvents. The elements present in the solutes are H, C, N, O, F, Si, P, S, Cl, and Br. The SMD model employs a single set of parameters (intrinsic atomic Coulomb radii and atomic surface tension coefficients) optimized over six electronic structure methods: M05-2X/MIDI!6D, M05-2X/6-31G*, M05-2X/6-31+G**, M05-2X/cc-pVTZ, B3LYP/6-31G*, and HF/6-31G*. Although the SMD model has been parametrized using the IEF-PCM protocol for bulk electrostatics, it may also be employed with other algorithms for solving the nonhomogeneous Poisson equation for continuum solvation calculations in which the solute is represented by its electron density in real space. This includes, for example, the conductor-like screening algorithm. With the 6-31G* basis set, the SMD model achieves mean unsigned errors of 0.6-1.0 kcal/mol in the solvation free energies of tested neutrals and mean unsigned errors of 4 kcal/mol on average for ions with either Gaussian03 or GAMESS.

  3. Universal Solvation Model Based on Solute Electron Density and on a Continuum Model of the Solvent Defined by the Bulk Dielectric Constant and Atomic Surface Tensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marenich, Aleksandr; Cramer, Christopher J; Truhlar, Donald G

    2009-04-30

    We present a new continuum solvation model based on the quantum mechanical charge density of a solute molecule interacting with a continuum description of the solvent. The model is called SMD, where the “D” stands for “density” to denote that the full solute electron density is used without defining partial atomic charges. “Continuum” denotes that the solvent is not represented explicitly but rather as a dielectric medium with surface tension at the solute-solvent boundary. SMD is a universal solvation model, where “universal” denotes its applicability to any charged or uncharged solute in any solvent or liquid medium for which a few key descriptors are known (in particular, dielectric constant, refractive index, bulk surface tension, and acidity and basicity parameters). The model separates the observable solvation free energy into two main components. The first component is the bulk electrostatic contribution arising from a self-consistent reaction field treatment that involves the solution of the nonhomogeneous Poisson equation for electrostatics in terms of the integral-equation-formalism polarizable continuum model (IEF-PCM). The cavities for the bulk electrostatic calculation are defined by superpositions of nuclear-centered spheres. The second component is called the cavity-dispersion-solvent-structure term and is the contribution arising from short-range interactions between the solute and solvent molecules in the first solvation shell. This contribution is a sum of terms that are proportional (with geometry-dependent proportionality constants called atomic surface tensions) to the solvent-accessible surface areas of the individual atoms of the solute. The SMD model has been parametrized with a training set of 2821 solvation data including 112 aqueous ionic solvation free energies, 220 solvation free energies for 166 ions in acetonitrile, methanol, and dimethyl sulfoxide, 2346 solvation free energies for 318 neutral solutes in 91 solvents (90 nonaqueous organic solvents and water), and 143 transfer free energies for 93 neutral solutes between water and 15 organic solvents. The elements present in the solutes are H, C, N, O, F, Si, P, S, Cl, and Br. The SMD model employs a single set of parameters (intrinsic atomic Coulomb radii and atomic surface tension coefficients) optimized over six electronic structure methods: M05-2X/MIDI!6D, M05-2X/6-31G*, M05-2X/6-31+G**, M05-2X/cc-pVTZ, B3LYP/6-31G*, and HF/6-31G*. Although the SMD model has been parametrized using the IEF-PCM protocol for bulk electrostatics, it may also be employed with other algorithms for solving the nonhomogeneous Poisson equation for continuum solvation calculations in which the solute is represented by its electron density in real space. This includes, for example, the conductor-like screening algorithm. With the 6-31G* basis set, the SMD model achieves mean unsigned errors of 0.6-1.0 kcal/mol in the solvation free energies of tested neutrals and mean unsigned errors of 4 kcal/mol on average for ions with either Gaussian03 or GAMESS.
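
    Of the two SMD components, the cavity-dispersion-solvent-structure term has the simplest bookkeeping: a sum of atomic surface tensions times solvent-accessible surface areas. The sketch below shows that sum with placeholder tensions and areas; real SMD surface tensions are geometry-dependent functions of the solvent descriptors, not the constants used here.

        import numpy as np

        # Placeholder per-element surface tensions (cal mol^-1 A^-2), NOT actual
        # SMD parameters, and hypothetical SASA values for a methanol-like solute.
        sigma = {"H": 10.0, "C": 25.0, "O": -15.0}
        atoms = ["C", "H", "H", "H", "O", "H"]
        sasa  = np.array([20.5, 15.2, 15.0, 14.8, 12.1, 9.7])   # A^2

        # G_CDS = sum over atoms of sigma_k * SASA_k, converted to kcal/mol.
        G_CDS = sum(sigma[a] * s for a, s in zip(atoms, sasa)) / 1000.0
        print(round(G_CDS, 3))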

  4. One Time Passwords in Everything (OPIE): Experiences with Building and Using Stronger Authentication

    DTIC Science & Technology

    1995-01-01

    opiepasswd(1). The name change brings it more in line with its UNIX counterpart passwd(1), which should make both programs easier to remember for users. This... char *passwd); int opiehash(char *x, unsigned algorithm). The one-time password schemes implemented in OPIE, as first described in [Hal94], compute a... seed, passwd); while (sequence-- != 0) opiehash(result, algorithm); opiebtoe(result, words); Send words. ... 6 Deployment. Every machine that has
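
    The fragment above outlines the OPIE flow: crunch the seed and passphrase into an initial key, hash it `sequence` times, and encode the result as words. A minimal Python sketch of such an S/KEY-style hash chain follows; the 64-bit MD5 folding is illustrative of the OTP family rather than a byte-exact reimplementation of OPIE.

        import hashlib

        def fold_md5(data: bytes) -> bytes:
            # OTP-style folding: MD5, then XOR the two 8-byte halves down to 64 bits
            # (an assumption modeled on the S/KEY/OTP family of schemes).
            d = hashlib.md5(data).digest()
            return bytes(a ^ b for a, b in zip(d[:8], d[8:]))

        def one_time_password(seed: str, passphrase: str, sequence: int) -> bytes:
            # Mirrors the fragment: crunch seed+passphrase into an initial key,
            # then apply the hash 'sequence' times; a verifier stores hash^(n+1).
            key = fold_md5((seed + passphrase).encode())
            for _ in range(sequence):
                key = fold_md5(key)
            return key

        print(one_time_password("ke1234", "correct horse", 99).hex())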

  5. Automated 3-D method for the correction of axial artifacts in spectral-domain optical coherence tomography images

    PubMed Central

    Antony, Bhavna; Abràmoff, Michael D.; Tang, Li; Ramdas, Wishal D.; Vingerling, Johannes R.; Jansonius, Nomdo M.; Lee, Kyungmoo; Kwon, Young H.; Sonka, Milan; Garvin, Mona K.

    2011-01-01

    The 3-D spectral-domain optical coherence tomography (SD-OCT) images of the retina often do not reflect the true shape of the retina and are distorted differently along the x and y axes. In this paper, we propose a novel technique that uses thin-plate splines in two stages to estimate and correct the distinct axial artifacts in SD-OCT images. The method was quantitatively validated using nine pairs of OCT scans obtained with orthogonal fast-scanning axes, where a segmented surface was compared after both datasets had been corrected. The mean unsigned difference computed between the locations of this artifact-corrected surface after the single-spline and dual-spline correction was 23.36 ± 4.04 μm and 5.94 ± 1.09 μm, respectively, and showed a significant difference (p < 0.001 from two-tailed paired t-test). The method was also validated using depth maps constructed from stereo fundus photographs of the optic nerve head, which were compared to the flattened top surface from the OCT datasets. Significant differences (p < 0.001) were noted between the artifact-corrected datasets and the original datasets, where the mean unsigned differences computed over 30 optic-nerve-head-centered scans (in normalized units) were 0.134 ± 0.035 and 0.302 ± 0.134, respectively. PMID:21833377

  6. Absolute binding free energy calculations of CBClip host–guest systems in the SAMPL5 blind challenge

    PubMed Central

    Tofoleanu, Florentina; Pickard, Frank C.; König, Gerhard; Huang, Jing; Damjanović, Ana; Baek, Minkyung; Seok, Chaok; Brooks, Bernard R.

    2016-01-01

    Herein, we report the absolute binding free energy calculations of CBClip complexes in the SAMPL5 blind challenge. Initial conformations of CBClip complexes were obtained using docking and molecular dynamics simulations. Free energy calculations were performed using thermodynamic integration (TI) with soft-core potentials and Bennett's acceptance ratio (BAR) method based on a serial insertion scheme. We compared the results obtained with TI simulations with soft-core potentials and Hamiltonian replica exchange simulations with the serial insertion method combined with the BAR method. The results show that the difference between the two methods can be mainly attributed to the van der Waals free energies, suggesting that the simulations used for TI, the simulations used for BAR, or both are not fully converged, and that the two sets of simulations may have sampled different phase-space regions. The penalty scores of the force field parameters of the 10 guest molecules provided by the CHARMM General Force Field can be an indicator of the accuracy of binding free energy calculations. Among our submissions, the combination of docking and TI performed best, yielding a root-mean-square deviation of 2.94 kcal/mol and an average unsigned error of 3.41 kcal/mol for the ten guest molecules. These values were the best overall among all participants. However, our submissions had little correlation with experiments. PMID:27677749
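
    The TI side of such a calculation reduces to quadrature of window-averaged dH/dλ values over the coupling parameter. A minimal sketch with hypothetical per-window averages standing in for simulation output:

        import numpy as np

        # Hypothetical ensemble averages of dH/dlambda (kcal/mol) per window.
        lam    = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
        dHdlam = np.array([42.1, 30.5, 18.2, 4.7, -6.3, -11.0, -13.8])

        # Trapezoidal quadrature over the lambda windows: dG = integral of <dH/dlam>.
        dG = float(((dHdlam[1:] + dHdlam[:-1]) / 2 * np.diff(lam)).sum())
        print(round(dG, 2), "kcal/mol")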

  7. Protein-ligand binding free energy estimation using molecular mechanics and continuum electrostatics. Application to HIV-1 protease inhibitors

    NASA Astrophysics Data System (ADS)

    Zoete, V.; Michielin, O.; Karplus, M.

    2003-12-01

    A method is proposed for the estimation of the absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, ΔGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method were developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained, with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structure, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
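
    The parameterization described here is a linear least-squares problem in the three descriptors. The sketch below fits such a model on synthetic descriptor values standing in for the 16-complex training set; the generating coefficients and noise are arbitrary.

        import numpy as np

        # Model of the paper's form: dG_bind ~ a*SAS_bur + b*E_elec + c*dG_solv + d.
        # All descriptor values below are hypothetical stand-ins.
        rng = np.random.default_rng(3)
        SAS_bur = rng.uniform(800, 1400, 16)     # buried surface area, A^2
        E_elec  = rng.uniform(-80, -20, 16)      # kcal/mol
        dGsolv  = rng.uniform(10, 40, 16)        # kcal/mol
        dG_obs  = -0.01 * SAS_bur + 0.2 * E_elec + 0.1 * dGsolv + rng.normal(0, 0.5, 16)

        A = np.column_stack([SAS_bur, E_elec, dGsolv, np.ones(16)])
        coef, *_ = np.linalg.lstsq(A, dG_obs, rcond=None)
        resid = A @ coef - dG_obs
        print(coef, np.abs(resid).mean())        # fitted weights, unsigned mean error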

  8. Accurate Ionization Energies for Mononuclear Copper Complexes Remain a Challenge for Density Functional Theory.

    PubMed

    Dereli, Büsra; Ortuño, Manuel A; Cramer, Christopher J

    2018-04-17

    Copper is ubiquitous and its one-electron redox chemistry is central to many catalytic processes. Modeling such chemistry requires electronic structure methods capable of the accurate prediction of ionization energies (IEs) for compounds including copper in different oxidation states and supported by various ligands. Herein, we estimate IEs for 12 mononuclear Cu species previously reported in the literature by using 21 modern density functionals and the DLPNO-CCSD(T) wave function theory model; we consider extrapolated values of the latter to provide reference values of acceptable accuracy. Our results reveal a considerable diversity in functional performance. Although there is nearly always at least one functional that performs well for any given species, there are none that do so for every member of the test set, and certain cases are particularly pathological. Over the entire test set, the SOGGA11-X functional performs best with a mean unsigned error (MUE) of 0.22 eV. PBE0, ωB97X-D, CAM-B3LYP, M11-L, B3LYP, and M11 exhibit MUEs ranging between 0.23 and 0.34 eV. When including relativistic effects with the zero-order regular approximation, ωB97X-D, CAM-B3LYP, and PBE0 are found to provide the best accuracy. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Impact of mutations on the allosteric conformational equilibrium

    PubMed Central

    Weinkam, Patrick; Chen, Yao Chi; Pons, Jaume; Sali, Andrej

    2012-01-01

    Allostery in a protein involves effector binding at an allosteric site that changes the structure and/or dynamics at a distant, functional site. In addition to the chemical equilibrium of ligand binding, allostery involves a conformational equilibrium between one protein substate that binds the effector and a second substate that less strongly binds the effector. We run molecular dynamics simulations using simple, smooth energy landscapes to sample specific ligand-induced conformational transitions, as defined by the effector-bound and unbound protein structures. These simulations can be performed using our web server: http://salilab.org/allosmod/. We then develop a set of features to analyze the simulations and capture the relevant thermodynamic properties of the allosteric conformational equilibrium. These features are based on molecular mechanics energy functions, stereochemical effects, and structural/dynamic coupling between sites. Using a machine-learning algorithm on a dataset of 10 proteins and 179 mutations, we predict both the magnitude and sign of the allosteric conformational equilibrium shift by the mutation; the impact of a large identifiable fraction of the mutations can be predicted with an average unsigned error of 1 kBT. With similar accuracy, we predict the mutation effects for an 11th protein that was omitted from the initial training and testing of the machine-learning algorithm. We also assess which calculated thermodynamic properties contribute most to the accuracy of the prediction. PMID:23228330

  10. Prediction suppression and surprise enhancement in monkey inferotemporal cortex.

    PubMed

    Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R

    2017-07-01

    Exposing monkeys, over the course of days and weeks, to pairs of images presented in fixed sequence, so that each leading image becomes a predictor for the corresponding trailing image, affects neuronal visual responsiveness in area TE. At the end of the training period, neurons respond relatively weakly to a trailing image when it appears in a trained sequence and, thus, confirms prediction, whereas they respond relatively strongly to the same image when it appears in an untrained sequence and, thus, violates prediction. This effect could arise from prediction suppression (reduced firing in response to the occurrence of a probable event) or surprise enhancement (elevated firing in response to the omission of a probable event). To identify its cause, we compared firing under the prediction-confirming and prediction-violating conditions to firing under a prediction-neutral condition. The results provide strong evidence for prediction suppression and limited evidence for surprise enhancement. NEW & NOTEWORTHY In predictive coding models of the visual system, neurons carry signed prediction error signals. We show here that monkey inferotemporal neurons exhibit prediction-modulated firing, as posited by these models, but that the signal is unsigned. The response to a prediction-confirming image is suppressed, and the response to a prediction-violating image may be enhanced. These results are better explained by a model in which the visual system emphasizes unpredicted events than by a predictive coding model. Copyright © 2017 the American Physiological Society.

  11. Investigation of geomagnetic field forecasting and fluid dynamics of the core

    NASA Technical Reports Server (NTRS)

    Benton, E. R. (Principal Investigator)

    1981-01-01

    The magnetic determination of the depth of the core-mantle boundary using MAGSAT data is discussed. Refinements to the approach of using the pole-strength of Earth to evaluate the radius of the Earth's core-mantle boundary are reported. The downward extrapolation through the electrically conducting mantle was reviewed. Estimates of an upper bound for the time required for Earth's liquid core to overturn completely are presented. High order analytic approximations to the unsigned magnetic flux crossing the Earth's surface are also presented.

  12. Far Forward Life Support System (FFLSS) Phase II

    DTIC Science & Technology

    2001-05-01

    button will clear audible, flashing, or constant red alarms. Pushing the Config button drops down to the Level 2 menu (bottom line of Figure 8). In... disabled. Again, with testing this audible noise can be disabled. The visual flashing or constant red cannot be disabled. Pushing Return goes back to Level... Calling PLOTPIXEL(x, y) and PLOTLINE(x1, x2, y1, y2) is as simple as pushing the appropriate bytes (unsigned integers) onto the stack and jumping to the

  13. How Do Vision and Hearing Impact Pedestrian Time-to-Arrival Judgments?

    PubMed Central

    Roper, JulieAnne M.; Hassan, Shirin E.

    2014-01-01

    Purpose To determine how accurate normally-sighted male and female pedestrians were at making time-to-arrival (TTA) judgments of approaching vehicles when using just their hearing or both their hearing and vision. Methods Ten male and 14 female subjects with confirmed normal vision and hearing estimated the TTA of approaching vehicles along an unsignalized street under two sensory conditions: (i) using both habitual vision and hearing; and (ii) using habitual hearing only. All subjects estimated how long the approaching vehicle would take to reach them (i.e., the TTA). The actual TTA of vehicles was also measured using custom-made sensors. The error in TTA judgments for each subject under each sensory condition was calculated as the difference between the actual and estimated TTA. A secondary timing experiment was also conducted to adjust each subject's TTA judgments for their "internal metronome". Results Error in TTA judgments changed significantly as a function of both the actual TTA (p<0.0001) and sensory condition (p<0.0001). While no main effect of gender was found (p=0.19), the way the TTA judgments varied within each sensory condition differed between genders (p<0.0001). Females tended to be equally accurate under either condition (p≥0.01), with the exception of TTA judgments made when the actual TTA was two seconds or less or eight seconds or longer, for which the vision-and-hearing condition was more accurate (p≤0.002). Males made more accurate TTA judgments under the hearing-only condition for actual TTA values of five seconds or less (p<0.0001), after which there were no significant differences between the two conditions (p≥0.01). Conclusions Our data suggest that males and females use visual and auditory information differently when making TTA judgments. While the sensory condition did not affect the females' accuracy in judgments, males initially tended to be more accurate when using their hearing only. PMID:24509543

  14. An extended car-following model with consideration of vehicle to vehicle communication of two conflicting streams

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Li, Peng

    2017-05-01

    In this paper, we propose a car-following model to explore the influence of V2V communication on driving behavior at unsignalized intersections with two crossing streams, and to examine how a speed guidance strategy affects operational efficiency. The numerical results illustrate that the benefits of the guidance strategy can be enhanced by lengthening the guiding space range and increasing the maximum speed limit, and that the strategy is most suitable under low-to-medium traffic density and small safety-interval conditions.

  15. Linear modeling of steady-state behavioral dynamics.

    PubMed Central

    Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert

    2002-01-01

    The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed improved schedule forms and analytical methods to improve the precision of the measured transfer function, compared to previous work. The refinements include both the use of multiple reinforcement periods that improve spectral coverage and averaging of independently determined transfer functions. A linear analysis was then used to predict behavior observed for three different test schedules. The fidelity of these predictions was determined. PMID:11831782
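
    An empirical linear transfer function of this kind can be estimated as the frequency-domain ratio of output to input. The sketch below does this for a synthetic schedule/response pair; the pulse-train input and exponential response kernel are assumptions for illustration only, not the study's data.

        import numpy as np

        # Synthetic input x(t): sparse reinforcement "pulses" within a 2000-s trial;
        # synthetic output y(t): a lagged, smoothed response to those pulses.
        t = np.arange(2000)                                     # 1 sample per second
        x = (np.sin(2 * np.pi * t / 400) > 0.95).astype(float)
        y = np.convolve(x, np.exp(-np.arange(60) / 15.0))[:2000]

        # Empirical transfer function H(f) = Y(f) / X(f), avoiding division by ~0.
        X, Y = np.fft.rfft(x), np.fft.rfft(y)
        H = np.zeros_like(Y)
        mask = np.abs(X) > 1e-9
        H[mask] = Y[mask] / X[mask]
        print(np.abs(H[:5]))                                    # low-frequency gain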

  16. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging.

    PubMed

    Jaferzadeh, Keyvan; Gholami, Samaneh; Moon, Inkyu

    2016-12-20

    In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and are stored in 16-bit unsigned integer format. In the lossless case, predictive coding with JPEG lossless (JPEG-LS), JPEG2000 (JP2K), and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other; JP2K outperforms the other methods with the best CR. In the lossy case, JP2K and JP3D with different CRs are examined. Because lossy compression discards some data, the degradation level is measured by comparing morphological and biochemical parameters of RBCs before and after compression. The morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JP2K outperforms JP3D not only in mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our compression results with both algorithms demonstrate that at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters can remain within the range of reported values.
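
    The evaluation quantities used here, compression ratio and mean square error on 16-bit unsigned phase maps, are simple to compute once a codec has produced a byte stream and a decoded image. A small numpy sketch with a synthetic phase map; the byte count is a made-up stand-in for real codec output.

        import numpy as np

        def compression_metrics(original: np.ndarray, decoded: np.ndarray, n_bytes: int):
            # CR = raw size / compressed size; MSE measures lossy degradation.
            assert original.dtype == np.uint16   # phase maps stored as 16-bit unsigned
            cr  = original.nbytes / n_bytes
            mse = float(np.mean((original.astype(np.float64) - decoded) ** 2))
            return cr, mse

        # Hypothetical demo: a synthetic phase map and a noisy "decoded" copy.
        rng = np.random.default_rng(4)
        img = rng.integers(0, 65535, size=(512, 512), dtype=np.uint16)
        dec = np.clip(img + rng.normal(0, 2, img.shape), 0, 65535)
        print(compression_metrics(img, dec, n_bytes=120_000))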

  17. Simulation of Reversible Protein–Protein Binding and Calculation of Binding Free Energies Using Perturbed Distance Restraints

    PubMed Central

    2017-01-01

    Virtually all biological processes depend at some point on the interaction between proteins. The correct prediction of biomolecular binding free energies has many interesting applications in both basic and applied pharmaceutical research. While recent advances in the field of molecular dynamics (MD) simulations have proven the feasibility of the calculation of protein–protein binding free energies, the large conformational freedom of proteins and the complex free energy landscapes of binding processes make such calculations a difficult task. Moreover, the convergence and reversibility of the resulting free-energy values remain poorly described. In this work, an easy-to-use, yet robust approach for the calculation of standard-state protein–protein binding free energies using perturbed distance restraints is described. In the binding process the conformations of the proteins were restrained, as suggested earlier. Two approaches to avoid end-state problems upon release of the conformational restraints were compared. The method was evaluated by practical application to a small model complex of ubiquitin and the very flexible ubiquitin-binding domain of human DNA polymerase ι (UBM2). All computed free energy differences were closely monitored for convergence, and the calculated binding free energies had a mean unsigned deviation of only 1.4 or 2.5 kJ·mol–1 (for the two approaches) from experimental values. Statistical error estimates were on the order of thermal noise. We conclude that the presented method has promising potential for broad applicability to quantitatively describe protein–protein and various other kinds of complex formation. PMID:28898077

  18. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow in order to address urban traffic congestion. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed that considers the uncertain conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defines parallel updating rules for the motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that, regardless of the values of the vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability, and pedestrian generation probability, the system flow follows an "increasing-saturating-decreasing" trend as vehicle density increases; when the vehicle braking probability is low, emergency braking of vehicles is frequent and causes large fluctuations in the saturated flow; the saturated flow decreases slightly as the pedestrian acceleration crossing probability increases; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, reflecting the hesitation of pedestrians deciding whether to back up; and the maximum flow is sensitive to the pedestrian generation probability, decreasing rapidly as that probability increases and approaching zero when it exceeds 0.5. The simulations show that frequent crossing behavior strongly suppresses vehicle flow: as the pedestrian generation probability increases, the vehicle flow decreases and rapidly falls into serious congestion.
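
    The base model being extended here is the classic NaSch cellular automaton, whose four update rules (accelerate, brake to the gap, randomize, move) are sketched below on a circular road. The pedestrian-conflict rules of the paper are not included; parameters are illustrative.

        import numpy as np

        def nasch_step(pos, vel, v_max, p_brake, length, rng):
            # Classic Nagel-Schreckenberg update on a ring road.
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % length    # empty cells ahead
            vel = np.minimum(vel + 1, v_max)                # 1. acceleration
            vel = np.minimum(vel, gaps)                     # 2. braking to the gap
            vel[(rng.random(vel.size) < p_brake) & (vel > 0)] -= 1  # 3. randomization
            return (pos + vel) % length, vel                # 4. movement

        rng = np.random.default_rng(5)
        L, N = 200, 40                                      # road cells, vehicles
        pos = np.sort(rng.choice(L, N, replace=False))
        vel = np.zeros(N, int)
        for _ in range(500):
            pos, vel = nasch_step(pos, vel, v_max=5, p_brake=0.3, length=L, rng=rng)
        print(vel.mean() * N / L)                           # flow = density * speed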

  19. Study of magnetic helicity injection in the active region NOAA 9236 producing multiple flare-associated coronal mass ejection events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Sung-Hong; Cho, Kyung-Suk; Bong, Su-Chan

    To better understand a preferred magnetic field configuration and its evolution during coronal mass ejection (CME) events, we investigated the spatial and temporal evolution of photospheric magnetic fields in the active region NOAA 9236, which produced eight flare-associated CMEs during the period 2000 November 23-26. The time variations of the total magnetic helicity injection rate and the total unsigned magnetic flux are determined and examined not only in the entire active region but also in local regions such as the main sunspots and the CME-associated flaring regions, using SOHO/MDI magnetogram data. As a result, we found that (1) in the sunspots, a large amount of positive (right-handed) magnetic helicity was injected during most of the examined time period; (2) in the flare region, there was a continuous injection of negative (left-handed) magnetic helicity during the entire period, accompanied by a large increase of the unsigned magnetic flux; and (3) the flaring regions were mainly composed of emerging bipoles of magnetic fragments in which magnetic field lines have substantially favorable conditions for reconnection with large-scale, overlying, and oppositely directed magnetic field lines connecting the main sunspots. These observational findings can also be well explained by MHD numerical simulations of CME initiation (e.g., reconnection-favored emerging flux models). We therefore conclude that reconnection-favored magnetic fields in the flaring emerging flux regions play a crucial role in producing the multiple flare-associated CMEs in NOAA 9236.

  20. Indication of the Hanle Effect by Comparing the Scattering Polarization Observed by CLASP in the Ly α and Si iii 120.65 nm Lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, R.; Kubo, M.; Kano, R.

    The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) is a sounding rocket experiment that has provided the first successful measurement of the linear polarization produced by scattering processes in the hydrogen Ly α line (121.57 nm) radiation of the solar disk. In this paper, we report that the Si iii line at 120.65 nm also shows scattering polarization, and we compare the scattering polarization signals observed in the Ly α and Si iii lines in order to search for observational signatures of the Hanle effect. We focus on four selected bright structures and investigate how the U/I spatial variations vary between the Ly α wing, the Ly α core, and the Si iii line as a function of the total unsigned photospheric magnetic flux estimated from Solar Dynamics Observatory/Helioseismic and Magnetic Imager observations. In an internetwork region, the Ly α core shows an antisymmetric spatial variation across the selected bright structure, but it does not show it in other more magnetized regions. In the Si iii line, the spatial variation of U/I deviates from the above-mentioned antisymmetric shape as the total unsigned photospheric magnetic flux increases. A plausible explanation of this difference is the operation of the Hanle effect. We argue that diagnostic techniques based on the scattering polarization observed simultaneously in two spectral lines with very different sensitivities to the Hanle effect, like Ly α and Si iii, are of great potential interest for exploring the magnetism of the upper solar chromosphere and transition region.

  1. PHYSICAL PROPERTIES OF LARGE AND SMALL GRANULES IN SOLAR QUIET REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu Daren; Xie Zongxia; Hu Qinghua

    The normal mode observations of seven quiet regions obtained by the Hinode spacecraft are analyzed to study the physical properties of granules. An artificial intelligence technique is introduced to automatically find the spatial distribution of granules in feature spaces. In this work, we investigate the dependence of granular continuum intensity, mean Doppler velocity, and magnetic fields on granular diameter. We recognized 71,538 granules by an automatic segmentation technique and then extracted five properties to describe them: diameter, continuum intensity, Doppler velocity, and longitudinal and transverse magnetic flux density. To automatically explore the intrinsic structures of the granules in the five-dimensional parameter space, the X-means clustering algorithm and a one-rule classifier are introduced to define the rules for classifying the granules. It is found that diameter is the dominant parameter in classifying the granules, and two families of granules are derived: small granules with diameters smaller than 1.44″ and large granules with diameters larger than 1.44″. Based on statistical analysis of the detected granules, the following results are derived: (1) the averages of diameter, continuum intensity, and upward Doppler velocity of large granules are larger than those of small granules; (2) the averages of absolute longitudinal, transverse, and unsigned flux density of large granules are smaller than those of small granules; (3) for small granules, the average continuum intensity increases with diameter, while the averages of Doppler velocity and of transverse, absolute longitudinal, and unsigned magnetic flux density decrease with diameter, whereas the mean properties of large granules are stable; and (4) the intensity distributions of all granules and of small granules do not follow a Gaussian distribution, while that of large granules nearly follows a normal distribution with a peak at 1.04 I_0.
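
    X-means extends k-means by letting a model-selection criterion (BIC) choose the number of clusters. As a rough stand-in, the sketch below scans k with ordinary k-means on a synthetic five-feature granule table and keeps the best silhouette score; all numbers are invented for illustration:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score
        from sklearn.preprocessing import StandardScaler

        # Hypothetical feature matrix: one row per detected granule with the five
        # properties used in the paper (diameter, continuum intensity, Doppler
        # velocity, longitudinal and transverse flux density).
        rng = np.random.default_rng(1)
        small = rng.normal([1.0, 1.00, 0.3, 30.0, 60.0], 0.2, size=(500, 5))
        large = rng.normal([1.8, 1.04, 0.5, 15.0, 40.0], 0.2, size=(500, 5))
        X = StandardScaler().fit_transform(np.vstack([small, large]))

        # X-means grows k and accepts splits via BIC; as a simple stand-in we scan
        # k with ordinary k-means and keep the best silhouette score.
        best_k, best_score = None, -1.0
        for k in range(2, 6):
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            score = silhouette_score(X, labels)
            if score > best_score:
                best_k, best_score = k, score
        print(best_k, round(best_score, 3))  # expect k = 2 for this synthetic set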

  2. Application of an Artificial Neural Network to the Prediction of OH Radical Reaction Rate Constants for Evaluating Global Warming Potential.

    PubMed

    Allison, Thomas C

    2016-03-03

    Rate constants for reactions of chemical compounds with the hydroxyl radical are a key quantity used in evaluating the global warming potential of a substance. Experimental determination of these rate constants is essential, but it can be difficult and time-consuming, and high-level quantum chemistry predictions of the rate constant can suffer from the same issues. It is therefore valuable to devise estimation schemes that give reasonable results for a variety of chemical compounds. In this article, the construction and training of an artificial neural network (ANN) for the prediction of rate constants at 298 K for reactions of the hydroxyl radical with a diverse set of molecules is described. Input to the ANN consists of counts of the chemical bonds and bends present in the target molecule. The ANN is trained using 792 •OH reaction rate constants taken from the NIST Chemical Kinetics Database. The mean unsigned percent error (MUPE) is 12% for the training set and 51% for the testing set. It is shown that the present methodology yields rate constants of reasonable accuracy for a diverse set of inputs. The results are compared to high-quality literature values and to another estimation scheme. This ANN methodology is expected to be of use in a wide range of applications for which •OH reaction rate constants are required. The model uses only information that can be gathered from a 2D representation of the molecule, making the present approach particularly appealing, especially for screening applications.
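
    A toy version of the pipeline the abstract describes — bond/bend counts in, rate constant out, scored by MUPE — might look like the following; the descriptor layout, network size, and data are all assumptions, not the paper's:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Hypothetical inputs: each row counts bond/bend types in a molecule
        # (e.g. C-H, C-C, C=C, O-H, ...); here random toy data.
        rng = np.random.default_rng(2)
        X = rng.integers(0, 8, size=(792, 12)).astype(float)
        true_w = rng.normal(size=12)
        y = X @ true_w * 0.1 - 12.0 + rng.normal(0, 0.1, 792)  # log10 k near -12

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                           random_state=0).fit(X_tr, y_tr)

        # Mean unsigned percent error (MUPE) on the rate constants themselves
        k_pred, k_true = 10.0 ** ann.predict(X_te), 10.0 ** y_te
        mupe = 100.0 * np.mean(np.abs(k_pred - k_true) / k_true)
        print(f"test MUPE = {mupe:.1f}%")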

  3. VR-SCOSMO: A smooth conductor-like screening model with charge-dependent radii for modeling chemical reactions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuechler, Erich R.; Giese, Timothy J.

    To better represent the solvation effects observed along reaction pathways, and of ionic species in general, a charge-dependent variable-radii smooth conductor-like screening model (VR-SCOSMO) is developed. This model is implemented and parameterized with a third-order density-functional tight-binding quantum model, DFTB3/3OB-OPhyd, a quantum method developed for organic and biological compounds that utilizes a specific parameterization for phosphate hydrolysis reactions. Unlike most other applications of the DFTB3/3OB model, an auxiliary set of atomic multipoles is constructed from the underlying DFTB3 density matrix and used to couple the solute to the solvent response surface. The resulting method is variational, produces smooth energies, and has analytic gradients. As a baseline, a conventional SCOSMO model with fixed radii is also parameterized. The SCOSMO and VR-SCOSMO models are shown to have comparable accuracy in reproducing neutral-molecule absolute solvation free energies; however, the VR-SCOSMO model reduces the mean unsigned errors (MUEs) for ionic compounds by half (about 2-3 kcal/mol). The VR-SCOSMO model provides accuracy similar to that of a charge-dependent Poisson-Boltzmann model introduced by Hou et al. [J. Chem. Theory Comput. 6, 2303 (2010)]. VR-SCOSMO is then used to examine the hydrolysis of trimethylphosphate and seven other phosphoryl transesterification reactions with different leaving groups. Two-dimensional energy landscapes are constructed for these reactions, and the calculated barriers are compared to those obtained from ab initio polarizable continuum calculations and from experiment. Results of the VR-SCOSMO model are in good agreement in both cases, capturing the rate-limiting reaction barrier and the nature of the transition state.

  4. [Use of personal computers by diplomates of anesthesiology in Japan].

    PubMed

    Yamamoto, K; Ohmura, S; Tsubokawa, T; Kita, M; Kushida, Y; Kobayashi, T

    1999-04-01

    Use of personal computers by diplomates of the Japanese Board of Anesthesiology working in Japanese university hospitals was investigated. Unsigned questionnaires were returned by 232 diplomates from 18 anesthesia departments. The ages of the responders ranged from the twenties to the sixties. Personal computer systems are used by 223 diplomates (96.1%), while nine (3.9%) do not use them. The computer systems used are: Apple Macintosh 77%, IBM-compatible PC 21%, and UNIX 2%. Although 197 diplomates have e-mail addresses, only 162 of them actually send and receive e-mail. Diplomates in their fifties use e-mail most actively, and those in their sixties come second.

  5. Gradient-based multiconfiguration Shepard interpolation for generating potential energy surfaces for polyatomic reactions.

    PubMed

    Tishchenko, Oksana; Truhlar, Donald G

    2010-02-28

    This paper describes and illustrates a way to construct multidimensional representations of reactive potential energy surfaces (PESs) by a multiconfiguration Shepard interpolation (MCSI) method based only on gradient information, that is, without using any Hessian information from electronic structure calculations. MCSI, which is called multiconfiguration molecular mechanics (MCMM) in previous articles, is a semiautomated method designed for constructing full-dimensional PESs for subsequent dynamics calculations (classical trajectories, full quantum dynamics, or variational transition state theory with multidimensional tunneling). The MCSI method is based on Shepard interpolation of Taylor series expansions of the coupling term of a 2 × 2 electronically diabatic Hamiltonian matrix with the diagonal elements representing nonreactive analytical PESs for reactants and products. In contrast to the previously developed method, these expansions are truncated in the present version at the first order, and, therefore, no input of electronic structure Hessians is required. The accuracy of the interpolated energies is evaluated for two test reactions, namely, the reaction OH + H2 → H2O + H and the hydrogen atom abstraction from a model of alpha-tocopherol by methyl radical. The latter reaction involves 38 atoms and a 108-dimensional PES. The mean unsigned errors averaged over a wide range of representative nuclear configurations (corresponding to an energy range of 19.5 kcal/mol in the former case and 32 kcal/mol in the latter) are found to be within 1 kcal/mol for both reactions, based on 13 gradients in one case and 11 in the other. The gradient-based MCMM method can be applied for efficient representations of multidimensional PESs in cases where analytical electronic structure Hessians are too expensive or unavailable, and it provides new opportunities to employ high-level electronic structure calculations for dynamics at an affordable cost.
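
    The adiabatic surfaces come from diagonalizing the 2 × 2 diabatic matrix: the diagonal elements are the analytic reactant/product PESs and the Shepard-interpolated coupling enters off-diagonally. A minimal sketch of that final diagonalization step (Python; the toy diabats are invented):

        import numpy as np

        def adiabatic_energies(V11, V12, V22):
            """Eigenvalues of the 2 x 2 diabatic Hamiltonian; the lower root is
            the reactive ground-state PES, and the gap closes where V12 -> 0
            while V11 = V22."""
            mean, half_gap = 0.5 * (V11 + V22), 0.5 * (V11 - V22)
            root = np.sqrt(half_gap**2 + V12**2)
            return mean - root, mean + root

        # Toy 1D cut: two crossing diabats (V11 = q, V22 = -q) with a constant
        # coupling standing in for the Shepard-interpolated V12(q).
        q = np.linspace(-2, 2, 5)
        lower, upper = adiabatic_energies(q, 0.3, -q)
        print(np.round(lower, 3))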

  6. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-01-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most boundaries and instead relies on regional image texture. Consequently, the proposed method is robust in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps: one partitions the data into important and less important sections, and the other localizes the internal layers. In the first step, the pixels/voxels are grouped into rectangular/cubic sets to form graph nodes. The weights of the graph are calculated from the geometric distances between pixels/voxels and the differences of their mean intensities. The first diffusion map clusters the data into three parts, the second of which is the area of interest; the other two sections are excluded from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment, and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two groups (glaucoma patients and normal controls). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 μm and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively.
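
    A minimal sketch of the graph construction and embedding step (Python, NumPy only): the weight form follows the abstract (geometric distance plus mean-intensity difference), but the Gaussian normalization, the scales, and the data are assumptions:

        import numpy as np

        def graph_weights(centers, intensities, sigma_d, sigma_i):
            """Edge weights for the diffusion map: nodes are pixel/voxel groups,
            weights decay with geometric distance and mean-intensity difference."""
            d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
            i2 = (intensities[:, None] - intensities[None, :]) ** 2
            return np.exp(-d2 / sigma_d**2 - i2 / sigma_i**2)

        def diffusion_map(W, n_components=2):
            """First nontrivial eigenvectors of the row-normalized random walk."""
            P = W / W.sum(axis=1, keepdims=True)
            vals, vecs = np.linalg.eig(P)
            order = np.argsort(-vals.real)
            return vecs[:, order[1:n_components + 1]].real

        # Toy example: 50 nodes on a line with two intensity populations
        centers = np.linspace(0, 1, 50)[:, None]
        intensities = np.where(np.arange(50) < 25, 0.2, 0.8)
        W = graph_weights(centers, intensities, sigma_d=0.2, sigma_i=0.3)
        coords = diffusion_map(W)
        print(coords.shape)  # (50, 2)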

  7. A full Bayes before-after study accounting for temporal and spatial effects: Evaluating the safety impact of new signal installations.

    PubMed

    Sacchi, Emanuele; Sayed, Tarek; El-Basyouny, Karim

    2016-09-01

    Recently, important advances in road safety statistics have been brought about by methods able to address issues other than the choice of the best error structure for modeling crash data. In particular, accounting for spatial and temporal interdependence, i.e., the notion that collision occurrence at one site or time period depends on that at others, has become an important issue that needs further research. Overall, autoregressive models can be used for this purpose, as they can specify that the output variable depends on its own previous values and on a stochastic term. Spatial effects have been investigated and applied mostly in the context of developing safety performance functions (SPFs) to relate crash occurrence to highway characteristics. Hence, there is a need for studies that attempt to estimate the effectiveness of safety countermeasures by including the spatial interdependence of road sites within the context of an observational before-after (BA) study. Moreover, the combination of temporal dynamics and spatial effects on crash frequency has not been explored in depth for SPF development. Therefore, the main goal of this research was to carry out a BA study accounting for spatial effects and temporal dynamics in evaluating the effectiveness of a road safety treatment. The countermeasure analyzed was the installation of traffic signals at unsignalized urban/suburban intersections in British Columbia (Canada). The full Bayes approach was selected as the statistical framework to develop the models. The results demonstrated that zone variation was a major component of total crash variability and that spatial effects were alleviated by clustering intersections together. Finally, the methodology used also allowed estimation of the treatment's effectiveness in the form of crash modification factors and functions with time trends.

  8. Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.

    PubMed

    Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M

    2004-04-05

    In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline)europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression that computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because the Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes selected from the Cambridge Structural Database 2003 and compared our predicted ground-state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better capable of predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances in all 96 complexes, which is 0.3564 Å for the original SMLC, is lowered to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that this model is more applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light conversion molecular devices.

  9. Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.

    PubMed

    Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M

    2006-01-01

    The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004 (CSD) with nitrogen or oxygen directly bonded to the samarium ion, was used as the training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). Promethium, on the other hand, is an artificial radioactive element with no stable isotope, and so far there are no promethium complex crystallographic structures in the CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination-polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first sphere of coordination, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present-day ab initio/ECP geometries, while being hundreds of times faster.

  10. MN15-L: A New Local Exchange-Correlation Functional for Kohn-Sham Density Functional Theory with Broad Accuracy for Atoms, Molecules, and Solids.

    PubMed

    Yu, Haoyu S; He, Xiao; Truhlar, Donald G

    2016-03-08

    Kohn-Sham density functional theory is widely used for applications of electronic structure theory in chemistry, materials science, and condensed-matter physics, but the accuracy depends on the quality of the exchange-correlation functional. Here, we present a new local exchange-correlation functional called MN15-L that predicts accurate results for a broad range of molecular and solid-state properties including main-group bond energies, transition metal bond energies, reaction barrier heights, noncovalent interactions, atomic excitation energies, ionization potentials, electron affinities, total atomic energies, hydrocarbon thermochemistry, and lattice constants of solids. The MN15-L functional has the same mathematical form as a previous meta-nonseparable gradient approximation exchange-correlation functional, MN12-L, but it is improved because we optimized it against a larger database, designated 2015A, and included smoothness restraints; the optimization has a much better representation of transition metals. The mean unsigned error on 422 chemical energies is 2.32 kcal/mol, which is the best among all tested functionals, with or without nonlocal exchange. The MN15-L functional also provides good results for test sets that are outside the training set. A key issue is that the functional is local (no nonlocal exchange or nonlocal correlation), which makes it relatively economical for treating large and complex systems and solids. Another key advantage is that medium-range correlation energy is built in so that one does not need to add damped dispersion by molecular mechanics in order to predict accurate noncovalent binding energies. We believe that the MN15-L functional should be useful for a wide variety of applications in chemistry, physics, materials science, and molecular biology.

  11. Multiconfiguration pair-density functional theory: barrier heights and main group and transition metal energetics.

    PubMed

    Carlson, Rebecca K; Li Manni, Giovanni; Sonnenberger, Andrew L; Truhlar, Donald G; Gagliardi, Laura

    2015-01-13

    Kohn-Sham density functional theory, resting on the representation of the electronic density and kinetic energy by a single Slater determinant, has revolutionized chemistry, but for open-shell systems, the Kohn-Sham Slater determinant has the wrong symmetry properties as compared to an accurate wave function. We have recently proposed a theory, called multiconfiguration pair-density functional theory (MC-PDFT), in which the electronic kinetic energy and classical Coulomb energy are calculated from a multiconfiguration wave function with the correct symmetry properties, and the rest of the energy is calculated from a density functional, called the on-top density functional, that depends on the density and the on-top pair density calculated from this wave function. We also proposed a simple way to approximate the on-top density functional by translation of Kohn-Sham exchange-correlation functionals. The method is much less expensive than other post-SCF methods for calculating the dynamical correlation energy starting with a multiconfiguration self-consistent-field wave function as the reference wave function, and initial tests of the theory were quite encouraging. Here, we provide a broader test of the theory by applying it to bond energies of main-group molecules and transition metal complexes, barrier heights and reaction energies for diverse chemical reactions, proton affinities, and the water dimerization energy. Averaged over 56 data points, the mean unsigned error is 3.2 kcal/mol for MC-PDFT, as compared to 6.9 kcal/mol for Kohn-Sham theory with a comparable density functional. MC-PDFT is more accurate on average than complete active space second-order perturbation theory (CASPT2) for main-group small-molecule bond energies, alkyl bond dissociation energies, transition-metal-ligand bond energies, proton affinities, and the water dimerization energy.

  12. On the elimination of the electronic structure bottleneck in on the fly nonadiabatic dynamics for small to moderate sized (10-15 atom) molecules using fit diabatic representations based solely on ab initio electronic structure data: The photodissociation of phenol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiaolei, E-mail: virtualzx@gmail.com; Yarkony, David R., E-mail: yarkony@jhu.edu

    2016-01-14

    In this work, we demonstrate that for moderate sized systems, here a system with 13 atoms, global coupled potential energy surfaces defined for several electronic states over a wide energy range, and for distinct regions of nuclear coordinate space characterized by distinct electron configurations, can be constructed with precise energetics and an excellent description of non-adiabatic interactions in all regions. This is accomplished using a recently reported algorithm for constructing quasi-diabatic representations, H^d, of adiabatic electronic states coupled by conical intersections. In this work, the algorithm is used to construct an H^d to describe the photodissociation of phenol from its first and second excited electronic states. The representation treats all 33 internal degrees of freedom in an even-handed manner. The ab initio adiabatic electronic structure data used to construct the fit are obtained exclusively from multireference configuration interaction wave functions with single and double excitations, comprising 88 × 10⁶ configuration state functions, at geometries determined by quasi-classical trajectories. Since the algorithm uses energy gradients and derivative couplings in addition to electronic energies to construct H^d, data at only 7379 nuclear configurations are required to construct a representation that describes all nuclear configurations involved in H-atom photodissociation to produce the phenoxyl radical in its ground or first excited electronic state, with a mean unsigned energy error of 202.9 cm⁻¹ for electronic energies below 60,000 cm⁻¹.

  13. VR-SCOSMO: A smooth conductor-like screening model with charge-dependent radii for modeling chemical reactions.

    PubMed

    Kuechler, Erich R; Giese, Timothy J; York, Darrin M

    2016-04-28

    To better represent the solvation effects observed along reaction pathways, and of ionic species in general, a charge-dependent variable-radii smooth conductor-like screening model (VR-SCOSMO) is developed. This model is implemented and parameterized with a third-order density-functional tight-binding quantum model, DFTB3/3OB-OPhyd, a quantum method developed for organic and biological compounds that utilizes a specific parameterization for phosphate hydrolysis reactions. Unlike most other applications of the DFTB3/3OB model, an auxiliary set of atomic multipoles is constructed from the underlying DFTB3 density matrix and used to couple the solute to the solvent response surface. The resulting method is variational, produces smooth energies, and has analytic gradients. As a baseline, a conventional SCOSMO model with fixed radii is also parameterized. The SCOSMO and VR-SCOSMO models are shown to have comparable accuracy in reproducing neutral-molecule absolute solvation free energies; however, the VR-SCOSMO model reduces the mean unsigned errors (MUEs) for ionic compounds by half (about 2-3 kcal/mol). The VR-SCOSMO model provides accuracy similar to that of a charge-dependent Poisson-Boltzmann model introduced by Hou et al. [J. Chem. Theory Comput. 6, 2303 (2010)]. VR-SCOSMO is then used to examine the hydrolysis of trimethylphosphate and seven other phosphoryl transesterification reactions with different leaving groups. Two-dimensional energy landscapes are constructed for these reactions, and the calculated barriers are compared to those obtained from ab initio polarizable continuum calculations and from experiment. Results of the VR-SCOSMO model are in good agreement in both cases, capturing the rate-limiting reaction barrier and the nature of the transition state.

  14. The radiology digital dashboard: effects on report turnaround time.

    PubMed

    Morgan, Matthew B; Branstetter, Barton F; Lionetti, David M; Richardson, Jeremy S; Chang, Paul J

    2008-03-01

    As radiology departments transition to near-complete digital information management, work flows and their supporting informatics infrastructure are becoming increasingly complex. Digital dashboards can integrate separate computerized information systems and summarize key work flow metrics in real time to facilitate informed decision making. A PACS-integrated digital dashboard function designed to alert radiologists to their unsigned report queue status, coupled with an actionable link to the report signing application, resulted in a 24% reduction in the time between transcription and report finalization. The dashboard was well received by radiologists who reported high usage for signing reports. Further research is needed to identify and evaluate other potentially useful work flow metrics for inclusion in a radiology clinical dashboard.

  15. Empirical Behavioral Models to Support Alternative Tools for the Analysis of Mixed-Priority Pedestrian-Vehicle Interaction in a Highway Capacity Context

    PubMed Central

    Rouphail, Nagui M.

    2011-01-01

    This paper presents behavioral-based models for describing pedestrian gap acceptance at unsignalized crosswalks in a mixed-priority environment, where some drivers yield and some pedestrians cross in gaps. Logistic regression models are developed to predict the probability of pedestrian crossings as a function of vehicle dynamics, pedestrian assertiveness, and other factors. In combination with prior work on probabilistic yielding models, the results can be incorporated in a simulation environment, where they can more fully describe the interaction of these two modes. The approach is intended to supplement the HCM analytical procedure for locations where significant interaction occurs between drivers and pedestrians, including modern roundabouts.
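
    A hedged sketch of such a gap-acceptance model (Python with scikit-learn); the predictors and coefficients are invented, not the paper's calibrated values:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical predictors per interaction event: available gap (s),
        # approaching vehicle speed (m/s), pedestrian assertiveness flag.
        rng = np.random.default_rng(3)
        n = 400
        gap = rng.uniform(1.0, 12.0, n)
        speed = rng.uniform(5.0, 15.0, n)
        assertive = rng.integers(0, 2, n)
        logit = -4.0 + 0.9 * gap - 0.15 * speed + 0.8 * assertive
        crossed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        X = np.column_stack([gap, speed, assertive])
        model = LogisticRegression().fit(X, crossed)

        # Probability an assertive pedestrian crosses a 6 s gap at 10 m/s
        print(model.predict_proba([[6.0, 10.0, 1]])[0, 1])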

  16. The effects of extinction-aroused attention on context conditioning.

    PubMed

    Nelson, James Byron; Fabiano, Andrew M; Lamoureux, Jeffrey A

    2018-04-01

    Two experiments assessed the effects of extinguishing a conditioned cue on subsequent context conditioning. Each experiment used a different video-game method in which sensors predicted attacking spaceships and participants responded to the sensor in a way that prepared them for the upcoming attack. In Experiment 1, extinction of a cue that signaled a spaceship-attack outcome facilitated subsequent learning when the attack occurred unsignaled. In Experiment 2, extinction of a cue facilitated subsequent learning regardless of whether the spaceship outcome was the same as or different from that used in the earlier training. In neither experiment did the extinction context become inhibitory. Results are discussed in terms of current associative theories of attention and conditioning.

  17. Effect of planar cuts' orientation on the perceived surface layout and object's shape.

    PubMed

    Bocheva, Nadejda

    2009-07-01

    The effect of the orientation of the cutting planes producing planar curves over the surface of an object on its perceived pose and shape was investigated for line drawings representing three-dimensional objects. The results suggest that the orientational flow produced by the surface curves introduces an apparent object rotation in depth and in the image plane and changes in its perceived elongation. The apparent location of the nearest points is determined by the points of maximal view-dependent unsigned curvature of the surface curves. The data are discussed in relation to the interaction of the shape-from-silhouette system and shape-from-contour system and its effect on the interpretation of the surface contours with respect to the surface geometry.

  18. Nonparametric weighted stochastic block models

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2018-01-01

    We present a Bayesian formulation of weighted stochastic block models that can be used to infer the large-scale modular structure of weighted networks, including their hierarchical organization. Our method is nonparametric, and thus does not require the prior knowledge of the number of groups or other dimensions of the model, which are instead inferred from data. We give a comprehensive treatment of different kinds of edge weights (i.e., continuous or discrete, signed or unsigned, bounded or unbounded), as well as arbitrary weight transformations, and describe an unsupervised model selection approach to choose the best network description. We illustrate the application of our method to a variety of empirical weighted networks, such as global migrations, voting patterns in congress, and neural connections in the human brain.
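
    For readers who want to try this, the method is implemented in the author's graph-tool library. A minimal usage sketch, assuming a recent graph-tool version and its bundled C. elegans connectome, whose integer edge weights live in the edge property "value":

        # Fit a nested (hierarchical) weighted SBM; the weights enter as an
        # edge covariate with a discrete-geometric model (integer weights).
        import graph_tool.all as gt

        g = gt.collection.data["celegansneural"]

        state = gt.minimize_nested_blockmodel_dl(
            g,
            state_args=dict(recs=[g.ep.value], rec_types=["discrete-geometric"]),
        )

        state.print_summary()   # group counts at each hierarchy level
        print(state.entropy())  # description length of the fit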

  19. Adinkra (in)equivalence from Coxeter group representations: A case study

    NASA Astrophysics Data System (ADS)

    Chappell, Isaac; Gates, S. James; Hübsch, T.

    2014-02-01

    Using a Mathematica™ code, we present a straightforward numerical analysis of the 384-dimensional solution space of signed permutation 4×4 matrices, which, in sets of four, provide representations of the 𝒢ℛ(4, 4) algebra, closely related to the 𝒩 = 1 (simple) supersymmetry algebra in four-dimensional space-time. Following ideas discussed in previous papers about automorphisms and classification of adinkras and corresponding supermultiplets, we make a new and alternative proposal to use equivalence classes of the (unsigned) permutation group S4 to define distinct representations of higher-dimensional spin bundles within the context of adinkras. For this purpose, the definition of a dual operator akin to the well-known Hodge star is found to partition the space of these 𝒢ℛ(4, 4) representations into three suggestive classes.

  20. Automated flare forecasting using a statistical learning technique

    NASA Astrophysics Data System (ADS)

    Yuan, Yuan; Shih, Frank Y.; Jing, Ju; Wang, Hai-Min

    2010-08-01

    We present a new method for automatically forecasting the occurrence of solar flares based on photospheric magnetic measurements. The method is a cascading combination of an ordinal logistic regression model and a support vector machine classifier. The predictive variables are three photospheric magnetic parameters, i.e., the total unsigned magnetic flux, length of the strong-gradient magnetic polarity inversion line, and total magnetic energy dissipation. The output is true or false for the occurrence of a certain level of flares within 24 hours. Experimental results, from a sample of 230 active regions between 1996 and 2005, show the accuracies of a 24-hour flare forecast to be 0.86, 0.72, 0.65 and 0.84 respectively for the four different levels. Comparison shows an improvement in the accuracy of X-class flare forecasting.
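
    A simplified stand-in for the cascade (Python with scikit-learn): a logistic stage — binary here, whereas the paper uses an ordinal model — scores each region, and an SVM makes the final call. Features and thresholds are synthetic:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        # Toy stand-in: three photospheric predictors per active region.
        rng = np.random.default_rng(4)
        n = 230
        X = np.column_stack([
            rng.lognormal(3.0, 0.5, n),   # total unsigned flux (arbitrary units)
            rng.lognormal(1.0, 0.7, n),   # strong-gradient PIL length
            rng.lognormal(2.0, 0.6, n),   # total energy dissipation
        ])
        flare = (np.log(X).sum(axis=1) + rng.normal(0, 1, n) > 6.5).astype(int)

        # Cascade: the logistic stage scores each region, and an SVM makes
        # the final yes/no call from the features plus that score.
        stage1 = LogisticRegression().fit(np.log(X), flare)
        score = stage1.predict_proba(np.log(X))[:, [1]]
        stage2 = SVC().fit(np.hstack([np.log(X), score]), flare)
        print((stage2.predict(np.hstack([np.log(X), score])) == flare).mean())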

  1. Successful application of the DBLOC method to the hydroxylation of camphor by cytochrome p450

    PubMed Central

    Jerome, Steven V.; Hughes, Thomas F.

    2015-01-01

    The activation barrier for the hydroxylation of camphor by cytochrome P450 was computed using a mixed quantum mechanics/molecular mechanics (QM/MM) model of the full protein-ligand system and a fully QM calculation using a cluster model of the active site at the B3LYP/LACVP*/LACV3P** level of theory, which consisted of B3LYP/LACV3P** single-point energies computed at B3LYP/LACVP* optimized geometries. From the QM/MM calculation, a barrier height of 17.5 kcal/mol was obtained, while the experimental value is known to be less than or equal to 10 kcal/mol. This process was repeated using the D3 correction for hybrid DFT in order to investigate whether an inadequate treatment of dispersion interactions was responsible for the overestimation of the barrier. While the D3 correction does reduce the computed barrier to 13.3 kcal/mol, it is still in disagreement with experiment. After application of a series of transition-metal-optimized localized orbital corrections (DBLOC), without any refitting of parameters, the barrier was further reduced to 10.0 kcal/mol, consistent with the experimental results. As a second, independent test, the DBLOC method was applied to C-H bond activation in methane monooxygenase (MMO). The barrier in MMO is known, by experiment, to be 15.4 kcal/mol. After applying the DBLOC corrections to the MMO barrier computed by B3LYP in a previous study, and accounting for dispersion with Grimme's D3 method, the unsigned deviation from experiment improved from 3.2 to 2.3 kcal/mol. These results suggest that the combination of dispersion plus localized orbital corrections can yield significant quantitative improvements in modeling the catalytic chemistry of transition-metal-containing enzymes, within the limitations of the statistical errors of the model, which appear to be on the order of approximately 2 kcal/mol. PMID:26441133

  2. Semiempirical Quantum Chemistry Model for the Lanthanides: RM1 (Recife Model 1) Parameters for Dysprosium, Holmium and Erbium

    PubMed Central

    Filho, Manoel A. M.; Dutra, José Diogo L.; Rocha, Gerd B.; Simas, Alfredo M.; Freire, Ricardo O.

    2014-01-01

    Complexes of dysprosium, holmium, and erbium find many applications as single-molecule magnets, as contrast agents for magnetic resonance imaging, as anti-cancer agents, in optical telecommunications, etc. Therefore, the development of tools that can be proven helpful to complex design is presently an active area of research. In this article, we advance a major improvement to the semiempirical description of lanthanide complexes: the Recife Model 1, RM1, model for the lanthanides, parameterized for the trications of Dy, Ho, and Er. By representing each such lanthanide in the RM1 calculation as a three-electron atom with a set of 5d, 6s, and 6p semiempirical orbitals, the accuracy of the previous sparkle models, mainly concentrated on lanthanide-oxygen and lanthanide-nitrogen distances, is extended to other types of bonds in the trication complexes' coordination polyhedra, such as lanthanide-carbon, lanthanide-chlorine, etc. This is all the more important as lanthanide-carbon distances, for example, comprise about 30% of all coordination-polyhedron distances across the complexes of Dy, Ho, and Er considered. Our results indicate that the unsigned mean error for the lanthanide-carbon distances dropped from an average of 0.30 Å for the sparkle models to 0.04 Å for the RM1 model for the lanthanides, over a total of 509 such distances in the set of all Dy, Ho, and Er complexes considered. A similar improvement took place for the other distances as well, such as lanthanide-chlorine, lanthanide-bromine, lanthanide-phosphorus, and lanthanide-sulfur. Thus, the RM1 model for the lanthanides advanced in this article broadens the range of application of semiempirical models to lanthanide complexes by comprehensively including many other types of bonds not adequately described by the previous models. PMID:24497945

  3. Reaction of SO2 with OH in the atmosphere.

    PubMed

    Long, Bo; Bao, Junwei Lucas; Truhlar, Donald G

    2017-03-15

    The OH + SO2 reaction plays a critical role in understanding the oxidation of SO2 in the atmosphere, and its rate constant is critical for clarifying the fate of atmospheric SO2. The rate constant of the OH + SO2 reaction is calculated here by using beyond-CCSDT correlation energy calculations for a benchmark, validated density functional methods for direct dynamics, canonical variational transition state theory with anharmonicity and multidimensional tunneling for the high-pressure rate constant, and system-specific quantum RRK theory for pressure effects; the combination of these methods can compete in accuracy with experiments. There has been a long-term debate in the literature about whether the OH + SO2 reaction is barrierless, but our calculations indicate a positive barrier, with a transition structure whose enthalpy of activation is 0.27 kcal mol⁻¹ at 0 K. Our results show that the high-pressure limiting rate constant of the OH + SO2 reaction has a positive temperature dependence, but the rate constant at low pressures has a negative temperature dependence. The computed high-pressure limiting rate constant at 298 K is 1.25 × 10⁻¹² cm³ molecule⁻¹ s⁻¹, in excellent agreement with the value (1.3 × 10⁻¹² cm³ molecule⁻¹ s⁻¹) recommended in the most recent comprehensive evaluation for atmospheric chemistry. We show that the atmospheric lifetime of SO2 with respect to oxidation by OH depends strongly on altitude (in the range 0-50 km) due to the falloff effect. We introduce a new interpolation procedure for fitting the combined temperature and pressure dependence of the rate constant; it fits the calculated rate constants over the whole range with a mean unsigned error of only 7%. The present results provide reliable kinetics data for this specific reaction and demonstrate convenient theoretical methods that can be reliable for predicting rate constants of other gas-phase reactions.

  4. Evaluation of B3LYP, X3LYP, and M06-Class Density Functionals for Predicting the Binding Energies of Neutral, Protonated, and Deprotonated Water Clusters.

    PubMed

    Bryantsev, Vyacheslav S; Diallo, Mamadou S; van Duin, Adri C T; Goddard, William A

    2009-04-14

    In this paper we assess the accuracy of the B3LYP, X3LYP, and newly developed M06-L, M06-2X, and M06 functionals in predicting the binding energies of neutral and charged water clusters, including (H2O)n (n = 2-8, 20), H3O⁺(H2O)n (n = 1-6), and OH⁻(H2O)n (n = 1-6). We also compare the predicted energies of two ion hydration and neutralization reactions on the basis of the calculated binding energies. In all cases, we use as benchmarks calculated binding energies of water clusters extrapolated to the complete basis set limit of second-order Møller-Plesset perturbation theory, with the effects of higher-order correlation estimated by coupled-cluster theory with single, double, and perturbative triple excitations in the aug-cc-pVDZ basis set. We rank the accuracy of the functionals on the basis of the mean unsigned error (MUE) between calculated benchmark and density functional theory energies; the corresponding MUE (kcal/mol) for each functional is listed in parentheses. We find that M06-L (0.73) and M06 (0.84) give the most accurate binding energies using very extended basis sets such as aug-cc-pV5Z. For more affordable basis sets, the best methods for predicting the binding energies of water clusters are M06-L/aug-cc-pVTZ (1.24), B3LYP/6-311++G(2d,2p) (1.29), and M06/aug-cc-pVTZ (1.33). M06-L/aug-cc-pVTZ also gives more accurate energies for the neutralization reactions (1.38), whereas B3LYP/6-311++G(2d,2p) gives more accurate energies for the ion hydration reactions (1.69).
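
    The ranking metric itself is straightforward; a tiny sketch with invented numbers (not the paper's data):

        import numpy as np

        def mue(benchmark, predicted):
            """Mean unsigned error (kcal/mol) against benchmark binding energies."""
            return np.mean(np.abs(np.asarray(predicted) - np.asarray(benchmark)))

        # Hypothetical numbers for a handful of clusters (illustration only):
        bench = [-5.0, -15.9, -27.6, -36.3, -45.9]   # CCSD(T)-corrected benchmarks
        dft   = [-4.6, -15.1, -26.5, -35.4, -44.3]   # some functional
        print(f"MUE = {mue(bench, dft):.2f} kcal/mol")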

  5. New Parameters for Higher Accuracy in the Computation of Binding Free Energy Differences upon Alanine Scanning Mutagenesis on Protein-Protein Interfaces.

    PubMed

    Simões, Inês C M; Costa, Inês P D; Coimbra, João T S; Ramos, Maria J; Fernandes, Pedro A

    2017-01-23

    Knowing how proteins make stable complexes enables the development of inhibitors to preclude protein-protein (P:P) binding. The identification of the specific interfacial residues that contribute most to protein binding, denominated hot spots, is thus critical. Here, we refine an in silico alanine scanning mutagenesis protocol based on a residue-dependent dielectric-constant version of the Molecular Mechanics/Poisson-Boltzmann Surface Area method. We have used a large data set of structurally diverse P:P complexes to redefine the residue-dependent dielectric constants used in the determination of binding free energies. The accuracy of the method was validated through comparison with experimental data, considering the per-residue P:P binding free energy differences (ΔΔG_binding) upon alanine mutation. Different protocols were tested, i.e., a geometry optimization protocol and three molecular dynamics (MD) protocols: (1) one using explicit water molecules, (2) another with an implicit solvation model, and (3) a third where we carried out an accelerated MD with explicit water molecules. Using a set of protein dielectric constants (within the range from 1 to 20), we showed that dielectric constants of 7 for nonpolar and polar residues and 11 for charged residues (and histidine) provide optimal ΔΔG_binding predictions. An overall mean unsigned error (MUE) of 1.4 kcal mol⁻¹ relative to experiment was achieved over 210 mutations with geometry optimization only, which was further reduced with MD simulations (MUE of 1.1 kcal mol⁻¹ for the MD employing explicit solvent). This recalibrated method allows for a better computational identification of hot spots, avoiding expensive and time-consuming experiments or thermodynamic integration/free energy perturbation/uBAR calculations, and will hopefully help new drug discovery campaigns in their quest for spots of interest for binding small drug-like molecules at P:P interfaces.
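
    The optimal dielectric assignment reported in the abstract can be captured in a few lines; the residue classification below is a common convention and an assumption on my part, not taken from the paper:

        # Interior dielectric constants reported as optimal in the abstract:
        # 7 for nonpolar and polar residues, 11 for charged residues and
        # histidine. The CHARGED set is a standard convention (assumption).
        CHARGED = {"ASP", "GLU", "LYS", "ARG", "HIS"}

        def interior_dielectric(resname: str) -> int:
            return 11 if resname.upper() in CHARGED else 7

        for res in ("ALA", "SER", "GLU", "HIS"):
            print(res, interior_dielectric(res))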

  6. Differential effects of adding and removing components of a context on the generalization of conditional freezing.

    PubMed

    González, Felisa; Quinn, Jennifer J; Fanselow, Michael S

    2003-01-01

    Rats were conditioned across 2 consecutive days where a single unsignaled footshock was presented in the presence of specific contextual cues. Rats were tested with contexts that had additional stimulus components either added or subtracted. Using freezing as a measure of conditioning, removal but not addition of a cue from the training context produced significant generalization decrement. The results are discussed in relation to the R. A. Rescorla and A. R. Wagner (1972), J. M. Pearce (1994), and A. R. Wagner and S. E. Brandon (2001) accounts of generalization. Although the present data are most consistent with elemental models such as Rescorla and Wagner, a slight modification of the Wagner-Brandon replaced-elements model that can account for differences in the pattern of generalization obtained with contexts and discrete conditional stimuli is proposed.

  7. Post-Extinction Conditional Stimulus Valence Predicts Reinstatement Fear: Relevance for Long Term Outcomes of Exposure Therapy

    PubMed Central

    Zbozinek, Tomislav D.; Hermans, Dirk; Prenoveau, Jason M.; Liao, Betty; Craske, Michelle G.

    2014-01-01

    Exposure therapy for anxiety disorders is translated from fear conditioning and extinction. While exposure therapy is effective in treating anxiety, fear sometimes returns after exposure. One pathway for return of fear is reinstatement: unsignaled unconditional stimuli following completion of extinction. The present study investigated the extent to which valence of the conditional stimulus (CS+) after extinction predicts return of CS+ fear after reinstatement. Participants (N = 84) engaged in a differential fear conditioning paradigm and were randomized to reinstatement or non-reinstatement. We hypothesized that more negative post-extinction CS+ valence would predict higher CS+ fear after reinstatement relative to non-reinstatement and relative to extinction retest. Results supported the hypotheses and suggest that strategies designed to decrease negative valence of the CS+ may reduce the return of fear via reinstatement following exposure therapy. PMID:24957680

  8. Novel electrochemical aptasensor for ultrasensitive detection of sulfadimidine based on covalently linked multi-walled carbon nanotubes and in situ synthesized gold nanoparticle composites.

    PubMed

    He, Baoshan; Du, Gengan

    2018-05-01

    In the current study, a sensitive electrochemical sensing strategy based on an aptamer (APT) for the detection of sulfadimidine (SM2) was developed. A bare gold electrode (AuE) was first modified with 2-aminoethanethiol (2-AET) through self-assembly, used as a linker for the subsequent immobilization of multi-walled carbon nanotube and gold nanoparticle composites (MWCNTs/AuNPs). Then, the thiolated APT was assembled onto the electrode via sulfur-gold affinity. When SM2 was present, the APT bound SM2 and formed a complex structure. The specific binding of SM2 and APT increased the impedance, hindering electron transfer between the electrode surface and the redox probe [Fe(CN)6]³⁻/⁴⁻ and producing a significant reduction of the signal. The SM2 concentration could be read from the difference in peak currents before and after target binding. Under optimized conditions, the linear dynamic range is 0.1 to 50 ng mL⁻¹, with a detection limit of 0.055 ng mL⁻¹. The sensor exhibited desirable selectivity against other sulfonamides and performed successfully when analyzing SM2 in pork samples. Graphical abstract: A new electrochemical biosensor for ultrasensitive detection of sulfadimidine (SM2) using a gold electrode modified with MWCNTs/AuNPs for signal amplification and an aptamer (APT) for improved selectivity.

  9. Integrated electrochemical gluconic acid biosensor based on self-assembled monolayer-modified gold electrodes. Application to the analysis of gluconic acid in musts and wines.

    PubMed

    Campuzano, S; Gamella, M; Serra, B; Reviejo, A J; Pingarrón, J M

    2007-03-21

    An integrated amperometric gluconic acid biosensor constructed using a gold electrode (AuE) modified with a self-assembled monolayer (SAM) of 3-mercaptopropionic acid (MPA), on which gluconate dehydrogenase (GADH, 0.84 U) and the mediator tetrathiafulvalene (TTF, 1.5 μmol) were coimmobilized by covering the electrode surface with a dialysis membrane, is reported. The working conditions selected were E_app = +0.15 V and 25 ± 1 °C. The useful lifetime of a single TTF-GADH-MPA-AuE was surprisingly long: after 53 days of continuous use, the biosensor exhibited 86% of its original sensitivity. A linear calibration plot was obtained for gluconic acid over the 6.0 × 10⁻⁷ to 2.0 × 10⁻⁵ M concentration range, with a limit of detection of 1.9 × 10⁻⁷ M. The effect of potential interferents (glucose, fructose, galactose, arabinose, and tartaric, citric, malic, ascorbic, gallic, and caffeic acids) on the biosensor response was evaluated. The behavior of the biosensor in a flow-injection system with amperometric detection was tested. The analytical usefulness of the biosensor was evaluated by determining gluconic acid in wine and must samples, and the results were validated by comparison with those provided by a commercial enzyme test kit.

  10. The Relationship Between X-Ray Radiance and Magnetic Flux

    NASA Astrophysics Data System (ADS)

    Pevtsov, Alexei A.; Fisher, George H.; Acton, Loren W.; Longcope, Dana W.; Johns-Krull, Christopher M.; Kankelborg, Charles C.; Metcalf, Thomas R.

    2003-12-01

    We use soft X-ray and magnetic field observations of the Sun (quiet Sun, X-ray bright points, active regions, and integrated solar disk) and active stars (dwarf and pre-main-sequence) to study the relationship between total unsigned magnetic flux, Φ, and X-ray spectral radiance, LX. We find that Φ and LX exhibit a very nearly linear relationship over 12 orders of magnitude, albeit with significant levels of scatter. This suggests a universal relationship between magnetic flux and the power dissipated through coronal heating. If the relationship can be assumed linear, it is consistent with an average volumetric heating rate Q~B/L, where B is the average field strength along a closed field line and L is its length between footpoints. The Φ-LX relationship also indicates that X-rays provide a useful proxy for the magnetic flux on stars when magnetic measurements are unavailable.
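
    Fitting such a relation is an ordinary least-squares problem in log-log space; a sketch with synthetic points (the offset and scatter are invented; only the near-unit slope mirrors the reported result):

        import numpy as np

        # Synthetic points spanning many decades, mimicking a nearly linear
        # (slope ~ 1 in log-log) flux-radiance relation with scatter.
        rng = np.random.default_rng(5)
        log_phi = rng.uniform(17, 29, 200)                # log10 Phi (Mx)
        log_lx = log_phi - 3.5 + rng.normal(0, 0.5, 200)  # log10 L_X, toy offset

        slope, intercept = np.polyfit(log_phi, log_lx, 1)
        print(f"log LX = {slope:.2f} log Phi + {intercept:.2f}")  # slope near 1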

  11. The Relationship Analysis between Motorcycle Emission and Road Facilities under Heterogeneous Traffic Situation

    NASA Astrophysics Data System (ADS)

    Ramli, M. I.; Hanami, Z. A.; Aly, S. H.; Pasra, M.; Hustim, M.

    2018-04-01

    Motor vehicles have long been a source of pollution in many major cities in the world, including in Indonesia. The increasing number of motor vehicles on the road leads, in consequence, to rising air pollution from vehicle exhaust. This research analyzes the relationship between motorcycle emissions and road facilities for each kind of facility on four different arterial road types. This is a quantitative study in which data were collected directly on four road types (2/1 UD, 4/1 UD, 4/2 D, and 6/2 UD) in Makassar, using a portable gas analyzer measurement system for the motorcycle emission test and GPS for speed tracking. The results show that emissions tend to increase at road facilities, with the unsignalized junction (JS3TB) showing the highest CO and CO2 emissions of all facility types.

  12. When good news leads to bad choices.

    PubMed

    McDevitt, Margaret A; Dunn, Roger M; Spetch, Marcia L; Ludvig, Elliot A

    2016-01-01

    Pigeons and other animals sometimes deviate from optimal choice behavior when given informative signals for delayed outcomes. For example, when pigeons are given a choice between an alternative that always leads to food after a delay and an alternative that leads to food only half of the time after a delay, preference changes dramatically depending on whether the stimuli during the delays are correlated with (signal) the outcomes or not. With signaled outcomes, pigeons show a much greater preference for the suboptimal alternative than with unsignaled outcomes. Key variables and research findings related to this phenomenon are reviewed, including the effects of durations of the choice and delay periods, probability of reinforcement, and gaps in the signal. We interpret the available evidence as reflecting a preference induced by signals for good news in a context of uncertainty. Other explanations are briefly summarized and compared.

  13. Largely reduced grid densities in a vibrational self-consistent field treatment do not significantly impact the resulting wavenumbers.

    PubMed

    Lutz, Oliver M D; Rode, Bernd M; Bonn, Günther K; Huck, Christian W

    2014-12-17

    Especially for larger molecules relevant to the life sciences, vibrational self-consistent field (VSCF) calculations can become unmanageably demanding even when only first- and second-order potential coupling terms are considered. This paper investigates to what extent the grid density of the VSCF's underlying potential energy surface can be reduced without sacrificing the accuracy of the resulting wavenumbers. Including single-mode and pair contributions, a reduction to eight points per mode did not introduce a significant deviation but improved the computational efficiency by a factor of four. A mean unsigned deviation of 1.3% from experiment could be maintained for the fifteen molecules under investigation, and the approach was found to be applicable to rigid, semi-rigid, and soft vibrational problems alike. Deprotonated phosphoserine, stabilized by two intramolecular hydrogen bonds, was investigated as an exemplary application.
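
    The factor of four follows from the pair-coupling grids scaling as the square of the per-mode grid size; a sketch of the point-count arithmetic (the 16-point baseline and the 30-mode molecule are assumptions, since the paper states only the reduced 8-point grid):

        from math import comb

        def n_pes_points(n_modes: int, grid: int) -> int:
            """PES evaluations for a VSCF potential with one- and two-mode
            coupling terms: N*g single-mode points plus C(N,2)*g^2 pair points."""
            return n_modes * grid + comb(n_modes, 2) * grid ** 2

        # The pair grids dominate, so halving the grid cuts the cost ~fourfold:
        for g in (16, 8):
            print(g, n_pes_points(30, g))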

  14. Does High Plasma-β Dynamics "Load" Active Regions?

    NASA Astrophysics Data System (ADS)

    McIntosh, Scott W.

    2007-03-01

    Using long-duration observations in the He II 304 Å passband of SOHO EIT, we investigate the spatial and temporal appearance of impulsive intensity fluctuations in the pixel light curves. These passband intensity fluctuations come from plasma emitting in the chromosphere, in the transition region, and in the lowest portions of the corona. We see that they are spatially tied to the supergranular scale and that their rate of occurrence is tied to the unsigned imbalance of the magnetic field in which they are observed. The signature of the fluctuations (in space and time) is consistent with their creation by magnetoconvection-forced reconnection, which is driven by the flow field in the high-β plasma. The signature of the intensity fluctuations around an active region suggests that the bulk of the mass and energy going into the active region complex observed in the hotter coronal plasma is supplied by this process, dynamically forcing the looped structure from beneath.

  15. Sorting permutations by prefix and suffix rearrangements.

    PubMed

    Lintzmayer, Carla Negri; Fertin, Guillaume; Dias, Zanoni

    2017-02-01

    Some interesting combinatorial problems have been motivated by genome rearrangements, which are mutations that affect large portions of a genome. When we represent genomes as permutations, the goal is to transform a given permutation into the identity permutation with the minimum number of rearrangements. When they affect segments from the beginning (respectively end) of the permutation, they are called prefix (respectively suffix) rearrangements. This paper presents results for rearrangement problems that involve prefix and suffix versions of reversals and transpositions considering unsigned and signed permutations. We give 2-approximation and ([Formula: see text])-approximation algorithms for these problems, where [Formula: see text] is a constant divided by the number of breakpoints (pairs of consecutive elements that should not be consecutive in the identity permutation) in the input permutation. We also give bounds for the diameters concerning these problems and provide ways of improving the practical results of our algorithms.
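
    A breakpoint counter for unsigned permutations, using the usual sentinel convention (0 at the front, n+1 at the end); the convention is standard in the rearrangement literature but is an assumption with respect to this particular paper:

        def breakpoints(perm):
            """Count breakpoints of an unsigned permutation: adjacent pairs that
            are not consecutive (in either direction) in the identity, with
            sentinels 0 and n+1 added at the ends."""
            n = len(perm)
            ext = [0] + list(perm) + [n + 1]
            return sum(1 for a, b in zip(ext, ext[1:]) if abs(a - b) != 1)

        print(breakpoints([1, 2, 3, 4]))  # 0: already sorted
        print(breakpoints([3, 1, 4, 2]))  # 5: every adjacency is a breakpoint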

  16. An Exploration of the Emission Properties of X-Ray Bright Points Seen with SDO

    NASA Technical Reports Server (NTRS)

    Saar, S. H.; Elsden, T.; Muglach, K.

    2012-01-01

    We present preliminary results of a study of X-ray Bright Point (XBP) EUV emission and its dependence on other properties. The XBPs were located using a new, automated XBP finder for AIA developed as part of the Feature Finding Team for SDO Computer Vision. We analyze XBPs near disk center, comparing AIA EUV fluxes, HMI LOS magnetic fields, and photospheric flow fields (derived from HMI data) to look for relationships between XBP emission, magnetic flux, velocity fields, and the XBP local environment. We find some evidence for differences in mean XBP temperature with environment. Unsigned magnetic flux is correlated with XBP emission, though other parameters play a role. The majority of XBP footpoints are approaching each other, though on average at a slight angle from head-on. We discuss the results in the context of XBP heating.

  17. Sparkle model for the calculation of lanthanide complexes: AM1 parameters for Eu(III), Gd(III), and Tb(III).

    PubMed

    Freire, Ricardo O; Rocha, Gerd B; Simas, Alfredo M

    2005-05-02

    Our previously defined Sparkle model (Inorg. Chem. 2004, 43, 2346) has been reparameterized for Eu(III) as well as newly parameterized for Gd(III) and Tb(III). The parameterizations have been carried out in a much more extensive manner, aimed at producing a new, more accurate model called Sparkle/AM1, mainly for the vast majority of all Eu(III), Gd(III), and Tb(III) complexes, which possess oxygen or nitrogen as coordinating atoms. All such complexes, which comprise 80% of all geometries present in the Cambridge Structural Database for each of the three ions, were classified into seven groups. These were regarded as a "basis" of chemical ambiance around a lanthanide, which could span the various types of ligand environments the lanthanide ion could be subjected to in any arbitrary complex where the lanthanide ion is coordinated to nitrogen or oxygen atoms. From these seven groups, 15 complexes were selected, defined as the parameterization set, and used with a numerical multidimensional nonlinear optimization to find the best parameter set for reproducing chemical properties. The new parameterizations yielded an unsigned mean error for all interatomic distances between the Eu(III) ion and the ligand atoms of the first sphere of coordination (for the 96 complexes considered in the present paper) of 0.09 Å, an improvement over the value of 0.28 Å for the previous model and the value of 0.68 Å for the first model (Chem. Phys. Lett. 1994, 227, 349). Similar accuracies have been achieved for Gd(III) (0.07 Å, 70 complexes) and Tb(III) (0.07 Å, 42 complexes). Qualitative improvements have been obtained as well; nitrates now coordinate correctly as bidentate ligands. The results therefore indicate that Eu(III), Gd(III), and Tb(III) Sparkle/AM1 calculations possess geometry prediction accuracies for lanthanide complexes with oxygen or nitrogen atoms in the coordination polyhedron that are competitive with present-day ab initio/effective core potential calculations, while being hundreds of times faster.

  18. Water 16-mers and hexamers: assessment of the three-body and electrostatically embedded many-body approximations of the correlation energy or the nonlocal energy as ways to include cooperative effects.

    PubMed

    Qi, Helena W; Leverentz, Hannah R; Truhlar, Donald G

    2013-05-30

    This work presents a new fragment method, the electrostatically embedded many-body expansion of the nonlocal energy (EE-MB-NE), and shows that it, along with the previously proposed electrostatically embedded many-body expansion of the correlation energy (EE-MB-CE), produces accurate results for large systems at the level of CCSD(T) coupled cluster theory. We primarily study water 16-mers, but we also test the EE-MB-CE method on water hexamers. We analyze the distributions of two-body and three-body terms to show why the many-body expansion of the electrostatically embedded correlation energy converges faster than the many-body expansion of the entire electrostatically embedded interaction potential. The average magnitude of the dimer contributions to the pairwise additive (PA) term of the correlation energy (which neglects cooperative effects) is only one-half of that of the average dimer contribution to the PA term of the expansion of the total energy; this explains why the mean unsigned error (MUE) of the EE-PA-CE approximation is only one-half of that of the EE-PA approximation. Similarly, the average magnitude of the trimer contributions to the three-body (3B) term of the EE-3B-CE approximation is only one-fourth of that of the EE-3B approximation, and the MUE of the EE-3B-CE approximation is one-fourth that of the EE-3B approximation. Finally, we test the efficacy of two- and three-body density functional corrections. One such density functional correction method, the new EE-PA-NE method, with the OLYP or the OHLYP density functional (where the OHLYP functional is the OptX exchange functional combined with the LYP correlation functional multiplied by 0.5), has the best performance-to-price ratio of any method whose computational cost scales as the third power of the number of monomers and is competitive in accuracy in the tests presented here with even the electrostatically embedded three-body approximation.
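
    The PA and 3B terms discussed above come from truncating the generic many-body expansion E ≈ Σ E_i + Σ ΔE_ij (+ Σ ΔE_ijk). A hedged sketch of that bookkeeping follows; `energy` stands in for a fragment electronic-structure calculation (in the EE-MB methods each fragment would be computed in a field of embedding charges), so the function here is a placeholder, not the authors' code:

    ```python
    from itertools import combinations

    def many_body_energy(monomers, energy, order=3):
        """Truncated many-body expansion: monomers + pair corrections
        (order >= 2) + triple corrections (order >= 3)."""
        E1 = {i: energy((m,)) for i, m in enumerate(monomers)}
        total = sum(E1.values())
        idx = range(len(monomers))
        if order >= 2:
            for i, j in combinations(idx, 2):
                total += energy((monomers[i], monomers[j])) - E1[i] - E1[j]
        if order >= 3:
            for i, j, k in combinations(idx, 3):
                total += (energy((monomers[i], monomers[j], monomers[k]))
                          - energy((monomers[i], monomers[j]))
                          - energy((monomers[i], monomers[k]))
                          - energy((monomers[j], monomers[k]))
                          + E1[i] + E1[j] + E1[k])
        return total
    ```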

  19. Heavy-Atom Tunneling Calculations in Thirteen Organic Reactions: Tunneling Contributions are Substantial, and Bell's Formula Closely Approximates Multidimensional Tunneling at ≥250 K.

    PubMed

    Doubleday, Charles; Armas, Randy; Walker, Dana; Cosgriff, Christopher V; Greer, Edyta M

    2017-10-09

    Multidimensional tunneling calculations are carried out for 13 reactions, to test the scope of heavy-atom tunneling in organic chemistry and to check the accuracy of one-dimensional tunneling models. The reactions include pericyclic, cycloaromatization, radical cyclization and ring opening, and SN2. When compared at the temperatures that give the same effective rate constant of 3×10⁻⁵ s⁻¹, tunneling accounts for 25-95% of the rate in 8 of the 13 reactions. Values of transmission coefficients predicted by Bell's formula, κ_Bell, agree well with multidimensional tunneling (canonical variational transition state theory with small curvature tunneling), κ_SCT. Mean unsigned deviations of κ_Bell vs. κ_SCT are 0.08, 0.04, and 0.02 at 250, 300, and 400 K, respectively. This suggests that κ_Bell is a useful first choice for predicting transmission coefficients in heavy-atom tunneling. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
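
    For orientation, one common leading-order form of Bell's parabolic-barrier correction is κ = (u/2)/sin(u/2) with u = hν‡/(k_B T), where ν‡ is the magnitude of the imaginary frequency at the transition state. The sketch below uses this form; it is an assumption about the variant being compared, not a transcription of the authors' calculations:

    ```python
    import math

    H  = 6.62607015e-34   # Planck constant, J s
    KB = 1.380649e-23     # Boltzmann constant, J/K
    C  = 2.99792458e10    # speed of light, cm/s

    def kappa_bell(nu_imag_cm, T):
        """Leading term of Bell's correction, valid for u < 2*pi."""
        u = H * (nu_imag_cm * C) / (KB * T)   # cm^-1 -> dimensionless u
        if u >= 2 * math.pi:
            raise ValueError("leading Bell term diverges for u >= 2*pi")
        return (u / 2) / math.sin(u / 2)

    # Example: a 400i cm^-1 barrier frequency at 300 K (hypothetical)
    print(kappa_bell(400.0, 300.0))   # ~1.2, a modest tunneling enhancement
    ```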

  20. Statistical study of free magnetic energy and flare productivity of solar active regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, J. T.; Jing, J.; Wang, S.

    Photospheric vector magnetograms from the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory are utilized as the boundary conditions to extrapolate both nonlinear force-free and potential magnetic fields in the solar corona. Based on the extrapolations, we are able to determine the free magnetic energy (FME) stored in active regions (ARs). Over 3000 vector magnetograms in 61 ARs were analyzed. We compare FME with the ARs' flare index (FI) and find a weak correlation (<60%) between FME and FI. FME shows slightly improved flare predictability relative to the total unsigned magnetic flux of ARs in the following two aspects: (1) the flare productivity predicted by FME is higher than that predicted by magnetic flux, and (2) the correlation between FI and FME is higher than that between FI and magnetic flux. However, this improvement is not significant enough to make a substantial difference in time-accumulated FI predictions (as opposed to predictions of individual flares).

  1. Spectral analysis of epicardial 60-lead electrograms in dogs with 4-week-old myocardial infarction.

    PubMed

    Hosoya, Y; Ikeda, K; Komatsu, T; Yamaki, M; Kubota, I

    2001-01-01

    There have been few studies on the spectral analysis of multiple-lead epicardial electrograms in chronic myocardial infarction. Spectral analysis of multi-lead epicardial electrograms was performed in 6 sham-operated dogs (N group) and 8 dogs with 4-week-old myocardial infarction (MI group). Four weeks after ligation of the left anterior descending coronary artery, a fast Fourier transform was performed on 60-lead epicardial electrograms, and the inverse transform was then performed on 5 frequency ranges from 0 to 250 Hz. From QRS onset to QRS offset, the time integral of the unsigned value of the reconstructed waveform was calculated and displayed as AQRS maps. On the 0-25 Hz AQRS map, there was no significant difference between the 2 groups. In the frequency ranges of 25-250 Hz, the MI group had significantly smaller AQRS values than the N group, solely in the infarct zone. It was shown that high-frequency potentials (25-250 Hz) within the QRS complex were reduced in the infarct zone.
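
    A hedged sketch of the pipeline described (band-limit via FFT, reconstruct, then integrate the unsigned waveform over the QRS window); the signal, sampling rate, and window indices below are hypothetical:

    ```python
    import numpy as np

    def band_limited_aqrs(signal, fs, band, qrs):
        """Keep one frequency band, reconstruct, integrate |waveform| over QRS."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        spectrum[(freqs < band[0]) | (freqs >= band[1])] = 0.0
        recon = np.fft.irfft(spectrum, n=len(signal))
        i0, i1 = qrs
        return np.trapz(np.abs(recon[i0:i1]), dx=1.0 / fs)

    fs = 1000.0                                    # sampling rate in Hz (assumed)
    t = np.arange(0, 1.0, 1.0 / fs)
    ecg = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 60 * t)  # toy trace
    print(band_limited_aqrs(ecg, fs, band=(25.0, 250.0), qrs=(400, 520)))
    ```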

  2. Multiphase Interface Tracking with Fast Semi-Lagrangian Contouring.

    PubMed

    Li, Xiaosheng; He, Xiaowei; Liu, Xuehui; Zhang, Jian J; Liu, Baoquan; Wu, Enhua

    2016-08-01

    We propose a semi-Lagrangian method for multiphase interface tracking. In contrast to previous methods, our method maintains an explicit polygonal mesh, reconstructed from an unsigned distance function and an indicator function, to track the interface of an arbitrary number of phases. The surface mesh is reconstructed at each step using an efficient multiphase polygonization procedure with precomputed stencils, while the distance and indicator functions are updated with accurate semi-Lagrangian path tracing from the meshes of the previous step. Furthermore, we provide an adaptive data structure, the multiphase distance tree, to accelerate the updating of both the distance function and the indicator function. In addition, the adaptive structure also enables us to contour the distance tree accurately with simple bisection techniques. The major advantage of our method is that it can easily handle topological changes without ambiguities and preserves both sharp features and volume well. We evaluate its efficiency, accuracy, and robustness in the results section with several examples.

  3. The role of contextual associations in producing the partial reinforcement acquisition deficit.

    PubMed

    Miguez, Gonzalo; Witnauer, James E; Miller, Ralph R

    2012-01-01

    Three conditioned suppression experiments with rats as subjects assessed the contributions of the conditioned stimulus (CS)-context and context-unconditioned stimulus (US) associations to the degraded stimulus control by the CS that is observed following partial reinforcement relative to continuous reinforcement training. In Experiment 1, posttraining associative deflation (i.e., extinction) of the training context after partial reinforcement restored responding to a level comparable to the one produced by continuous reinforcement. In Experiment 2, posttraining associative inflation of the context (achieved by administering unsignaled outcome presentations in the context) enhanced the detrimental effect of partial reinforcement. Experiment 3 found that the training context must be an effective competitor to produce the partial reinforcement acquisition deficit. When the context was down-modulated, the target regained behavioral control thereby demonstrating higher-order retrospective revaluation. The results are discussed in terms of retrospective revaluation, and are used to contrast the predictions of a performance-focused model with those of an acquisition-focused model. (c) 2012 APA, all rights reserved.

  4. Behavioral effects of delayed timeouts from reinforcement.

    PubMed

    Byrne, Tom; Poling, Alan

    2017-03-01

    Timeouts are sometimes used in applied settings to reduce target responses, and in some circumstances delays are unavoidably imposed between the onset of a timeout and the offset of the response that produces it. The present study examined the effects of signaled and unsignaled timeouts in rats exposed to concurrent fixed-ratio 1 fixed-ratio 1 schedules of food delivery, where each response on one lever, the location of which changed across conditions, produced both food and a delayed 10-s timeout. Delays of 0 to 38 s were examined. Delayed timeouts often, but not always, substantially reduced the number of responses emitted on the lever that produced timeouts relative to the number emitted on the lever that did not produce timeouts. In general, greater sensitivity was observed to delayed timeouts when they were signaled. These results demonstrate that delayed timeouts, like other delayed consequences, can affect behavior, albeit less strongly than immediate consequences. © 2017 Society for the Experimental Analysis of Behavior.

  5. An integrated bienzyme glucose oxidase-fructose dehydrogenase-tetrathiafulvalene-3-mercaptopropionic acid-gold electrode for the simultaneous determination of glucose and fructose.

    PubMed

    Campuzano, Susana; Loaiza, Oscar A; Pedrero, María; de Villena, F Javier Manuel; Pingarrón, José M

    2004-06-01

    A bienzyme biosensor for the simultaneous determination of glucose and fructose was developed by coimmobilising glucose oxidase (GOD), fructose dehydrogenase (FDH), and the mediator tetrathiafulvalene (TTF), by cross-linking with glutaraldehyde atop a 3-mercaptopropionic acid (MPA) self-assembled monolayer (SAM) on a gold disk electrode (AuE). The performance of this bienzyme electrode under batch and flow injection (FI) conditions, as well as for amperometric detection in high-performance liquid chromatography (HPLC), is reported. The order of enzyme immobilisation atop the MPA-SAM affected the biosensor amperometric response in terms of sensitivity, with the immobilisation order GOD, FDH, TTF being selected. Similar analytical characteristics to those obtained with single GOD or FDH SAM-based biosensors for glucose and fructose were achieved with the bienzyme electrode, indicating that no noticeable changes in the biosensor responses to the analytes occurred as a consequence of the coimmobilisation of both enzymes on the same MPA-AuE. The suitability of the bienzyme biosensor for the analysis of real samples under flow injection conditions was tested by determining glucose in two certified serum samples. The simultaneous determination of glucose and fructose in the same sample cannot be performed without a separation step because, at the detection potential used (+0.10 V), both sugars show an amperometric response. Consequently, HPLC with amperometric detection at the TTF-FDH-GOD-MPA-AuE was accomplished. Glucose and fructose were simultaneously determined in honey, a cola soft drink, and commercial apple juice, and the results were compared with those obtained using other reference methods.

  6. Synchronization unveils the organization of ecological networks with positive and negative interactions

    NASA Astrophysics Data System (ADS)

    Girón, Andrea; Saiz, Hugo; Bacelar, Flora S.; Andrade, Roberto F. S.; Gómez-Gardeñes, Jesús

    2016-06-01

    Network science has helped to understand the organization principles of the interactions among the constituents of large complex systems. Recently, however, the high resolution of collected data sets has made it possible to capture the different types of interactions coexisting within the same system. A particularly important example is that of systems with positive and negative interactions, a usual feature in social, neural, and ecological systems. The interplay of links of opposite sign presents natural difficulties for generalizing typical concepts and tools applied to unsigned networks and, moreover, poses some questions intrinsic to the signed nature of the network, such as: how are negative interactions balanced by positive ones so as to allow the coexistence and survival of competitors/foes within the same system? Here, we show that the synchronization phenomenon is an ideal benchmark for uncovering such balance and, as a byproduct, for assessing which nodes play a critical role in the overall organization of the system. We illustrate our findings with the analysis of synthetic and real ecological networks in which facilitation and competitive interactions coexist.
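
    A minimal sketch of the kind of benchmark described: Kuramoto phase oscillators coupled through a signed adjacency matrix, with the usual order parameter measuring global synchronization. The network and parameters are toy values, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[ 0,  1,  1, -1],
                  [ 1,  0, -1,  1],
                  [ 1, -1,  0,  1],
                  [-1,  1,  1,  0]], dtype=float)   # signed adjacency matrix
    omega = rng.normal(0.0, 0.1, size=4)            # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, size=4)       # initial phases
    K, dt = 1.0, 0.01

    for _ in range(5000):                           # forward-Euler integration
        # d(theta_i)/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)

    r = np.abs(np.exp(1j * theta).mean())           # order parameter in [0, 1]
    print(f"order parameter r = {r:.3f}")
    ```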

  7. Resolving Magnetic Flux Patches at the Surface of the Core

    NASA Technical Reports Server (NTRS)

    OBrien, Michael S.

    1996-01-01

    The geomagnetic field at a given epoch can be used to partition the surface of the liquid outer core into a finite number of contiguous regions in which the radial component of the magnetic flux density, B_r, is of one sign. These flux patches are instrumental in providing detail to surface fluid flows inferred from the changing geomagnetic field and in evaluating the validity of the frozen-flux approximation on which such inferences rely. Most of the flux patches in models of the modern field are small and enclose little flux compared to the total unsigned flux emanating from the core. To demonstrate that such patches are not required to explain the most spatially complete and accurate data presently available, those from the Magsat mission, I have constructed a smooth core field model that fits the Magsat data but does not possess small flux patches. I conclude that our present knowledge of the geomagnetic field does not allow us to resolve these features reliably at the core-mantle boundary; thus we possess less information about core flow than previously believed.

  8. Graph partitions and cluster synchronization in networks of oscillators

    PubMed Central

    Schaub, Michael T.; O’Clery, Neave; Billeh, Yazan N.; Delvenne, Jean-Charles; Lambiotte, Renaud; Barahona, Mauricio

    2017-01-01

    Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators. PMID:27781454
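
    As a concrete illustration of the graph-theoretical concept involved, the sketch below checks whether a candidate partition is externally equitable, i.e. every node in a cell has the same number of links to each other cell (a hypothetical 6-node example, not one of the paper's networks):

    ```python
    import numpy as np

    def is_external_equitable(A, cells):
        """True if, for every pair of distinct cells (ci, cj), all nodes of ci
        have the same number of links into cj."""
        for i, ci in enumerate(cells):
            for j, cj in enumerate(cells):
                if i == j:
                    continue
                counts = A[np.ix_(ci, cj)].sum(axis=1)
                if not np.all(counts == counts[0]):
                    return False
        return True

    # Two triangles joined by a perfect matching: each node has one cross link.
    A = np.zeros((6, 6), int)
    for u, v in [(0,1),(1,2),(0,2),(3,4),(4,5),(3,5),(0,3),(1,4),(2,5)]:
        A[u, v] = A[v, u] = 1
    print(is_external_equitable(A, [[0, 1, 2], [3, 4, 5]]))  # -> True
    ```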

  9. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    PubMed

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed, culminating in the best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm (OBMA); the second improves on it by applying a two-breakpoint-elimination heuristic, leading to a hybrid approach (Hybrid-OBMA). Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. The results showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA was shown to improve on the results of OBMA for permutations of length 60 or greater. The applicability of our algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
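
    A hedged sketch of the opposition-based learning idea: alongside each candidate permutation, its "opposite" is also evaluated and the fitter of the two is kept. One common definition of the opposite permutation is used here (the paper's exact operator may differ), with breakpoints as a fitness proxy:

    ```python
    def opposite(perm):
        """Opposition for permutations of {1..n}: map each element x to n+1-x."""
        n = len(perm)
        return [n + 1 - x for x in perm]

    def breakpoints(p):
        """Fitness proxy: fewer breakpoints suggests a shorter reversal distance."""
        q = [0] + list(p) + [len(p) + 1]
        return sum(abs(a - b) != 1 for a, b in zip(q, q[1:]))

    pi = [3, 1, 4, 2]
    best = min([pi, opposite(pi)], key=breakpoints)
    print(best, breakpoints(best))
    ```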

  10. High-order time-marching reinitialization for regional level-set functions

    NASA Astrophysics Data System (ADS)

    Pan, Shucheng; Lyu, Xiuxiu; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-02-01

    In this work, the time-marching reinitialization method is extended to compute the unsigned distance function in multi-region systems involving an arbitrary number of regions. High order and interface preservation are achieved by applying a simple mapping that transforms the regional level-set function into the level-set function, together with a high-order two-step reinitialization method that combines a closest-point-finding procedure with the HJ-WENO scheme. The convergence failure of the closest-point-finding procedure in three dimensions is addressed by employing a proposed multiple-junction treatment and a directional optimization algorithm. Simple test cases show that our method exhibits 4th-order accuracy for reinitializing the regional level-set functions and strictly satisfies the interface-preserving property. Reinitialization results for more complex cases with randomly generated diagrams show the capability of our method for an arbitrary number of regions N, with a computational effort independent of N. The proposed method has been applied to dynamic interfaces with different types of flows, and the results demonstrate high accuracy and robustness.
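
    A minimal sketch of the representation being exploited, under the assumption (stated loosely here, not the paper's exact mapping) that a multi-region interface is stored as one unsigned distance field plus an integer region indicator, from which a conventional signed level-set for any single region can be recovered by a sign flip:

    ```python
    import numpy as np

    def signed_level_set(unsigned_dist, labels, r):
        """Negative inside region r, positive outside."""
        return np.where(labels == r, -unsigned_dist, unsigned_dist)

    labels = np.array([[0, 0, 1],
                       [0, 1, 1],
                       [2, 2, 1]])           # three regions on a toy 3x3 grid
    dist = np.full(labels.shape, 0.5)        # toy unsigned distances to the interface
    print(signed_level_set(dist, labels, 1)) # conventional level set for region 1
    ```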

  11. Speed of CMEs and the Magnetic Non-Potentiality of their Source Active Regions

    NASA Technical Reports Server (NTRS)

    Tiwari, Sanjiv Kumar; Falconer, David Allen; Moore, Ronald L.; Venkatakrishnan, P.; Winebarger, Amy R.; Khazanov, Igor G.

    2014-01-01

    Most fast coronal mass ejections (CMEs) originate from solar active regions (ARs). The non-potentiality of ARs plausibly determines the speed of CMEs in the outer corona, and several other unexplored parameters might be important as well. To find the relation between the initial speed of CMEs and the non-potentiality of their source ARs, we identified over a hundred CMEs with source ARs via their co-produced flares. The speeds of the CMEs were collected from the SOHO LASCO CME catalog. We used vector magnetograms obtained with HMI/SDO to evaluate various magnetic non-potentiality parameters, e.g., magnetic free-energy proxies, twist, shear angle, signed shear angle, and net current. We also included several other parameters, e.g., total unsigned flux, magnetic area of ARs, and sunspot area, to investigate their correlation, if any, with the initial speeds of CMEs. Our preliminary results show that ARs with larger non-potentiality and area produce faster CMEs, but they can also produce slow ones. ARs with lesser non-potentiality and area generally produce only slower CMEs.

  12. Maximum saliency bias in binocular fusion

    NASA Astrophysics Data System (ADS)

    Lu, Yuhao; Stafford, Tom; Fox, Charles

    2016-07-01

    Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.

  13. Electroactive chitosan nanoparticles for the detection of single-nucleotide polymorphisms using peptide nucleic acids.

    PubMed

    Kerman, Kagan; Saito, Masato; Tamiya, Eiichi

    2008-08-01

    Here we report an electrochemical biosensor that allows simple and rapid analysis of nucleic acids by combining nuclease activity on nucleic acids with electroactive bionanoparticles. The detection of single-nucleotide polymorphisms (SNPs) using PNA probes takes advantage of the significant structural and physicochemical differences between full hybrids and SNP-containing PNA/DNA and DNA/DNA duplexes. Ferrocene-conjugated chitosan nanoparticles (Chi-Fc) were used as the electroactive indicator of hybridization. Chi-Fc had no affinity towards the neutral PNA probe immobilized on a gold electrode (AuE) surface. When the PNA probe on the electrode surface hybridized with a fully complementary target DNA, Chi-Fc electrostatically attached to the negatively charged phosphate backbone of the DNA on the surface and gave rise to a high electrochemical oxidation signal from ferrocene at approximately 0.30 V. Exposing the surface to a single-stranded-DNA-specific nuclease, Nuclease S1, was found to be very effective for removing nonspecifically adsorbed SNP DNA. An SNP in the target DNA relative to the PNA made the duplex susceptible to enzymatic digestion. After the enzymatic digestion and subsequent exposure to Chi-Fc, the presence of SNPs was determined by monitoring changes in the electrical current response of Chi-Fc. The method provided a detection limit of 1 fM (S/N = 3) for the target DNA oligonucleotide. Additionally, asymmetric PCR was employed to detect the presence of genetically modified organism (GMO) material in standard Roundup Ready soybean samples. PNA-mediated PCR amplification of real DNA samples was performed to detect SNPs related to alcohol dehydrogenase (ALDH). Chitosan nanoparticles are promising biomaterials for various analytical and pharmaceutical applications.

  14. Electroactive crown ester-Cu2+ complex with in-situ modification at molecular beacon probe serving as a facile electrochemical DNA biosensor for the detection of CaMV 35s.

    PubMed

    Zhan, Fengping; Liao, Xiaolei; Gao, Feng; Qiu, Weiwei; Wang, Qingxiang

    2017-06-15

    A novel electrochemical DNA biosensor has been facilely constructed by in-situ assembly of an electroactive 4'-aminobenzo-18-crown-6-copper(II) complex (AbC-Cu2+) on the free terminal of a hairpin-structured molecular beacon. The 3'-SH-modified molecular beacon probe was first immobilized on the gold electrode (AuE) surface through the self-assembly chemistry of the Au-S bond. The crown ester of AbC was then covalently coupled with the 5'-COOH of the molecular beacon and served as a platform to attach Cu2+ by coordination with the ether bonds (-O-) of the crown cycle. Thus, an electroactive molecular-beacon-based biosensing interface was constructed. In comparison with conventional methods for preparing electroactive molecular beacons, the approach presented in this work is much simpler and more reagent- and labor-saving. A selectivity study shows that the in-situ fabricated electroactive molecular beacon retains the excellent recognition ability of the pristine molecular beacon probe and can well differentiate various DNA fragments. The target DNA can be quantitatively determined over the range from 0.10 pM to 0.50 nM. A detection limit of 0.060 pM was estimated based on a signal-to-noise ratio of 3. When the biosensor was applied for the detection of cauliflower mosaic virus 35s (CaMV 35s) in soybean extraction samples, satisfactory results were achieved. This work opens a new strategy for facilely fabricating electrochemical sensing interfaces, which also shows great potential in aptasensor and immunosensor fabrication. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Data Encoding using Periodic Nano-Optical Features

    NASA Astrophysics Data System (ADS)

    Vosoogh-Grayli, Siamack

    Successful trials have been made, through a designed algorithm, to quantize, compress, and optically encode unsigned 8-bit integer values in the form of images using nano-optical features. The periodicity of the nano-scale features (nano-gratings) has been designed and investigated both theoretically and experimentally to create distinct states of variation (three on states and one off state). The use of easy-to-manufacture, machine-readable encoded data in secured authentication media has been employed previously in barcodes for bi-state (binary) models and in color barcodes for multiple-state models. This work has focused on implementing 4 states of variation per unit of information through periodic nano-optical structures that separate an incident wavelength into distinct colors (variation states) in order to create an encoding system. Compared to barcodes and magnetic stripes in secured finite-length storage media, the proposed system encodes and stores more data. The benefits of multiple states of variation in an encoding unit are (1) an increased numerically representable range, (2) increased storage density, and (3) a decreased number of typical-set elements for any ergodic or semi-ergodic source that emits these encoding units. A thorough investigation has targeted the effects of multi-varied-state nano-optical features on data storage density and the consequent data transmission rates. The results show that using nano-optical features for encoding data yields a data storage density of circa 800 Kbits/in2 when commercially available high-resolution flatbed scanner systems are used for readout. Such storage density is far greater than that of commercial finite-length secured storage media such as the barcode family, with a maximum practical density of 1 Kbit/in2, and the highest-density magnetic stripe cards, with a maximum density of circa 3 Kbits/in2. The numerically representable range of the proposed encoding unit for 4 states of variation is [0, 255]. The number of typical-set elements for an ergodic source emitting the optical encoding units, compared to a bi-state encoding unit (bit), shows a 36-orders-of-magnitude decrease for the error probability interval [0, 0.01].
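
    The arithmetic behind the 4-state unit is straightforward: each feature carries log2(4) = 2 bits, so an unsigned 8-bit value maps to exactly four features. An illustrative sketch of that mapping (the optical encoding details of the nano-grating system are not reproduced):

    ```python
    def encode_byte(value):
        """Split an unsigned 8-bit integer into four base-4 symbols (MSB first)."""
        assert 0 <= value <= 255
        return [(value >> shift) & 0b11 for shift in (6, 4, 2, 0)]

    def decode_byte(symbols):
        """Reassemble the byte from four base-4 symbols."""
        value = 0
        for s in symbols:
            value = (value << 2) | s
        return value

    symbols = encode_byte(201)        # 201 = 0b11001001 -> [3, 0, 2, 1]
    assert decode_byte(symbols) == 201
    print(symbols)
    ```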

  16. Automated intraretinal segmentation of SD-OCT images in normal and age-related macular degeneration eyes

    PubMed Central

    de Sisternes, Luis; Jonna, Gowtham; Moss, Jason; Marmor, Michael F.; Leng, Theodore; Rubin, Daniel L.

    2017-01-01

    This work introduces and evaluates an automated intra-retinal segmentation method for spectral-domain optical coherence tomography (SD-OCT) retinal images. While quantitative assessment of retinal features in SD-OCT data is important, manual segmentation is extremely time-consuming and subjective. We address challenges that have hindered prior automated methods, including poor performance with diseased retinas relative to healthy retinas, and data smoothing that obscures image features such as small retinal drusen. Our novel segmentation approach is based on the iterative adaptation of a weighted median process, wherein a three-dimensional weighting function is defined according to image intensity and gradient properties, and a set of smoothness constraints and pre-defined rules are considered. We compared the segmentation results for 9 segmented outlines associated with intra-retinal boundaries to those drawn by hand by two retinal specialists and to those produced by an independent state-of-the-art automated software tool in a set of 42 clinical images (from 14 patients). These images were obtained with a Zeiss Cirrus SD-OCT system, including healthy, early or intermediate AMD, and advanced AMD eyes. As a qualitative evaluation of accuracy, a highly experienced third independent reader blindly rated the quality of the outlines produced by each method. The accuracy and image detail of our method were superior in healthy and early or intermediate AMD eyes (98.15% and 97.78% of results not needing substantial editing) to the automated method we compared against. While the performance was not as good in advanced AMD (68.89%), it was still better than the manual outlines or the comparison method (which failed in such cases). We also tested our method's performance on images acquired with a different manufacturer's SD-OCT system, collected from a large publicly available data set (114 healthy and 255 AMD eyes), and compared the data quantitatively to reference standard markings of the internal limiting membrane and inner boundary of the retinal pigment epithelium, producing a mean unsigned positioning error of 6.04 ± 7.83 µm (mean under 2 pixels). Our automated method should be applicable to data from different OCT manufacturers and offers detailed layer segmentations in healthy and AMD eyes. PMID:28663874
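
    The core operator of the approach is a weighted median; a minimal sketch follows (the paper's three-dimensional weighting function is not reproduced, and the values and weights below are arbitrary):

    ```python
    import numpy as np

    def weighted_median(values, weights):
        """Smallest value whose cumulative normalized weight reaches 0.5."""
        order = np.argsort(values)
        v, w = np.asarray(values)[order], np.asarray(weights)[order]
        cdf = np.cumsum(w) / np.sum(w)
        return v[np.searchsorted(cdf, 0.5)]

    print(weighted_median([3.0, 1.0, 4.0, 2.0], [0.1, 0.2, 0.6, 0.1]))  # -> 4.0
    ```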

  17. A Survey of Nanoflare Properties in Solar Active Regions

    NASA Astrophysics Data System (ADS)

    Viall, N. M.; Klimchuk, J. A.

    2013-12-01

    We investigate coronal heating using a systematic technique to analyze the properties of nanoflares in active regions (ARs). Our technique computes cooling times, or time-lags, on a pixel-by-pixel basis using data taken with the Atmospheric Imaging Assembly onboard the Solar Dynamics Observatory. Our technique has the advantage that it allows us to analyze all of the coronal AR emission, including the so-called diffuse emission. We recently presented results using this time-lag analysis on NOAA AR 11082 (Viall & Klimchuk 2012) and found that the majority of the pixels contained cooling plasma along their line of sight, consistent with impulsive coronal nanoflare heating. Additionally, our results showed that the nanoflare energy is stronger in the AR core and weaker in the active region periphery. Are these results representative of the nanoflare properties exhibited in the majority of ARs, or is AR 11082 unique? Here we present the time-lag results for a survey of ARs and show that these nanoflare patterns are borne out in other active regions, for a range of ages, magnetic complexity, and total unsigned magnetic flux. Other aspects of the nanoflare properties, however, turn out to be dependent on certain AR characteristics.

  18. The effect of scopolamine on matching behavior and the estimation of relative reward magnitude.

    PubMed

    Leon, Matthew I; Rodriguez-Barrera, Vanessa; Amaya, Aldo

    2017-10-01

    We investigated the behavioral effects of scopolamine on rats that bar-pressed for trains of electrically stimulating pulses under concurrent variable interval schedules of reward. For the first half of the session (30 min), a 1:4 ratio in the programmed number of stimulation trains delivered at each option was in effect. At the start of the second half of the session, an unsignaled reversal in the relative train number (4:1) occurred. We tracked the relative magnitude of reward estimated for each contiguous pair of reinforced visits to competing options. Scopolamine hydrobromide led to a reduction in the relative magnitude of reward. A similar result was obtained in a follow-up test in which relative magnitude was manipulated by varying the pulse frequency of stimulation while equating the train number at each option. The effect of scopolamine hydrobromide could not be attributed to undermatching, side bias, or to an effect of scopolamine on the reward integration process. When the same rats were treated with scopolamine methylbromide, no effects on matching behavior were observed. Our results suggest a cholinergic basis for the computation of choice variables related to matching behavior. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Community Detection in Signed Networks: the Role of Negative ties in Different Scales

    PubMed Central

    Esmailian, Pouya; Jalili, Mahdi

    2015-01-01

    Extracting the community structure of complex network systems has many applications, from engineering to biology and the social sciences. Many algorithms exist to discover the community structure of networks. However, networks with positive and negative links have been significantly under-explored compared to unsigned ones. Trying to fill this gap, we measured the quality of partitions by introducing a Map Equation for signed networks. It is based on the assumption that negative relations weaken positive flow from a node towards a community; thus, external (internal) negative ties increase the probability of staying inside (escaping from) a community. We further extended the Constant Potts Model, providing a map spectrum for signed networks. Accordingly, a partition is selected by balancing between abridgment and expatiation of a signed network. Most importantly, the multi-scale spectrum of signed networks revealed how informative negative ties are at different scales, and quantified the topological placement of negative ties between dense positive ones. Moreover, an inconsistency was found in the signed Modularity: as the number of negative ties increases, the density of positive ties is neglected more. These results shed light on the community structure of signed networks. PMID:26395815

  20. MAGNETIC FLUX TRANSPORT AND THE LONG-TERM EVOLUTION OF SOLAR ACTIVE REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ugarte-Urra, Ignacio; Upton, Lisa; Warren, Harry P.

    2015-12-20

    With multiple vantage points around the Sun, Solar Terrestrial Relations Observatory (STEREO) and Solar Dynamics Observatory imaging observations provide a unique opportunity to view the solar surface continuously. We use He ii 304 Å data from these observatories to isolate and track ten active regions and study their long-term evolution. We find that active regions typically follow a standard pattern of emergence over several days followed by a slower decay that is proportional in time to the peak intensity in the region. Since STEREO does not make direct observations of the magnetic field, we employ a flux-luminosity relationship to infer the total unsigned magnetic flux evolution. To investigate this magnetic flux decay over several rotations we use a surface flux transport model, the Advective Flux Transport model, that simulates convective flows using a time-varying velocity field, and find that the model provides realistic predictions when information about the active region's magnetic field strength and distribution at peak flux is available. Finally, we illustrate how 304 Å images can be used as a proxy for magnetic flux measurements when magnetic field data are not accessible.

  1. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of 'quakes' and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution for the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
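
    The unsigned Stirling numbers of the first kind mentioned above, c(n, k), satisfy the standard recurrence c(n+1, k) = n·c(n, k) + c(n, k-1) with c(0, 0) = 1. A short sketch generating them:

    ```python
    def stirling_first_unsigned(nmax):
        """Table of unsigned Stirling numbers of the first kind, c[n][k]."""
        c = [[0] * (nmax + 1) for _ in range(nmax + 1)]
        c[0][0] = 1
        for n in range(nmax):
            for k in range(nmax + 1):
                c[n + 1][k] = n * c[n][k] + (c[n][k - 1] if k > 0 else 0)
        return c

    c = stirling_first_unsigned(5)
    print(c[4])   # row n=4: [0, 6, 11, 6, 1, 0]
    ```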

  2. Imprinting of molecular recognition sites combined with π-donor-acceptor interactions using bis-aniline-crosslinked Au-CdSe/ZnS nanoparticles array on electrodes: Development of electrochemiluminescence sensor for the ultrasensitive and selective detection of 2-methyl-4-chlorophenoxyacetic acid.

    PubMed

    Yang, Yukun; Fang, Guozhen; Wang, Xiaomin; Liu, Guiyang; Wang, Shuo

    2016-03-15

    A novel strategy is reported for the fabrication of a bis-aniline-crosslinked Au nanoparticle (NP)-CdSe/ZnS quantum dot (QD) array composite by facile one-step co-electropolymerization of thioaniline-functionalized AuNPs and thioaniline-functionalized CdSe/ZnS QDs onto thioaniline-functionalized Au electrodes (AuE). Stable and enhanced cathodic electrochemiluminescence (ECL) of the CdSe/ZnS QDs is observed on the modified electrode in neutral solution, suggesting promising applications in ECL sensing. An advanced ECL sensor is explored for the detection of 2-methyl-4-chlorophenoxyacetic acid (MCPA), which quenches the ECL signal through an electron-transfer pathway. Sensitive determination of MCPA with a limit of detection (LOD) of 2.2 nmol L(-1) (S/N = 3) is achieved via π-donor-acceptor interactions between MCPA and the bis-aniline bridging units. Impressively, the imprinting of molecular recognition sites into the bis-aniline-crosslinked AuNPs-CdSe/ZnS QDs array yields a functionalized electrode with an extremely sensitive response to MCPA in a linear range of 10 pmol L(-1) to 50 μmol L(-1), with a LOD of 4.3 pmol L(-1) (S/N = 3). The proposed ECL sensor, with its high sensitivity, good selectivity, reproducibility, and stability, has been successfully applied to the determination of MCPA in real samples with satisfactory recoveries. In this study, an ECL sensor combining the merits of QD-based ECL and molecular imprinting technology is reported for the first time. The developed ECL sensor holds great promise for the fabrication of QD-based ECL sensors with improved sensitivity and, furthermore, opens the door to wide applications of QD-based ECL in food safety and environmental monitoring. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Initiation and Activation of Faults in Dry and Wet Rock by Fluid Injection

    NASA Astrophysics Data System (ADS)

    Stanchits, S.; Mayr, S.; Shapiro, S. A.; Dresen, G.

    2008-12-01

    We studied fracturing of rock samples induced by water injection in axial compression tests on cylindrical specimens of Flechtingen sandstone and Aue granite of 50 mm diameter and 105-125 mm length. Samples were intact solid rock cylinders and cylinders with central boreholes of 5 mm diameter and 52 mm length or through-boreholes of 2.5 mm diameter. To monitor acoustic emissions (AE) and ultrasonic velocities, twelve P-wave and six polarized S-wave sensors were glued to the cylindrical surface of the rock. Full waveforms were stored in a 12-channel transient recording system (PROEKEL, Germany). Polarity of AE first motion was used to discriminate source types associated with tensile, shear, and pore-collapse cracking. To monitor strain, two pairs of orthogonally oriented strain gages were glued onto the specimen surface. Samples were deformed in two consecutive loading steps: (1) initial triaxial loading was performed at 20-50 MPa confining pressure on dry (under vacuum) or fully saturated samples until the yield point was reached; (2) in a second stage, distilled water was injected into the samples with pore pressure increasing up to 20 MPa. For saturated samples the pore pressure was increased in steps and in periodic pulses. Injection of water into dry porous sandstone resulted in propagation of an AE hypocenter cloud closely linked to propagation of the water front. The position of the migrating water front was estimated from ultrasonic velocity measurements and measurements of the injected water volume. The propagation rate of the AE-induced cloud parallel to bedding was higher than perpendicular to bedding, possibly related to permeability anisotropy. Nucleation of a brittle shear fault occurred at a critical pore pressure level, with a nucleation patch located at the central borehole. Micro-structural analysis of fractured samples shows excellent agreement between the location of AE hypocenters and macroscopic faults.

  4. Gold-superheavy-element interaction in diatomics and cluster adducts: A combined four-component Dirac-Kohn-Sham/charge-displacement study.

    PubMed

    Rampino, Sergio; Storchi, Loriano; Belpassi, Leonardo

    2015-07-14

    The chemistry of superheavy elements (Z ≥ 104) is actively investigated in atom-at-a-time experiments of volatility through adsorption on gold surfaces. In this context, common guidelines for interpretation based on group trends in the periodic table should be used cautiously, because relativistic effects play a central role and may cause predictions to fall short. In this paper, we present an all-electron four-component Dirac-Kohn-Sham comparative study of the interaction of gold with Cn (Z = 112), Fl (Z = 114), and Uuo (Z = 118) versus their lighter homologues of the 6th period, Hg, Pb, and Rn, plus the noble gas Xe. Calculations were carried out for Au-E (E = Hg, Cn, Pb, Fl, Xe, Rn, Uuo), Au7-E, and Au20-E (E = Hg, Cn, Pb, Fl, Rn) complexes, where Au7 (planar) and Au20 (pyramidal) are experimentally determined clusters having structures of increasing complexity. Results are analysed both in terms of the energetics of the complexes and of the electron charge rearrangement accompanying their formation. In line with the available experimental data, Cn and more markedly Fl are found to be less reactive than their lighter homologues. On the contrary, Uuo is found to be more reactive than Rn and Xe. Cn forms the weakest bond with the gold atom, compared to Fl and Uuo. The reactivity of Fl decreases with increasing gold-fragment size more rapidly than that of Cn and, as a consequence, the order of the reactivity of these two elements is inverted upon reaching the Au20-cluster adduct. Density difference maps between adducts and fragments reveal similarities in the behaviour of Cn and Xe, and in that of Uuo and the more reactive species Hg and Pb. These findings are given quantitative grounding via charge-displacement analysis.

  5. Photospheric Magnetic Evolution in the WHI Active Regions

    NASA Technical Reports Server (NTRS)

    Welsch, B. T.; McTiernan, J. M.; Christe, S.

    2012-01-01

    Sequences of line-of-sight (LOS) magnetograms recorded by the Michelson Doppler Imager are used to quantitatively characterize photospheric magnetic structure and evolution in three active regions that rotated across the Sun's disk during the Whole Heliosphere Interval (WHI), in an attempt to relate the photospheric magnetic properties of these active regions to flares and coronal mass ejections (CMEs). Several approaches are used in our analysis, on scales ranging from whole active regions, to magnetic features, to supergranular scales, and, finally, to individual pixels. We calculated several parameterizations of magnetic structure and evolution that have previously been associated with flare and CME activity, including total unsigned magnetic flux, magnetic flux near polarity-inversion lines, amount of canceled flux, the "proxy Poynting flux," and helicity flux. To catalog flare events, we used flare lists derived from both GOES and RHESSI observations. By most such measures, AR 10988 should have been the most flare- and CME-productive active region, and AR 10989 the least. Observations, however, were not consistent with this expectation: ARs 10988 and 10989 produced similar numbers of flares, and AR 10989 also produced a few CMEs. These results highlight present limitations of statistics-based flare and CME forecasting tools that rely upon line-of-sight photospheric magnetic data alone.

  6. Analysis of discriminative control by social behavioral stimuli

    PubMed Central

    Hake, Don F.; Donaldson, Tom; Hyten, Cloyd

    1983-01-01

    Visual discriminative control of the behavior of one rat by the behavior of another was studied in a two-compartment chamber. Each rat's compartment had a food cup and two response keys arranged vertically next to the clear partition that separated the two rats. Illumination of the leader's key lights signaled a “search” period when a response by the leader on the unsignaled and randomly selected correct key for that trial illuminated the follower's keys. Then, a response by the follower on the corresponding key was reinforced, or a response on the incorrect key terminated the trial without reinforcement. Accuracy of following the leader increased to 85% within 15 sessions. Blocking the view of the leader reduced accuracy but not to chance levels. Apparent control by visual behavioral stimuli was also affected by auditory stimuli and a correction procedure. When white noise eliminated auditory cues, social learning was not acquired as fast nor as completely. A reductionistic position holds that behavioral stimuli are the same as nonsocial stimuli; however, that does not mean that they do not require any separate treatment. Behavioral stimuli are usually more variable than nonsocial stimuli, and further study is required to disentangle behavioral and nonsocial contributions to the stimulus control of social interactions. PMID:16812313

  7. SENSITIVITY OF CONDITIONAL-DISCRIMINATION PERFORMANCE TO WITHIN-SESSION VARIATION OF REINFORCER FREQUENCY

    PubMed Central

    Ward, Ryan D; Odum, Amy L

    2008-01-01

    The present experiment developed a methodology for assessing sensitivity of conditional-discrimination performance to within-session variation of reinforcer frequency. Four pigeons responded under a multiple schedule of matching-to-sample components in which the ratio of reinforcers for correct S1 and S2 responses was varied across components within session. Initially, five components, each arranging a different reinforcer-frequency ratio (from 1∶9 to 9∶1), were presented randomly within a session. Under this condition, sensitivity to reinforcer frequency was low. Sensitivity failed to improve after extended exposure to this condition, and under a condition in which only three reinforcer-frequency ratios were varied within session. In a later condition, three reinforcer-frequency ratios were varied within session, but the reinforcer-frequency ratio in effect was differentially signaled within each component. Under this condition, values of sensitivity were similar to those traditionally obtained when reinforcer-frequency ratios for correct responses are varied across conditions. The effects of signaled vs. unsignaled reinforcer-frequency ratios were replicated in two subsequent conditions. The present procedure could provide a practical alternative to parametric variation of reinforcer frequency across conditions and may be useful in characterizing the effects of a variety of manipulations on steady-state sensitivity to reinforcer frequency. PMID:19070338

  8. Polarity related influence maximization in signed social networks.

    PubMed

    Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng

    2014-01-01

    Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g., friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g., foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
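
    A hedged sketch of the generic greedy strategy that such submodularity results license: Monte-Carlo spread estimation under an independent-cascade-style model, repeatedly adding the node with the largest estimated marginal gain. The polarity bookkeeping of IC-P is omitted; the graph and parameters are toy values:

    ```python
    import random

    def simulate_spread(graph, seeds, p=0.1):
        """One independent-cascade run; returns the number of activated nodes."""
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    frontier.append(v)
        return len(active)

    def greedy_seeds(graph, k, trials=200):
        """Greedy selection; (1 - 1/e) guarantee needs monotone submodular spread."""
        seeds = []
        nodes = set(graph) | {v for nbrs in graph.values() for v in nbrs}
        for _ in range(k):
            best = max(nodes - set(seeds),
                       key=lambda u: sum(simulate_spread(graph, seeds + [u])
                                         for _ in range(trials)))
            seeds.append(best)
        return seeds

    graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
    print(greedy_seeds(graph, k=2))
    ```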

  9. The MODIS reprojection tool

    USGS Publications Warehouse

    Dwyer, John L.; Schmidt, Gail L.; Qu, J.J.; Gao, W.; Kafatos, M.; Murphy , R.E.; Salomonson, V.V.

    2006-01-01

    The MODIS Reprojection Tool (MRT) is designed to help individuals work with MODIS Level-2G, Level-3, and Level-4 land data products. These products are referenced to a global tiling scheme in which each tile is approximately 10° latitude by 10° longitude and non-overlapping (Fig. 9.1). If desired, the user may reproject only selected portions of the product (spatial or parameter subsetting). The software may also be used to convert MODIS products to file formats (generic binary and GeoTIFF) that are more readily compatible with existing software packages. The MODIS land products distributed by the Land Processes Distributed Active Archive Center (LP DAAC) are in the Hierarchical Data Format - Earth Observing System (HDF-EOS), developed by the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign for the NASA EOS Program. Each HDF-EOS file comprises one or more science data sets (SDSs) corresponding to geophysical or biophysical parameters. Metadata are embedded in the HDF file as well as contained in a .met file that is associated with each HDF-EOS file. The MRT supports 8-bit, 16-bit, and 32-bit integer data (both signed and unsigned), as well as 32-bit float data. The data type of the output is the same as the data type of each corresponding input SDS.

  10. Traffic flow behavior at un-signalized intersection with crossing pedestrians

    NASA Astrophysics Data System (ADS)

    Khallouk, A.; Echab, H.; Ez-Zahraouy, H.; Lakouari, N.

    2018-02-01

    Mixed traffic flow composed of crossing pedestrians and vehicles exists extensively in cities. To study the characteristics of this interfering traffic flow, we develop a pedestrian-vehicle cellular automata model to represent the interaction behaviors at a simple cross road. Simulations are carried out by varying the fundamental parameters (i.e., the injection rates α1 and α2, the extraction rate β, and the pedestrian arrival rate αP), and the vehicular flow is calculated in terms of these rates. The effect of the crosswalk can be regarded as that of a dynamic impurity. The system phase diagrams in the (α1, αP) plane are built. It is found that the phase diagrams consist essentially of four phases, namely Free Flow, Congested, Maximal Current, and Gridlock. The value of the Maximal Current phase depends on the extraction rate β, while the Gridlock phase is reached only when the pedestrian arrival rate is higher than a critical value. Furthermore, the effects of the lane-changing probabilities (Pch1, Pch2) and of the crosswalk location XP on the dynamic characteristics of the vehicle flow are investigated. It is found that the traffic situation in the system is slightly improved if the crosswalk location XP is far from the intersection. However, when Pch1 and Pch2 increase, the traffic becomes congested and the Gridlock phase enlarges.
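
    A toy sketch in the spirit of the model described (not the authors' implementation): a single-lane, unit-speed cellular automaton with open boundaries and one crosswalk cell acting as a dynamic impurity; all rates are hypothetical:

    ```python
    import random

    L, XP = 50, 25                     # road length and crosswalk position
    alpha1, beta, alpha_p = 0.4, 0.8, 0.2
    road, passed = [0] * L, 0

    for _ in range(10000):
        pedestrian = random.random() < alpha_p        # crosswalk occupied this step?
        if road[L - 1] and random.random() < beta:    # extraction at right boundary
            road[L - 1] = 0
            passed += 1
        for i in range(L - 2, -1, -1):                # move cars, back to front
            blocked = (i + 1 == XP and pedestrian)
            if road[i] and not road[i + 1] and not blocked:
                road[i], road[i + 1] = 0, 1
        if not road[0] and random.random() < alpha1:  # injection at left boundary
            road[0] = 1

    print("throughput per step:", passed / 10000)
    ```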

  11. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    PubMed

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently, in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  12. MAGNETIC ENERGY SPECTRA IN SOLAR ACTIVE REGIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramenko, Valentyna; Yurchyshyn, Vasyl

    Line-of-sight magnetograms for 217 active regions (ARs) with different flare rates, observed at the solar disk center from 1997 January until 2006 December, are utilized to study the turbulence regime and its relationship to flare productivity. Data from the SOHO/MDI instrument recorded in the high-resolution mode and data from the BBSO magnetograph were used. The turbulence regime was probed via magnetic energy spectra and magnetic dissipation spectra. We found steeper energy spectra for ARs with higher flare productivity. We also report that both the power index, α, of the energy spectrum, E(k) ∝ k^(-α), and the total spectral energy, W = ∫E(k)dk, are comparably correlated with the flare index, A, of an AR. The correlations are found to be stronger than those found between the flare index and the total unsigned flux. The flare index for an AR can be estimated based on measurements of α and W as A = 10^b (αW)^c, with b = -7.92 ± 0.58 and c = 1.85 ± 0.13. We found that the regime of fully developed turbulence occurs in decaying ARs and in emerging ARs (at the very early stage of emergence). Well-developed ARs display underdeveloped turbulence with strong magnetic dissipation at all scales.
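
    The quoted empirical relation is directly computable; a sketch using the central values of b and c (the α and W inputs below are hypothetical):

    ```python
    # A = 10^b * (alpha * W)^c, with the paper's central fit values;
    # the quoted uncertainties are +-0.58 on b and +-0.13 on c.
    b, c = -7.92, 1.85

    def flare_index(alpha, W):
        return 10.0**b * (alpha * W)**c

    print(flare_index(alpha=2.2, W=1.0e5))   # toy spectral index and total energy
    ```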

  14. A HELIOSEISMIC SURVEY OF NEAR-SURFACE FLOWS AROUND ACTIVE REGIONS AND THEIR ASSOCIATION WITH FLARES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braun, D. C., E-mail: dbraun@cora.nwra.com

    We use helioseismic holography to study the association of shallow flows with solar flare activity in about 250 large sunspot groups observed between 2010 and 2014 with the Helioseismic and Magnetic Imager on the Solar Dynamics Observatory. Four basic flow parameters: horizontal speed, horizontal component of divergence, vertical component of vorticity, and a vertical kinetic helicity proxy, are mapped for each active region (AR) during its passage across the solar disk. Flow indices are derived representing the mean and standard deviation of these parameters over magnetic masks and compared with contemporary measures of flare X-ray flux. A correlation exists for several of the flow indices, especially those based on the speed and the standard deviation of all flow parameters. However, their correlation with X-ray flux is similar to that observed with the mean unsigned magnetic flux density over the same masks. The temporal variation of the flow indices is studied, and a superposed epoch analysis with respect to the occurrence of 70 M- and X-class flares is made. While flows evolve with the passage of the ARs across the disk, no discernible precursors or other temporal changes specifically associated with flares are detected.

  15. Incubation of ethanol reinstatement depends on test conditions and how ethanol consumption is reduced.

    PubMed

    Ginsburg, Brett C; Lamb, R J

    2015-04-01

    In reinstatement studies (a common preclinical procedure for studying relapse), incubation occurs (longer abstinence periods result in more responding). This finding is discordant with the clinical literature. Identifying determinants of incubation could aid in interpreting reinstatement and identifying processes involved in relapse. Reinstated responding was examined in rats trained to respond for ethanol and food under a multiple concurrent schedule (Component 1: ethanol FR5, food FR150; Component 2: ethanol FR5, food FR5-alternating across the 30-min session). Ethanol consumption was then reduced for 1 or 16 sessions either by suspending training (rats remained in home cage) or by providing alternative reinforcement (only Component 2 stimuli and contingencies were presented throughout the session). In the next session, stimuli associated with Component 1 were presented and responses recorded but ethanol and food were never delivered. Two test conditions were studied: fixed-ratio completion either produced ethanol- or food-associated stimuli (signaled) or had no programmed consequence (unsignaled). Incubation of ethanol responding was observed only after suspended training during signaled test sessions. Incubation of food responding was also observed after suspended training. These results are most consistent with incubation resulting from a degradation of feedback functions limiting extinction responding, rather than from increased motivation.

  16. Effects of restricted feeding schedules on circadian organization in squirrel monkeys

    NASA Technical Reports Server (NTRS)

    Boulos, Z.; Frim, D. M.; Dewey, L. K.; Moore-Ede, M. C.

    1989-01-01

    Free-running circadian rhythms of motor activity, food-motivated lever-pressing, and either drinking (N = 7) or body temperature (N = 3) were recorded from 10 squirrel monkeys maintained in constant illumination with unlimited access to food. Food availability was then restricted to a single unsignaled 3-hour interval each day. The feeding schedule failed to entrain the activity rhythms of 8 monkeys, which continued to free-run. Drinking was almost completely synchronized by the schedule, while body temperature showed a feeding-induced rise superimposed on a free-running rhythm. Nonreinforced lever-pressing showed both a free-running component and a 24-hour component that anticipated the time of feeding. At the termination of the schedule, all recorded variables showed free-running rhythms, but in 3 animals the initial phase of the postschedule rhythms was advanced by several hours, suggesting relative coordination. Of the remaining 2 animals, one exhibited stable entrainment of all 3 recorded rhythms, while the other appeared to entrain temporarily to the feeding schedule. These results indicate that restricted feeding schedules are only a weak zeitgeber for the circadian pacemaker generating free-running rhythms in the squirrel monkey. Such schedules, however, may entrain a separate circadian system responsible for the timing of food-anticipatory changes in behavior and physiology.

  17. Universal method to calculate the stability, electronegativity, and hardness of dianions.

    PubMed

    von Szentpály, László

    2010-10-14

    The electronic stability of gas-phase dianions of arbitrary size, X(2-), is determined by the first universal method to calculate second electron affinities, A(2). The model expresses A(2,calc) = A(1) - (7/6)η(0) by the first electron affinity, A(1), and chemical hardness, η(0), of the neutral "grandparent" species. A comparison with 37 reference values for atoms, molecules, superatoms, and clusters yields A(2,ref) = 1.004A(2,calc) - 0.023 eV, with a mean unsigned deviation of MUD = 0.095 eV and a correlation coefficient of R = 0.9987. Predictions of second electron affinities are given for a further 24 species. The universality of the model is apparent from the broad variety of compounds formed by 30 diverse elements. The electronegativity and hardness of dianions are determined for the first time as χ(X(2-)) = A(2) and η(X(2-)) = (7/12)η(0), respectively. Pearson and Parr's operational assumption regarding the hardness of anionic bases for the hard-soft acid-base (HSAB) principle is rationalized, and predictions for hard and soft dianionic bases are presented. For trianions, first criteria and predictions for electronic stability are given and require A(1) > (7/4)η(0).
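
    The stability criterion is simple enough to check numerically. A minimal sketch, with invented A(1) and η(0) values chosen only for illustration:

```python
def second_electron_affinity(A1, eta0):
    """A2 = A1 - (7/6)*eta0, the model from the abstract (all values in eV).

    The X(2-) dianion is electronically stable when A2 > 0,
    i.e. when A1 > (7/6)*eta0.
    """
    return A1 - (7.0 / 6.0) * eta0

def dianion_hardness(eta0):
    """eta(X(2-)) = (7/12)*eta0, as derived in the abstract."""
    return (7.0 / 12.0) * eta0

A1, eta0 = 3.0, 2.0  # invented values for a neutral "grandparent" species
print(second_electron_affinity(A1, eta0))  # 0.667 eV -> stable dianion
print(dianion_hardness(eta0))              # 1.167 eV
```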

  18. Contextual Fear Conditioning in Humans: Cortical-Hippocampal and Amygdala Contributions

    PubMed Central

    Alvarez, Ruben P.; Biggs, Arter; Chen, Gang; Pine, Daniel S.; Grillon, Christian

    2008-01-01

    Functional imaging studies of cued fear conditioning in humans have largely confirmed findings in animals, but it is unclear whether the brain mechanisms that underlie contextual fear conditioning in animals are also preserved in humans. We investigated this issue using fMRI and virtual reality contexts. Subjects underwent differential context conditioning in which they were repeatedly exposed to two contexts (CXT+ and CXT-) in semi-random order, with contexts counterbalanced across participants. An unsignaled footshock was consistently paired with the CXT+, and no shock was ever delivered in the CXT-. Evidence for context conditioning was established using skin conductance and anxiety ratings. Consistent with animal models centrally implicating the hippocampus and amygdala in a network supporting context conditioning, CXT+ compared to CXT- significantly activated right anterior hippocampus and bilateral amygdala. In addition, context conditioning was associated with activation in posterior orbitofrontal cortex, medial dorsal thalamus, anterior insula, subgenual anterior cingulate, and parahippocampal, inferior frontal and parietal cortices. Structural equation modeling was used to assess interactions among the core brain regions mediating context conditioning. The derived model indicated that medial amygdala was the source of key efferent and afferent connections including input from orbitofrontal cortex. These results provide evidence that similar brain mechanisms may underlie contextual fear conditioning across species. PMID:18550763

  19. Neural predictors of evaluative attitudes toward celebrities

    PubMed Central

    Shibata, Kazuhisa; Matsumoto, Kenji; Adolphs, Ralph

    2017-01-01

    Our attitudes toward others influence a wide range of everyday behaviors and have been the most extensively studied concept in the history of social psychology. Yet they remain difficult to measure reliably and objectively, since both explicit and implicit measures are typically confounded by other psychological processes. We here address the feasibility of decoding incidental attitudes based on brain activations. Participants were presented with pictures of members of a Japanese idol group inside a functional magnetic resonance imaging scanner while performing an unrelated detection task, and subsequently (outside the scanner) performed an incentive-compatible choice task that revealed their attitude toward each celebrity. We used a real-world election scheme that exists for this idol group, which confirmed both strongly negative and strongly positive attitudes toward specific individuals. Whole-brain multivariate analyses (searchlight-based support vector regression) showed that activation patterns in the anterior striatum predicted each participant’s revealed attitudes (choice behavior) using leave-one-out (as well as 4-fold) cross-validation across participants. In contrast, attitude extremity (unsigned magnitude) could be decoded from a distinct region in the posterior striatum. The findings demonstrate dissociable striatal representations of valenced attitude and attitude extremity and constitute a first step toward an objective and process-pure neural measure of attitudes. PMID:27651542

  20. On the Detection of Coronal Dimmings and the Extraction of Their Characteristic Properties

    NASA Astrophysics Data System (ADS)

    Dissauer, K.; Veronig, A. M.; Temmer, M.; Podladchikova, T.; Vanninathan, K.

    2018-03-01

    Coronal dimmings are distinct phenomena associated with coronal mass ejections (CMEs). The study of coronal dimmings and the extraction of their characteristic parameters help us to obtain additional information regarding CMEs, especially on the initiation and early evolution of Earth-directed CMEs. We present a new approach to detect coronal dimming regions based on a thresholding technique applied on logarithmic base-ratio images. Characteristic dimming parameters describing the dynamics, morphology, magnetic properties, and the brightness of coronal dimming regions are extracted by cumulatively summing newly dimmed pixels over time. It is also demonstrated how core dimming regions are identified as a subset of the overall identified dimming region. We successfully apply our method to two well-observed coronal dimming events. For both events, the core dimming regions are identified and the spatial evolution of the dimming area reveals the expansion of the dimming region around these footpoints. We also show that in the early impulsive phase of the dimming expansion the total unsigned magnetic flux involved in the dimming regions is balanced and that up to 30% of this flux results from the localized core dimming regions. Furthermore, the onset in the profile of the area growth rate is cotemporal with the start of the associated flares and in one case also with the fast rise of the CME, indicating a strong relationship of coronal dimmings with both flares and CMEs.
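
    A minimal numpy sketch of the detection idea: threshold a logarithmic base-ratio image and accumulate newly dimmed pixels cumulatively over time. The threshold value and toy arrays are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def dimming_masks(frames, base, threshold=-0.19):
    """Detect dimming pixels in a sequence of EUV images.

    frames: array (t, ny, nx) of intensities; base: pre-event image.
    A pixel counts as dimmed when log10(I / I_base) falls below
    `threshold` (an arbitrary placeholder value). Returns the
    cumulative dimming mask: once dimmed, a pixel stays counted.
    """
    logratio = np.log10(frames / base[None, :, :])
    return np.logical_or.accumulate(logratio < threshold, axis=0)

# Toy example: three frames of a 2x2 region, one pixel darkening over time
base = np.full((2, 2), 100.0)
frames = np.array([[[100, 100], [100, 90]],
                   [[100, 100], [100, 60]],
                   [[100, 100], [100, 40]]], dtype=float)
print(dimming_masks(frames, base).sum(axis=(1, 2)))  # dimming area vs. time
```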

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gošić, M.; Rubio, L. R. Bellot; Iniesta, J. C. del Toro

    Small-scale internetwork magnetic fields are important ingredients of the quiet Sun. In this paper we analyze how they appear and disappear on the solar surface. Using high resolution Hinode magnetograms, we follow the evolution of individual magnetic elements in the interior of two supergranular cells at the disk center. From up to 38 hr of continuous measurements, we show that magnetic flux appears in internetwork regions at a rate of 120 ± 3 Mx cm⁻² day⁻¹ (3.7 ± 0.4 × 10²⁴ Mx day⁻¹ over the entire solar surface). Flux disappears from the internetwork at a rate of 125 ± 6 Mx cm⁻² day⁻¹ (3.9 ± 0.5 × 10²⁴ Mx day⁻¹) through fading of magnetic elements, cancelation between opposite-polarity features, and interactions with network patches, which converts internetwork elements into network features. Most of the flux is lost through fading and interactions with the network, at nearly the same rate of about 50 Mx cm⁻² day⁻¹. Our results demonstrate that the sources and sinks of internetwork magnetic flux are well balanced. Using the instantaneous flux appearance and disappearance rates, we successfully reproduce the time evolution of the total unsigned flux in the two supergranular cells.

  2. Reinstatement of an extinguished fear conditioned response in infant rats.

    PubMed

    Revillo, Damian A; Trebucq, Gastón; Paglini, Maria G; Arias, Carlos

    2016-01-01

    Although it is currently accepted that the extinction effect reflects new context-dependent learning, this is not so clear during infancy, because some studies did not find recovery of the extinguished conditioned response (CR) in rodents during this ontogenetic stage. However, recent studies have shown the return of an extinguished CR in infant rats. The present study analyzes the possibility of recovering an extinguished CR with a reinstatement procedure in a fear conditioning paradigm, on PD17 (Experiments 1-4) and on PD24 (Experiment 5), while exploring the role of the olfactory content of the context upon the reinstatement effect during the preweanling period. Preweanling rats expressed a previously extinguished CR after a single experience with an unsignaled US. Furthermore, this result was only found when subjects were trained and tested in contexts that included an explicit odor, but not in standard experimental cages. Finally, Experiment 5 demonstrated the reinstatement effect on PD24 in a standard context. These results support the notion that extinction during infancy has the same characteristics as those described for extinction that occurs in adulthood. Instead of postulating a different mechanism for extinction during infancy, we propose that it may be more accurate to view the problem in terms of the variables that may differentially modulate the extinction effect according to the stages of ontogeny.

  3. Development of an aversive Pavlovian-to-instrumental transfer task in rat

    PubMed Central

    Campese, Vincent; McCue, Margaret; Lázaro-Muñoz, Gabriel; LeDoux, Joseph E.; Cain, Christopher K.

    2013-01-01

    Pavlovian-to-instrumental transfer (PIT) is an effect whereby a classically conditioned stimulus (CS) enhances ongoing instrumental responding. PIT has been extensively studied with appetitive conditioning but barely at all with aversive conditioning. Although it has been argued that conditioned suppression is a form of aversive PIT, this effect is fundamentally different from appetitive PIT because the CS suppresses, instead of facilitates, responding. Five experiments investigated the importance of a variety of factors on aversive PIT in a rodent Sidman avoidance paradigm in which ongoing shuttling behavior (unsignaled active avoidance or USAA) was facilitated by an aversive CS. Experiment 1 demonstrated a basic PIT effect. Experiment 2 found that a moderate amount of USAA extinction produces the strongest PIT with shuttling rates best at around 2 responses per minute prior to the CS. Experiment 3 tested a protocol in which the USAA behavior was required to reach the 2-response per minute mark in order to trigger the CS presentation and found that this produced robust and reliable PIT. Experiment 4 found that the Pavlovian conditioning US intensity was not a major determinant of PIT strength. Experiment 5 demonstrated that if the CS and US were not explicitly paired during Pavlovian conditioning, PIT did not occur, showing that CS-US learning is required. Together, these studies demonstrate a robust, reliable and stable aversive PIT effect that is amenable to analysis of neural circuitry. PMID:24324417

  4. Evolution of structure-reactivity correlations for the hydrogen abstraction reaction by chlorine atom.

    PubMed

    Poutsma, Marvin L

    2013-01-31

    Empirical structure-reactivity correlations are developed for log k298, the gas-phase rate constants for the reaction Cl• + HCR3 → ClH + •CR3. It has long been recognized that correlation with ΔrH is weak. The poor performance of the linear Evans-Polanyi formulation is illustrated and was little improved by adding a quadratic term, for example, by making its slope smoothly dependent on ΔrH [η ≡ (ΔrH − ΔrH,min)/(ΔrH,max − ΔrH,min)]. The "polar effect" [(δ−)Cl···H···CR3(δ+)]‡ has also been long discussed, but there is no formalization of this dependence based on widely available independent variable(s). Using the sum of Hammett constants for the R substituents also gave at best modest correlations, either for σpara or for its dissection into F (field/inductive) and R (resonance) effects. Much greater success was achieved by combining these approaches, with the preferred independent variable set being either [(ΔrH)², ΔrH, ΣF, and ΣR] or [η, ΔrH, ΣF, and ΣR]. For 64 rate constants that span 7 orders of magnitude, these correlation formulations give r² > 0.87 and a mean unsigned deviation of <0.5 log k units, with even better performance if primary, secondary, and tertiary reaction centers are treated separately.
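
    The preferred correlation amounts to an ordinary least-squares fit of log k298 against [(ΔrH)², ΔrH, ΣF, ΣR]. A sketch with invented placeholder rows (not the 64 rate constants of the study):

```python
import numpy as np

# Invented data: reaction enthalpy (kcal/mol) and Hammett substituent sums
drH = np.array([-12.0, -8.0, -5.0, -2.0, 0.0, 3.0, 6.0])
sF  = np.array([0.00, 0.05, 0.15, 0.25, 0.34, 0.45, 0.52])
sR  = np.array([0.00, -0.05, 0.05, 0.10, 0.12, 0.15, 0.20])
logk = np.array([-9.8, -10.6, -11.5, -12.3, -12.9, -13.8, -14.5])

# Design matrix for log k = a*(drH)**2 + b*drH + c*sF + d*sR + e
X = np.column_stack([drH**2, drH, sF, sR, np.ones_like(drH)])
coef, *_ = np.linalg.lstsq(X, logk, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((logk - pred) ** 2) / np.sum((logk - logk.mean()) ** 2)
mud = np.mean(np.abs(logk - pred))  # mean unsigned deviation
print(coef, r2, mud)
```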

  5. Free-Operant Avoidance Behavior by Rats after Reinforcer Revaluation Using Opioid Agonists and d-Amphetamine

    PubMed Central

    Urcelay, Gonzalo; Mar, Adam; Dickinson, Anthony; Robbins, Trevor

    2014-01-01

    The associative processes that support free-operant instrumental avoidance behavior are still unknown. We used a revaluation procedure to determine whether the performance of an avoidance response is sensitive to the current value of the aversive, negative reinforcer. Rats were trained on an unsignaled, free-operant lever press avoidance paradigm in which each response avoided or escaped shock and produced a 5 s feedback stimulus. The revaluation procedure consisted of noncontingent presentations of the shock in the absence of the lever either paired or unpaired with systemic morphine and in a different cohort with systemic d-amphetamine. Rats were then tested drug free during an extinction test. In both the d-amphetamine and morphine groups, pairing of the drug and shock decreased subsequent avoidance responding during the extinction test, suggesting that avoidance behavior was sensitive to the current incentive value of the aversive negative reinforcer. Experiment 2 used central infusions of [D-Ala2, N-Me-Phe4, Gly-ol5]-enkephalin (DAMGO), a mu-opioid receptor agonist, in the periaqueductal gray and nucleus accumbens shell to revalue the shock. Infusions of DAMGO in both regions replicated the effects seen with systemic morphine. These results are the first to demonstrate the impact of revaluation of an aversive reinforcer on avoidance behavior using pharmacological agents, thereby providing potential therapeutic targets for the treatment of avoidance behavior symptomatic of anxiety disorders. PMID:24790199

  6. Enhanced extinction of contextual fear conditioning in ClockΔ19 mutant mice.

    PubMed

    Bernardi, Rick E; Spanagel, Rainer

    2014-08-01

    Clock genes have been implicated in several disorders, such as schizophrenia, bipolar disorder, autism spectrum disorders, and drug dependence. However, few studies to date have examined the role of clock genes in fear-related behaviors. The authors used mice with the ClockΔ19 mutation to assess the involvement of this gene in contextual fear conditioning. Male wild-type (WT) and ClockΔ19 mutant mice underwent a single session of contextual fear conditioning (12 min, 4 unsignaled shocks), followed by daily 12-min retention trials. There were no differences between mutant and WT mice in the acquisition of contextual fear, and WT and mutant mice demonstrated similar freezing during the first retention session. However, extinction of contextual fear was accelerated in mutant mice across the remaining retention sessions, as compared to WT mice, suggesting a role for Clock in extinction following aversive learning. Because the ClockΔ19 mutation has previously been demonstrated to result in an increase in dopamine signaling, the authors confirmed the role of dopamine in extinction learning using preretention session administration of a low dose of the dopamine transporter reuptake inhibitor modafinil (0.75 mg/kg), which resulted in decreased freezing across retention sessions. These findings are consistent with an emerging portrayal of the importance of Clock genes in noncircadian functions, as well as the important role of dopamine in extinction learning.

  7. S-R associations, their extinction, and recovery in an animal model of anxiety: a new associative account of phobias without recall of original trauma.

    PubMed

    Laborda, Mario A; Miller, Ralph R

    2011-06-01

    Associative accounts of the etiology of phobias have been criticized because of numerous cases of phobias in which the client does not remember a relevant traumatic event (i.e., Pavlovian conditioning trial), instructions, or vicarious experience with the phobic object. In three lick suppression experiments with rats as subjects, we modeled an associative account of such fears. Experiment 1 assessed stimulus-response (S-R) associations in first-order fear conditioning. After behaviorally complete devaluation of the unconditioned stimulus, the target stimulus still produced strong conditioned responses, suggesting that an S-R association had been formed and that this association was not significantly affected when the outcome was devalued through unsignaled presentations of the unconditioned stimulus. Experiments 2 and 3 examined extinction and recovery of S-R associations. Experiment 2 showed that extinguished S-R associations returned when testing occurred outside of the extinction context (i.e., renewal) and Experiment 3 found that a long delay between extinction and testing also produced a return of the extinguished S-R associations (i.e., spontaneous recovery). These experiments suggest that fears for which people cannot recall a cause are explicable in an associative framework, and indicate that those fears are susceptible to relapse after extinction treatment just like stimulus-outcome (S-O) associations.

  8. Neural predictors of evaluative attitudes toward celebrities.

    PubMed

    Izuma, Keise; Shibata, Kazuhisa; Matsumoto, Kenji; Adolphs, Ralph

    2017-03-01

    Our attitudes toward others influence a wide range of everyday behaviors and have been the most extensively studied concept in the history of social psychology. Yet they remain difficult to measure reliably and objectively, since both explicit and implicit measures are typically confounded by other psychological processes. We here address the feasibility of decoding incidental attitudes based on brain activations. Participants were presented with pictures of members of a Japanese idol group inside a functional magnetic resonance imaging scanner while performing an unrelated detection task, and subsequently (outside the scanner) performed an incentive-compatible choice task that revealed their attitude toward each celebrity. We used a real-world election scheme that exists for this idol group, which confirmed both strongly negative and strongly positive attitudes toward specific individuals. Whole-brain multivariate analyses (searchlight-based support vector regression) showed that activation patterns in the anterior striatum predicted each participant's revealed attitudes (choice behavior) using leave-one-out (as well as 4-fold) cross-validation across participants. In contrast, attitude extremity (unsigned magnitude) could be decoded from a distinct region in the posterior striatum. The findings demonstrate dissociable striatal representations of valenced attitude and attitude extremity and constitute a first step toward an objective and process-pure neural measure of attitudes.

  9. Characterization of domain-peptide interaction interface: a case study on the amphiphysin-1 SH3 domain.

    PubMed

    Hou, Tingjun; Zhang, Wei; Case, David A; Wang, Wei

    2008-02-29

    Many important protein-protein interactions are mediated by peptide recognition modular domains, such as the Src homology 3 (SH3), SH2, PDZ, and WW domains. Characterizing the interaction interface of domain-peptide complexes and predicting binding specificity for modular domains are critical for deciphering protein-protein interaction networks. Here, we propose the use of an energetic decomposition analysis to characterize domain-peptide interactions and the molecular interaction energy components (MIECs), including van der Waals, electrostatic, and desolvation energy between residue pairs on the binding interface. We show a proof-of-concept study on the amphiphysin-1 SH3 domain interacting with its peptide ligands. The structures of the human amphiphysin-1 SH3 domain complexed with 884 peptides were first modeled using virtual mutagenesis and optimized by molecular mechanics (MM) minimization. Next, the MIECs between domain and peptide residues were computed using the MM/generalized Born decomposition analysis. We conducted two types of statistical analyses on the MIECs to demonstrate their usefulness for predicting binding affinities of peptides and for classifying peptides into binder and non-binder categories. First, combining partial least squares analysis and genetic algorithm, we fitted linear regression models between the MIECs and the peptide binding affinities on the training data set. These models were then used to predict binding affinities for peptides in the test data set; the predicted values have a correlation coefficient of 0.81 and an unsigned mean error of 0.39 compared with the experimentally measured ones. The partial least squares-genetic algorithm analysis on the MIECs revealed the critical interactions for the binding specificity of the amphiphysin-1 SH3 domain. Next, a support vector machine (SVM) was employed to build classification models based on the MIECs of peptides in the training set. A rigorous training-validation procedure was used to assess the performances of different kernel functions in SVM and different combinations of the MIECs. The best SVM classifier gave satisfactory predictions for the test set, indicated by average prediction accuracy rates of 78% and 91% for the binding and non-binding peptides, respectively. We also showed that the performance of our approach on both binding affinity prediction and binder/non-binder classification was superior to the performances of the conventional MM/Poisson-Boltzmann solvent-accessible surface area and MM/generalized Born solvent-accessible surface area calculations. Our study demonstrates that the analysis of the MIECs between peptides and the SH3 domain can successfully characterize the binding interface, and it provides a framework to derive integrated prediction models for different domain-peptide systems.
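
    The binder/non-binder classification step can be sketched with a standard SVM pipeline; the MIEC feature matrix and labels below are synthetic stand-ins, not the study's data, and the kernel choice is just one of the options the authors compare.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for MIECs: one row per peptide, one column per
# residue-pair energy term (vdW, electrostatic, desolvation, ...).
n_peptides, n_terms = 200, 60
X = rng.normal(size=(n_peptides, n_terms))
# Fake labels: "binders" (1) when a few hot-spot terms are favorable
y = (X[:, :5].sum(axis=1) < -1.0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```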

  10. [Chastity and healing power].

    PubMed

    Kudlien, F

    1984-01-01

    The idea of a connection between chastity and healing power is found in primitive cultures as well as in ethnomedicine and the medicine of Antiquity. What is meant in the Hippocratic Oath, though, is not the sexual restraint of the physician himself, but rather his attitude towards his patient and all the people living in the same house as the patient. This differs from ethnomedicine, where the healer is required to live sexually abstinent for a certain time and the persons assisting him are often expected to stay untouched, i.e. chaste. "Chastity" as conceived by the ancient Greeks and Romans, however, meant the integrity of marital faith. The urine of women who were chaste in this sense was believed to have healing powers. Mention is also made of an African tribe whose members, provided they truly originated from this tribe, were said to be immune to snake bites and therefore to have a special capacity for healing bitten persons. Ethnomedicine, on the other side, attributes great power to those who lead an unchaste life, as well as to illegitimately born children. We are here confronted with an ambiguity inherent in popular conceptions of disease and healing. The protagonist of the medieval German poem "Der Arme Heinrich" by Hartmann von Aue, for instance, seeks the heart-blood of a young, freeborn maiden to cure himself of leprosy. In many cases, objects used by virgins or unmarried men were believed to have healing powers. The notion that epilepsy and orgasm were related led, in particular, to demands for sexual abstinence of the patient, and even for his castration. On the other hand, however, epileptics were recommended to have sexual intercourse. Fasting, sometimes together with sexual abstinence, was considered important for the healer as well as for the patient to be cured. Plants were administered in order to induce sexual abstinence. In primitive cultures, the idea that sexual activity leads to a loss of the powers needed for healing is found very often.(ABSTRACT TRUNCATED AT 400 WORDS)

  11. Statistical learning of an auditory sequence and reorganization of acquired knowledge: A time course of word segmentation and ordering.

    PubMed

    Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato

    2017-01-27

    Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have only investigated a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word order patterns. In this study, we investigated neurally how the two levels of statistical learning, recognizing word boundaries and word ordering, are reflected in neuromagnetic responses and how acquired statistical knowledge is reorganized when the syntactic rules are revised. Neuromagnetic responses to the Japanese-vowel sequence (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other word) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to the words that appeared with higher transitional probability were significantly reduced compared with those that appeared with lower transitional probability. After the word transition probabilities were switched, the response reduction was replicated in the last quarter of the sequence. The responses to the final vowels in the words were significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The present study supports the hypothesis that listeners learn larger structures, such as phrases, first, and subsequently extract smaller structures, such as words, from the learned phrases. It also provides the first neurophysiological evidence that the correction of statistical knowledge requires more time than the acquisition of new statistical knowledge.
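
    The stimulus structure can be sketched as a second-order Markov chain over the five words. The pairing of word histories with likely and unlikely successors below is invented, since the abstract does not give the actual transition table.

```python
import random

WORDS = ["aue", "eao", "iea", "oiu", "uoi"]

def successor_rule(w1, w2):
    """Hypothetical either-or rule: each pair of recent words allows
    exactly two successors, one at p = 0.8 and one at p = 0.2."""
    i = (WORDS.index(w1) + WORDS.index(w2)) % len(WORDS)
    return WORDS[i], WORDS[(i + 1) % len(WORDS)]  # (likely, unlikely)

def generate(n_words, switch_at=None, p=0.8):
    """Generate a word sequence; after `switch_at` words the 80%/20%
    transition probabilities are swapped, as in the experiment."""
    seq = [random.choice(WORDS), random.choice(WORDS)]
    for k in range(n_words - 2):
        likely, unlikely = successor_rule(seq[-2], seq[-1])
        if switch_at is not None and k >= switch_at:
            likely, unlikely = unlikely, likely
        seq.append(likely if random.random() < p else unlikely)
    return seq

print(" ".join(generate(400, switch_at=198)[:20]), "...")
```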

  12. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain.

    PubMed

    Panel, Nicolas; Sun, Young Joo; Fuentes, Ernesto J; Simonson, Thomas

    2017-01-01

    PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB) continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or "PB/LIE" free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo. The overall performance of the model should allow its use in the design of new PDZ ligands in the future.
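
    The three-term form can be sketched as a least-squares fit of the empirical weights; the per-complex energy terms and experimental affinities below are invented placeholders, not the 51 measured complexes.

```python
import numpy as np

# Invented per-complex MD averages (kcal/mol; SA in arbitrary area units)
E_pb  = np.array([-40.1, -35.2, -42.7, -38.0, -36.6])  # PB electrostatics
E_vdw = np.array([-30.5, -28.0, -33.1, -29.4, -27.9])  # van der Waals
SA    = np.array([610.0, 580.0, 650.0, 600.0, 570.0])  # surface-area term
dG_exp = np.array([-8.2, -7.1, -9.0, -7.8, -6.9])      # invented affinities

# PB/LIE form: dG = a*E_pb + b*E_vdw + c*SA, with weights fit to experiment
X = np.column_stack([E_pb, E_vdw, SA])
w, *_ = np.linalg.lstsq(X, dG_exp, rcond=None)
pred = X @ w
print("weights:", w)
print("mean unsigned deviation:", np.abs(pred - dG_exp).mean())
```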

  13. A Simple PB/LIE Free Energy Function Accurately Predicts the Peptide Binding Specificity of the Tiam1 PDZ Domain

    PubMed Central

    Panel, Nicolas; Sun, Young Joo; Fuentes, Ernesto J.; Simonson, Thomas

    2017-01-01

    PDZ domains generally bind short amino acid sequences at the C-terminus of target proteins, and short peptides can be used as inhibitors or model ligands. Here, we used experimental binding assays and molecular dynamics simulations to characterize 51 complexes involving the Tiam1 PDZ domain and to test the performance of a semi-empirical free energy function. The free energy function combined a Poisson-Boltzmann (PB) continuum electrostatic term, a van der Waals interaction energy, and a surface area term. Each term was empirically weighted, giving a Linear Interaction Energy or “PB/LIE” free energy. The model yielded a mean unsigned deviation of 0.43 kcal/mol and a Pearson correlation of 0.64 between experimental and computed free energies, which was superior to a Null model that assumes all complexes have the same affinity. Analyses of the models support several experimental observations that indicate the orientation of the α2 helix is a critical determinant for peptide specificity. The models were also used to predict binding free energies for nine new variants, corresponding to point mutants of the Syndecan1 and Caspr4 peptides. The predictions did not reveal improved binding; however, they suggest that an unnatural amino acid could be used to increase protease resistance and peptide lifetimes in vivo. The overall performance of the model should allow its use in the design of new PDZ ligands in the future. PMID:29018806

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chang; Deng, Na; Wang, Haimin

    Adverse space-weather effects can often be traced to solar flares, the prediction of which has drawn significant research interests. The Helioseismic and Magnetic Imager (HMI) produces full-disk vector magnetograms with continuous high cadence, while flare prediction efforts utilizing this unprecedented data source are still limited. Here we report results of flare prediction using physical parameters provided by the Space-weather HMI Active Region Patches (SHARP) and related data products. We survey X-ray flares that occurred from 2010 May to 2016 December and categorize their source regions into four classes (B, C, M, and X) according to the maximum GOES magnitude of flares they generated. We then retrieve SHARP-related parameters for each selected region at the beginning of its flare date to build a database. Finally, we train a machine-learning algorithm, called random forest (RF), to predict the occurrence of a certain class of flares in a given active region within 24 hr, evaluate the classifier performance using the 10-fold cross-validation scheme, and characterize the results using standard performance metrics. Compared to previous works, our experiments indicate that using the HMI parameters and RF is a valid method for flare forecasting with fairly reasonable prediction performance. To our knowledge, this is the first time that RF has been used to make multiclass predictions of solar flares. We also find that the total unsigned quantities of vertical current, current helicity, and flux near the polarity inversion line are among the most important parameters for classifying flaring regions into different classes.
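
    The RF step can be sketched with scikit-learn and 10-fold cross-validation; the feature matrix standing in for the SHARP parameters and the B/C/M/X labels below are synthetic, not the surveyed regions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Synthetic stand-in for SHARP parameters (total unsigned flux, vertical
# current, current helicity, ...): one row per region on its flare date.
X = rng.normal(size=(1000, 12))
# Fake B/C/M/X labels loosely tied to the first feature
y = np.digitize(X[:, 0], [-0.5, 0.5, 1.5])

clf = RandomForestClassifier(n_estimators=300, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=10)  # 10-fold CV as in the paper
print(classification_report(y, y_pred, target_names=list("BCMX")))
clf.fit(X, y)
print("feature importances:", clf.feature_importances_.round(3))
```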

  15. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    NASA Astrophysics Data System (ADS)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
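
    In standard notation (which may differ in detail from the paper's), the split of the pointwise mutual information into two non-negative entropic parts can be written as follows; the specificity and ambiguity are each unsigned, while their difference can be negative (misinformative).

```latex
\begin{align}
  i(s;t) &= \log\frac{p(s\mid t)}{p(s)} \;=\; h(s) - h(s\mid t),\\
  i_{+}(s;t) &:= h(s) = -\log p(s) && \text{(specificity)},\\
  i_{-}(s;t) &:= h(s\mid t) = -\log p(s\mid t) && \text{(ambiguity)},\\
  i(s;t) &= i_{+}(s;t) - i_{-}(s;t), \qquad i_{+},\, i_{-} \ge 0.
\end{align}
```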

  16. The Relation Between Magnetic Fields and X-ray Emission for Solar Microflares and Active Regions

    NASA Astrophysics Data System (ADS)

    Kirichenko, A. S.; Bogachev, S. A.

    2017-09-01

    We present the result of a comparison between magnetic field parameters and the intensity of X-ray emission for solar microflares with Geosynchronous Operational Environmental Satellites (GOES) classes from A0.02 to B5.1. For our study, we used the monochromatic Mg XII Imaging Spectroheliometer (MISH), the Full-disk EUV Telescope (FET), and the Solar PHotometer in X-rays (SphinX) instruments onboard the Complex Orbital Observations Near-Earth of Activity of the Sun (CORONAS)-Photon spacecraft because of their high sensitivity in soft X-rays. The peak flare flux (PFF) for solar microflares was found to depend on the strength of the magnetic field and on the total unsigned magnetic flux as a power-law function. In the spectral range 2.8 - 36.6 Å, which shows very little increase related to microflares, the power-law index of the relation between the X-ray flux and magnetic flux for active regions is 1.48 ± 0.86, which is close to the value obtained previously by Pevtsov et al. (Astrophys. J. 598, 1387, 2003) for different types of solar and stellar objects. In the spectral range 1 - 8 Å, the power-law indices for PFF(B) and PFF(Φ) for microflares are 3.87 ± 2.16 and 3 ± 1.6, respectively. We also make suggestions on the heating mechanisms in active regions and microflares under the assumption of loops with constant pressure and heating, using the Rosner-Tucker-Vaiana scaling laws.

  17. Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.

    PubMed

    Gapsys, Vytautas; de Groot, Bert L

    2017-12-12

    Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poutsma, Marvin L.

    Recently we presented structure-reactivity correlations for the gas-phase ambient-temperature rate constants for hydrogen abstraction from sp³-hybridized carbon by chlorine atom and hydroxyl radical (Cl•/HO• + HCR3 → HCl/HOH + •CR3); the reaction enthalpy effect was represented by the independent variable ΔrH and the polar effect by the independent variables F and R, the Hammett constants for field/inductive and resonance effects. Both these reactions are predominantly exothermic and have early transition states. Here we present a parallel treatment for Br•, whose reaction is significantly endothermic with a correspondingly late transition state. In spite of lower expectations, because the available data base is less extensive and much more scattered and because long temperature extrapolations are often required, the resulting least-squares fit (log k298,Br = −0.147 ΔrH − 4.32 ΣF − 4.28 ΣR − 12.38 with r² = 0.92) was modestly successful and useful for initial predictions. The coefficient of ΔrH was ∼4-fold greater, indicative of the change from an early to a late transition state; meanwhile the sizable coefficients of ΣF and ΣR indicate the persistence of the polar effect. Although the mean unsigned deviation of 0.79 log k298 units is rather large, it must be considered in the context of a total span of over 15 log units in the data set. Lastly, the major outliers are briefly discussed.

  19. Extension of Structure-Reactivity Correlations for the Hydrogen Abstraction Reaction by Bromine Atom and Comparison to Chlorine Atom and Hydroxyl Radical.

    PubMed

    Poutsma, Marvin L

    2016-01-21

    Recently we presented structure-reactivity correlations for the gas-phase ambient-temperature rate constants for hydrogen abstraction from sp³-hybridized carbon by chlorine atom and hydroxyl radical (Cl•/HO• + HCR3 → HCl/HOH + •CR3); the reaction enthalpy effect was represented by the independent variable ΔrH and the "polar effect" by the independent variables F and R, the Hammett constants for field/inductive and resonance effects. Both these reactions are predominantly exothermic and have early transition states. Here, we present a parallel treatment for Br•, whose reaction is significantly endothermic with a correspondingly late transition state. Despite lower expectations, because the available database is less extensive and much more scattered and because long temperature extrapolations are often required, the resulting least-squares fit (log k298,Br = −0.147 ΔrH − 4.32 ΣF − 4.28 ΣR − 12.38 with r² = 0.92) was modestly successful and useful for initial predictions. The coefficient of ΔrH was ∼4-fold greater, indicative of the change from an early to a late transition state; meanwhile the sizable coefficients of ΣF and ΣR indicate the persistence of the "polar effect". Although the mean unsigned deviation of 0.79 log k298 units is rather large, it must be considered in the context of a total span of over 15 log units in the data set. The major outliers are briefly discussed.
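
    The fitted correlation can be applied directly; the substrate values below are invented for illustration only.

```python
def log_k298_Br(drH, sum_F, sum_R):
    """Fitted correlation from the abstract:
    log k298,Br = -0.147*dRH - 4.32*sum(F) - 4.28*sum(R) - 12.38
    (dRH in kcal/mol; F and R are Hammett field/resonance constants).
    """
    return -0.147 * drH - 4.32 * sum_F - 4.28 * sum_R - 12.38

# Invented substrate: mildly endothermic, weak polar substituents
print(log_k298_Br(drH=8.0, sum_F=0.2, sum_R=0.1))  # about -14.85
```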

  20. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    NASA Astrophysics Data System (ADS)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
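
    The quoted skill score, the true skill statistic, is quick to compute from a binary confusion matrix; the counts below are invented for illustration.

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = hit rate - false-alarm rate, ranging from -1 to 1.

    tp/fn: flaring cases predicted correctly / missed
    fp/tn: non-flaring cases falsely alarmed / correctly dismissed
    """
    return tp / (tp + fn) - fp / (fp + tn)

print(true_skill_statistic(tp=95, fn=5, fp=40, tn=960))  # 0.91
```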

  1. Can cycling safety be improved by opening all unidirectional cycle paths for cycle traffic in both directions? A theoretical examination of available literature and data.

    PubMed

    Methorst, Rob; Schepers, Paul; Kamminga, Jaap; Zeegers, Theo; Fishman, Elliot

    2017-08-01

    Many studies have found bicycle-motor vehicle crashes to be more likely on bidirectional cycle paths than on unidirectional cycle paths because drivers do not expect cyclists riding on the right side of the road. In this paper we discuss the hypothesis that opening all unidirectional cycle paths for cycle traffic in both directions prevents this lack of expectancy and accordingly improves cycling safety. A new national standard requires careful consideration, because a reversal is difficult once cyclists are used to their new freedom of route choice. We therefore explored the hypothesis using available data, research, and theories. The results show that 72% of the length of cycle paths along distributor roads in the Netherlands is bidirectional. If drivers would become used to cyclists riding on the left side of the road, this raises the question of why bidirectional cycle paths in the Netherlands still have a poor safety record compared to unidirectional cycle paths. Moreover, our exploration suggested that bidirectional cycle paths have additional safety problems. They increase the complexity of unsignalized intersections, because drivers have to scan more directions in a short period of time, and there are some indications that the likelihood of frontal crashes between cyclists increases. We reject the hypothesis that opening all unidirectional cycle paths for cycle traffic in both directions will improve cycling safety. We recommend more attention to mitigating measures, given the widespread application of bidirectional cycle paths in the Netherlands.

  2. NCWin — A Component Object Model (COM) for processing and visualizing NetCDF data

    USGS Publications Warehouse

    Liu, Jinxun; Chen, J.M.; Price, D.T.; Liu, S.

    2005-01-01

    NetCDF (Network Common Data Form) is a data sharing protocol and library that is commonly used in large-scale atmospheric and environmental data archiving and modeling. The NetCDF tool described here, named NCWin and coded with Borland C++ Builder, was built as a standard executable as well as a COM (component object model) for the Microsoft Windows environment. COM is a powerful technology that enhances the reuse of applications (as components). Environmental model developers from different modeling environments, such as Python, JAVA, VISUAL FORTRAN, VISUAL BASIC, VISUAL C++, and DELPHI, can reuse NCWin in their models to read, write and visualize NetCDF data. Some Windows applications, such as ArcGIS and Microsoft PowerPoint, can also call NCWin within the application. NCWin has three major components: 1) the data conversion part is designed to convert binary raw data to and from NetCDF data; it can process six data types (unsigned char, signed char, short, int, float, double) and three spatial data formats (BIP, BIL, BSQ); 2) the visualization part is designed for displaying grid map series (playing forward or backward) with a simple map legend, and for displaying temporal trend curves for data on individual map pixels; and 3) the modeling interface is designed for environmental model development, providing a set of integrated NetCDF functions for processing NetCDF data. To demonstrate that NCWin can easily extend the functions of some current GIS software and Office applications, examples of calling NCWin within ArcGIS and MS PowerPoint to show NetCDF map animations are given.
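
    NCWin itself is a Windows COM component, but the core raw-binary-to-NetCDF conversion it performs can be sketched in Python with numpy and the netCDF4 package; the file names, dimensions, and dtype below are assumptions chosen to make the demo self-contained.

```python
import numpy as np
from netCDF4 import Dataset

nbands, nrows, ncols = 12, 180, 360

# Write a dummy band-sequential (BSQ) float32 file so the demo runs.
np.arange(nbands * nrows * ncols, dtype=np.float32).tofile("input.bsq")

raw = np.fromfile("input.bsq", dtype=np.float32)
data = raw.reshape(nbands, nrows, ncols)  # BSQ: one complete band at a time
# (BIL/BIP layouts would only change the reshape/transpose order.)

with Dataset("output.nc", "w") as nc:
    nc.createDimension("band", nbands)
    nc.createDimension("row", nrows)
    nc.createDimension("col", ncols)
    var = nc.createVariable("value", "f4", ("band", "row", "col"))
    var[:] = data
```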

  3. Optical Coherence Tomography (OCT) Device Independent Intraretinal Layer Segmentation

    PubMed Central

    Ehnes, Alexander; Wenner, Yaroslava; Friedburg, Christoph; Preising, Markus N.; Bowl, Wadim; Sekundo, Walter; zu Bexten, Erdmuthe Meyer; Stieger, Knut; Lorenz, Birgit

    2014-01-01

    Purpose: To develop and test an algorithm to segment intraretinal layers irrespective of the actual Optical Coherence Tomography (OCT) device used. Methods: The algorithm is based on graph-theory optimization. Its performance was evaluated against that of three expert graders for unsigned boundary-position difference and for the thickness measurement of a retinal layer group in 50 and 41 B-scans, respectively. Reproducibility of the algorithm was tested in 30 C-scans of 10 healthy subjects, each with the Spectralis and the Stratus OCT. Comparability between different devices was evaluated in 84 C-scans (volume or radial scans) obtained from 21 healthy subjects, two scans per subject with the Spectralis OCT, and one scan per subject each with the Stratus OCT and the RTVue-100 OCT. Each C-scan was segmented and the mean thickness for each retinal layer in sections of the Early Treatment Diabetic Retinopathy Study (ETDRS) grid was measured. Results: The algorithm was able to segment up to 11 intraretinal layers. Measurements with the algorithm were within the 95% confidence interval of a single grader, and the difference was smaller than the interindividual difference between the expert graders themselves. The cross-device examination of ETDRS-grid related layer thicknesses agreed highly between the three OCT devices. The algorithm correctly segmented a C-scan of a patient with X-linked retinitis pigmentosa. Conclusions: The segmentation software provides device-independent, reliable, and reproducible analysis of intraretinal layers, similar to what is obtained from expert graders. Translational Relevance: Potential applications of the software include routine clinical practice and multicenter clinical trials. PMID:24820053
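
    The boundary search can be sketched as a minimum-cost left-to-right path through a cost (e.g. gradient) image, a simplified dynamic-programming stand-in for the graph optimization used in the paper; the toy cost image is invented.

```python
import numpy as np

def trace_boundary(cost):
    """Row index, per column, of the cheapest left-to-right path through a
    (rows x cols) cost image, moving at most one row per column."""
    nr, nc = cost.shape
    acc = cost.copy()
    for c in range(1, nc):
        for r in range(nr):
            lo, hi = max(0, r - 1), min(nr, r + 2)
            acc[r, c] += acc[lo:hi, c - 1].min()
    path = [int(np.argmin(acc[:, -1]))]  # backtrack from cheapest endpoint
    for c in range(nc - 1, 0, -1):
        r = path[-1]
        lo, hi = max(0, r - 1), min(nr, r + 2)
        path.append(lo + int(np.argmin(acc[lo:hi, c - 1])))
    return path[::-1]

# Toy B-scan: a strong edge along row 3 that dips to row 4 in two columns
cost = np.ones((8, 10))
cost[3, :] = 0.1
cost[3, 4:6], cost[4, 4:6] = 1.0, 0.05
print(trace_boundary(cost))
```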

  4. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishizuka, N.; Kubo, Y.; Den, M.

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  5. Connecting Coronal Mass Ejections to Their Solar Active Region Sources: Combining Results from the HELCATS and FLARECAST Projects

    NASA Astrophysics Data System (ADS)

    Murray, Sophie A.; Guerra, Jordan A.; Zucca, Pietro; Park, Sung-Hong; Carley, Eoin P.; Gallagher, Peter T.; Vilmer, Nicole; Bothmer, Volker

    2018-04-01

    Coronal mass ejections (CMEs) and other solar eruptive phenomena can be physically linked by combining data from a multitude of ground-based and space-based instruments alongside models; however, this can be challenging for automated operational systems. The EU Framework Programme 7 HELCATS project provides catalogues of CME observations and properties from the Heliospheric Imagers on board the two NASA/STEREO spacecraft in order to track the evolution of CMEs in the inner heliosphere. From the main HICAT catalogue of over 2,000 CME detections, an automated algorithm has been developed to connect the CMEs observed by STEREO to any corresponding solar flares and active-region (AR) sources on the solar surface. CME kinematic properties, such as speed and angular width, are compared with AR magnetic field properties, such as magnetic flux, area, and neutral line characteristics. The resulting LOWCAT catalogue is also compared to the extensive AR property database created by the EU Horizon 2020 FLARECAST project, which provides more complex magnetic field parameters derived from vector magnetograms. Initial statistical analysis has been undertaken on the new data to provide insight into the link between flare and CME events and the characteristics of eruptive ARs. Warning thresholds determined from the evolution of these parameters are shown to be a useful output for operational space weather purposes. Parameters of particular interest for further analysis include total unsigned flux, vertical current, and current helicity. The automated method developed to create the LOWCAT catalogue may also be useful for future efforts to develop operational CME forecasting.

  6. A Comparative Serological Study of Toxoplasmosis in Pregnant Women by CLIA and ELISA Methods in Chalus City Iran.

    PubMed

    Elahian Firouz, Zahra; Kaboosi, Hami; Faghih Nasiri, Abdolreza; Tabatabaie, Seyed Saleh; Golhasani-Keshtan, Farideh; Zaboli, Fatemeh

    2014-04-01

    Toxoplasmosis is among the most common diseases shared by humans and animals (zoonoses) and is caused by the protozoan parasite Toxoplasma gondii. The disease usually remains asymptomatic in immunocompetent individuals; its most common symptom, when present, is lymphadenopathy. Shortly before or during the first trimester of pregnancy, the infection can be transmitted to the fetus and cause serious fetal disease. In late pregnancy (third trimester), the complications of this infection are very mild or pass unnoticed. Because clinical symptoms in pregnant women are non-specific or slight, prenatal diagnosis is often impossible. Since no previous research had compared these two methods, we decided to compare them and determine which works better for the diagnosis of toxoplasmosis. In this study, 50 pregnant women referred to the Chalus Health Center laboratory were included, and their blood samples were tested for the presence of IgG and IgM antibodies against Toxoplasma gondii by both the ELISA and chemiluminescence (CLIA) methods. Of the 50 samples tested by ELISA, 26 (52%) were positive for IgG; none were positive for IgM. Of the 50 samples tested by CLIA, 28 (56%) were positive for IgG; none were positive for IgM. A significant relationship was seen between the age of the youngest child and the infection rate. No significant correlation was seen between infection levels and age, number of individuals in the household, number of children, location, type of construction, consumption of greens, the way greens and meat are consumed, drug use, or history of stillbirth.

  7. Characteristics of Low-latitude Coronal Holes near the Maximum of Solar Cycle 24

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmeister, Stefan J.; Veronig, Astrid; Reiss, Martin A.

    We investigate the statistics of 288 low-latitude coronal holes extracted from SDO/AIA-193 filtergrams over the time range 2011 January 01–2013 December 31. We analyze the distribution of characteristic coronal hole properties, such as the areas, mean AIA-193 intensities, and mean magnetic field densities, the local distribution of the SDO/AIA-193 intensity and the magnetic field within the coronal holes, and the distribution of magnetic flux tubes in coronal holes. We find that the mean magnetic field density of all coronal holes under study is 3.0 ± 1.6 G, and the percentage of unbalanced magnetic flux is 49 ± 16%. The mean magnetic field density, the mean unsigned magnetic field density, and the percentage of unbalanced magnetic flux of coronal holes depend strongly pairwise on each other, with correlation coefficients cc > 0.92. Furthermore, we find that the unbalanced magnetic flux of the coronal holes is predominantly concentrated in magnetic flux tubes: 38% (81%) of the unbalanced magnetic flux of coronal holes arises from only 1% (10%) of the coronal hole area, clustered in magnetic flux tubes with field strengths >50 G (>10 G). The average magnetic field density and the unbalanced magnetic flux derived from the magnetic flux tubes correlate with the mean magnetic field density and the unbalanced magnetic flux of the overall coronal hole (cc > 0.93). These findings give evidence that the overall magnetic characteristics of coronal holes are governed by the characteristics of the magnetic flux tubes.
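
    A minimal sketch (not the authors' code) of the unbalanced-flux percentage quoted above, computed from a line-of-sight magnetogram cut-out as the absolute net flux divided by the total unsigned flux:

    ```python
    # Sketch: percentage of unbalanced magnetic flux in a coronal hole patch.
    import numpy as np

    def unbalanced_flux_fraction(b_los):
        """|net flux| / total unsigned flux, as a percentage."""
        b = np.asarray(b_los, dtype=float)
        return 100.0 * abs(b.sum()) / np.abs(b).sum()

    # A patch dominated by one polarity is strongly unbalanced (~84.6% here):
    patch = np.array([[-5.0, -3.0], [-4.0, 1.0]])
    print(unbalanced_flux_fraction(patch))
    ```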

  8. Solar photospheric network properties and their cycle variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thibault, K.; Charbonneau, P.; Béland, M., E-mail: kim@astro.umontreal.ca, E-mail: paulchar@astro.umontreal.ca, E-mail: michel.beland@calculquebec.ca

    We present a numerical simulation of the formation and evolution of the solar photospheric magnetic network over a full solar cycle. The model exhibits realistic behavior as it produces large, unipolar concentrations of flux in the polar caps, a power-law flux distribution with index –1.69, a flux replacement timescale of 19.3 hr, and supergranule diameters of 20 Mm. The polar behavior is especially telling of model accuracy, as it results from lower-latitude activity and accumulates the residues of any potential modeling inaccuracy and oversimplification. In this case, the main oversimplification is the absence of a polar sink for the flux, causing an amount of polar cap unsigned flux larger than expected by almost one order of magnitude. Nonetheless, our simulated polar caps carry the proper signed flux and dipole moment, and also show a spatial distribution of flux in good qualitative agreement with recent high-latitude magnetographic observations by Hinode. After the last cycle emergence, the simulation is extended until the network has recovered its quiet-Sun initial condition. This permits an estimate of the network relaxation time toward the baseline state characterizing extended periods of suppressed activity, such as the Maunder Grand Minimum. Our simulation results indicate a network relaxation time of 2.9 yr, setting 2011 October as the earliest time after which the last solar activity minimum could have qualified as a Maunder-type minimum. This suggests that photospheric magnetism did not reach its baseline state during the recent extended minimum between cycles 23 and 24.
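
    A hedged sketch of the kind of fit behind the quoted power-law flux-distribution index of –1.69: the standard maximum-likelihood estimator applied to synthetic samples, not the authors' pipeline:

    ```python
    # Sketch: maximum-likelihood estimate of a power-law index above x_min.
    import numpy as np

    def powerlaw_index_mle(x, x_min):
        """MLE (Hill-type) estimator for p(x) ~ x^-alpha, x >= x_min."""
        tail = np.asarray(x, dtype=float)
        tail = tail[tail >= x_min]
        return 1.0 + tail.size / np.log(tail / x_min).sum()

    # Draw from p(x) ~ x^-1.69 on [1, inf) by inverse-transform sampling
    rng = np.random.default_rng(0)
    alpha = 1.69
    samples = (1.0 - rng.random(100_000)) ** (-1.0 / (alpha - 1.0))
    print(powerlaw_index_mle(samples, x_min=1.0))  # recovers ~1.69
    ```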

  9. Choice with frequently changing food rates and food ratios.

    PubMed

    Baum, William M; Davison, Michael

    2014-03-01

    In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while the food ratio varied, the food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performance at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent, the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale. © Society for the Experimental Analysis of Behavior.
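
    A short sketch of the generalized matching law referenced above, log(B_L/B_R) = a log(r_L/r_R) + log c, fitting sensitivity a and bias c; the ratios are illustrative placeholders, not the study's data:

    ```python
    # Sketch: fitting the generalized matching law in log-log coordinates.
    import numpy as np

    food_ratio = np.array([0.25, 0.5, 1.0, 2.0, 4.0])       # r_L / r_R
    behavior_ratio = np.array([0.30, 0.55, 1.0, 1.9, 3.4])  # B_L / B_R

    a, log_c = np.polyfit(np.log10(food_ratio), np.log10(behavior_ratio), 1)
    print(f"sensitivity a = {a:.2f}, bias c = {10**log_c:.2f}")
    # a near 1 indicates strict matching; a < 1 indicates undermatching.
    ```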

  10. Extension of structure-reactivity correlations for the hydrogen abstraction reaction by bromine atom and comparison to chlorine atom and hydroxyl radical

    DOE PAGES

    Poutsma, Marvin L.

    2015-12-14

    Recently we presented structure-reactivity correlations for the gas-phase ambient-temperature rate constants for hydrogen abstraction from sp3-hybridized carbon by chlorine atom and hydroxyl radical (Cl•/HO• + HCR3 → HCl/HOH + •CR3); the reaction enthalpy effect was represented by the independent variable ΔrH and the polar effect by the independent variables F and R, the Hammett constants for field/inductive and resonance effects. Both these reactions are predominantly exothermic and have early transition states. Here we present a parallel treatment for Br•, whose reaction is significantly endothermic with a correspondingly late transition state. In spite of lower expectations, because the available database is less extensive and much more scattered and because long temperature extrapolations are often required, the resulting least-squares fit (log k298,Br = –0.147 ΔrH – 4.32 ΣF – 4.28 ΣR – 12.38, with r² = 0.92) was modestly successful and useful for initial predictions. The coefficient of ΔrH was ~4-fold greater, indicative of the change from an early to a late transition state; meanwhile, the sizable coefficients of ΣF and ΣR indicate the persistence of the polar effect. Although the mean unsigned deviation of 0.79 log k298 units is rather large, it must be considered in the context of a total span of over 15 log units in the data set. Lastly, the major outliers are briefly discussed.
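
    The fitted correlation quoted above can be wrapped directly as a predictive function; the coefficients come from the abstract, while the example inputs are hypothetical:

    ```python
    # Sketch: initial prediction of log k(298 K) for Br-atom H-abstraction.
    def log_k298_Br(delta_rH, sum_F, sum_R):
        """log10 k(298) from reaction enthalpy and Hammett polar terms."""
        return -0.147 * delta_rH - 4.32 * sum_F - 4.28 * sum_R - 12.38

    # Hypothetical substrate: delta_rH = -5 kcal/mol, small polar parameters
    print(log_k298_Br(delta_rH=-5.0, sum_F=0.1, sum_R=-0.05))  # ~ -11.9
    ```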

  11. Analytical study into El Greco's baptism of Christ: clues to the genius of his palette

    NASA Astrophysics Data System (ADS)

    Daniilia, S.; Andrikopoulos, K. S.; Sotiropoulou, S.; Karapanagiotis, I.

    2008-03-01

    What the discerning gaze of the art historian has deduced from comparisons in style (namely, that the unsigned Baptism of Christ, dated 1567, comes from the hand of the master Cretan painter El Greco) is now investigated by the dispassionate eye of technology. The examination by means of analytical methods of diagnosis aimed to make an in-depth investigation into the hitherto unknown personal traits of the artist’s painting technique. By observing the cross-sections under the optical microscope and analyzing the materials through the application of μRaman and μFTIR spectroscopies and of high performance liquid chromatography (HPLC/DAD), it was possible to reveal the “fingerprints” of the artist’s brushwork. In his masterfully executed Baptism, El Greco succeeded, through his perspicacity and ingenuity, in combining traditional techniques of Byzantine icon-painting with the innovative practices of Venetian Renaissance art. The artist’s palette contains mineral, earth, and natural organic pigments, as well as some synthetic ones on a glass or resin base: lapis lazuli, indigo, lead-tin yellow, orpiment, yellow ochre, cochineal lake, copper resinate, burnt umber, lead white, and carbon black. Furthermore, he introduces a layer of white imprimatura containing varied combinations of powdered glass and lead white. The detection of substantial similarities between the glass varieties used in the Baptism and those found in works by Venetian painters contemporary with El Greco (such as Tintoretto) further attests to the ascription of the Baptism to the period of the artist’s brief sojourn in Venice.

  12. Email notification combined with off site signing substantially reduces resident approval to faculty verification time.

    PubMed

    Deitte, Lori A; Moser, Patricia P; Geller, Brian S; Sistrom, Chris L

    2011-06-01

    Attending radiologist signature time (AST) is a variable and modifiable component of overall report turnaround time. Delays in finalized reports have potential to undermine radiologists' value as consultants and adversely affect patient care. This study was performed to evaluate the impact of notebook computer distribution and daily automated e-mail notification on reducing AST. Two simultaneous interventions were initiated in the authors' radiology department in February 2010. These included the distribution of a notebook computer with preloaded software for each attending radiologist to sign radiology reports and daily automated e-mail notifications for unsigned reports. The digital dictation system archive and the radiology information system were queried for all radiology reports produced from January 2009 through August 2010. The time between resident approval and attending radiologist signature before and after the intervention was analyzed. Potential unintended "side effects" of the intervention were also studied. Resident-authored reports were signed, on average, 2.53 hours sooner after the intervention. This represented a highly significant (P = .003) decrease in AST with all else held equal. Postintervention reports were authored by residents at the same rate (about 70%). An unintended "side effect" was that attending radiologists were less likely to make changes to resident-authored reports after the intervention. E-mail notification combined with offsite signing can reduce AST substantially. Notebook computers with preloaded software streamline the process of accessing, editing, and signing reports. The observed decrease in AST reflects a positive change in the timeliness of report signature. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.

  13. Resistance to Change and Relapse of Observing

    PubMed Central

    Thrailkill, Eric A; Shahan, Timothy A

    2012-01-01

    Four experiments examined relapse of extinguished observing behavior of pigeons using a two-component multiple schedule of observing-response procedures. In both components, unsignaled periods of variable-interval (VI) food reinforcement alternated with extinction, and observing responses produced stimuli associated with the availability of the VI schedule (i.e., S+). The components differed in the rate of food arranged (Rich = VI 30 s; Lean = VI 120 s). In Experiment 1, following baseline training, extinction of observing involved removal of both food and S+ deliveries, and reinstatement was examined by presenting either response-independent food or S+ deliveries. In Experiment 2, extinction involved removal of only food deliveries while observing responses continued to produce S+. Reinstatement was examined by delivering food contingent upon the first two food-key responses occurring in the presence of the S+. Experiment 3 assessed ABA renewal of observing by extinguishing food-key and observing responses in the presence of one contextual stimulus (i.e., B) and then returning to the original training context (i.e., A) during continued extinction. Experiment 4 examined resurgence by introducing food reinforcement for an alternative response during extinction, and subsequently removing that alternative source of food. Across experiments, relative resistance to extinction and relapse of observing tended to be greater in the component previously associated with the higher rate of primary reinforcement. Relapse of observing or attending to stimuli associated with primary reinforcement appears to be impacted by the frequency of primary reinforcement in a manner similar to responding maintained directly by primary reinforcement. PMID:22693359

  14. MEMO--a mobile phone depression prevention intervention for adolescents: development process and postprogram findings on acceptability from a randomized controlled trial.

    PubMed

    Whittaker, Robyn; Merry, Sally; Stasiak, Karolina; McDowell, Heather; Doherty, Iain; Shepherd, Matthew; Dorey, Enid; Parag, Varsha; Ameratunga, Shanthi; Rodgers, Anthony

    2012-01-24

    Prevention of the onset of depression in adolescence may prevent social dysfunction, teenage pregnancy, substance abuse, suicide, and mental health conditions in adulthood. New technologies allow delivery of prevention programs scalable to large and disparate populations. Our aim was to develop and test novel mobile phone delivery of a depression prevention intervention for adolescents. We describe the development of the intervention and the results of participants' self-reported satisfaction with the intervention. The intervention was developed from 15 key messages derived from cognitive behavioral therapy (CBT). The program was fully automated and delivered in 2 mobile phone messages/day for 9 weeks, with a mixture of text, video, and cartoon messages and a mobile website. Delivery modalities were guided by social cognitive theory and marketing principles. The intervention was compared with an attention control program of the same number and types of messages on different topics. A double-blind randomized controlled trial was undertaken in high schools in Auckland, New Zealand, from June 2009 to April 2011. A total of 1348 students (13-17 years of age) volunteered to participate at group sessions in schools, and 855 were eventually randomly assigned to groups. Of these, 835 (97.7%) self-completed follow-up questionnaires at postprogram interviews on satisfaction, perceived usefulness, and adherence to the intervention. Over three-quarters of participants viewed at least half of the messages, and 90.7% (379/418) in the intervention group reported they would refer the program to a friend. Intervention group participants said the intervention helped them to be more positive (279/418, 66.7%) and to get rid of negative thoughts (210/418, 50.2%)--significantly higher proportions than in the control group. Key messages from CBT can be delivered by mobile phone, and young people report that these are helpful. Analysis of the change in clinician-rated depression symptom scores from baseline to 12 months, yet to be completed, will provide evidence on the effectiveness of the intervention. If proven effective, this form of delivery may be useful in many countries lacking widespread mental health services but with extensive mobile phone coverage. Clinical trial: Australia New Zealand Clinical Trials Registry (ACTRN): 12609000405213; http://www.anzctr.org.au/trial_view.aspx?ID=83667 (Archived by WebCite at http://www.webcitation.org/64aueRqOb).

  15. Pedestrian temporal and spatial gap acceptance at mid-block street crossing in developing world.

    PubMed

    Pawar, Digvijay S; Patil, Gopal R

    2015-02-01

    Most midblock pedestrian crossings on urban roads in India are uncontrolled, and the high degree of discretion in pedestrians' behavior while crossing the traffic stream makes the situation complex to analyze. Vehicles do not yield to pedestrians, even though the traffic laws give priority to pedestrians over motorized vehicles at unsignalized pedestrian crossings. Therefore, a pedestrian has to decide whether an available gap is safe for crossing. This paper aims to investigate pedestrian temporal and spatial gap acceptance for midblock street crossings. Field data were collected using video cameras at two midblock pedestrian crossings. Data extraction in the laboratory yielded 1107 pedestrian gaps. Available gaps, pedestrians' decisions, traffic volume, etc. were extracted from the videos. While crossing a road with multiple lanes, rolling gap acceptance behavior was observed. Using binary logit analysis, six utility models were developed, three each for temporal and spatial gaps. The 50th percentile temporal and spatial gaps ranged from 4.1 to 4.8 s and 67 to 79 m, respectively, whereas the 85th percentile temporal and spatial gaps ranged from 5 to 5.8 s and 82 to 95 m, respectively. These gap values were smaller than those reported in studies in developed countries. The speed of the conflicting vehicle was found to be significant in spatial gap acceptance but not in temporal gap acceptance. The gap acceptance decision was also found to be affected by the type of conflicting vehicle. The insights from this study can be used for the safety and performance evaluation of uncontrolled midblock street crossings in developing countries. Copyright © 2014 Elsevier Ltd and National Safety Council. All rights reserved.
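
    A hedged sketch of a binary logit gap-acceptance model of the kind described above; the synthetic data and single-predictor utility are simplifications of the paper's six multi-variable models:

    ```python
    # Sketch: binary logit model of gap acceptance vs. gap duration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    gaps = rng.uniform(1.0, 10.0, size=500)              # gap durations (s)
    p_true = 1.0 / (1.0 + np.exp(-1.5 * (gaps - 4.5)))   # synthetic truth
    accepted = rng.random(500) < p_true                  # accept/reject

    model = LogisticRegression().fit(gaps.reshape(-1, 1), accepted)
    b1, b0 = model.coef_[0, 0], model.intercept_[0]
    # The 50th percentile (critical) gap is where P(accept) = 0.5:
    print(f"critical gap = {-b0 / b1:.2f} s")            # ~4.5 s
    ```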

  16. Modeling the Partial Atomic Charges in Inorganometallic Molecules and Solids and Charge Redistribution in Lithium-Ion Cathodes

    DOE PAGES

    Wang, Bo; Li, Shaohong L.; Truhlar, Donald G.

    2014-10-30

    Partial atomic charges are widely used for the description of charge distributions of molecules and solids. These charges are useful to indicate the extent of charge transfer and charge flow during chemical reactions in batteries, fuel cells, and catalysts and to characterize charge distributions in capacitors, liquid-phase electrolytes, and solids and at electrochemical interfaces. However, partial atomic charges given by various charge models differ significantly, especially for systems containing metal atoms. In the present study, we have compared various charge models on both molecular systems and extended systems, including Hirshfeld, CM5, MK, ChElPG, Mulliken, MBS, NPA, DDEC, LoProp, and Bader charges. Their merits and drawbacks are compared. The CM5 charge model is found to perform well on the molecular systems, with a mean unsigned percentage deviation of only 9% for the dipole moments. We therefore formulated it for extended systems and applied it to study charge flow during the delithiation process in lithium-containing oxides used as cathodes. Our calculations show that the charges given by the CM5 charge model are reasonable and that during the delithiation process, the charge flow can occur not only on the transition metal but also on the anions. The oxygen atoms can lose a significant density of electrons, especially for deeply delithiated materials. We also discuss other methods in current use to analyze the charge transfer and charge flow in batteries, in particular the use of formal charge, spin density, and orbital occupancy. Here, we conclude that CM5 charges provide useful information in describing charge distributions in various materials and are very promising for the study of charge transfer and charge flows in both molecules and solids.
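
    A small sketch of the mean unsigned percentage deviation metric quoted above, here applied to dipole moments; the arrays are illustrative placeholders:

    ```python
    # Sketch: mean unsigned percentage deviation (MUPD) of dipole moments.
    import numpy as np

    def mean_unsigned_percentage_deviation(calc, ref):
        calc, ref = np.asarray(calc, float), np.asarray(ref, float)
        return 100.0 * np.mean(np.abs(calc - ref) / np.abs(ref))

    calc_dipoles = [1.9, 0.8, 3.2]  # model dipole moments (debye)
    ref_dipoles = [2.0, 0.9, 3.0]   # reference values (debye)
    print(mean_unsigned_percentage_deviation(calc_dipoles, ref_dipoles))
    ```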

  17. Modeling Coronal Response in Decaying Active Regions with Magnetic Flux Transport and Steady Heating

    NASA Astrophysics Data System (ADS)

    Ugarte-Urra, Ignacio; Warren, Harry P.; Upton, Lisa A.; Young, Peter R.

    2017-09-01

    We present new measurements of the dependence of the extreme ultraviolet (EUV) radiance on the total magnetic flux in active regions as obtained from the Atmospheric Imaging Assembly (AIA) and the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. Using observations of nine active regions tracked along different stages of evolution, we extend the known radiance—magnetic flux power-law relationship (I ∝ Φ^α) to the AIA 335 Å passband, and the Fe XVIII 93.93 Å spectral line in the 94 Å passband. We find that the total unsigned magnetic flux divided by the polarity separation (Φ/D) is a better indicator of radiance for the Fe XVIII line, with a slope of α = 3.22 ± 0.03. We then use these results to test our current understanding of magnetic flux evolution and coronal heating. We use magnetograms from the simulated decay of these active regions produced by the Advective Flux Transport model as boundary conditions for potential extrapolations of the magnetic field in the corona. We then model the hydrodynamics of each individual field line with the Enthalpy-based Thermal Evolution of Loops model, with steady heating scaled as the ratio of the average field strength to the length (B̄/L), and render the Fe XVIII and 335 Å emission. We find that steady heating is able to partially reproduce the magnitudes and slopes of the EUV radiance—magnetic flux relationships, and we discuss how impulsive heating can help reconcile the discrepancies. This study demonstrates that combined models of magnetic flux transport, magnetic topology, and heating can yield realistic estimates for the decay of active region radiances with time.
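
    A minimal sketch of fitting the radiance–magnetic flux power law I ∝ Φ^α as a straight line in log-log space; the values below are synthetic placeholders, not AIA measurements:

    ```python
    # Sketch: recovering the power-law slope alpha from a log-log fit.
    import numpy as np

    phi = np.array([1e21, 3e21, 1e22, 3e22])   # total unsigned flux (Mx)
    noise = 1 + 0.05 * np.random.default_rng(2).random(4)
    radiance = 1e-60 * phi**3.2 * noise        # synthetic radiances

    alpha, log_c = np.polyfit(np.log10(phi), np.log10(radiance), 1)
    print(f"alpha = {alpha:.2f}")              # close to the input slope 3.2
    ```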

  18. Magnetic Flux Rope Identification and Characterization from Observationally Driven Solar Coronal Models

    NASA Astrophysics Data System (ADS)

    Lowder, Chris; Yeates, Anthony

    2017-09-01

    Formed through magnetic field shearing and reconnection in the solar corona, magnetic flux ropes are structures of twisted magnetic field, threaded along an axis. Their evolution and potential eruption are of great importance for space weather. Here we describe a new methodology for the automated detection of flux ropes in simulated magnetic fields, utilizing field-line helicity. Our Flux Rope Detection and Organization (FRoDO) code, which measures the magnetic flux and helicity content of pre-erupting flux ropes over time, as well as detecting eruptions, is publicly available. As a first demonstration, the code is applied to the output from a time-dependent magnetofrictional model, spanning 1996 June 15–2014 February 10. Over this period, 1561 erupting and 2099 non-erupting magnetic flux ropes are detected, tracked, and characterized. For this particular model data, erupting flux ropes have a mean net helicity magnitude of 2.66 × 10^43 Mx^2, while non-erupting flux ropes have a significantly lower mean of 4.04 × 10^42 Mx^2, although there is overlap between the two distributions. Similarly, the mean unsigned magnetic flux for erupting flux ropes is 4.04 × 10^21 Mx, significantly higher than the mean value of 7.05 × 10^20 Mx for non-erupting ropes. These values for erupting flux ropes are within the broad range expected from observational and theoretical estimates, although the eruption rate in this particular model is lower than that of observed coronal mass ejections. In the future, the FRoDO code will prove to be a valuable tool for assessing the performance of different non-potential coronal simulations and comparing them with observations.

  19. Modeling the Partial Atomic Charges in Inorganometallic Molecules and Solids and Charge Redistribution in Lithium-Ion Cathodes.

    PubMed

    Wang, Bo; Li, Shaohong L; Truhlar, Donald G

    2014-12-09

    Partial atomic charges are widely used for the description of charge distributions of molecules and solids. These charges are useful to indicate the extent of charge transfer and charge flow during chemical reactions in batteries, fuel cells, and catalysts and to characterize charge distributions in capacitors, liquid-phase electrolytes, and solids and at electrochemical interfaces. However, partial atomic charges given by various charge models differ significantly, especially for systems containing metal atoms. In the present study, we have compared various charge models on both molecular systems and extended systems, including Hirshfeld, CM5, MK, ChElPG, Mulliken, MBS, NPA, DDEC, LoProp, and Bader charges. Their merits and drawbacks are compared. The CM5 charge model is found to perform well on the molecular systems, with a mean unsigned percentage deviation of only 9% for the dipole moments. We therefore formulated it for extended systems and applied it to study charge flow during the delithiation process in lithium-containing oxides used as cathodes. Our calculations show that the charges given by the CM5 charge model are reasonable and that during the delithiation process, the charge flow can occur not only on the transition metal but also on the anions. The oxygen atoms can lose a significant density of electrons, especially for deeply delithiated materials. We also discuss other methods in current use to analyze the charge transfer and charge flow in batteries, in particular the use of formal charge, spin density, and orbital occupancy. We conclude that CM5 charges provide useful information in describing charge distributions in various materials and are very promising for the study of charge transfer and charge flows in both molecules and solids.

  20. Diagnostics of Turbulent Dynamo from the Flux Emergence Rate in Solar Active Regions

    NASA Astrophysics Data System (ADS)

    Abramenko, V. I.; Tikhonova, O. I.; Kutsenko, A. S.

    2017-12-01

    Line-of-sight magnetograms acquired by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) and by the Michelson Doppler Imager (MDI) onboard the Solar and Heliospheric Observatory (SOHO) for 14 emerging ARs were used to study the derivative of the total unsigned flux, i.e., the flux emergence rate R(t). We found that the emergence regime is not universal: each AR displays a unique emergence process. Nevertheless, two types of emergence process can be identified. The first type is a "regular" emergence with quasi-constant behavior of R(t) during a 1-3 day emergence interval and a rather low magnitude of the flux derivative, R_max = (0.57 ± 0.22) × 10^22 Mx day^-1. The second type can be described as "accelerated" emergence, with a long interval (>1 day) of rapidly increasing flux derivative R(t) that results in a rather high magnitude of R_max = (0.92 ± 0.29) × 10^22 Mx day^-1, which later changes to a very short (about one-third of a day) interval of R(t) = const followed by a monotonous decrease of R(t). The first-type events might be associated with the emergence of a flux tube with a constant amount of flux that rises through the photosphere with a quasi-constant speed. Such events can be explained by the traditional large-scale solar dynamo generating the toroidal flux deep in the convective zone. The second-type events can be interpreted as a signature of sub-surface turbulent dynamo action that generates additional magnetic flux (via turbulent motions) as the magnetic structure makes its way up to the solar surface.
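
    A short sketch of estimating the flux emergence rate R(t) as the time derivative of the total unsigned flux, as described above; the flux curve is synthetic rather than an HMI/MDI time series:

    ```python
    # Sketch: flux emergence rate as the numerical derivative of the flux.
    import numpy as np

    t = np.linspace(0.0, 3.0, 73)            # time (days), hourly cadence
    flux = 1e22 * (1.0 - np.exp(-t))         # total unsigned flux (Mx)

    R = np.gradient(flux, t)                 # emergence rate R(t) (Mx/day)
    print(f"peak R = {R.max():.2e} Mx/day at t = {t[np.argmax(R)]:.2f} d")
    ```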

  1. De novo reconstruction of gene regulatory networks from time series data, an approach based on formal methods.

    PubMed

    Ceccarelli, Michele; Cerulo, Luigi; Santone, Antonella

    2014-10-01

    Reverse engineering of gene regulatory relationships from genomics data is a crucial task to dissect the complex underlying regulatory mechanisms occurring in a cell. From a computational point of view, the reconstruction of gene regulatory networks is an underdetermined problem, as the number of possible solutions is typically large in contrast to the number of available independent data points. Many possible solutions can fit the available data, explaining the data equally well, but only one of them can be the biologically true solution. Several strategies have been proposed in the literature to reduce the search space and/or extend the amount of independent information. In this paper we propose a novel algorithm based on formal methods, mathematically rigorous techniques widely adopted in engineering to specify and verify complex software and hardware systems. Starting with a formal specification of gene regulatory hypotheses, we are able to mathematically prove whether or not a time-course experiment satisfies the formal specification, determining in effect whether a gene regulation exists or not. The method is able to detect both the direction and the sign (inhibition/activation) of regulations, whereas most literature methods are limited to undirected and/or unsigned relationships. We empirically evaluated the approach on experimental and synthetic datasets in terms of precision and recall. In most cases we observed high levels of accuracy, outperforming the current state of the art, although the computational cost increases exponentially with the size of the network. We made the tool implementing the algorithm available at the following URL: http://www.bioinformatics.unisannio.it. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Temporal evolution of solar wind ion composition and their source coronal holes during the declining phase of cycle 23. I. Low-latitude extension of polar coronal holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ko, Yuan-Kuen; Wang, Yi-Ming; Muglach, Karin

    2014-06-01

    We analyzed 27 solar wind (SW) intervals during the declining phase of cycle 23, whose source coronal holes (CHs) can be unambiguously identified and are associated with one of the polar CHs. We found that the SW ions have a temporal trend of decreasing ionization state, and such a trend is different between the slow and fast SW. The photospheric magnetic field, both inside and at the outside boundary of the CH, also exhibits a trend of decrease with time. However, EUV line emissions from different layers of the atmosphere exhibit different temporal trends. The coronal emission inside the CH generally increases toward the CH boundary as the underlying field increases in strength and becomes less unipolar. In contrast, this relationship is not seen in the coronal emission averaged over the entire CH. For C and O SW ions that freeze-in at lower altitude, stronger correlation between their ionization states and field strength (both signed and unsigned) appears in the slow SW, while for Fe ions that freeze-in at higher altitude, stronger correlation appears in the fast SW. Such correlations are seen both inside the CH and at its boundary region. On the other hand, the coronal electron temperature correlates well with the SW ion composition only in the boundary region. Our analyses, although not able to determine the likely footpoint locations of the SW of different speeds, raise many outstanding questions for how the SW is heated and accelerated in response to the long-term evolution of the solar magnetic field.

  3. Mapping Antarctica using Landsat-8 - the preliminary results

    NASA Astrophysics Data System (ADS)

    Cheng, X.; Hui, F.; Qi, X.

    2014-12-01

    The first Landsat Image Mosaic of Antarctica (LIMA) was released in 2009; it was created by USGS, BAS, and NASA from more than 1,000 Landsat ETM+ scenes. As the first major scientific outcome of the IPY, LIMA supports current scientific polar research, encourages new projects, and helps the general public visualize Antarctica and the changes happening to this southernmost environment. As the latest satellite of the Landsat mission, Landsat-8 images the entire Earth every 16 days in an 8-day offset from Landsat-7. Data collected by the instruments onboard the satellite are available to download at no charge within 24 hours of reception. The standard Landsat 8 products provided by the USGS EROS Center consist of quantized and calibrated scaled digital numbers (DNs) in 16-bit unsigned integer format, which can be rescaled to top-of-atmosphere (TOA) reflectance and/or radiance. With the support of the USGS portal, we searched for and downloaded more than 1600 scenes of Level 1T (terrain-corrected) Landsat 8 image products covering Antarctica from late 2013 to early 2014. These data were converted to planetary radiance for further processing. Because the distribution of clouds in these images is random and complicated, statistics on the cloud distribution were computed to guide the masking of thicker clouds, keeping as much useful information as possible while avoiding observation holes. A preliminary result of the Landsat-8 mosaic of Antarctica, produced under the joint efforts of Beijing Normal University, NSIDC, and the University of Maryland, will be released at this AGU Fall Meeting. A comparison between the Landsat 7 and 8 mosaic products will also be made to identify the differences and advantages of the two products.
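
    A hedged sketch of rescaling Landsat 8 16-bit DNs to TOA reflectance using the standard USGS linear rescaling plus a solar-elevation correction; the gain and offset shown are the typical values carried in a scene's MTL metadata and should be read from the actual scene:

    ```python
    # Sketch: DN -> TOA reflectance for Landsat 8 (rho = M*DN + A, then
    # divided by the sine of the sun elevation angle).
    import numpy as np

    def dn_to_toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elev_deg=35.0):
        dn = np.asarray(dn, dtype=float)
        rho = mult * dn + add                          # uncorrected TOA
        return rho / np.sin(np.radians(sun_elev_deg))  # solar correction

    print(dn_to_toa_reflectance([8000, 30000]))        # sample 16-bit DNs
    ```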

  4. Combined quantum mechanical and molecular mechanical method for metal-organic frameworks: proton topologies of NU-1000.

    PubMed

    Wu, Xin-Ping; Gagliardi, Laura; Truhlar, Donald G

    2018-01-17

    Metal-organic frameworks (MOFs) are materials with applications in catalysis, gas separations, and storage. Quantum mechanical (QM) calculations can provide valuable guidance to understand and predict their properties. In order to make the calculations faster, rather than modeling these materials as periodic (infinite) systems, it is useful to construct finite models (called cluster models) and use subsystem methods such as fragment methods or combined quantum mechanical and molecular mechanical (QM/MM) methods. Here we employ a QM/MM methodology to study one particular MOF that has been of widespread interest because of its wide pores and good solvent and thermal stability, namely NU-1000, which contains hexanuclear zirconium nodes and 1,3,6,8-tetrakis(p-benzoic acid)pyrene (TBAPy^4-) linkers. A modified version of the Bristow-Tiana-Walsh transferable force field has been developed to allow QM/MM calculations on NU-1000; we call the new parametrization the NU1T force field. We consider isomeric structures corresponding to various proton topologies of the [Zr6(μ3-O)8O8H16]^8+ node of NU-1000, and we compute their relative energies using a QM/MM scheme designed for the present kind of problem. We compared the results to full quantum mechanical (QM) energy calculations and found that the QM/MM models can reproduce the full QM relative energetics (which span a range of 334 kJ mol^-1) with a mean unsigned deviation (MUD) of only 2 kJ mol^-1. Furthermore, we found that the structures optimized by QM/MM are nearly identical to their full QM optimized counterparts.

  5. Adolescent nicotine exposure disrupts context conditioning in adulthood in rats.

    PubMed

    Spaeth, Andrea M; Barnet, Robert C; Hunt, Pamela S; Burk, Joshua A

    2010-10-01

    Despite the prevalence of smoking among adolescents, few studies have assessed the effects of adolescent nicotine exposure on learning in adulthood. In particular, it remains unclear whether adolescent nicotine exposure has effects on hippocampus-dependent learning that persist into adulthood. The present experiment examined whether there were effects of adolescent nicotine exposure on context conditioning, a form of learning dependent on the integrity of the hippocampus, when tested during adulthood. Rats were exposed to nicotine during adolescence (postnatal days [PD] 28-42) via osmotic minipump (0, 3.0, or 6.0 mg/kg/day). Context conditioning occurred in early adulthood (PD 65-70). Animals were exposed to an experimental context and were given 10 unsignaled footshocks or no shock. Additional groups were included to test the effects of adolescent nicotine on delay conditioning, a form of learning that is not dependent upon the hippocampus. Conditioning was assessed using a lick suppression paradigm. For animals in the context conditioning groups, adolescent nicotine resulted in significantly less suppression of drinking in the presence of context cues compared with vehicle-pretreated animals. For animals in the delay conditioning groups, there was a trend for adolescent nicotine (3.0 mg/kg/day) to suppress drinking compared to vehicle-pretreated animals. There were no differences in extinction of contextual fear or cued fear between rats previously exposed to vehicle or nicotine. The data indicate that adolescent nicotine administration impairs context conditioning when animals are trained and tested as adults. The present data suggest that adolescent nicotine exposure may disrupt hippocampus-dependent learning when animals are tested during adulthood. (c) 2010 Elsevier Inc. All rights reserved.

  6. Is the straddle effect in contrast perception limited to second-order spatial vision?

    PubMed Central

    Graham, Norma V.; Wolfson, S. Sabina

    2018-01-01

    Previous work on the straddle effect in contrast perception (Foley, 2011; Graham & Wolfson, 2007; Wolfson & Graham, 2007, 2009) has used visual patterns and observer tasks of the type known as spatially second-order. After adaptation of about 1 s to a grid of Gabor patches all at one contrast, a second-order test pattern composed of two different test contrasts can be easy or difficult to perceive correctly. When the two test contrasts are both a bit less (or both a bit greater) than the adapt contrast, observers perform very well. However, when the two test contrasts straddle the adapt contrast (i.e., one of the test contrasts is greater than the adapt contrast and the other is less), performance drops dramatically. To explain this drop in performance—the straddle effect—we have suggested a contrast-comparison process. We began to wonder: Are second-order patterns necessary for the straddle effect? Here we show that the answer is “no”. We demonstrate the straddle effect using spatially first-order visual patterns and several different observer tasks. We also see the effect of contrast normalization using first-order visual patterns here, analogous to our prior findings with second-order visual patterns. We did find one difference between first- and second-order tasks: Performance in the first-order tasks was slightly lower. This slightly lower performance may be due to slightly greater memory load. For many visual scenes, the important quantity in human contrast processing may not be monotonic with physical contrast but may be something more like the unsigned difference between current contrast and recent average contrast. PMID:29904790

  7. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  8. Syntactic and semantic errors in radiology reports associated with speech recognition software.

    PubMed

    Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J

    2017-03-01

    Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Proportion of errors and fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.

  9. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
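
    An illustrative sketch (not the authors' protocol) of tracking a drifting error rate with Gaussian process regression and predicting it one step ahead:

    ```python
    # Sketch: estimate and extrapolate a time-dependent error rate with a GP.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 40).reshape(-1, 1)             # time (arb. units)
    true_rate = 0.01 + 0.005 * np.sin(0.6 * t).ravel()    # drifting rate
    observed = true_rate + 0.001 * rng.standard_normal(40)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0)
                                  + WhiteKernel(noise_level=1e-6))
    gp.fit(t, observed)
    pred, std = gp.predict([[10.5]], return_std=True)     # one step ahead
    print(f"predicted error rate: {pred[0]:.4f} +/- {std[0]:.4f}")
    ```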

  10. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  11. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. The effect of monetary punishment on error evaluation in a Go/No-go task.

    PubMed

    Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki

    2017-10-01

    Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activity. We compared high and low punishment conditions, where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment than in the control condition. Monetary punishment did not influence the overt-error ERN and partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go trials. Therefore, the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors, an early Pe was observed, presumably representing inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activities can be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  13. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  14. Spelling Errors of Dyslexic Children in Bosnian Language With Transparent Orthography.

    PubMed

    Duranović, Mirela

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian, a language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% phonological errors, 10% orthographic errors, and 4% grammatical errors. Furthermore, the majority of errors were omissions and substitutions, followed by insertions, omissions of the rules of assimilation by voicing, and errors in the use of suffixes. We can conclude that phonological errors were dominant in children with dyslexia at all grade levels.

  15. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    PubMed

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background. The issues of identifying a method to extract medical error factors and of reducing the extraction difficulty need to be resolved. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then closely related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology that can automatically identify medical error factors.
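
    A highly simplified sketch of the relational-model idea above: a small back-propagation network mapping encoded error-related items to an error-factor label. The GA-optimized training and the real 19-item/12-factor data are not reproduced; the inputs are random placeholders:

    ```python
    # Sketch: BPNN relating error-related items (toy data) to error factors.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(4)
    X = rng.integers(0, 3, size=(200, 19))  # 19 items, 3 levels each (toy)
    y = rng.integers(0, 12, size=200)       # one of 12 error factors (toy)

    bpnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    bpnn.fit(X, y)
    print(bpnn.predict(X[:3]))              # factor predictions, 3 cases
    ```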

  16. Sensitivity to prediction error in reach adaptation

    PubMed Central

    Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza

    2012-01-01

    It has been proposed that the brain predicts the sensory consequences of a movement and compares them to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors than from small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller than, the same size as, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
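
    A sketch of the error-sensitivity measure described above: the ratio of the trial-to-trial change in the motor command to the error on that trial, grouped by error size. The data come from a toy learner whose corrections saturate for large errors, not from the study:

    ```python
    # Sketch: error sensitivity declines with error magnitude.
    import numpy as np

    rng = np.random.default_rng(5)
    errors = rng.uniform(0.5, 4.0, 1000) * rng.choice([-1, 1], 1000)
    # Toy learner: corrections saturate as errors grow
    corrections = 0.3 * np.tanh(errors) + 0.02 * rng.standard_normal(1000)

    sensitivity = corrections / errors              # per-trial ratio
    bins = np.digitize(np.abs(errors), [1.0, 2.0, 3.0])
    for b in range(4):
        mean_s = sensitivity[bins == b].mean()
        print(f"|error| bin {b}: mean sensitivity = {mean_s:.3f}")
    # The printed means fall as |error| grows, mirroring the reported result.
    ```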

  17. Potential benefit of electronic pharmacy claims data to prevent medication history errors and resultant inpatient order errors

    PubMed Central

    Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A

    2016-01-01

    Objective We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817

  18. Impact of a reengineered electronic error-reporting system on medication event reporting and care process improvements at an urban medical center.

    PubMed

    McKaig, Donald; Collins, Christine; Elsaid, Khaled A

    2014-09-01

    A study was conducted to evaluate the impact of a reengineered approach to electronic error reporting at a 719-bed multidisciplinary urban medical center. The main outcome of interest was the number of medication errors reported per month during the preimplementation (20 months) and postimplementation (26 months) phases. An interrupted time series analysis was used to describe baseline errors, the immediate change following implementation of the current electronic error-reporting system (e-ERS), and the trend of error reporting during postimplementation. Errors were categorized according to severity using the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Medication Error Index classifications. Reported errors were further analyzed by reporter and error site. During preimplementation, the mean number of reported errors per month was 40.0 (95% confidence interval [CI]: 36.3-43.7). Immediately following e-ERS implementation, monthly reported errors significantly increased by 19.4 errors (95% CI: 8.4-30.5). The change in the slope of the reported-errors trend was estimated at 0.76 (95% CI: 0.07-1.22). Near misses and no-patient-harm errors accounted for 90% of all errors, while errors that caused increased patient monitoring or temporary harm accounted for 9% and 1%, respectively. Nurses were the most frequent reporters, while physicians were more likely to report high-severity errors. Medical care units accounted for approximately half of all reported errors. Following the intervention, there was a significant increase in reporting of prevented errors and errors that reached the patient with no resultant harm. This improvement in reporting was sustained for 26 months and has contributed to designing and implementing quality improvement initiatives to enhance the safety of the medication use process.
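
    The interrupted time series design described above is commonly fit as a segmented regression. The Python sketch below (all numbers simulated, statsmodels OLS) estimates the three quantities the abstract reports: baseline trend, immediate level change at implementation, and the post-implementation change in slope.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)

        # Hypothetical monthly error counts: 20 pre- and 26 post-implementation
        # months, with a level jump and a slope change at implementation.
        t = np.arange(46)
        post = (t >= 20).astype(int)
        t_since = np.where(post == 1, t - 20, 0)
        y = 40.0 + 0.1 * t + 19.4 * post + 0.76 * t_since + rng.normal(0, 5, 46)

        df = pd.DataFrame({"errors": y, "t": t, "post": post, "t_since": t_since})

        # Segmented regression: baseline level/trend, immediate level change
        # at implementation, and change in trend afterwards.
        model = smf.ols("errors ~ t + post + t_since", data=df).fit()
        print(model.params)        # intercept, pre-trend, level change, trend change
        print(model.conf_int())    # 95% confidence intervals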

  19. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety.

    PubMed

    Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie

    2014-01-01

    This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).

  20. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. Based on a gross-error correction method, quasi-accurate detection (QUAD), a method for the automatic correction of unwrapping errors is established in this paper. The method identifies and corrects unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress unwrapping errors when the proportion of unwrapping errors is low, and that the two methods complement each other when the proportion is relatively high. Finally, the method is tested on real SAR data for phase unwrapping error correction. Results show that the new method can successfully correct phase unwrapping errors in practical applications.
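
    For intuition only (this is a simplified illustration, not the QUAD algorithm itself): unwrapping errors are integer multiples of 2π, so given a smooth reference phase model, the residual can be rounded to whole cycles and removed. A toy one-dimensional Python version with a polynomial reference:

        import numpy as np

        rng = np.random.default_rng(2)

        n = 200
        true_phase = np.linspace(0, 30, n)          # smooth deformation signal
        noise = rng.normal(0, 0.2, n)
        # Sparse unwrapping errors: whole-cycle jumps of +/- 2*pi.
        jumps = 2 * np.pi * rng.choice([0, -1, 1], size=n, p=[0.9, 0.05, 0.05])
        observed = true_phase + noise + jumps

        # Fit a smooth reference model, round the residual to whole cycles,
        # and subtract the detected integer-cycle errors.
        reference = np.poly1d(np.polyfit(np.arange(n), observed, 3))(np.arange(n))
        cycles = np.round((observed - reference) / (2 * np.pi))
        corrected = observed - 2 * np.pi * cycles

        print("RMS error before:", np.sqrt(np.mean((observed - true_phase) ** 2)))
        print("RMS error after: ", np.sqrt(np.mean((corrected - true_phase) ** 2)))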

  1. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  2. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  3. Incidence of speech recognition errors in the emergency department.

    PubMed

    Goss, Foster R; Zhou, Li; Weiner, Scott G

    2016-09-01

    Physician use of computerized speech recognition (SR) technology has risen in recent years due to its ease of use and efficiency at the point of care. However, error rates between 10 and 23% have been observed, raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care, and the medical liability that may arise. Our aim was to determine the incidence and types of SR errors introduced by this technology in the emergency department (ED). The setting was a Level 1 emergency department with 42,000 visits/year in a tertiary academic teaching hospital. A random sample of 100 notes dictated by attending emergency physicians (EPs) using SR software was collected from the ED electronic health record between January and June 2012. Two board-certified EPs annotated the notes and conducted error analysis independently. An existing classification schema was adopted to classify errors into eight error types, and critical errors deemed to potentially impact patient care were identified. There were 128 errors in total, or 1.3 errors per note, and 14.8% (n=19) of errors were judged to be critical. 71% of notes contained errors, and 15% contained one or more critical errors. Annunciation errors were the most frequent at 53.9% (n=69), followed by deletions at 18.0% (n=23) and added words at 11.7% (n=15). Nonsense errors, homonyms, and spelling errors were present in 10.9% (n=14), 4.7% (n=6), and 0.8% (n=1) of notes, respectively. There were no suffix or dictionary errors. Inter-annotator agreement was 97.8%. This is the first attempt to classify speech recognition errors in dictated emergency department notes. Speech recognition errors occur commonly, with annunciation errors being the most frequent. Error rates were comparable to, if not lower than, those in previous studies. 15% of errors were deemed critical, potentially leading to miscommunication that could affect patient care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors are the second most frequent type of medication error, after prescribing errors, but the latter are often intercepted; hence, administration errors are more likely to reach the patient. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observation of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed, and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate fell to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that risk management protocols can be developed and implemented.
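
    The reported interval is a standard binomial proportion confidence interval. The short Python check below reproduces the abstract's headline figure from its raw counts using the normal-approximation (Wald) interval; small differences in the last decimal can arise from rounding or an alternative interval method.

        import math

        def wald_ci(errors, opportunities, z=1.96):
            """Observed error rate with a normal-approximation (Wald) 95% CI."""
            p = errors / opportunities
            se = math.sqrt(p * (1.0 - p) / opportunities)
            return p, p - z * se, p + z * se

        # Figures from the abstract: 127 erroneous administrations out of
        # 1118 observed opportunities for error.
        p, lo, hi = wald_ci(127, 1118)
        print(f"error rate = {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
        # -> error rate = 11.4% (95% CI 9.5-13.2), matching the reported
        #    11.4% [9.5-13.3] up to rounding/method differences.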

  5. Effects of skilled nursing facility structure and process factors on medication errors during nursing home admission.

    PubMed

    Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M

    2014-01-01

    Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at an SNF. Data for medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs were from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition-period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and the transcription and documentation of orders.

  6. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but the accuracy of their numerical approximations was not high. Moreover, previous pointing-error analyses of the Risley-prism system provided results only for the case in which the component errors, prism orientation errors, and assembly errors are known with certainty. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of the errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate the beam directing errors of any Risley-prism beam directing system with a similar configuration.
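
    A Python sketch of the first-order (thin-prism) picture under discussion, with all numerical values assumed: each prism deflects the beam by (n − 1)α along its rotation azimuth, the total deviation is the vector sum, and a wedge-angle error Δα on one prism perturbs the pointing by (n − 1)Δα regardless of the other prism, consistent with the additivity result above.

        import numpy as np

        def deviation_vec(n, alpha1, alpha2, theta1, theta2):
            """First-order (thin-prism) Risley model: prism i deflects the beam
            by (n - 1) * alpha_i along its rotation azimuth theta_i; the total
            deviation is the vector sum of the two small deflections."""
            d1, d2 = (n - 1.0) * alpha1, (n - 1.0) * alpha2
            return np.array([d1 * np.cos(theta1) + d2 * np.cos(theta2),
                             d1 * np.sin(theta1) + d2 * np.sin(theta2)])

        n = 1.517                        # BK7-like refractive index (assumed)
        alpha = np.deg2rad(2.0)          # nominal wedge angle (assumed)
        dalpha = np.deg2rad(0.01)        # hypothetical wedge-angle error, prism 1
        th1, th2 = 0.0, np.deg2rad(60.0)

        nominal = deviation_vec(n, alpha, alpha, th1, th2)
        perturbed = deviation_vec(n, alpha + dalpha, alpha, th1, th2)

        # The pointing error equals (n - 1) * dalpha, independent of prism 2:
        # the errors contributed by the two prisms simply add as vectors.
        print(np.rad2deg(np.linalg.norm(perturbed - nominal)),
              np.rad2deg((n - 1.0) * dalpha))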

  7. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    NASA Astrophysics Data System (ADS)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of “Control and Observation” is used. A versatile multi-function laser interferometer serves as the Observer, measuring the machine's error functions. A systematic error map of the machine's workspace is produced based on these error-function measurements, and the error map yields an error correction strategy. The article proposes a new method of forming the error correction strategy. The method is based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within the maximal workspace zone. The results are confirmed by error correction of precision CNC machine tools.
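
    One plausible reading of the map-plus-postprocessor idea, sketched in Python with scipy (the grid, the error function, and all magnitudes are invented): measure the systematic error on a coarse grid of the workspace, interpolate it at each commanded point, and shift the command by the predicted error.

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Hypothetical volumetric error map: X-axis positioning error (mm)
        # measured by a laser interferometer on a coarse workspace grid.
        x = np.linspace(0, 500, 6)
        y = np.linspace(0, 400, 5)
        z = np.linspace(0, 300, 4)
        X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
        err_x = 1e-5 * X + 5e-6 * Y       # toy systematic error model

        # Postprocessor step: interpolate the map at each commanded point
        # and shift the commanded coordinate by the predicted error.
        interp = RegularGridInterpolator((x, y, z), err_x)

        def corrected_x(px, py, pz):
            return px - interp([(px, py, pz)])[0]

        print(corrected_x(123.0, 45.0, 67.0))   # command adjusted by map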

  8. A map overlay error model based on boundary geometry

    USGS Publications Warehouse

    Gaeuman, D.; Symanzik, J.; Schmidt, J.C.

    2005-01-01

    An error model for quantifying the magnitudes and variability of errors generated in the areas of polygons during spatial overlay of vector geographic information system layers is presented. Numerical simulation of polygon boundary displacements was used to propagate coordinate errors to spatial overlays. The model departs from most previous error models in that it incorporates spatial dependence of coordinate errors at the scale of the boundary segment. It can be readily adapted to match the scale of error-boundary interactions responsible for error generation on a given overlay. The area of error generated by overlay depends on the sinuosity of polygon boundaries, as well as the magnitude of the coordinate errors on the input layers. Asymmetry in boundary shape has relatively little effect on error generation. Overlay errors are affected by real differences in boundary positions on the input layers, as well as errors in the boundary positions. Real differences between input layers tend to compensate for much of the error generated by coordinate errors. Thus, the area of change measured on an overlay layer produced by the XOR overlay operation will be more accurate if the area of real change depicted on the overlay is large. The model presented here considers these interactions, making it especially useful for estimating errors in studies of landscape change over time. © 2005 The Ohio State University.

  9. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives: Health information technology (HIT) research findings suggest that new healthcare technologies can reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods: A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results: There were four main trends in the literature on technology-induced error. The following areas were addressed: definitions of technology-induced errors; models, frameworks, and evidence for understanding how technology-induced errors occur; monitoring of such errors; and methods for preventing and learning from technology-induced errors. Conclusions: The literature on technology-induced errors continues to grow. Research has focused on defining what such an error is, on models and frameworks for understanding these new types of errors, on monitoring, and on prevention methods. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  11. Error-compensation model for simultaneous measurement of five degrees of freedom motion errors of a rotary axis

    NASA Astrophysics Data System (ADS)

    Bao, Chuanchen; Li, Jiakun; Feng, Qibo; Zhang, Bin

    2018-07-01

    This paper introduces an error-compensation model for our measurement method to measure five motion errors of a rotary axis based on fibre laser collimation. The error-compensation model is established in a matrix form using the homogeneous coordinate transformation theory. The influences of the installation errors, error crosstalk, and manufacturing errors are analysed. The model is verified by both ZEMAX simulation and measurement experiments. The repeatability values of the radial and axial motion errors are significantly suppressed by more than 50% after compensation. The repeatability experiments of five degrees of freedom motion errors and the comparison experiments of two degrees of freedom motion errors of an indexing table were performed by our measuring device and a standard instrument. The results show that the repeatability values of the angular positioning error ε_z and tilt motion error around the Y axis ε_y are 1.2″ and 4.4″, and the comparison deviations of the two motion errors are 4.0″ and 4.4″, respectively. The repeatability values of the radial and axial motion errors, δ_y and δ_z, are 1.3 and 0.6 µm, respectively. The repeatability value of the tilt motion error around the X axis ε_x is 3.8″.
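
    A hedged Python sketch of the homogeneous-coordinate-transformation approach (not the paper's exact model): the actual transform of the rotary table is the nominal rotation composed with a small-angle error matrix built from the five measured error motions. The magnitudes below reuse the repeatability values quoted above purely as illustrative inputs.

        import numpy as np

        def rot_z(c):
            """Nominal homogeneous transform of a rotary table at angle c (rad)."""
            T = np.eye(4)
            T[0, 0], T[0, 1] = np.cos(c), -np.sin(c)
            T[1, 0], T[1, 1] = np.sin(c), np.cos(c)
            return T

        def error_htm(dx, dy, dz, ex, ey, ez):
            """First-order homogeneous error transform: three translational
            and three angular error motions (small-angle approximation)."""
            return np.array([[1.0, -ez,  ey, dx],
                             [ ez, 1.0, -ex, dy],
                             [-ey,  ex, 1.0, dz],
                             [0.0, 0.0, 0.0, 1.0]])

        # Illustrative error motions at c = 30 deg: radial (dy) and axial
        # (dz) in mm, two tilts (ex, ey) and angular positioning (ez) in rad.
        c = np.deg2rad(30.0)
        E = error_htm(0.0, 1.3e-3, 0.6e-3,
                      np.deg2rad(3.8 / 3600), np.deg2rad(4.4 / 3600),
                      np.deg2rad(1.2 / 3600))

        p = np.array([100.0, 0.0, 50.0, 1.0])     # point on the table (mm)
        actual = rot_z(c) @ E @ p
        ideal = rot_z(c) @ p
        print(actual[:3] - ideal[:3])              # displacement error at p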

  12. Deep data fusion method for missile-borne inertial/celestial system

    NASA Astrophysics Data System (ADS)

    Zhang, Chunxi; Chen, Xiaofei; Lu, Jiazhen; Zhang, Hao

    2018-05-01

    Strap-down inertial-celestial integrated navigation systems have the advantages of autonomy and high precision and are very useful for ballistic missiles. The star sensor installation error and the inertial measurement error have a great influence on system performance. Based on deep data fusion, this paper establishes measurement equations that include the star sensor installation error and proposes a deep fusion filter method. Simulations covering misalignment error, star sensor installation error, and IMU error are analyzed. The simulation results indicate that the deep fusion method can estimate the star sensor installation error and the IMU error, and that it can restrain the misalignment errors caused by instrument errors.
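
    The deep fusion filter itself is the paper's contribution; as a loose, hypothetical illustration of the underlying state-augmentation idea, the toy one-dimensional Kalman filter below (Python, every number and model choice invented) jointly estimates an attitude error and a constant star-sensor installation bias. The changing attitude modulates how the bias projects into the measurement, which is what makes it observable; in the paper that coupling comes through the full attitude matrix.

        import numpy as np

        rng = np.random.default_rng(5)

        n, dt = 2000, 0.1
        q, r = 1e-6, (1e-4) ** 2        # process / measurement noise variances
        b_true = 5e-4                    # rad, true installation error
        phi = 0.0                        # true attitude error (random walk)
        x = np.zeros(2)                  # state estimate [phi, b]
        P = np.diag([1e-6, 1e-6])
        F = np.eye(2)                    # phi random walk, b constant
        Q = np.diag([q * dt, 0.0])

        for k in range(n):
            phi += rng.normal(0.0, np.sqrt(q * dt))
            H = np.array([[1.0, np.cos(0.01 * k)]])   # attitude-dependent row
            z = phi + np.cos(0.01 * k) * b_true + rng.normal(0.0, np.sqrt(r))
            # Kalman predict / update with the augmented state.
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + r
            K = (P @ H.T) / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P

        print(f"estimated installation error: {x[1]:.2e} rad (true {b_true:.2e})")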

  13. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
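
    The essence of such a correction (a sketch, not necessarily the authors' exact implementation) is that the tip vector in Felsenstein's pruning algorithm becomes P(observed base | true base) under a per-base error rate ε, instead of a 0/1 indicator. In Python, with miscalls spread uniformly over the three other bases:

        import numpy as np

        BASES = "ACGT"

        def observation_prob(observed, true, eps):
            """P(observed base | true base): correct with probability 1 - eps,
            miscalled uniformly to one of the other three bases otherwise."""
            return 1.0 - eps if observed == true else eps / 3.0

        def tip_partials(observed, eps):
            """Tip conditional-likelihood vector for Felsenstein pruning.
            With eps = 0 this is the usual 0/1 indicator; with eps > 0 it
            spreads some likelihood onto the other bases, so a mismatching
            tip no longer forces a substitution on the branch above it."""
            return np.array([observation_prob(observed, t, eps) for t in BASES])

        print(tip_partials("A", 0.0))    # [1.   0.     0.     0.    ]
        print(tip_partials("A", 0.01))   # [0.99 0.0033 0.0033 0.0033]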

  14. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  15. Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography

    ERIC Educational Resources Information Center

    Duranovic, Mirela

    2017-01-01

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…

  16. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ε^{-(d - 1)} error correction cycles. Here ε ≪ 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  17. Factors associated with reporting of medication errors by Israeli nurses.

    PubMed

    Kagan, Ilya; Barnoy, Sivia

    2008-01-01

    This study investigated medication error reporting among Israeli nurses, the relationship between nurses' personal views about error reporting, and the impact of the safety culture of the ward and hospital on this reporting. Nurses (n = 201) completed a questionnaire related to different aspects of error reporting (frequency, organizational norms of dealing with errors, and personal views on reporting). The higher the error frequency, the more errors went unreported. If the ward nurse manager corrected errors on the ward, error self-reporting decreased significantly. Ward nurse managers have to provide good role models.

  18. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
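
    The central claim in miniature, as a Python check with simulated data: under the linear error model y = a + b·x + e with e ~ N(0, s²), the bias is a + (b−1)·mean(x), the mean square error is bias² + (b−1)²·var(x) + s², and the correlation is b·sd(x)/sqrt(b²·var(x) + s²). These are standard algebraic consequences of the model, verified numerically below.

        import numpy as np

        rng = np.random.default_rng(3)

        # Linear, additive error model: y = a + b * x + e, e ~ N(0, s^2)
        a, b, s = 1.0, 0.8, 2.0
        x = rng.normal(10.0, 3.0, 200_000)      # reference ("truth")
        y = a + b * x + rng.normal(0.0, s, x.size)

        mx, vx = x.mean(), x.var()

        # Metrics computed directly from the data ...
        bias = (y - x).mean()
        mse = ((y - x) ** 2).mean()
        rho = np.corrcoef(x, y)[0, 1]

        # ... and the same metrics derived from the error-model parameters.
        bias_m = a + (b - 1.0) * mx
        mse_m = bias_m ** 2 + (b - 1.0) ** 2 * vx + s ** 2
        rho_m = b * np.sqrt(vx) / np.sqrt(b ** 2 * vx + s ** 2)

        print(bias, bias_m)    # both ~ -1.0
        print(mse, mse_m)
        print(rho, rho_m)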

  19. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

    Background: Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods: Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results: Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion: Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions. PMID:22409837

  20. Evaluation of drug administration errors in a teaching hospital.

    PubMed

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-03-12

    Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.

  1. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the 310 pediatric chemotherapy error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  2. A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.

    PubMed

    Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema

    2016-01-01

    A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a methodology used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are defining, measuring, analyzing, improving, and controlling. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology for error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, the administrative supervisor, and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytical, analytical, and postanalytical phases was analysed. Improvement strategies were proposed in the monthly intradepartmental meetings, and units with high error rates were monitored. Fifty-six (52.4%) of the 107 recorded errors occurred in the preanalytical phase. Forty-five errors (42%) were recorded as analytical and 6 errors (5.6%) as postanalytical. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory yielded a reduction in error rates, mainly in the preanalytical and analytical phases.
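
    For concreteness, the Six Sigma bookkeeping reduces to defects per million opportunities (DPMO) and a corresponding sigma level; the conventional 1.5-sigma shift is what makes 3.4 DPMO correspond to "six sigma". The Python snippet below shows the arithmetic with an invented opportunity count (the abstract does not report one).

        from scipy.stats import norm

        def dpmo(defects, opportunities):
            """Defects per million opportunities."""
            return 1e6 * defects / opportunities

        def sigma_level(defects, opportunities, shift=1.5):
            """Short-term sigma level with the conventional 1.5-sigma shift
            (under which 3.4 DPMO corresponds to six sigma)."""
            p = defects / opportunities
            return norm.ppf(1.0 - p) + shift

        # Hypothetical volume: 107 recorded errors over a year of cases.
        print(dpmo(107, 15_000_000))          # assumed opportunity count
        print(sigma_level(107, 15_000_000))   # ~5.8 sigma for these numbers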

  3. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  4. Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study

    PubMed Central

    Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César

    2011-01-01

    OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039

  5. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain into a single link. Because such an error model fails to reflect the error behavior of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  6. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety.

    PubMed

    Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L

    2010-02-01

    This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.

  7. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.
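
    A hedged sketch of how such an error budget is typically combined (all magnitudes below are invented; the abstract reports none): bias terms add linearly, and independent random terms combine in quadrature (root-sum-square).

        import math

        # Illustrative heading-error budget in the spirit of the model above
        # (hypothetical values, in degrees).
        bias_terms = {"mean leveling tilt": 0.20, "instrument bias": 0.15,
                      "terrain-slope leveling": 0.25}
        random_terms = {"leveling tilt": 0.30, "observer sighting": 0.40}

        total_bias = sum(bias_terms.values())
        total_random = math.sqrt(sum(v ** 2 for v in random_terms.values()))

        # Report a 2-sigma spread about the systematic offset.
        print(f"bias = {total_bias:.2f} deg, random (1 sigma) = {total_random:.2f} deg")
        print(f"heading error ~ {total_bias:.2f} +/- {2 * total_random:.2f} deg")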

  8. The Relationship between Occurrence Timing of Dispensing Errors and Subsequent Danger to Patients under the Situation According to the Classification of Drugs by Efficacy.

    PubMed

    Tsuji, Toshikazu; Nagata, Kenichiro; Kawashiri, Takehiro; Yamada, Takaaki; Irisa, Toshihiro; Murakami, Yuko; Kanaya, Akiko; Egashira, Nobuaki; Masuda, Satohiro

    2016-01-01

    Many medical institutions have reported attempts to prevent dispensing errors. However, the relationship between the occurrence timing of dispensing errors and the subsequent danger to patients has not been studied with drugs classified by efficacy. We therefore analyzed the relationship between position and time in the occurrence of dispensing errors, and further investigated the relationship between their occurrence timing and the danger to patients. In this study, dispensing errors and incidents in three categories (drug name errors, drug strength errors, drug count errors) were classified into two groups by drug efficacy (efficacy similarity (-) group, efficacy similarity (+) group) and into three classes by the occurrence timing of the dispensing errors (initial phase errors, middle phase errors, final phase errors). The rates of damage, i.e., of "dispensing errors" becoming "damage to patients", were then compared as an index of danger between the two groups and among the three classes. The rate of damage in the efficacy similarity (-) group was significantly higher than that in the efficacy similarity (+) group. Furthermore, the rate of damage was highest for initial phase errors and lowest for final phase errors. These results make clear that the earlier a dispensing error occurs, the more severe the damage to patients becomes.

  9. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  10. Modeling of Geometric Error in Linear Guide Way to Improved the vertical three-axis CNC Milling machine’s accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of a vertical three-axis CNC milling machine using a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and which must be controlled to build machines with high accuracy. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and their position parameters in the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three perpendicularity (squareness) error parameters. The mathematical modeling approach calculates the alignment and angular errors of the components supporting the machine motion, namely the linear guide way and linear motion elements. The purpose of using this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling the geometric errors of CNC machine tools can illustrate the relationship between alignment error, position, and angle on the linear guide ways of three-axis vertical milling machines.
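
    A reduced Python illustration of the 21-parameter rigid-body picture (a sketch, not the paper's model; error functions, coefficients, and sign conventions are all assumed, and conventions vary in the literature): each axis carries six error motions, three squareness errors couple the axes, and the first-order volumetric error at a commanded point stacks these contributions.

        import numpy as np

        # Toy translational error motion (mm) as a polynomial in axis travel.
        def d(axis_pos, k):
            return k * 1e-6 * axis_pos

        # Assumed squareness errors between axis pairs (rad).
        S_xy, S_yz, S_zx = 20e-6, 15e-6, 10e-6

        def volumetric_error(x, y, z):
            """First-order volumetric error at a commanded point, keeping only
            translational error motions and squareness terms for brevity; the
            nine angular terms enter the same way, multiplied by Abbe offsets."""
            ex = d(x, 3) + d(y, 1) + d(z, 1) - y * S_xy - z * S_zx
            ey = d(y, 2) + d(x, 1) - z * S_yz
            ez = d(z, 4) + d(x, 1) + d(y, 1)
            return np.array([ex, ey, ez])

        print(volumetric_error(300.0, 200.0, 100.0))   # mm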

  11. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  12. Frequency and types of the medication errors in an academic emergency department in Iran: The emergent need for clinical pharmacy services in emergency departments.

    PubMed

    Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin

    2013-07-01

    Emergency departments (EDs) are characterized by the simultaneous care of multiple patients with various medical conditions. Because of the large number of patients with complex diseases, the speed and complexity of medication use, and understaffed, crowded working conditions, medication errors are commonly perpetrated by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending the ED of a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Factors related to medication errors, such as working shift, day of the week, and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rates of medication errors were 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher for men, middle-aged patients, the first days of the week, night-time work schedules, and the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians, and the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%), followed by unauthorized drug errors (6.4%). Most of the medication errors involved anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors resulting from multiple factors. The more common prescription errors happened when ordering drug dose and frequency, and the more common administration errors were drug omission or unauthorized drug administration.

  13. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  14. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
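
    A self-contained Python sketch of the detection-plus-retransmission idea surveyed above, as a stop-and-wait ARQ loop with a CRC-32 check (frame format and bit-error rate invented for illustration):

        import random
        import zlib

        def make_frame(payload: bytes) -> bytes:
            """Append a CRC-32 so the receiver can detect corrupted frames."""
            return payload + zlib.crc32(payload).to_bytes(4, "big")

        def channel(frame: bytes, ber=0.001) -> bytes:
            """Noisy channel: flip each bit independently with probability ber."""
            bits = bytearray(frame)
            for i in range(len(bits)):
                for b in range(8):
                    if random.random() < ber:
                        bits[i] ^= 1 << b
            return bytes(bits)

        def receive(frame: bytes):
            """Return the payload if the CRC checks out, else None (i.e. NAK)."""
            payload, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
            return payload if zlib.crc32(payload) == crc else None

        # Stop-and-wait ARQ: retransmit until the receiver acknowledges.
        payload = b"error control by detection and retransmission"
        attempts = 0
        while True:
            attempts += 1
            if receive(channel(make_frame(payload))) is not None:
                break
        print(f"delivered after {attempts} attempt(s)")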

  15. Quantifying the Contributions of Environmental Parameters to Ceres Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT), and algorithm error dominate the Rn error, with error contributions of -20, 15, 10, and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer when cloud is abundant. For NLW, owing to the high algorithm sensitivity and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has a weak influence on the Rn error even though the algorithm is highly sensitive to it. To improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.
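
    The contribution analysis above amounts to multiplying each input's error by the retrieval's sensitivity to that input. A toy Python sketch (the radiation model and all error magnitudes below are invented for illustration and are not the CERES algorithm) shows the finite-difference version of that bookkeeping:

        import numpy as np

        SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

        def nlw(lst, at, cf):
            """Toy net-longwave model: cloud-modulated sky emission minus
            surface emission (illustration only)."""
            eps_air = 0.75 + 0.2 * cf      # effective sky emissivity with cloud
            return eps_air * SIGMA * at ** 4 - 0.98 * SIGMA * lst ** 4

        # Error contribution of each input: sensitivity (finite difference)
        # times the input's assumed error.
        base = dict(lst=300.0, at=290.0, cf=0.5)
        input_err = dict(lst=2.0, at=1.5, cf=0.1)   # hypothetical input errors

        for name, derr in input_err.items():
            perturbed = dict(base)
            perturbed[name] += derr
            contribution = nlw(**perturbed) - nlw(**base)
            print(f"{name}: {contribution:+.1f} W/m2")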

  16. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    PubMed

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722), and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866), and the major medication classes included anti-pyretics and non-opioid analgesics, psychoanaleptics, and psycholeptic agents. Approximately 97% (n = 2279) of medication errors resulted from drug administration errors (comprising a double dose [n = 1040], wrong dose [n = 395], wrong medication [n = 597], wrong route [n = 133], and wrong time [n = 110]). Prescribing and dispensing errors accounted for 0.68% (n = 16) and 2.26% (n = 53) of errors, respectively. Empirical data from poisons information centres facilitate the characterisation of medication errors occurring in the community and across the healthcare spectrum. Poison centre data facilitate the detection of subtle trends in medication errors and can contribute to pharmacovigilance. Collaboration between pharmaceutical manufacturers, consumers, and the medical and regulatory communities is needed to advance patient safety and reduce medication errors.

  17. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    PubMed

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Content analysis revealed three themes (product shortage, calculation errors, and tubing interconnectivity) that appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding of the nature of IV-related medication errors.

  18. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The system injects errors into the application under test, in an automated and efficient manner, to exercise the application's error responses. Errors are injected through a test mask variable added to the application under test. During normal operation, the test mask variable is set so that the application operates normally. During testing, the error response test system changes the test mask variable to introduce an error into the application, then monitors the application to determine whether it responds to the error correctly.
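
    A minimal sketch of the test-mask idea in Python (the record describes a patented system; the mask bits, names, and fault types below are illustrative assumptions, not the patent's actual interface):

      # Illustrative error injection via a test mask variable.
      ERR_NONE = 0x0          # normal operation: no injected faults
      ERR_BAD_CHECKSUM = 0x1  # bit 0: corrupt the computed checksum
      ERR_TIMEOUT = 0x2       # bit 1: simulate a sensor timeout

      test_mask = ERR_NONE    # the test harness flips bits here during testing

      def read_sensor():
          """Application code under test, instrumented with the test mask."""
          if test_mask & ERR_TIMEOUT:
              raise TimeoutError("injected sensor timeout")
          value = 42          # stand-in for a real sensor read
          checksum = value % 7
          if test_mask & ERR_BAD_CHECKSUM:
              checksum += 1   # deliberately corrupt the checksum
          return value, checksum

      def application_step():
          """One application cycle; returns the status the monitor checks."""
          try:
              value, checksum = read_sensor()
          except TimeoutError:
              return "timeout-handled"          # expected under ERR_TIMEOUT
          if checksum != value % 7:
              return "checksum-error-detected"  # expected under ERR_BAD_CHECKSUM
          return "ok"                           # expected under ERR_NONE

      # The test system sets the mask, runs the application, checks the response.
      for mask in (ERR_NONE, ERR_BAD_CHECKSUM, ERR_TIMEOUT):
          test_mask = mask
          print(f"mask={mask:#x} -> {application_step()}")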

  19. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skill. The multiplicative error model proved a much better choice under all three criteria: it extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as non-constant variance resulting from systematic errors leaking into random errors, and a lack of predictive capability. Therefore, the multiplicative error model is the better choice.
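
    For concreteness, one common way of writing the two models is sketched below (a generic formulation; the letter's exact parameterization may differ). The additive model treats the error as an offset; the multiplicative model treats it as a factor and is usually fitted in log space:

      % additive: measurement Y = truth X plus an error term
      Y = X + \varepsilon
      % multiplicative: error enters as a factor; linear after a log transform
      Y = \alpha X^{\beta} \varepsilon
      \quad\Longleftrightarrow\quad
      \ln Y = \ln \alpha + \beta \ln X + \ln \varepsilon

    Here the systematic part is carried by the deterministic terms (\alpha, \beta) and the random part by \varepsilon.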

  20. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response.

    PubMed

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2014-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced participants to make constantly accelerated responses before stimulus disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error rate for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and varied with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that maintaining cognitive control of actions during error recovery may have clinical and practical value in everyday environments that frequently evoke impulsive behaviors.

  1. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    PubMed Central

    Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki

    2015-01-01

    Post-error slowing (PES) is an error recovery strategy that contributes to action control and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced participants to make constantly accelerated responses before stimulus disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error rate for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and varied with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that maintaining cognitive control of actions during error recovery may have clinical and practical value in everyday environments that frequently evoke impulsive behaviors. PMID:25674058

  2. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on the incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation, and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). The baseline monthly error rate was stable at 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed upon initiation of the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
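
    As an illustration of the segmented-regression design, the Python sketch below fits an intercept, baseline trend, level change, and slope change to simulated monthly error rates; the data are synthetic (seeded with the reported 16.7 baseline rate and -0.338 slope change), not the study's:

      import numpy as np

      months = np.arange(60)                 # 0..29 pre, 30..59 post
      post = (months >= 30).astype(float)    # intervention indicator
      time_after = post * (months - 30)      # months since intervention

      rng = np.random.default_rng(0)
      # Simulated rates: flat baseline ~16.7, level drop at the intervention,
      # then a downward slope, mirroring the reported pattern.
      rate = 16.7 - 5.0 * post - 0.338 * time_after + rng.normal(0, 1.0, 60)

      # Design matrix: intercept, baseline trend, level change, slope change.
      X = np.column_stack([np.ones(60), months.astype(float), post, time_after])
      coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
      for name, c in zip(["intercept", "baseline slope",
                          "level change", "slope change"], coef):
          print(f"{name:>15s}: {c:7.3f}")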

  3. Error management for musicians: an interdisciplinary conceptual framework

    PubMed Central

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for music education and for musicians at all levels. PMID:25120501

  4. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that gain error estimation and position error estimation in the conventional method can interact with each other. Because it requires joint iteration between gain-phase error estimation and position error estimation, the conventional method may converge to suboptimal solutions for large position errors. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, using the zero Doppler bin data, phase error estimation can be performed independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. The joint iteration between gain-phase error estimation and position error estimation is not required, so the suboptimal convergence of the conventional method is avoided at low computational cost. The modified method converges faster and has lower estimation error than the conventional one. Theoretical analysis and computer simulation results verify its effectiveness.

  5. Zero tolerance prescribing: a strategy to reduce prescribing errors on the paediatric intensive care unit.

    PubMed

    Booth, Rachelle; Sturgess, Emma; Taberner-Stokes, Alison; Peters, Mark

    2012-11-01

    To establish the baseline prescribing error rate in a tertiary paediatric intensive care unit (PICU) and to determine the impact of a zero tolerance prescribing (ZTP) policy incorporating a dedicated prescribing area and daily feedback of prescribing errors. A prospective, non-blinded, observational study was undertaken in a 12-bed tertiary PICU over a period of 134 weeks. Baseline prescribing error data were collected on weekdays for all patients for a period of 32 weeks, following which the ZTP policy was introduced. Daily error feedback was introduced after a further 12 months. Errors were sub-classified as 'clinical', 'non-clinical' and 'infusion prescription' errors, and the effects of the interventions were considered separately. The baseline combined prescribing error rate was 892 (95% confidence interval (CI) 765-1,019) errors per 1,000 PICU occupied bed days (OBDs), comprising 25.6% clinical, 44% non-clinical and 30.4% infusion prescription errors. The combined interventions of ZTP plus daily error feedback were associated with a reduction in the combined prescribing error rate to 447 (95% CI 389-504) errors per 1,000 OBDs (p < 0.0001), an absolute risk reduction of 44.5% (95% CI 40.8-48.0%). Introduction of the ZTP policy was associated with a significant decrease in clinical and infusion prescription errors, while the introduction of daily error feedback was associated with a significant reduction in non-clinical prescribing errors. The combined interventions of ZTP and daily error feedback were associated with a significant reduction in prescribing errors in the PICU, in line with Department of Health requirements of a 40% reduction within 5 years.

  6. Error management for musicians: an interdisciplinary conceptual framework.

    PubMed

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for music education and for musicians at all levels.

  7. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software, a retrospective audit of 378 finalised radiology reports was performed. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant', and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' errors were the most common sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography reports contained 0.040 errors per sentence, compared with 0.030 for plain film. Longer reports had higher error rates, with reports of >25 sentences containing an average of 1.23 errors per report, compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, some had the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  8. Pointing error analysis of Risley-prism-based beam steering system.

    PubMed

    Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng

    2014-09-01

    Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and to evaluate the allowances of the error sources for a given pointing accuracy. It is found that assembly errors of the second prism result in larger pointing errors than those of the first prism. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical for the same pointing accuracy. These conclusions provide a theoretical foundation for practical work.
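
    The core operation behind such a ray trace is vector-form refraction. A generic Python sketch (not the authors' code), using t = eta*i + (eta*cos(theta_i) - cos(theta_t))*n with eta = n1/n2:

      import numpy as np

      def refract(incident, normal, n1, n2):
          """Refract unit ray `incident` at a surface with unit `normal`
          (oriented against the ray), from index n1 into index n2.
          Returns the refracted unit ray, or None on total internal reflection."""
          eta = n1 / n2
          cos_i = -np.dot(incident, normal)
          sin2_t = eta**2 * (1.0 - cos_i**2)
          if sin2_t > 1.0:
              return None                    # total internal reflection
          cos_t = np.sqrt(1.0 - sin2_t)
          return eta * incident + (eta * cos_i - cos_t) * normal

      # Example: a ray hitting the flat face of a prism (n = 1.517) at 10 degrees.
      theta = np.radians(10.0)
      ray_in = np.array([np.sin(theta), 0.0, -np.cos(theta)])
      face_normal = np.array([0.0, 0.0, 1.0])
      print(refract(ray_in, face_normal, 1.0, 1.517))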

  9. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033

  10. Error disclosure: a new domain for safety culture assessment.

    PubMed

    Etchegaray, Jason M; Gallagher, Thomas H; Bell, Sigall K; Dunlap, Ben; Thomas, Eric J

    2012-07-01

    To (1) develop and test survey items that measure error disclosure culture, (2) examine relationships among error disclosure culture, teamwork culture and safety culture and (3) establish predictive validity for survey items measuring error disclosure culture. All clinical faculty from six health institutions (four medical schools, one cancer centre and one health science centre) in The University of Texas System were invited to anonymously complete an electronic survey containing questions about safety culture and error disclosure. The authors found two factors to measure error disclosure culture: one factor is focused on the general culture of error disclosure and the second factor is focused on trust. Both error disclosure culture factors were unique from safety culture and teamwork culture (correlations were less than r=0.85). Also, error disclosure general culture and error disclosure trust culture predicted intent to disclose a hypothetical error to a patient (r=0.25, p<0.001 and r=0.16, p<0.001, respectively) while teamwork and safety culture did not predict such an intent (r=0.09, p=NS and r=0.12, p=NS). Those who received prior error disclosure training reported significantly higher levels of error disclosure general culture (t=3.7, p<0.05) and error disclosure trust culture (t=2.9, p<0.05). The authors created and validated a new measure of error disclosure culture that predicts intent to disclose an error better than other measures of healthcare culture. This measure fills an existing gap in organisational assessments by assessing transparent communication after medical error, an important aspect of culture.

  11. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This may be caused by the various types of errors students make. Hence, this study aimed to identify students’ errors in solving TIMSS mathematical problems on the topic of numbers, a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students, out of 34 eighth graders, who made the most errors on the test indicators. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, students made operational errors. For Reasoning-level problems, three types of errors were made: conceptual errors, operational errors and principle errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.

  12. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations, when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) exposes systematic errors due to offsets at the level of instrument components (corner-cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect are then characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience of describing the RMS of errors across the field-of-regard (FOR), and second for convenience of combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. The residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat; the non-common vertex error (NCVE) is treated as a second example. Finally, combinations of models and various other errors are discussed.
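
    Schematically, if p collects the component error parameters (e.g., corner-cube dihedral offsets) and M is the error-mapping matrix, the delay error is d = M p, and the eigenvectors of M^T M rank the parameter combinations to which the delay is most sensitive. The Python sketch below illustrates this with a random matrix standing in for the physics-based M:

      import numpy as np

      rng = np.random.default_rng(1)
      M = rng.normal(size=(50, 6))   # 50 field points x 6 error parameters (toy)

      # Eigen-decomposition of the normal matrix: eigenvectors are parameter
      # combinations; eigenvalues measure their delay-error power over the field.
      w, V = np.linalg.eigh(M.T @ M)
      order = np.argsort(w)[::-1]
      print("most sensitive parameter combination:", V[:, order[0]])
      print("relative sensitivities:", np.sqrt(w[order] / w[order[0]]))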

  13. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, S; Chao, C; Columbia University, NY, NY

    2014-06-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10 MV conventional and FFF beams, with careful alignment and with 1 cm positioning error during calibration, respectively. Open fields of 37 cm x 37 cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagation error was estimated for positioning errors from 1 mm to 1 cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1 cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference in sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1 mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.
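
    One simple reading of the near-linear scaling (an interpretation, not the authors' derivation): if each calibration step toward the array edge applies a local multiplicative factor (1 + \delta), then after k steps the accumulated error is

      E_k = (1 + \delta)^k - 1 \;\approx\; k\,\delta \qquad (\text{small } \delta)

    so the propagated error grows roughly linearly in the local error and compounds toward the array edges.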

  14. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
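
    The 2x2 comparison reported above can be reproduced mechanically with Fisher's exact test; in the Python sketch below the counts are invented for illustration (the abstract reports rates, not the full table):

      from scipy.stats import fisher_exact

      # Hypothetical split of the 241,546 fractions between techniques,
      # chosen to match the reported 0.03% vs. 0.07% error rates.
      imrt_errors, imrt_fractions = 30, 100_000
      conv_errors, conv_fractions = 99, 141_546

      table = [[imrt_errors, imrt_fractions - imrt_errors],
               [conv_errors, conv_fractions - conv_errors]]
      odds_ratio, p_value = fisher_exact(table)
      print(f"OR = {odds_ratio:.2f}, p = {p_value:.2g}")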

  15. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.

  16. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical-harmonic-function system error correction methods. The accuracy of the neural network method depends mainly on the network architecture. Analysis and simulation show that both BP and RBF neural network system error correction methods achieve high correction accuracy; for small training samples, the RBF network method is preferable to the BP network method when training speed and network scale are taken into account.

  17. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context, an accommodation that has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated, or mismatch, errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. Such errors could arise during the processing of phonological rules or at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  18. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  19. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy were identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (the before-insertion error) and the error associated with needle-tissue interaction (the due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator’s error. The due-to-insertion error was indirectly approximated by comparing the overall error with the before-insertion error. The effect of sterilization on the manipulator’s accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super-soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator’s targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot’s repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot’s accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
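
    The 2.13 mm figure follows directly from the stated assumption of orthogonal error components:

      e_{\text{insertion}} = \sqrt{e_{\text{overall}}^2 - e_{\text{robot}}^2}
                           = \sqrt{2.5^2 - 1.3^2} \approx 2.13\ \text{mm}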

  20. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and different sources contributing into this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing into the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Using snowball sampling method with nurses to understand medication administration errors.

    PubMed

    Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In

    2009-02-01

    We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported, and effective ways to encourage nurses to actively report errors are lacking. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical-surgical wards of teaching hospitals, during day shifts, committed by nurses with fewer than two years' experience. The leading errors were wrong drugs and wrong doses, each accounting for about one-third of total errors. Among the 259 actual errors, 83.8% resulted in no adverse effects; of the remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and by the nurses responsible for the errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69), and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; use of intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors, and the empirical data allowed us to identify high-alert situations. Strategies for reducing drug administration errors by nurses are suggested; in particular, nurses should double-check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operating procedures for known high-alert situations.

  2. Predictors of Errors of Novice Java Programmers

    ERIC Educational Resources Information Center

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  3. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and to differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
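
    The abstract does not spell out the two proposed definitions; a natural variance-based form (an assumption here, not necessarily either of the paper's) is

      \tau = \frac{\langle e_{\text{model}}^2 \rangle}
                  {\langle e_{\text{model}}^2 \rangle + \langle e_{\text{ic}}^2 \rangle}

    where e_model and e_ic denote the forecast-error contributions from model imperfection and from initial-condition error, respectively.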

  4. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of its focus on the medical device and the format of reporting.

  5. Voluntary Medication Error Reporting by ED Nurses: Examining the Association With Work Environment and Social Capital.

    PubMed

    Farag, Amany; Blegen, Mary; Gedney-Lose, Amalia; Lose, Daniel; Perkhounkova, Yelena

    2017-05-01

    Medication errors are one of the most frequently occurring errors in health care settings. The complexity of the ED work environment places patients at risk for medication errors. Most hospitals rely on nurses' voluntary medication error reporting, but these errors are under-reported. The purpose of this study was to examine the relationship among work environment (nurse manager leadership style and safety climate), social capital (warmth and belonging relationships and organizational trust), and nurses' willingness to report medication errors. A cross-sectional descriptive design using a questionnaire with a convenience sample of emergency nurses was used. Data were analyzed using descriptive, correlation, Mann-Whitney U, and Kruskal-Wallis statistics. A total of 71 emergency nurses were included in the study. Emergency nurses' willingness to report errors decreased as the nurses' years of experience increased (r = -0.25, P = .03). Their willingness to report errors increased when they received more feedback about errors (r = 0.25, P = .03) and when their managers used a transactional leadership style (r = 0.28, P = .01). ED nurse managers can modify their leadership style to encourage error reporting. Timely feedback after an error report is particularly important. Engaging experienced nurses to understand error root causes could increase voluntary error reporting. Published by Elsevier Inc.

  6. The pattern of the discovery of medication errors in a tertiary hospital in Hong Kong.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2013-06-01

    The primary goal of reducing medication errors is to eliminate those that reach the patient. We aimed to study the pattern of interceptions to tackle medication errors along the medication use processes. Tertiary care hospital in Hong Kong. The 'Swiss Cheese Model' was used to explain the interceptions targeting medication error reporting over 5 years (2006-2010). Proportions of prescribing, dispensing and drug administration errors intercepted by pharmacists and nurses; proportions of prescribing, dispensing and drug administration errors that reached the patient. Our analysis included 1,268 in-patient medication errors, of which 53.4% were related to prescribing, 29.0% to administration and 17.6% to dispensing. 34.1% of all medication errors (4.9% prescribing, 26.8% drug administration and 2.4% dispensing) were not intercepted. Pharmacy staff intercepted 85.4% of the prescribing errors. Nurses detected 83.0% of dispensing and 5.0% of prescribing errors. However, 92.4% of all drug administration errors reached the patient. Having a preventive measure at each stage of the medication use process helps to prevent most errors. Most drug administration errors reach the patient as there is no defense against these. Therefore, more interventions to prevent drug administration errors are warranted.

  7. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    PubMed

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  8. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent-minded" errors involving failures to execute action plans as intended.

  9. Medical errors in primary care clinics – a cross sectional study

    PubMed Central

    2012-01-01

    Background Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study is to determine the extent of diagnostic inaccuracies and management errors in public funded primary care clinics. Methods This was a cross-sectional study conducted in twelve public funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and likelihood of preventability of such errors. Results The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). For management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation including illegible handwriting were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors (93.5%) detected were considered preventable. Conclusions The occurrence of medical errors was high in primary care clinics particularly with documentation and medication errors. Nearly all were preventable. Remedial intervention addressing completeness of documentation and prescriptions are likely to yield reduction of errors. PMID:23267547

  10. Opioid errors in inpatient palliative care services: a retrospective review.

    PubMed

    Heneka, Nicole; Shaw, Tim; Rowett, Debra; Lapkin, Samuel; Phillips, Jane L

    2018-06-01

    Opioids are a high-risk medicine frequently used to manage palliative patients' cancer-related pain and other symptoms. Despite the high volume of opioid use in inpatient palliative care services, and the potential for patient harm, few studies have focused on opioid errors in this population. To (i) identify the number of opioid errors reported by inpatient palliative care services, (ii) identify reported opioid error characteristics and (iii) determine the impact of opioid errors on palliative patient outcomes. A 24-month retrospective review of opioid errors reported in three inpatient palliative care services in one Australian state. Of the 55 opioid errors identified, 84% reached the patient. Most errors involved morphine (35%) or hydromorphone (29%). Opioid administration errors accounted for 76% of reported opioid errors, largely due to omitted dose (33%) or wrong dose (24%) errors. Patients were more likely to receive a lower dose of opioid than ordered as a direct result of an opioid error (57%), with errors adversely impacting pain and/or symptom management in 42% of patients. Half (53%) of the affected patients required additional treatment and/or care as a direct consequence of the opioid error. This retrospective review has provided valuable insights into the patterns and impact of opioid errors in inpatient palliative care services. Iatrogenic harm related to opioid underdosing errors contributed to palliative patients' unrelieved pain. Better understanding the factors that contribute to opioid errors and the role of safety culture in the palliative care service context warrants further investigation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  11. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    PubMed

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03), but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never corrected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
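
    The relative risks above compare error rates between groups; a minimal sketch of the standard calculation with a log-scale confidence interval follows (the counts are hypothetical, since the abstract reports only rates and RRs):

      import math

      def relative_risk(a, n1, b, n2, z=1.96):
          """RR of group 1 (a events / n1) vs group 2 (b events / n2),
          with a 95% CI computed on the log scale."""
          rr = (a / n1) / (b / n2)
          se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
          return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

      print(relative_risk(63, 146, 32, 112))   # hypothetical counts, RR ~ 1.5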

  12. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations, with magnitudes of applied observation error varying from zero to twice the estimated realistic error, are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
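
    Schematically, generating synthetic observations whose applied error is scaled from zero to twice the realistic estimate can be as simple as the following (a sketch assuming uncorrelated Gaussian error; operational OSSE error models also include correlated and non-Gaussian components):

      import numpy as np

      rng = np.random.default_rng(0)

      def synthetic_obs(truth, sigma_realistic, scale):
          """Nature-run truth plus applied observation error at 0x/1x/2x
          the estimated realistic error standard deviation."""
          return truth + scale * sigma_realistic * rng.standard_normal(truth.shape)

      truth = np.full(100, 287.0)          # hypothetical temperature field (K)
      for scale in (0.0, 1.0, 2.0):
          print(scale, synthetic_obs(truth, 1.2, scale).std().round(2))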

  13. Error framing effects on performance: cognitive, motivational, and affective pathways.

    PubMed

    Steele-Johnson, Debra; Kalinoski, Zachary T

    2014-01-01

    Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.

  14. Influence of Tooth Spacing Error on Gears With and Without Profile Modifications

    NASA Technical Reports Server (NTRS)

    Padmasolala, Giri; Lin, Hsiang H.; Oswald, Fred B.

    2000-01-01

    A computer simulation was conducted to investigate the effectiveness of profile modification for reducing dynamic loads in gears with different tooth spacing errors. The simulation examined varying amplitudes of spacing error and differences in the span of teeth over which the error occurs. The modifications considered included both linear and parabolic tip relief. The analysis considered spacing error that varies around most of the gear circumference (similar to a typical sinusoidal error pattern) as well as a shorter span of spacing errors that occurs on only a few teeth. The dynamic analysis was performed using a revised version of a NASA gear dynamics code, modified to add tooth spacing errors to the analysis. Results obtained from the investigation show that linear tip relief is more effective in reducing dynamic loads on gears with small spacing errors, but parabolic tip relief becomes more effective as the amplitude of spacing error increases. In addition, the parabolic modification is more effective for the more severe error case, where the error is spread over a longer span of teeth. The findings of this study can be used to design robust tooth profile modifications for improving the dynamic performance of gear sets with different tooth spacing errors.
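
    The two modification shapes differ only in how the relief ramps up toward the tooth tip; a sketch of the profiles (hypothetical units and angles, not the NASA code's parameterization):

      def tip_relief(roll_angle, start_angle, tip_angle, max_relief, shape="linear"):
          """Material removed from the involute profile between the start
          of relief and the tooth tip; linear or parabolic ramp."""
          if roll_angle <= start_angle:
              return 0.0
          x = (roll_angle - start_angle) / (tip_angle - start_angle)
          return max_relief * (x if shape == "linear" else x ** 2)

      for shape in ("linear", "parabolic"):   # same maximum relief at the tip
          print(shape, [round(tip_relief(a, 20.0, 30.0, 0.02, shape), 4)
                        for a in (20.0, 25.0, 30.0)])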

  15. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt the BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315
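
    Scoring detections against a 300–400 ms temporal tolerance amounts to matching each detection to at most one true error event within the window; one plausible matching routine (a reconstruction for illustration, not the authors' published analysis code):

      def match_events(true_times, detected_times, tol=0.35):
          """Greedily pair detections with true events within +/- tol
          seconds; returns (hits, misses, false_alarms)."""
          remaining = sorted(detected_times)
          hits = 0
          for t in sorted(true_times):
              match = next((d for d in remaining if abs(d - t) <= tol), None)
              if match is not None:
                  remaining.remove(match)
                  hits += 1
          return hits, len(true_times) - hits, len(remaining)

      print(match_events([1.0, 5.0, 9.0], [1.2, 4.8, 7.0]))   # (2, 1, 1)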

  16. Asymmetric Memory Circuit Would Resist Soft Errors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Perlman, Marvin

    1990-01-01

    Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets," due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
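
    The brief does not name the code, but Berger codes are the textbook construction matched to asymmetric errors: appending the count of zero bits detects all unidirectional errors, because data-bit flips in one direction and check-bit flips in the other can never compensate. A toy version, offered purely to illustrate the principle rather than the circuit described:

      def berger_encode(data_bits):
          """Berger code: append the count of zero bits as the check symbol."""
          return data_bits, data_bits.count(0)

      def berger_check(data_bits, check):
          """Unidirectional errors always change the zero count, so any
          mismatch with the stored check flags an error."""
          return data_bits.count(0) == check

      word, chk = berger_encode([1, 0, 1, 1, 0])
      assert berger_check(word, chk)
      assert not berger_check([1, 0, 0, 1, 0], chk)   # a 1->0 flip is caught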

  17. Clover: Compiler directed lightweight soft error resilience

    DOE PAGES

    Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...

    2015-05-01

    This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
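
    Clover's recovery step, restarting the idempotent region in which the error was sensed, can be caricatured in a few lines; this sketch treats detection as an exception and simply assumes the region is idempotent (writes no state it also reads), which is what the compiler analysis guarantees in the real system:

      class SoftErrorDetected(Exception):
          """Raised when the (simulated) acoustic detector flags a strike."""

      def run_idempotent(region, max_retries=3):
          """Re-execute an idempotent code region until it completes;
          restarting is safe only because the region is idempotent."""
          for _ in range(max_retries):
              try:
                  return region()
              except SoftErrorDetected:
                  continue    # recovery = redirect control to the region start
          raise RuntimeError("unrecoverable: retries exhausted")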

  18. Evidence for aversive withdrawal response to own errors.

    PubMed

    Hochman, Eldad Yitzhak; Milman, Valery; Tal, Liron

    2017-10-01

    A recent model suggests that error detection gives rise to defensive motivation that prompts protective behavior. Models of active avoidance behavior predict that this response should grow larger with threat imminence and avoidance. We hypothesized that in a task requiring left or right key strikes, error detection would drive an avoidance reflex manifested by a rapid withdrawal of the erring finger that grows larger with threat imminence and avoidance. In experiment 1, three groups differing in error-related threat imminence and avoidance performed a flanker task requiring left or right strikes on force-sensitive keys. As predicted, errors were followed by a rapid force release that grew faster with threat imminence and the opportunity to evade threat. In experiment 2, we established a link between error key release time (KRT) and the subjective sense of inner threat. In a simultaneous multiple regression analysis of three error-related compensatory mechanisms (error KRT, flanker effect, error correction RT), only error KRT was significantly associated with increased compulsive checking tendencies. We propose that error response withdrawal reflects an error-withdrawal reflex. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    Slow learners, whose IQs fall between 71 and 89, have difficulty solving mathematics problems, which often leads to errors. These errors can be analyzed for where they occur and for their type. This qualitative descriptive research aims to describe the locations, types, and causes of slow learners' errors in solving fraction problems in an inclusive junior high school class. The subject of this research was one slow-learning seventh-grade student, selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who supports the slow learner students. Data collection methods used in this study were written tasks and semi-structured interviews. The collected data were analyzed with Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors, and four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by slow learners.

  20. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    We propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. We use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop its structure, populate it with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy: it should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, it provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next perform systematic experimental studies.

  1. Medical students' experiences with medical errors: an analysis of medical student essays.

    PubMed

    Martinez, William; Lo, Bernard

    2008-07-01

    This study aimed to examine medical students' experiences with medical errors. In 2001 and 2002, 172 fourth-year medical students wrote an anonymous description of a significant medical error they had witnessed or committed during their clinical clerkships. The assignment represented part of a required medical ethics course. We analysed 147 of these essays using thematic content analysis. Many medical students made or observed significant errors. In either situation, some students experienced distress that seemingly went unaddressed. Furthermore, this distress was sometimes severe and persisted after the initial event. Some students also experienced considerable uncertainty as to whether an error had occurred and how to prevent future errors. Many errors may not have been disclosed to patients, and some students who desired to discuss or disclose errors were apparently discouraged from doing so by senior doctors. Some students criticised senior doctors who attempted to hide errors or avoid responsibility. By contrast, students who witnessed senior doctors take responsibility for errors and candidly disclose errors to patients appeared to recognise the importance of honesty and integrity and said they aspired to these standards. There are many missed opportunities to teach students how to respond to and learn from errors. Some faculty members and housestaff may at times respond to errors in ways that appear to contradict professional standards. Medical educators should increase exposure to exemplary responses to errors and help students to learn from and cope with errors.

  2. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
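
    The detection rule in this study is purely output-based: run the parallel versions on identical inputs and flag any projection that differs, treating differences beyond +/-5% as material. A minimal sketch of that comparison (the outputs below are hypothetical, not the study's):

      def flag_discrepancies(versions, material=0.05):
          """Compare parallel model versions cell by cell against the first
          version; print relative differences and whether they are material."""
          ref_name, ref = versions[0]
          for name, outputs in versions[1:]:
              for cell, value in outputs.items():
                  diff = (value - ref[cell]) / ref[cell]
                  if diff != 0:
                      flag = " (material)" if abs(diff) > material else ""
                      print(f"{name}:{cell} differs by {diff:+.1%}{flag}")

      flag_discrepancies([
          ("named_matrices", {"on_ART": 1200, "suppressed": 900}),
          ("column_row",     {"on_ART": 1510, "suppressed": 900}),   # material
          ("named_cells",    {"on_ART": 1230, "suppressed": 900}),
      ])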

  3. Error detection and reduction in blood banking.

    PubMed

    Motschman, T L; Moore, S B

    1996-12-01

    Error management plays a major role in facility process improvement efforts. By detecting and reducing errors, quality and, therefore, patient care improve. It begins with a strong organizational foundation of management attitude, with clear, consistent employee direction and appropriate physical facilities. Clearly defined critical processes, critical activities, and SOPs act as the framework for operations as well as for active quality monitoring. To ensure that personnel can detect and report errors, they must be trained in both operational duties and error management practices. Use of simulated/intentional errors and incorporation of error detection into competency assessment keep employees practiced and confident and diminish fear of the unknown. Personnel can clearly see that errors are indeed used as opportunities for process improvement and not for punishment. The facility must have a clearly defined and consistently used definition of reportable errors. Reportable errors should include those with potentially harmful outcomes as well as those that are "upstream," and thus further away from the outcome. A well-written error report covers who, what, when, where, why/how, and the follow-up to the error. Before correction can occur, an investigation to determine the underlying cause of the error should be undertaken. Obviously, the best corrective action is prevention. Correction can occur at five different levels; however, only three of these levels are directed at prevention. Prevention requires a method to collect and analyze data concerning errors. In the authors' facility, a functional error classification method and a quality system-based classification have been useful. An active method of searching for problems uncovers them further upstream, before they can have disastrous outcomes. In the continual quest to improve processes, an error management program is itself a process that needs improvement, and we must strive always to close the circle of quality assurance. Ultimately, the goal of better patient care will be the reward.
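
    The who/what/when/where/why-how/follow-up structure of a well-written error report maps naturally onto a simple record type; a sketch with hypothetical fields and an invented example, not the authors' reporting form:

      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class ErrorReport:
          who: str
          what: str
          when: datetime
          where: str
          why_how: str
          follow_up: str = ""
          upstream: bool = False    # caught before reaching the outcome?

      report = ErrorReport(
          who="night-shift technologist",
          what="unit labeled with wrong ABO group",
          when=datetime(2024, 5, 3, 2, 40),
          where="component labeling bench",
          why_how="lookalike labels stored in adjacent bins",
          follow_up="labels segregated; SOP revised",
          upstream=True,
      )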

  4. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Dromgoole, L; Alvarez, P

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.

  5. Laboratory errors and patient safety.

    PubMed

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Programs designed to identify and reduce laboratory errors, as well as specific strategies, are therefore required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phase in which they occurred and according to their implications for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to patients. Test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors, and only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concordant with those published from the USA and other countries, showing that laboratory problems are universal and need general standardization and benchmarking measures. This is among the first such data published from Arab countries evaluating encountered laboratory errors, underscoring the great need for universal standardization and benchmarking measures to control laboratory work.

  6. Error management in blood establishments: results of eight years of experience (2003–2010) at the Croatian Institute of Transfusion Medicine

    PubMed Central

    Vuk, Tomislav; Barišić, Marijan; Očić, Tihomir; Mihaljević, Ivanka; Šarlija, Dorotea; Jukić, Irena

    2012-01-01

    Background. Continuous and efficient error management, including procedures from error detection to their resolution and prevention, is an important part of quality management in blood establishments. At the Croatian Institute of Transfusion Medicine (CITM), error management has been systematically performed since 2003. Materials and methods. Data derived from error management at the CITM during an 8-year period (2003–2010) formed the basis of this study. Throughout the study period, errors were reported to the Department of Quality Assurance. In addition to surveys and the necessary corrective activities, errors were analysed and classified according to the Medical Event Reporting System for Transfusion Medicine (MERS-TM). Results. During the study period, a total of 2,068 errors were recorded, including 1,778 (86.0%) in blood bank activities and 290 (14.0%) in blood transfusion services. As many as 1,744 (84.3%) errors were detected before issue of the product or service. Among the 324 errors identified upon release from the CITM, 163 (50.3%) errors were detected by customers and reported as complaints. In only five cases was an error detected after blood product transfusion, though without any harmful consequences for the patients. All errors were, therefore, evaluated as "near miss" and "no harm" events. Fifty-two (2.5%) errors were evaluated as high-risk events. With regard to blood bank activities, the highest proportion of errors occurred in the processes of labelling (27.1%) and blood collection (23.7%). With regard to blood transfusion services, errors related to blood product issuing prevailed (24.5%). Conclusion. This study shows that comprehensive management of errors, including near miss errors, can generate data on the functioning of transfusion services, which is a precondition for the implementation of efficient corrective and preventive actions that will ensure further improvement of the quality and safety of transfusion treatment. PMID:22395352

  7. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  8. Determining relative error bounds for the CVBEM

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method (CVBEM) provides a measure of relative error which can be utilized to subsequently reduce the error or provide information for further modeling analysis. By maximizing the relative error norm on each boundary element, a bound on the total relative error for each boundary element can be evaluated. This bound can be utilized to test CVBEM convergence, to analyze the effects of additional boundary nodal points in reducing the modeling error, and to evaluate the sensitivity of the resulting modeling error within a boundary element to the error produced in another boundary element as a function of geometric distance. © 1985.
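
    In outline, the procedure samples the approximation error along each boundary element, takes the maximum relative error norm per element, and accumulates a bound; the sketch below is schematic bookkeeping under simplified definitions of my own, not Hromadka's formulation:

      import numpy as np

      def element_error_bounds(phi_exact, phi_cvbem, elements):
          """Maximum relative error per boundary element plus a crude
          total bound; elements is a list of node-index arrays."""
          bounds = []
          for idx in elements:
              num = np.max(np.abs(phi_exact[idx] - phi_cvbem[idx]))
              den = np.max(np.abs(phi_exact[idx]))
              bounds.append(num / den)
          return bounds, sum(bounds)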

  9. An Investigation into Soft Error Detection Efficiency at Operating System Level

    PubMed Central

    Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894

  10. An investigation into soft error detection efficiency at operating system level.

    PubMed

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and software levels for their alleviation. However, there is the basic assumption behind these works that the operating system is reliable and the focus is on other system levels. In this paper, we investigate the effects of soft errors on the operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.

  11. An interpretation of radiosonde errors in the atmospheric boundary layer

    Treesearch

    Bernadette H. Connell; David R. Miller

    1995-01-01

    The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...

  12. Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2011-01-01

    Errors generate typical brain responses, characterized by two successive event-related potentials (ERPs) following incorrect action: the error-related negativity (ERN) and the error positivity (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…

  13. Finding Productive Talk around Errors in Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Olsen, Jennifer K.; Rummel, Nikol; Aleven, Vincent

    2015-01-01

    To learn from an error, students must correct the error by engaging in sense-making activities around the error. Past work has looked at how supporting collaboration around errors affects learning. This paper attempts to shed further light on the role that collaboration can play in the process of overcoming an error. We found that good…

  14. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  15. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  16. Preventability of Voluntarily Reported or Trigger Tool-Identified Medication Errors in a Pediatric Institution by Information Technology: A Retrospective Cohort Study.

    PubMed

    Stultz, Jeremy S; Nahata, Milap C

    2015-07-01

    Information technology (IT) has the potential to prevent medication errors. While many studies have analyzed specific IT technologies and preventable adverse drug events, no studies have identified risk factors for errors still occurring that are not preventable by IT. The objective of this study was to categorize reported or trigger tool-identified errors and adverse events (AEs) at a pediatric tertiary care institution. We also sought to identify medication errors preventable by IT, determine why IT-preventable errors occurred, and identify risk factors for errors that were not preventable by IT. This was a retrospective analysis of voluntarily reported or trigger tool-identified errors and AEs occurring from 1 July 2011 to 30 June 2012. Medication errors reaching the patients were categorized based on the origin, severity, and location of the error, the month in which they occurred, and the age of the patient involved. Error characteristics were included in a multivariable logistic regression model to determine independent risk factors for errors occurring that were not preventable by IT. A medication error was defined as a medication-related failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. An IT-preventable error was defined as having an IT system in place to aid in prevention of the error at the phase and location of its origin. There were 936 medication errors (identified by voluntary reporting or a trigger tool system) included and analyzed. Drug administration errors were identified most frequently (53.4%), but prescribing errors most frequently caused harm (47.2% of harmful errors). There were 470 (50.2%) errors that were IT preventable at their origin, including 155 due to IT system bypasses, 103 due to insensitivity of IT alerting systems, and 47 with IT alert overrides. Dispensing, administration, and documentation errors had higher odds than prescribing errors for being not preventable by IT (odds ratio (OR) 8.0, 95% CI 4.4-14.6; OR 2.4, 95% CI 1.7-3.7; and OR 6.7, 95% CI 3.3-14.5, respectively; all p < 0.001). Errors occurring in the operating room and in the outpatient setting had higher odds than intensive care units for being not preventable by IT (OR 10.4, 95% CI 4.0-27.2, and OR 2.6, 95% CI 1.3-5.0, respectively; all p ≤ 0.004). Despite extensive IT implementation at the studied institution, approximately one-half of the medication errors identified by voluntary reporting or a trigger tool system were not preventable by the utilized IT systems. Inappropriate use of IT systems was a common cause of errors. The identified risk factors represent areas where IT safety features were lacking.
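
    The risk-factor analysis is a multivariable logistic regression with "not preventable by IT" as the outcome and prescribing/ICU as the reference categories; the same kind of model can be fit with statsmodels roughly as follows (entirely synthetic data and hypothetical column names):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      df = pd.DataFrame({    # synthetic stand-in for the error dataset
          "not_it_preventable": rng.integers(0, 2, 400),
          "error_phase": rng.choice(["prescribing", "dispensing",
                                     "administration", "documentation"], 400),
          "location": rng.choice(["icu", "or", "outpatient", "ward"], 400),
      })
      fit = smf.logit("not_it_preventable"
                      " ~ C(error_phase, Treatment('prescribing'))"
                      " + C(location, Treatment('icu'))", data=df).fit(disp=False)
      print(np.exp(fit.params))    # odds ratios vs the reference categories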

  17. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the CO modelled error amount, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
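
    The simulation design, error added on the log scale so that it is multiplicative on the original scale and each perturbed series fed to a Poisson regression of emergency department visits, can be sketched as follows (entirely synthetic data; the study derived error variances from semivariograms and used a richer covariate set):

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 1000
      truth = np.exp(rng.normal(0.0, 0.4, n))               # reference series
      visits = rng.poisson(np.exp(2.0 + 0.10 * truth))      # synthetic outcome

      sigma = 0.3    # instrument/spatial error on the log scale
      observed = np.exp(np.log(truth) + rng.normal(0.0, sigma, n))  # classical-type

      for name, x in [("true", truth), ("with error", observed)]:
          fit = sm.GLM(visits, sm.add_constant(x),
                       family=sm.families.Poisson()).fit()
          print(name, "beta =", round(fit.params[1], 3))    # attenuation expected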

  18. Medication administration errors from a nursing viewpoint: a formal consensus of definition and scenarios using a Delphi technique.

    PubMed

    Shawahna, Ramzi; Masri, Dina; Al-Gharabeh, Rawan; Deek, Rawan; Al-Thayba, Lama; Halaweh, Masa

    2016-02-01

    To develop and achieve formal consensus on a definition of medication administration errors and on scenarios that should or should not be considered medication administration errors in hospitalised patient settings. Medication administration errors occur frequently in hospitalised patient settings. Currently, there is no formal consensus on a definition of medication administration errors or on scenarios that should or should not be considered medication administration errors. This was a descriptive study using the Delphi technique. A panel of experts (n = 50) recruited from major hospitals, nursing schools and universities in Palestine took part in the study. Three Delphi rounds were followed to achieve consensus on a proposed definition of medication administration errors and on a series of 61 scenarios, formulated into a questionnaire, representing potential medication administration error situations. In the first Delphi round, key contact nurses' views on medication administration errors were explored. In the second Delphi round, consensus was achieved to accept the proposed definition of medication administration errors and to include 36 (59%) scenarios and exclude 1 (1.6%) as medication administration errors. In the third Delphi round, consensus was achieved to consider a further 14 (23%) and exclude 2 (3.3%) as medication administration errors, while the remaining eight (13.1%) were considered equivocal. Of the 61 scenarios included in the Delphi process, experts decided to include 50 scenarios as medication administration errors, exclude three scenarios and include or exclude eight scenarios depending on the individual clinical situation. Consensus on a definition and on scenarios representing medication administration errors can be achieved using formal consensus techniques. Researchers should be aware that using different definitions of medication administration errors, and including or excluding medication administration error situations, could significantly affect the rate of medication administration errors reported in their studies. Consensual definitions and medication administration error situations can be used in future epidemiology studies investigating medication administration errors in hospitalised patient settings, which may permit and promote direct comparisons of different studies. © 2015 John Wiley & Sons Ltd.

  19. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    PubMed

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete a LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96, p < 0.001). For the second procedure, recovery attempts increased for action and procedure errors, but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < 0.001). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the second simulation. Incorporating error recognition and management opportunities into surgical training could help track residents' learning curve and provide detailed, structured feedback on technical and decision-making skills. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
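
    The reported χ²(5) = 24.96 is a contingency test of recovery attempts by error type across the two procedures; with scipy the test looks like this (the counts are hypothetical, shaped into a 2×6 table to give five degrees of freedom):

      import numpy as np
      from scipy.stats import chi2_contingency

      # rows: procedure 1 vs procedure 2; columns: six error types
      recoveries = np.array([[12,  9, 15, 4, 6, 3],
                             [20, 18,  7, 9, 8, 2]])
      chi2, p, dof, expected = chi2_contingency(recoveries)
      print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")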

  20. Optimizer convergence and local minima errors and their clinical importance

    NASA Astrophysics Data System (ADS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R.

    2003-09-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.
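
    The optimizer convergence error the authors describe is the gap left by a non-zero stopping criterion; a toy gradient-descent run on an ill-conditioned quadratic makes the effect visible (an illustration of the concept only, unrelated to the clinical optimizers compared above):

      import numpy as np

      H = np.diag([1.0, 50.0])                  # ill-conditioned quadratic
      f = lambda x: 0.5 * x @ H @ x
      grad = lambda x: H @ x

      def descend(x0, lr, tol):
          """Iterate until the gradient norm drops below tol; a looser
          tol stops earlier and leaves a larger convergence error."""
          x = np.asarray(x0, dtype=float)
          while np.linalg.norm(grad(x)) > tol:
              x = x - lr * grad(x)
          return x

      for tol in (1e-1, 1e-6):
          x = descend([5.0, 5.0], lr=0.019, tol=tol)
          print(f"tol={tol:.0e}  residual objective={f(x):.2e}")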

  1. Optimizer convergence and local minima errors and their clinical importance.

    PubMed

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-09-07

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization.

  2. Prevalence and cost of hospital medical errors in the general and elderly United States populations.

    PubMed

    Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S

    2013-12-01

    The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥ 65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that prevalence of hospital medical errors for the elderly is greater than the general population and the associated cost of medical errors in the elderly population is quite substantial. Hospitals which further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors as a disproportionate percentage of medical errors occur in this age group.
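
    The prevalence rates are simple counts per 1000 inpatient visits; back-solving the implied counts from the reported rates (so the figures below are approximate, not taken from the paper):

      def rate_per_1000(n_errors, n_visits):
          return 1000 * n_errors / n_visits

      print(round(rate_per_1000(169_863, 3_466_596)))   # ~49, general cohort
      print(round(rate_per_1000(97_236, 1_230_836)))    # ~79, elderly cohort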

  3. Frequency and Type of Situational Awareness Errors Contributing to Death and Brain Damage: A Closed Claims Analysis.

    PubMed

    Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B

    2017-08-01

    Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.

  4. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1,840,000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
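
    Adapting Sobol' analysis to forcing errors means treating each forcing's error parameters (biases, random-error magnitudes) as the factors; with the SALib package the scaffolding looks roughly like this (the snow model is stubbed out and the bounds are placeholders, not the study's configuration):

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 4,
          "names": ["precip_bias", "precip_noise", "temp_bias", "sw_bias"],
          "bounds": [[-0.5, 0.5], [0.0, 0.3], [-3.0, 3.0], [-50.0, 50.0]],
      }

      def snow_model(x):    # stub standing in for a Utah Energy Balance run
          pb, pn, tb, sb = x
          return 500 * (1 + pb) - 20 * tb - 0.5 * sb + 10 * pn   # fake peak SWE

      X = saltelli.sample(problem, 1024)
      Y = np.apply_along_axis(snow_model, 1, X)
      Si = sobol.analyze(problem, Y)
      print(dict(zip(problem["names"], Si["ST"].round(2))))   # total-order indices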

  5. Patterns of technical error among surgical malpractice claims: an analysis of strategies to prevent injury to surgical patients.

    PubMed

    Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A

    2007-11-01

    To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities (including emergencies, difficult or unexpected anatomy, and previous surgery) contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.

  6. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
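
    As a rough illustration of the core idea (flagging genome positions where mismatches accumulate beyond what a uniform base-call error rate explains), here is a minimal sketch. It is not the SysCall classifier, and the 0.1% background rate and significance cutoff are assumptions.

    ```python
    from scipy.stats import binomtest

    def flag_systematic_sites(site_counts, bg_error_rate=0.001, alpha=1e-6):
        """site_counts: iterable of (position, mismatch_count, coverage).
        Flags sites whose mismatch count is statistically unlikely under a
        uniform background error rate (candidate systematic-error sites)."""
        flagged = []
        for pos, k, n in site_counts:
            if n == 0:
                continue
            pval = binomtest(k, n, bg_error_rate, alternative="greater").pvalue
            if pval < alpha:
                flagged.append((pos, k, n, pval))
        return flagged

    # e.g. 15 mismatches in 100 reads is wildly unlikely at a 0.1% error rate
    print(flag_systematic_sites([(1234, 15, 100), (5678, 1, 100)]))
    ```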

  7. Identification of factors which affect the tendency towards and attitudes of emergency unit nurses to make medical errors.

    PubMed

    Kiymaz, Dilek; Koç, Zeliha

    2018-03-01

    To determine individual and professional factors affecting the tendency of emergency unit nurses to make medical errors and their attitudes towards these errors in Turkey. Compared with other units, the emergency unit is an environment with an increased tendency for medical errors due to its intensive and rapid pace, noise, and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part in the study. Data were gathered using the data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses previously witnessed medical errors, 19.4% made a medical error in the last year, 17.6% of medical errors were caused by medication errors where the wrong medication was administered in the wrong dose, and none of the nurses filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses stated excessive workload as a cause; 85.1% stated an insufficient number of nurses; and 75.4% stated fatigue, exhaustion and burnout. The study showed that nurses who loved their job, were satisfied with their unit, and always worked day shifts had a lower medical error tendency. The following actions are suggested: increase awareness of medical errors, organise training to reduce errors in medication administration, develop procedures and protocols specific to emergency unit health care, and create a non-punitive environment in which nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.

  8. Global distortion of GPS networks associated with satellite antenna model errors

    NASA Astrophysics Data System (ADS)

    Cardellach, E.; Elósegui, P.; Davis, J. L.

    2007-07-01

    Recent studies of the GPS satellite phase center offsets (PCOs) suggest that these have been in error by ˜1 m. Previous studies had shown that PCO errors are absorbed mainly by parameters representing satellite clock and the radial components of site position. On the basis of the assumption that the radial errors are equal, PCO errors will therefore introduce an error in network scale. However, PCO errors also introduce distortions, or apparent deformations, within the network, primarily in the radial (vertical) component of site position that cannot be corrected via a Helmert transformation. Using numerical simulations to quantify the effects of PCO errors, we found that these PCO errors lead to a vertical network distortion of 6-12 mm per meter of PCO error. The network distortion depends on the minimum elevation angle used in the analysis of the GPS phase observables, becoming larger as the minimum elevation angle increases. The steady evolution of the GPS constellation as new satellites are launched, age, and are decommissioned, leads to the effects of PCO errors varying with time that introduce an apparent global-scale rate change. We demonstrate here that current estimates for PCO errors result in a geographically variable error in the vertical rate at the 1-2 mm yr-1 level, which will impact high-precision crustal deformation studies.

  10. Identifying medication error chains from critical incident reports: a new analytic approach.

    PubMed

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted across a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during the administration of medication. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  11. Computational fluid dynamics analysis and experimental study of a low measurement error temperature sensor used in climate observation.

    PubMed

    Yang, Jie; Liu, Qingquan; Dai, Wei

    2017-02-01

    To improve air temperature observation accuracy, a low measurement error temperature sensor is proposed. A computational fluid dynamics (CFD) method is implemented to obtain temperature errors under various environmental conditions. Then, a temperature error correction equation is obtained by fitting the CFD results using a genetic algorithm method. The low measurement error temperature sensor, a naturally ventilated radiation shield, a thermometer screen, and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated platform served as the air temperature reference. The mean temperature errors of the naturally ventilated radiation shield and the thermometer screen are 0.74 °C and 0.37 °C, respectively. In contrast, the mean temperature error of the low measurement error temperature sensor is 0.11 °C. The mean absolute error and the root mean square error between the corrected results and the measured results are 0.008 °C and 0.01 °C, respectively. The correction equation allows the temperature error of the low measurement error temperature sensor to be reduced by approximately 93.8%. The low measurement error temperature sensor proposed in this research may help provide relatively accurate air temperature measurements.
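
    A hedged sketch of the fitting step, using SciPy's differential evolution (a genetic-algorithm-style global optimizer) in place of the authors' unspecified GA; the CFD data points and the functional form of the correction equation below are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Hypothetical CFD results: temperature error dT (°C) vs solar
    # radiation S (W/m^2) and air speed v (m/s).
    S  = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
    v  = np.array([0.5, 1.0, 2.0, 3.0, 5.0])
    dT = np.array([0.18, 0.25, 0.27, 0.24, 0.17])

    def sse(params):
        a, b, c = params
        pred = a * S**b / (1.0 + c * v)   # assumed form of the correction equation
        return np.sum((pred - dT) ** 2)

    res = differential_evolution(sse, bounds=[(0.0, 1.0), (0.0, 2.0), (0.0, 5.0)],
                                 seed=0)
    a, b, c = res.x
    # Corrected reading: T_corrected = T_measured - a * S**b / (1 + c * v)
    ```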

  12. Medication errors in the obstetrics emergency ward in a low resource setting.

    PubMed

    Kandil, Mohamed; Sayyed, Tarek; Emarh, Mohamed; Ellakwa, Hamed; Masood, Alaa

    2012-08-01

    To investigate the patterns of medication errors in the obstetric emergency ward in a low resource setting. This prospective observational study included 10,000 women who presented at the obstetric emergency ward, department of Obstetrics and Gynecology, Menofyia University Hospital, Egypt between March and December 2010. All medications prescribed in the emergency ward were monitored for different types of errors. The head nurse in each shift was asked to monitor each pharmacologic order from the moment of prescribing till its administration. Retrospective review of the patients' charts and nurses' notes was carried out by the authors of this paper. Results were tabulated and statistically analyzed. A total of 1976 medication errors were detected. Administration errors were the commonest error reported. Omitted errors ranked second, followed by unauthorized and prescription errors. Three administration errors, involving wrong doses of oxytocin infusion, resulted in three Cesarean deliveries for fetal distress. The remaining errors did not cause patient harm but may have led to increased monitoring. Most errors occurred during night shifts. The availability of automated infusion pumps will probably decrease administration errors significantly. There is a need for more obstetricians and nurses during the night shifts to minimize errors resulting from working under stressful conditions.

  13. Analyzing communication errors in an air medical transport service.

    PubMed

    Dalto, Joseph D; Weir, Charlene; Thomas, Frank

    2013-01-01

    Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  14. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
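
    In the spirit of the reasonableness checks described above, here is a minimal sketch of an automatic check of planned altitudes against a terrain database; the data structures and clearance margin are hypothetical, not taken from the source.

    ```python
    def check_planned_altitudes(route, terrain_elev_ft, min_clearance_ft=1000):
        """route: list of (waypoint, planned_altitude_ft).
        terrain_elev_ft: dict mapping waypoint -> terrain elevation (ft).
        Returns waypoints where the plan violates the clearance margin,
        so the crew can be warned before the error has consequences."""
        alerts = []
        for waypoint, alt_ft in route:
            elev = terrain_elev_ft.get(waypoint)
            if elev is not None and alt_ft < elev + min_clearance_ft:
                alerts.append((waypoint, alt_ft, elev))
        return alerts

    # e.g. a 9,000 ft plan over 8,500 ft terrain trips the 1,000 ft margin
    print(check_planned_altitudes([("ABEAM", 9000)], {"ABEAM": 8500}))
    ```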

  15. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
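
    As a simplified illustration of residual-based indicators, the sketch below implements the explicit element-residual-plus-flux-jump form for a 1D problem; this corresponds to the simpler average-flux flavour of indicator, not the paper's equilibrated-flux element problems.

    ```python
    import numpy as np

    def residual_indicators(x, u, f, k=1.0):
        """Explicit residual indicators for -k u'' = f with linear elements.
        x: node coordinates, u: nodal FE solution, f: source callable.
        eta_K^2 = h_K^2 ||f||_K^2 + flux-jump terms (u_h'' = 0 on linear
        elements, so the interior residual reduces to f)."""
        h = np.diff(x)
        grad = np.diff(u) / h                  # piecewise-constant du/dx
        jumps = k * np.diff(grad)              # flux jumps at interior nodes
        eta2 = np.empty(len(h))
        for K in range(len(h)):
            xm = 0.5 * (x[K] + x[K + 1])       # one-point (midpoint) quadrature
            eta2[K] = h[K] ** 2 * f(xm) ** 2 * h[K]
        for j, J in enumerate(jumps):          # share each jump between neighbours
            eta2[j] += 0.5 * h[j] * J ** 2
            eta2[j + 1] += 0.5 * h[j + 1] * J ** 2
        return np.sqrt(eta2), np.sqrt(eta2.sum())   # element indicators, global
    ```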

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  17. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  18. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits

    PubMed Central

    Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.

    2015-01-01

    The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
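
    To make the parity-measurement idea concrete, here is a minimal Qiskit sketch of a single ZZ-parity check on two data qubits; the paper's four-qubit protocol also measures the XX stabilizer, which this sketch omits, so it should be read as an illustration of the building block only.

    ```python
    from qiskit import QuantumCircuit

    # Qubits 0,1: data; qubit 2: syndrome. A ZZ-parity (bit-flip) check.
    qc = QuantumCircuit(3, 1)
    qc.h(0)
    qc.cx(0, 1)        # encode (|00> + |11>)/sqrt(2) on the data qubits
    # An X error on either data qubit here would flip the measured parity.
    qc.cx(0, 2)        # map the Z0*Z1 parity onto the syndrome qubit
    qc.cx(1, 2)
    qc.measure(2, 0)   # non-demolition on data: outcome 1 signals an error
    print(qc.draw())
    ```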

  20. Triangulation Error Analysis for the Barium Ion Cloud Experiment. M.S. Thesis - North Carolina State Univ.

    NASA Technical Reports Server (NTRS)

    Long, S. A. T.

    1973-01-01

    The triangulation method developed specifically for the Barium Ion Cloud Project is discussed. Expressions for the four displacement errors, the three slope errors, and the curvature error in the triangulation solution due to a probable error in the lines-of-sight from the observation stations to points on the cloud are derived. The triangulation method is then used to determine the effect of the following on these different errors in the solution: the number and location of the stations, the observation duration, east-west cloud drift, the number of input data points, and the addition of extra cameras to one of the stations. The pointing displacement errors and the pointing slope errors are compared. The displacement errors in the solution due to a probable error in the position of a moving station, plus the weighting factors for the data from the moving station, are also determined.

  1. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  2. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  3. Analysis of quantum error correction with symmetric hypergraph states

    NASA Astrophysics Data System (ADS)

    Wagner, T.; Kampermann, H.; Bruß, D.

    2018-03-01

    Graph states have been used to construct quantum error correction codes for independent errors. Hypergraph states generalize graph states, and symmetric hypergraph states have been shown to allow for the correction of correlated errors. In this paper, it is shown that symmetric hypergraph states are not useful for the correction of independent errors, at least for up to 30 qubits. Furthermore, error correction for error models with protected qubits is explored. A class of known graph codes for this scenario is generalized to hypergraph codes.

  4. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix that fully includes all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem, the truth model uses a gravity field with spherical, J2, and J4 terms plus a standard exponential atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem, a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
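
    A minimal numpy sketch of the idea, assuming a linear(ized) batch weighted-least-squares setting; the empirical matrix here is the theoretical one rescaled by the average weighted residual variance, which mirrors, but is not guaranteed to match exactly, the formulation described above.

    ```python
    import numpy as np

    def wls_with_empirical_cov(H, z, W):
        """H: m x n design matrix, z: m measurements, W: m x m weights.
        Returns the WLS estimate, the theoretical covariance, and an
        empirical covariance scaled by the actual measurement residuals."""
        P_theory = np.linalg.inv(H.T @ W @ H)
        x_hat = P_theory @ (H.T @ W @ z)
        r = z - H @ x_hat                      # residuals carry *all* error sources
        m, n = H.shape
        s2 = (r @ W @ r) / (m - n)             # average weighted residual variance
        return x_hat, P_theory, s2 * P_theory  # s2 > 1 flags unmodeled errors
    ```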

  5. Comparing errors in ED computer-assisted vs conventional pediatric drug dosing and administration.

    PubMed

    Yamamoto, Loren; Kanemori, Joan

    2010-06-01

    Compared to fixed-dose single-vial drug administration in adults, pediatric drug dosing and administration requires a series of calculations, all of which are potentially error prone. The purpose of this study is to compare error rates and task completion times for common pediatric medication scenarios using computer program assistance vs conventional methods. Two versions of a 4-part paper-based test were developed. Each part consisted of a set of medication administration and/or dosing tasks. Emergency department and pediatric intensive care unit nurse volunteers completed these tasks using both methods (sequence assigned to start with a conventional or a computer-assisted approach). Completion times, errors, and the reason for each error were recorded. Thirty-eight nurses completed the study. Summing the completion of all 4 parts, the mean conventional total time was 1243 seconds vs the mean computer program total time of 879 seconds (P < .001). The conventional manual method had a mean of 1.8 errors vs the computer program with a mean of 0.7 errors (P < .001). Of the 97 total errors, 36 were due to misreading the drug concentration on the label, 34 were due to calculation errors, and 8 were due to misplaced decimals. Of the 36 label interpretation errors, 18 (50%) occurred with digoxin or insulin. Computerized assistance reduced errors and the time required for drug administration calculations. A pattern of errors emerged: reading and interpreting certain drug labels was more error prone. Optimizing the layout of drug labels could reduce the error rate for error-prone labels. Copyright © 2010 Elsevier Inc. All rights reserved.
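
    For illustration, a minimal sketch of the calculation chain such a program automates (weight-based dose, maximum cap, and volume from stock concentration); this is not the study's software, and the numbers in the example are hypothetical.

    ```python
    def weight_based_dose(weight_kg, dose_mg_per_kg, conc_mg_per_ml,
                          max_dose_mg=None):
        """Return (dose_mg, volume_ml), capping at an adult maximum if given.
        Automating this removes the manual calculation and decimal errors."""
        dose_mg = weight_kg * dose_mg_per_kg
        if max_dose_mg is not None:
            dose_mg = min(dose_mg, max_dose_mg)
        return round(dose_mg, 2), round(dose_mg / conc_mg_per_ml, 2)

    # e.g. 18 kg child, 15 mg/kg order, 100 mg/mL stock -> (270.0, 2.7)
    print(weight_based_dose(18, 15, 100))
    ```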

  6. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed and comprised questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the medication error concept and its dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main causes of errors. The most frequently encountered medication errors involved anti-hypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness among healthcare professionals toward medication errors in hospitals. The poor knowledge about medication errors emphasizes the urgent necessity to adopt appropriate measures to raise awareness about medication errors in Saudi hospitals. PMID:27330261

  7. Online Error Reporting for Managing Quality Control Within Radiology.

    PubMed

    Golnari, Pedram; Forsberg, Daniel; Rosipko, Beverly; Sunshine, Jeffrey L

    2016-06-01

    Information technology systems within health care, such as the picture archiving and communication system (PACS) in radiology, can have a positive impact on production but can also risk compromising quality. The widespread use of PACS has removed the previous feedback loop between radiologists and technologists. Instead of directly communicating quality discrepancies found in an examination, the radiologist submitted a paper-based quality-control report. A web-based issue-reporting tool can help restore some of the feedback loop and also provide possibilities for more detailed analysis of submitted errors. The purpose of this study was to evaluate the hypothesis that data from the use of online error-reporting software for quality control can focus improvement efforts within our department. For the 372,258 radiologic examinations conducted during the 6-month study period, 930 errors (390 exam protocol, 390 exam validation, and 150 exam technique) were submitted, corresponding to an error rate of 0.25%. Within the exam protocol category, technologist documentation had the highest number of submitted errors in ultrasonography (77 errors [44%]), while imaging protocol errors were the most frequent subtype for the computed tomography modality (35 errors [18%]). Positioning and incorrect accession had the highest error counts in the exam technique and exam validation categories, respectively, for nearly all of the modalities. An error rate of less than 1% could signify a system with very high quality; however, a more likely explanation is that not all errors were detected or reported. Furthermore, staff reception of the error-reporting system could also affect the reporting rate.

  8. New decoding methods of interleaved burst error-correcting codes

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
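
    The sketch below illustrates plain block interleaving, the structure these decoding methods build on; the syndrome-correlation decoding itself is not reproduced here.

    ```python
    def interleave(symbols, depth, width):
        """Write row-by-row into a depth x width block, transmit column-by-column.
        A channel burst of length <= depth then lands in different rows, so it
        appears as isolated single errors after de-interleaving."""
        assert len(symbols) == depth * width
        rows = [symbols[r * width:(r + 1) * width] for r in range(depth)]
        return [rows[r][c] for c in range(width) for r in range(depth)]

    def deinterleave(symbols, depth, width):
        cols = [symbols[c * depth:(c + 1) * depth] for c in range(width)]
        return [cols[c][r] for r in range(depth) for c in range(width)]

    data = list(range(12))
    assert deinterleave(interleave(data, 3, 4), 3, 4) == data
    ```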

  9. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  10. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and, to satisfy the accuracy requirements for laser control points, a design index for each error source is put forward.
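
    As a first-order illustration of how one error source propagates, here is a near-nadir small-angle sketch, not the paper's rigorous model; the 600 km altitude is an assumed, roughly ICESAT-like value.

    ```python
    import numpy as np

    ARCSEC_RAD = np.deg2rad(1.0 / 3600.0)

    def footprint_shift_m(pointing_err_arcsec, altitude_m=600e3):
        """Small-angle propagation for a near-nadir shot: a pointing error
        dtheta displaces the laser footprint by ~ altitude * dtheta."""
        return altitude_m * pointing_err_arcsec * ARCSEC_RAD

    print(footprint_shift_m(1.0))   # ~2.9 m horizontal error per arcsecond
    ```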

  11. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  12. Proximal antecedents and correlates of adopted error approach: a self-regulatory perspective.

    PubMed

    Van Dyck, Cathy; Van Hooft, Edwin; De Gilder, Dick; Liesveld, Lillian

    2010-01-01

    The current study aims to further investigate earlier established advantages of an error mastery approach over an error aversion approach. The two main purposes of the study relate to (1) self-regulatory traits (i.e., goal orientation and action-state orientation) that may predict which error approach (mastery or aversion) is adopted, and (2) proximal, psychological processes (i.e., self-focused attention and failure attribution) that relate to adopted error approach. In the current study participants' goal orientation and action-state orientation were assessed, after which they worked on an error-prone task. Results show that learning goal orientation related to error mastery, while state orientation related to error aversion. Under a mastery approach, error occurrence did not result in cognitive resources "wasted" on self-consciousness. Rather, attention went to internal-unstable, thus controllable, improvement oriented causes of error. Participants that had adopted an aversion approach, in contrast, experienced heightened self-consciousness and attributed failure to internal-stable or external causes. These results imply that when working on an error-prone task, people should be stimulated to take on a mastery rather than an aversion approach towards errors.

  13. Recognizing and Reducing Analytical Errors and Sources of Variation in Clinical Pathology Data in Safety Assessment Studies.

    PubMed

    Schultze, A E; Irizarry, A R

    2017-02-01

    Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.

  14. A simulation of GPS and differential GPS sensors

    NASA Technical Reports Server (NTRS)

    Rankin, James M.

    1993-01-01

    The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
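
    A minimal sketch of the differential correction step described above; the satellite IDs and ranges are hypothetical, and real DGPS messages also carry range-rate and timing fields that are omitted here.

    ```python
    def pseudorange_corrections(ref_measured_m, ref_geometric_m):
        """At the fixed reference station the geometric ranges are known, so
        correction = geometric - measured captures the errors shared with
        nearby receivers (satellite clock, ephemeris, atmosphere)."""
        return {sv: ref_geometric_m[sv] - ref_measured_m[sv]
                for sv in ref_measured_m}

    def apply_corrections(rover_measured_m, corrections_m):
        """The rover is left with its own clock, multipath and noise errors."""
        return {sv: rng + corrections_m[sv]
                for sv, rng in rover_measured_m.items() if sv in corrections_m}

    corr = pseudorange_corrections({"G01": 20000123.4}, {"G01": 20000101.9})
    print(apply_corrections({"G01": 20000120.0}, corr))   # shared error removed
    ```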

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
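
    A minimal sketch of the linear TSRI estimator with a bootstrap standard error (one of the corrected forms compared in the paper); the variable names are hypothetical, and the logistic case would substitute sm.Logit in stage 2.

    ```python
    import numpy as np
    import statsmodels.api as sm

    def tsri_estimate(g, x, y):
        """Two-stage residual inclusion: stage 1 regresses exposure x on the
        instrument g; stage 2 regresses y on x plus the stage-1 residual."""
        stage1 = sm.OLS(x, sm.add_constant(g)).fit()
        resid = x - stage1.fittedvalues
        stage2 = sm.OLS(y, sm.add_constant(np.column_stack([x, resid]))).fit()
        return stage2.params[1]            # coefficient on x: the causal effect

    def tsri_bootstrap_se(g, x, y, n_boot=1000, seed=0):
        """Resample and re-run *both* stages, so the SE reflects the
        stage-1 estimation uncertainty that unadjusted stage-2 SEs ignore."""
        rng = np.random.default_rng(seed)
        n, ests = len(y), []
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)
            ests.append(tsri_estimate(g[idx], x[idx], y[idx]))
        return np.std(ests, ddof=1)
    ```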

  16. Identification and Remediation of Phonological and Motor Errors in Acquired Sound Production Impairment

    PubMed Central

    Gagnon, Bernadine; Miozzo, Michele

    2017-01-01

    Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044

  17. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system

    PubMed Central

    Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.

    2015-01-01

    Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702

  18. A prospective three-step intervention study to prevent medication errors in drug handling in paediatric care.

    PubMed

    Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo

    2015-01-01

    To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors. The handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). Issuing the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine is necessary for safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.

  19. Medication errors in the Middle East countries: a systematic review of the literature.

    PubMed

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality. Educational programmes on drug therapy for doctors and nurses are urgently needed.

  20. Is there any electrophysiological evidence for subliminal error processing?

    PubMed

    Shalgi, Shani; Deouell, Leon Y

    2013-08-29

    The role of error awareness in executive control and modification of behavior is not fully understood. In line with many recent studies showing that conscious awareness is unnecessary for numerous high-level processes such as strategic adjustments and decision making, it was suggested that error detection can also take place unconsciously. The Error Negativity (Ne) component, long established as a robust error-related component that differentiates between correct responses and errors, was a fine candidate to test this notion: if an Ne is elicited also by errors which are not consciously detected, it would imply a subliminal process involved in error monitoring that does not necessarily lead to conscious awareness of the error. Indeed, for the past decade, the repeated finding of a similar Ne for errors which became aware and errors that did not achieve awareness, compared to the smaller negativity elicited by correct responses (Correct Response Negativity; CRN), has lent the Ne the prestigious status of an index of subliminal error processing. However, there were several notable exceptions to these findings. The study in the focus of this review (Shalgi and Deouell, 2012) sheds new light on both types of previous results. We found that error detection as reflected by the Ne is correlated with subjective awareness: when awareness (or more importantly lack thereof) is more strictly determined using the wagering paradigm, no Ne is elicited without awareness. This result effectively resolves the issue of why there are many conflicting findings regarding the Ne and error awareness. The average Ne amplitude appears to be influenced by individual criteria for error reporting and therefore, studies containing different mixtures of participants who are more confident of their own performance or less confident, or paradigms that either encourage or don't encourage reporting low confidence errors will show different results. Based on this evidence, it is no longer possible to unquestioningly uphold the notion that the amplitude of the Ne is unrelated to subjective awareness, and therefore, that errors are detected without conscious awareness.

  1. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred, in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article describes the key concepts of the EU good practice guidance for defining, classifying, coding, reporting, evaluating and preventing medication errors. This guidance should contribute to the safe and effective use of medicines for the benefit of patients and public health.

  2. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    NASA Astrophysics Data System (ADS)

    Mirna, M.

    2018-04-01

    This article describes the results of qualitative descriptive research that reveals the locations, types and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed that students made errors ranging from understanding the concepts and principles of geometry itself to applying them in problem solving. The types of error consisted of concept errors, principle errors and operational errors. Reflection with four subjects revealed the causes of the errors: 1) student motivation to learn is very low, 2) in their high school learning experience, geometry was treated as unimportant, 3) students have very little experience in using their own reasoning to solve problems, and 4) students' reasoning ability is still very low.

  3. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Kasami, T.; Fujiwara, T.; Lin, S.

    1986-01-01

    In this paper, a concatenated coding scheme for error control in data communications is presented and analyzed. In this scheme, the inner code is used for both error correction and detection; however, the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. Probability of undetected error (or decoding error) of the proposed scheme is derived. An efficient method for computing this probability is presented. Throughput efficiency of the proposed error control scheme incorporated with a selective-repeat ARQ retransmission strategy is also analyzed. Three specific examples are presented. One of the examples is proposed for error control in the NASA Telecommand System.
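
    The retransmission logic described above can be sketched in a few lines. The following is a minimal illustrative stand-in, not the codes analyzed in the paper: a Hamming(7,4) inner code (correction) with a single parity bit as the outer code (detection only), over a binary symmetric channel. Residual even-weight error patterns that slip past both codes correspond to the scheme's undetected-error probability.

    ```python
    import random

    def transmit(bits, ber):
        """Binary symmetric channel: flip each bit independently with probability ber."""
        return [b ^ (random.random() < ber) for b in bits]

    def hamming74_encode(d):
        d1, d2, d3, d4 = d
        # Codeword layout [p1, p2, d1, p3, d2, d3, d4] (parity at positions 1, 2, 4).
        return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

    def hamming74_decode(w):
        """Inner decode: correct any single-bit error, return the 4 data bits."""
        s = (w[0]^w[2]^w[4]^w[6]) + 2*(w[1]^w[2]^w[5]^w[6]) + 4*(w[3]^w[4]^w[5]^w[6])
        w = w[:]
        if s:
            w[s - 1] ^= 1          # syndrome s is the 1-based error position
        return [w[2], w[4], w[5], w[6]]

    def send_with_arq(info3, ber, max_tries=10):
        data = info3 + [sum(info3) % 2]        # outer code: parity bit, detection only
        codeword = hamming74_encode(data)      # inner code: correction and detection
        for attempt in range(1, max_tries + 1):
            decoded = hamming74_decode(transmit(codeword, ber))
            if sum(decoded) % 2 == 0:          # outer check passed: accept the block
                return decoded[:3], attempt
            # Outer code detected residual errors: request retransmission (ARQ).
        raise RuntimeError("retransmission limit reached")

    print(send_with_arq([1, 0, 1], ber=0.05))
    ```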

  4. Evaluation of Parenteral Nutrition Errors in an Era of Drug Shortages.

    PubMed

    Storey, Michael A; Weber, Robert J; Besco, Kelly; Beatty, Stuart; Aizawa, Kumiko; Mirtallo, Jay M

    2016-04-01

    Ingredient shortages have forced many organizations to change practices or use unfamiliar ingredients, which creates potential for error. Parenteral nutrition (PN) has been significantly affected, as every ingredient in PN has been impacted in recent years. Ingredient errors involving PN that were reported to the national anonymous MedMARx database between May 2009 and April 2011 were reviewed. Errors were categorized by ingredient, node, and severity. Categorization was validated by experts in medication safety and PN. A timeline of PN ingredient shortages was developed and compared with the PN errors to determine if events correlated with an ingredient shortage. This information was used to determine the prevalence and change in harmful PN errors during periods of shortage, elucidating whether a statistically significant difference exists in errors during shortage as compared with a control period (ie, no shortage). There were 1311 errors identified. Nineteen errors were associated with harm. Fat emulsions and electrolytes were the PN ingredients most frequently associated with error. Insulin was the ingredient most often associated with patient harm. On individual error review, PN shortages were described in 13 errors, most of which were associated with intravenous fat emulsions; none were associated with harm. There was no correlation of drug shortages with the frequency of PN errors. Despite the significant impact that shortages have had on the PN use system, no adverse impact on patient safety could be identified from these reported PN errors. © 2015 American Society for Parenteral and Enteral Nutrition.

  5. Neural evidence for enhanced error detection in major depressive disorder.

    PubMed

    Chiu, Pearl H; Deldin, Patricia J

    2007-04-01

    Anomalies in error processing have been implicated in the etiology and maintenance of major depressive disorder. In particular, depressed individuals exhibit heightened sensitivity to error-related information and negative environmental cues, along with reduced responsivity to positive reinforcers. The authors examined the neural activation associated with error processing in individuals diagnosed with and without major depression and the sensitivity of these processes to modulation by monetary task contingencies. The error-related negativity and error-related positivity components of the event-related potential were used to characterize error monitoring in individuals with major depressive disorder and the degree to which these processes are sensitive to modulation by monetary reinforcement. Nondepressed comparison subjects (N=17) and depressed individuals (N=18) performed a flanker task under two external motivation conditions (i.e., monetary reward for correct responses and monetary loss for incorrect responses) and a nonmonetary condition. After each response, accuracy feedback was provided. The error-related negativity component assessed the degree of anomaly in initial error detection, and the error positivity component indexed recognition of errors. Across all conditions, the depressed participants exhibited greater amplitude of the error-related negativity component, relative to the comparison subjects, and equivalent error positivity amplitude. In addition, the two groups showed differential modulation by task incentives in both components. These data implicate exaggerated early error-detection processes in the etiology and maintenance of major depressive disorder. Such processes may then recruit excessive neural and cognitive resources that manifest as symptoms of depression.

  6. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed through the model. An accuracy study of the model shows that the prediction deviation of the pointing error is less than 4.1×10^-5° for each error source when the error amplitude is 0.1°. Detailed analysis indicates that the different error sources affect the pointing accuracy to varying degrees, and that the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analysis of multiple errors shows that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.
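
    As a minimal illustration of the paper's starting point (ray direction deviation in refraction, not its full transmission-matrix model), the vector form of Snell's law can be used to quantify how a small, hypothetical tilt of a refracting surface deviates the outgoing ray; the 0.1° tilt and the glass index below are illustrative assumptions.

    ```python
    import numpy as np

    def refract(d, n, eta):
        """Vector form of Snell's law.
        d: unit incident direction; n: unit surface normal with d . n < 0;
        eta = n1/n2. Returns the unit refracted direction, or None for
        total internal reflection."""
        cos_i = -np.dot(n, d)
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None                      # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # Deviation caused by a small assembly tilt of the entrance face (air -> glass).
    d = np.array([0.0, 0.0, 1.0])                       # nominal incident ray
    tilt = np.deg2rad(0.1)                              # hypothetical 0.1 deg tilt
    n_nom = np.array([0.0, 0.0, -1.0])
    n_tilt = np.array([np.sin(tilt), 0.0, -np.cos(tilt)])
    t_nom = refract(d, n_nom, 1.0 / 1.517)              # BK7-like glass assumed
    t_tilt = refract(d, n_tilt, 1.0 / 1.517)
    dev = np.rad2deg(np.arccos(np.clip(np.dot(t_nom, t_tilt), -1.0, 1.0)))
    print(f"ray deviation from a 0.1 deg face tilt: {dev:.5f} deg")
    ```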

  7. Evaluation of Faculty and Non-faculty Physicians’ Medication Errors in Outpatients’ Prescriptions in Shiraz, Iran

    PubMed Central

    Misagh, Pegah; Vazin, Afsaneh; Namazi, Soha

    2018-01-01

    This study aimed to determine the occurrence rate of prescription errors in outpatients' prescriptions written by faculty and non-faculty physicians practicing in Shiraz, Iran. In this cross-sectional study, 2,000 outpatient prescriptions were randomly collected from pharmacies affiliated with Shiraz University of Medical Sciences (SUMS) and social security insurance in Shiraz, Iran. Patient information including age, weight, diagnosis and chief complaint was recorded. Physicians' characteristics were extracted from the prescriptions. Prescription errors, including errors in spelling, instruction, strength, dosage form and quantity, as well as drug-drug interactions and contraindications, were identified. The mean ± SD age of patients was 37.91 ± 21.10 years. Most of the patients were male (77.15%) and 81.50% of patients were adults. The average number of drugs per prescription was 3.19 ± 1.60. The mean ± SD number of prescription errors was 7.38 ± 4.06. Spelling errors (26.4%), instruction errors (21.03%) and strength errors (19.18%) were the most frequent prescription errors. The mean ± SD number of prescription errors was 7.83 ± 4.2 in non-faculty and 6.93 ± 3.88 in faculty physicians (P < 0.05). The number of prescription errors increased significantly as the number of prescribed drugs increased. All prescriptions had at least one error. The rate of prescription errors was higher in non-faculty physicians, and the number of errors correlated with the number of drugs prescribed.

  8. Refractive errors in Mercyland Specialist Hospital, Osogbo, Western Nigeria.

    PubMed

    Adeoti, C O; Egbewale, B E

    2008-06-01

    The study was conducted to determine the magnitude and pattern of refractive errors in order to provide facilities for their management. A prospective study of 3,601 eyes of 1,824 consecutive patients was conducted. Information obtained included age, sex, occupation, visual acuity, and type and degree of refractive error. The data were analysed using the Statistical Package for the Social Sciences (SPSS, version 11.0). Refractive error was found in 1,824 (53.71%) patients. There were 832 (45.61%) males and 992 (54.39%) females, with a mean age of 35.55 years. Myopia was the commonest error (1,412 eyes; 39.21%). Others included hypermetropia (840 eyes; 23.33%) and astigmatism (785 eyes; 21.80%), and 820 patients (1,640 eyes) had presbyopia. Anisometropia was present in 791 (44.51%) of the 1,777 patients who had bilateral refractive errors. A total of 2,252 eyes had spherical errors: 1,308 eyes (58.08%) had errors from -0.50 to +0.50 dioptres; 567 eyes (25.18%) had errors less than -0.50 dioptres, of which 63 eyes (2.80%) had errors less than -5.00 dioptres; and 377 eyes (16.74%) had errors greater than +0.50 dioptres, of which 81 eyes (3.60%) had errors greater than +2.00 dioptres. The highest error was 20.00 dioptres for myopia and 18.00 dioptres for hypermetropia. Refractive error is common in this environment. Adequate provision should be made for its correction, bearing in mind the common types and degrees.

  9. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  10. Effects of learning climate and registered nurse staffing on medication errors.

    PubMed

    Chang, Yunkyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.

  11. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and the error sources is initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most influential factor affecting the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.

  12. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    NASA Astrophysics Data System (ADS)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.

  13. Reducing number entry errors: solving a widespread, serious problem.

    PubMed

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
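
    One simple interlock in the spirit of the paper's analysis (an illustrative sketch, not the demonstration interface described above) is to flag an entered value that differs from an expected or previously entered value by roughly a factor of ten, the signature of a miskeyed decimal point or zero:

    ```python
    import math

    def out_by_ten(entered, expected, tol=0.15):
        """Flag values that differ from the expected value by roughly a power
        of 10 (10x, 100x, ...), suggesting a slipped decimal point or an
        extra/missing zero. tol sets how close to an exact power of ten
        the ratio must be."""
        if entered <= 0 or expected <= 0:
            return False
        ratio = math.log10(entered / expected)
        k = round(ratio)
        return k != 0 and abs(ratio - k) < tol

    assert out_by_ten(50.0, 5.0)      # extra zero: 10x overdose flagged
    assert out_by_ten(0.5, 5.0)       # slipped decimal point: 10x underdose flagged
    assert not out_by_ten(5.5, 5.0)   # ordinary 10% variation passes
    ```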

  14. Video error concealment using block matching and frequency selective extrapolation algorithms

    NASA Astrophysics Data System (ADS)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both methods are evaluated on video frames with manually introduced errors. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error frames were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
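
    The quality metrics used above are standard. A minimal NumPy sketch of PSNR, plus a simplified single-window form of SSIM (real SSIM averages over local windows, so treat this as illustrative), assuming 8-bit frames:

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB for 8-bit frames."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

    def ssim_global(ref, test, peak=255.0):
        """Single-window (global) SSIM; a simplified sketch of the local-window
        metric used in practice."""
        x, y = ref.astype(np.float64), test.astype(np.float64)
        c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return ((2*mx*my + c1) * (2*cov + c2)) / \
               ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))
    ```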

  15. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study in Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed using SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis, and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  16. Error control for reliable digital data transmission and storage systems

    NASA Technical Reports Server (NTRS)

    Costello, D. J., Jr.; Deng, R. H.

    1985-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized as 32K × 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. In this paper we present some special decoding techniques for extended single- and double-error-correcting RS codes which are capable of high-speed operation. These techniques are designed to find the error locations and the error values directly from the syndrome, without having to use the iterative algorithm to find the error locator polynomial. Two codes are considered: (1) a d_min = 4 single-byte-error-correcting (SBEC), double-byte-error-detecting (DBED) RS code; and (2) a d_min = 6 double-byte-error-correcting (DBEC), triple-byte-error-detecting (TBED) RS code.
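
    The direct-from-syndrome idea can be illustrated on a small scale. The sketch below works over GF(16) rather than the byte-oriented GF(256) codes of the paper, and handles only the single-symbol-error case: with check symbols at alpha and alpha^2, the syndromes are S1 = e·alpha^j and S2 = e·alpha^(2j), so the error position and value follow from two divisions, with no iterative locator-polynomial step.

    ```python
    # GF(16) arithmetic via log/antilog tables, primitive polynomial x^4 + x + 1.
    EXP, LOG = [0] * 30, [0] * 16
    x = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = x
        LOG[x] = i
        x <<= 1
        if x & 0x10:
            x ^= 0x13               # reduce modulo x^4 + x + 1

    def gf_mul(a, b):
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def gf_div(a, b):
        return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

    def correct_single_error(received):
        """Direct syndrome decode for one symbol error: no locator polynomial."""
        S1 = S2 = 0
        for j, r in enumerate(received):
            S1 ^= gf_mul(r, EXP[j])             # r * alpha^j
            S2 ^= gf_mul(r, EXP[(2 * j) % 15])  # r * alpha^(2j)
        if S1 == 0 and S2 == 0:
            return received                     # no error detected
        if S1 == 0 or S2 == 0:
            raise ValueError("more than one symbol error; uncorrectable here")
        out = received[:]
        out[LOG[gf_div(S2, S1)]] ^= gf_div(gf_mul(S1, S1), S2)  # position, value
        return out

    word = [0] * 15                 # the all-zero word is a valid codeword
    word[6] ^= 9                    # inject error value 9 at position 6
    assert correct_single_error(word) == [0] * 15
    ```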

  17. Effect of bar-code technology on the safety of medication administration.

    PubMed

    Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K

    2010-05-06

    Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate), corresponding to a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society

  18. Understanding seasonal variability of uncertainty in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Li, M.; Wang, Q. J.

    2012-04-01

    Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with error models in a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for the model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of parameters for bias, variance and autocorrelation for each individual calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which could result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. From probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The parameters of the seasonally variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant, except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow prediction.
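
    Of the scores mentioned, the CRPS has a convenient closed form when the predictive distribution is Gaussian; a minimal sketch (not the study's implementation) follows:

    ```python
    import math

    def crps_gaussian(y, mu, sigma):
        """Continuous ranked probability score of a Gaussian forecast N(mu, sigma^2)
        against observation y; lower is better."""
        z = (y - mu) / sigma
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))

    # Sharp, well-calibrated forecasts score lower than diffuse ones:
    print(crps_gaussian(1.0, 1.0, 0.5))   # ~0.117
    print(crps_gaussian(1.0, 1.0, 2.0))   # ~0.468
    ```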

  19. The use of a contextual, modal and psychological classification of medication errors in the emergency department: a retrospective descriptive study.

    PubMed

    Cabilan, C J; Hughes, James A; Shannon, Carl

    2017-12-01

    To describe the contextual, modal and psychological classification of medication errors in the emergency department and to identify the factors associated with the reported medication errors. The causes of medication errors are unique in every clinical setting; hence, error minimisation strategies are not always effective. For this reason, it is fundamental to understand the causes specific to the emergency department so that targeted strategies can be implemented. Retrospective analysis of reported medication errors in the emergency department. All voluntarily staff-reported medication-related incidents from 2010-2015 from the hospital's electronic incident management system were retrieved for analysis. Contextual classification involved the time, place and type of medications involved. Modal classification pertained to the stage and issue (e.g. wrong medication, wrong patient). Psychological classification categorised the errors in planning (knowledge-based and rule-based errors) and skill (slips and lapses). There were 405 errors reported. Most errors occurred in the acute care area, short-stay unit and resuscitation area, during the busiest shifts (0800-1559, 1600-2259). Half of the errors involved high-alert medications. Many of the errors occurred during administration (62·7%), prescribing (28·6%) and commonly during both stages (18·5%). Wrong dose, wrong medication and omission were the issues that dominated. Knowledge-based errors characterised the errors that occurred in prescribing and administration. The highest proportions of slips (79·5%) and lapses (76·1%) occurred during medication administration. It is likely that some of the errors occurred due to a lack of adherence to safety protocols. Technology such as computerised prescribing, barcode medication administration and reminder systems could potentially decrease medication errors in the emergency department. There was a possibility that some of the errors could have been prevented if safety protocols had been adhered to, which highlights the need to also address clinicians' attitudes towards safety. Technology can be implemented to help minimise errors in the ED, but this must be coupled with efforts to enhance the culture of safety. © 2017 John Wiley & Sons Ltd.

  20. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  1. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.
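
    The two goodness-of-fit checks described above are easy to reproduce on synthetic data. A hedged sketch, assuming SciPy and a constant per-trial error probability (the hypothesized random generator); note that estimating the scale and rate from the same data makes the p-values approximate, and sparse Poisson bins would need pooling in a careful analysis:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical session: 1000 trials, each an error with constant probability 0.05.
    errors = rng.random(1000) < 0.05

    # Inter-error intervals should be ~exponential (geometric, in discrete trials).
    gaps = np.diff(np.flatnonzero(errors))
    ks = stats.kstest(gaps, "expon", args=(0, gaps.mean()))
    print(f"KS vs exponential: p = {ks.pvalue:.3f}")

    # Errors per 50-trial block should be ~Poisson: chi-square goodness of fit.
    counts = errors.reshape(20, 50).sum(axis=1)
    k = np.arange(counts.max() + 1)
    expected = stats.poisson.pmf(k, counts.mean()) * counts.size
    observed = np.bincount(counts, minlength=k.size)
    chi2, p = stats.chisquare(observed, expected * observed.sum() / expected.sum(),
                              ddof=1)   # ddof=1: the Poisson rate was estimated
    print(f"chi-square vs Poisson: p = {p:.3f}")
    ```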

  2. A Novel Error Model of Optical Systems and an On-Orbit Calibration Method for Star Sensors.

    PubMed

    Wang, Shuang; Geng, Yunhai; Jin, Rongyu

    2015-12-12

    In order to improve the on-orbit measurement accuracy of star sensors, the effects of image-plane rotary error, image-plane tilt error and distortions of optical systems resulting from the on-orbit thermal environment were studied in this paper. Since these issues affect the precision of star image point positions, a novel measurement error model based on the traditional error model is explored. Due to the orthonormal characteristics of the image-plane rotary-tilt errors and the strong nonlinearity among these error parameters, it is difficult to calibrate all the parameters simultaneously. To solve this difficulty, for the new error model, a modified two-step calibration method based on the Extended Kalman Filter (EKF) and the Least Squares Method (LSM) is presented. The former is used to calibrate the principal point drift, focal length error and distortions of the optical system, while the latter estimates the image-plane rotary-tilt errors. With this calibration method, the precision of the star image point position influenced by the above errors is greatly improved, from 15.42% to 1.389%. Finally, the simulation results demonstrate that the presented measurement error model for star sensors has higher precision. Moreover, the proposed two-step method can effectively calibrate the model error parameters, and the calibration precision of on-orbit star sensors is also improved markedly.

  3. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
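
    A minimal sketch of the keyword-screening step and the positive-predictive-value calculation described above (the note store here is a hypothetical dict of note IDs to text; the study's actual text-processing pipeline is not described at this level):

    ```python
    import re

    KEYWORDS = ["mistake", "error", "incorrect", "inadvertent", "iatrogenic"]

    def flag_notes(notes):
        """Return, per keyword, the IDs of notes matching it (case-insensitive,
        whole word). Flagged notes still need manual review: most matches are
        not true error reports, hence the low positive predictive values."""
        hits = {k: [] for k in KEYWORDS}
        for note_id, text in notes.items():
            for k in KEYWORDS:
                if re.search(rf"\b{k}\b", text, flags=re.IGNORECASE):
                    hits[k].append(note_id)
        return hits

    def ppv(flagged, confirmed):
        """Positive predictive value = confirmed true reports / flagged notes."""
        return len(set(flagged) & set(confirmed)) / len(flagged) if flagged else 0.0

    notes = {1: "Inadvertent overdose of heparin corrected.",   # hypothetical
             2: "No error; routine follow-up visit."}
    hits = flag_notes(notes)
    print(hits, ppv(hits["inadvertent"], confirmed=[1]))
    ```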

  4. Alteration of a motor learning rule under mirror-reversal transformation does not depend on the amplitude of visual error.

    PubMed

    Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi

    2015-05-01

    Human's sophisticated motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but is overcome by alterations in the error correction rule during the trials. To isolate factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the rule alteration timing between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error was tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation, and suddenly decreased after 3-5 trials with increase. The increase became degressive at different amplitude between the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule is not dependent on the amplitude of visual angular errors, and possibly determined by the number of trials over which the errors increased or statistical property of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  5. New double-byte error-correcting codes for memory systems

    NASA Technical Reports Server (NTRS)

    Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.

    1996-01-01

    Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.

  6. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
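
    The paper develops its own extension of the confidence-interval notion; for comparison, the standard exact (Clopper-Pearson) binomial interval already shows how much information a handful of observed decoding errors carries. A sketch assuming SciPy:

    ```python
    from scipy import stats

    def clopper_pearson(k, n, conf=0.95):
        """Exact two-sided confidence interval for an error probability
        estimated from k observed errors in n independent trials."""
        a = (1.0 - conf) / 2.0
        lo = 0.0 if k == 0 else stats.beta.ppf(a, k, n - k + 1)
        hi = 1.0 if k == n else stats.beta.ppf(1.0 - a, k + 1, n - k)
        return lo, hi

    # Two decoding errors observed in ten million decoding trials:
    lo, hi = clopper_pearson(2, 10_000_000)
    print(f"95% CI for block error rate: [{lo:.2e}, {hi:.2e}]")
    # Even k = 2 pins the rate within roughly [2e-8, 7e-7].
    ```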

  7. Common but unappreciated sources of error in one, two, and multiple-color pyrometry

    NASA Technical Reports Server (NTRS)

    Spjut, R. Erik

    1988-01-01

    The most common sources of error in optical pyrometry are examined. They can be classified as either noise and uncertainty errors, stray radiation errors, or speed-of-response errors. Through judicious choice of detectors and optical wavelengths the effect of noise errors can be minimized, but one should strive to determine as many of the system properties as possible. Careful consideration of the optical-collection system can minimize stray radiation errors. Careful consideration must also be given to the slowest elements in a pyrometer when measuring rapid phenomena.

  8. Errors detected in pediatric oral liquid medication doses prepared in an automated workflow management system.

    PubMed

    Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan

    2018-02-01

    The effectiveness of barcode-assisted medication preparation (BCMP) technology on detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  9. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    PubMed

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  10. Prevalence and pattern of prescription errors in a Nigerian kidney hospital.

    PubMed

    Babatunde, Kehinde M; Akinbodewa, Akinwumi A; Akinboye, Ayodele O; Adejumo, Ademola O

    2016-12-01

    To determine (i) the prevalence and pattern of prescription errors in our Centre and (ii) to appraise pharmacists' intervention and correction of identified prescription errors. A descriptive, single-blinded cross-sectional study. Kidney Care Centre is a public specialist hospital. The monthly patient load averages 60 general outpatient cases and 17.4 in-patients. A total of 31 medical doctors (comprising 2 consultant nephrologists, 15 medical officers and 14 house officers), 40 nurses and 24 ward assistants participated in the study. One pharmacist runs the daily call schedule. Prescribers were blinded to the study. Prescriptions containing only galenicals were excluded. An error detection mechanism was set up to identify and correct prescription errors. Life-threatening prescriptions were discussed with the Quality Assurance Team of the Centre, who conveyed such errors to the prescriber without revealing the ongoing study. Outcome measures were the prevalence of prescription errors, the pattern of prescription errors and the pharmacist's intervention. A total of 2,660 (75.0%) prescriptions were found to have one form of error or the other: illegitimacy 1,388 (52.18%), omission 1,221 (45.90%) and wrong dose 51 (1.92%); no error of style was detected. The rate of life-threatening errors was low (1.1-2.2%). Errors were found more commonly among junior doctors and non-medical doctors. Only 56 (1.6%) of the errors were detected and corrected during the process of dispensing. Prescription errors related to illegitimacy and omissions were highly prevalent. There is a need to improve the patient-to-healthcare-giver ratio. A medication quality assurance unit is needed in our hospitals. No financial support was received by any of the authors for this study.

  11. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually takes place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included, poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Automated error correction in IBM quantum computer and explicit generalization

    NASA Astrophysics Data System (ADS)

    Ghosh, Debjit; Agarwal, Pratik; Pandey, Pratyush; Behera, Bikash K.; Panigrahi, Prasanta K.

    2018-06-01

    Construction of a fault-tolerant quantum computer remains a challenging problem due to unavoidable noise and fragile quantum states. However, this goal can be achieved by introducing quantum error-correcting codes. Here, we experimentally realize an automated error correction code and demonstrate the nondestructive discrimination of GHZ states on the IBM 5-qubit quantum computer. After performing quantum state tomography, we obtain the experimental results with a high fidelity. Finally, we generalize the investigated code to the maximally entangled n-qudit case, which could both detect and automatically correct any arbitrary phase-change error, any phase-flip error, any bit-flip error, or a combination of all these types of error.
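
    For readers unfamiliar with automated correction, the simplest example is the three-qubit bit-flip repetition code, sketched below in Qiskit; this is an illustrative stand-in, not the GHZ-discrimination code realized in the paper. The final Toffoli gate performs the correction automatically, without a classical measurement-and-feedback step:

    ```python
    from qiskit import QuantumCircuit

    qc = QuantumCircuit(3, name="bit-flip code")
    qc.h(0)            # arbitrary single-qubit state to protect (here |+>)
    qc.cx(0, 1)        # encode: |psi> -> alpha|000> + beta|111>
    qc.cx(0, 2)
    qc.barrier()
    qc.x(0)            # deliberately inject a bit-flip error on qubit 0
    qc.barrier()
    qc.cx(0, 1)        # decode: map the error onto qubits 1 and 2 (the syndrome)
    qc.cx(0, 2)
    qc.ccx(1, 2, 0)    # automated correction: flip qubit 0 iff both syndromes fire
    print(qc.draw())
    ```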

  13. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    NASA Technical Reports Server (NTRS)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non- safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  14. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.
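
    The propagation-of-error logic is simple to reproduce. The sketch below uses hypothetical water-balance terms and measurement precisions (the actual Skylab terms and values differ) and assumes independent errors; the covariance terms the analysis found to contribute less than 10% are noted but omitted:

    ```python
    import numpy as np

    # Hypothetical daily water-balance terms (g/day): (sign, value, measurement SD).
    terms = {
        "drinking water":   (+1, 2200.0, 20.0),
        "food water":       (+1,  900.0, 30.0),
        "metabolic water":  (+1,  350.0, 25.0),
        "urine":            (-1, 1500.0, 15.0),
        "evaporative loss": (-1, 1800.0, 90.0),
        "fecal water":      (-1,  100.0, 10.0),
    }

    balance = sum(s * v for s, v, _ in terms.values())
    # Independent-error propagation: variances add regardless of sign.
    sd = np.sqrt(sum(e**2 for _, _, e in terms.values()))
    print(f"net balance = {balance:+.0f} +/- {sd:.0f} g/day")
    # With correlated terms, var(B) = sum(var) + 2*sum(cov); the covariance
    # contribution was found to be under 10% of the total in the study.
    ```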

  15. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in the setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
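
    A minimal sketch of the ANOVA-based estimators for the balanced one-factor random-effects model (patients as the random factor, fractions as replicates); the data here are simulated, not from the note:

    ```python
    import numpy as np

    def variance_components(setup):
        """ANOVA estimators for the one-factor random-effects model.
        setup: (patients x fractions) array of setup errors, balanced design.
        Returns (population mean M, systematic SD Sigma, random SD sigma)."""
        p, n = setup.shape
        patient_means = setup.mean(axis=1)
        grand_mean = setup.mean()
        msb = n * np.sum((patient_means - grand_mean) ** 2) / (p - 1)        # between
        msw = np.sum((setup - patient_means[:, None]) ** 2) / (p * (n - 1))  # within
        sigma2_sys = max((msb - msw) / n, 0.0)   # truncate at 0 if msb < msw
        return grand_mean, np.sqrt(sigma2_sys), np.sqrt(msw)

    # Simulated: 20 patients x 5 fractions with true Sigma = 2 mm, sigma = 3 mm.
    rng = np.random.default_rng(1)
    data = rng.normal(0, 2, (20, 1)) + rng.normal(0, 3, (20, 5))
    print(variance_components(data))
    ```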

  16. Quasi-eccentricity error modeling and compensation in vision metrology

    NASA Astrophysics Data System (ADS)

    Shen, Yijun; Zhang, Xu; Cheng, Wei; Zhu, Limin

    2018-04-01

    Circular targets are commonly used in vision applications for their detection accuracy and robustness. The eccentricity error of the circular target caused by perspective projection is one of the main factors of measurement error and needs to be compensated in high-accuracy measurement. In this study, the impact of lens distortion on the eccentricity error is comprehensively investigated. The traditional eccentricity error becomes a quasi-eccentricity error in the non-linear camera model. The quasi-eccentricity error model is established by comparing the quasi-center of the distorted ellipse with the true projection of the object circle center. Then, an eccentricity error compensation framework is proposed which compensates for the error by iteratively refining the image point to the true projection of the circle center. Both simulation and real experiments confirm the effectiveness of the proposed method in several vision applications.

  17. A cascaded coding scheme for error control and its performance analysis

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo

    1986-01-01

    A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft down-link error control.

  18. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  19. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
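
    The flavor of the technique is easy to convey with a bare-bones interval type (a sketch only: unlike INTLAB, it ignores outward rounding and the dependency problem, so the bounds are illustrative rather than rigorous):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float
        def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
        def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
        def __mul__(self, o):
            p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
            return Interval(min(p), max(p))
        def __truediv__(self, o):
            assert o.lo > 0 or o.hi < 0, "divisor interval must exclude 0"
            return self * Interval(1.0 / o.hi, 1.0 / o.lo)

    # Propagate instrument tolerances through P = I^2 * R:
    R = Interval(99.0, 101.0)       # ohms, +/-1 (hypothetical tolerance)
    I = Interval(1.98, 2.02)        # amperes, +/-0.02
    P = I * I * R
    print(f"P in [{P.lo:.1f}, {P.hi:.1f}] W")   # guaranteed enclosure of the result
    ```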

  20. Heuristic errors in clinical reasoning.

    PubMed

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.
