Sample records for achieve chemical accuracy

  1. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol(-1)) is achieved for all 92 calculated organic Y-NO homolysis BDE calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol(-1) to 0.15 and 0.18 kcal·mol(-1), respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol(-1). This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.
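
    The core of the workflow described above is a radial basis function regression that learns the DFT-versus-experiment error from molecular descriptors. Below is a minimal sketch of that idea; the descriptors, BDE values, and RBF width are synthetic placeholders, not the 92-molecule data set or the authors' SOFM-selected inputs.

```python
# Illustrative only: a Gaussian RBF network that learns a correction term for
# DFT homolysis BDEs from descriptors. All data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 6))               # hypothetical descriptor matrix
bde_dft = rng.normal(60.0, 5.0, size=92)   # hypothetical B3LYP BDEs (kcal/mol)
bde_exp = bde_dft + X @ rng.normal(size=6) + rng.normal(0.0, 0.1, size=92)

def rbf_design(X, centers, gamma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

centers = X[::4]                           # every 4th molecule as an RBF center
Phi = rbf_design(X, centers, gamma=0.1)
w, *_ = np.linalg.lstsq(Phi, bde_exp - bde_dft, rcond=None)  # fit the correction

bde_corrected = bde_dft + Phi @ w
print(f"MAD before: {np.abs(bde_dft - bde_exp).mean():.2f} kcal/mol, "
      f"after RBF correction: {np.abs(bde_corrected - bde_exp).mean():.2f} kcal/mol")
```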

  2. Use of Chemical Inventory Accuracy Measurements as Leading Indicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quigley, David; Freshwater, David; Alnajjar, Mikhail S.

    2012-05-15

    Chemical safety and lifecycle management (CSLM) is a process that involves managing chemicals and chemical information from the moment someone begins to order a chemical and lasts through final disposition(1). Central to CSLM is tracking data associated with chemicals which, for the purposes of this paper, is termed the chemical inventory. Examples of data that could be tracked include chemical identity, location, quantity, date procured, container type, and physical state. The reason why so much data is tracked is that the chemical inventory supports many functions. These functions include emergency management, which depends upon the data to more effectively plan for, and respond to, chemical accidents; environmental management that uses inventory information to aid in the generation of various federally-mandated and other regulatory reports; and chemical management that uses the information to increase the efficiency and safety with which chemicals are stored and utilized. All of the benefits of having an inventory are predicated upon having an inventory that is reasonably accurate. Because of the importance of ensuring one's chemical inventory is accurate, many have become concerned about measuring inventory accuracy. But beyond providing a measure of confidence in information gleaned from the inventory, does the inventory accuracy measurement provide any additional function? The answer is 'Yes'. It provides valuable information that can be used as a leading indicator to gauge the health of a chemical management system. In this paper, we will discuss: (1) what properties make leading indicators effective, (2) how chemical inventories can be used as a leading indicator, (3) how chemical inventory accuracy can be measured, what levels of accuracies should realistically be expected in a healthy system, and (4) what a subpar inventory accuracy measurement portends.

  3. A Promising Tool to Achieve Chemical Accuracy for Density Functional Theory Calculations on Y-NO Homolysis Bond Dissociation Energies

    PubMed Central

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 calculated organic Y-NO homolysis BDE calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules. PMID:22942689

  4. Photon caliper to achieve submillimeter positioning accuracy

    NASA Astrophysics Data System (ADS)

    Gallagher, Kyle J.; Wong, Jennifer; Zhang, Junan

    2017-09-01

    The purpose of this study was to demonstrate the feasibility of using a commercial two-dimensional (2D) detector array with an inherent detector spacing of 5 mm to achieve submillimeter accuracy in localizing the radiation isocenter. This was accomplished by delivering the Vernier ‘dose’ caliper to a 2D detector array where the nominal scale was the 2D detector array and the non-nominal Vernier scale was the radiation dose strips produced by the high-definition (HD) multileaf collimators (MLCs) of the linear accelerator. Because the HD MLC sequence was similar to the picket fence test, we called this procedure the Vernier picket fence (VPF) test. We confirmed the accuracy of the VPF test by offsetting the HD MLC bank by known increments and comparing the known offset with the VPF test result. The VPF test was able to determine the known offset within 0.02 mm. We also cross-validated the accuracy of the VPF test in an evaluation of couch hysteresis. This was done by using both the VPF test and the ExacTrac optical tracking system to evaluate the couch position. We showed that the VPF test was in agreement with the ExacTrac optical tracking system within a root-mean-square value of 0.07 mm for both the lateral and longitudinal directions. In conclusion, we demonstrated the VPF test can determine the offset between a 2D detector array and the radiation isocenter with submillimeter accuracy. Until now, no method to locate the radiation isocenter using a 2D detector array has been able to achieve such accuracy.
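
    The Vernier idea in this abstract is that strip centers measured on a coarse 5 mm detector grid can still be located to a small fraction of the pitch. The toy calculation below illustrates that principle only; the strip width, window size, and dose model are invented and are not the published VPF analysis.

```python
# Toy illustration: recover a sub-millimeter offset between nominal MLC strip
# centers and a detector array with 5 mm spacing. Not the published VPF code.
import numpy as np

pitch = 5.0                                                  # detector spacing (mm)
detectors = np.arange(-100.0, 100.0 + pitch, pitch)
nominal_strips = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])   # hypothetical strip centers
true_offset = 0.37                                           # mm, value to recover

def strip_profile(x, center, width=4.0):
    return np.exp(-0.5 * ((x - center) / width) ** 2)        # idealized dose strip

signal = sum(strip_profile(detectors, c + true_offset) for c in nominal_strips)

estimates = []
for c in nominal_strips:
    window = np.abs(detectors - c) < 12.0                    # detectors near this strip
    centroid = np.average(detectors[window], weights=signal[window])
    estimates.append(centroid - c)

print(f"recovered offset: {np.mean(estimates):.2f} mm (true {true_offset} mm)")
```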

  5. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1982-01-01

    The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm.

  6. Preliminary study of GPS orbit determination accuracy achievable from worldwide tracking data

    NASA Technical Reports Server (NTRS)

    Larden, D. R.; Bender, P. L.

    1983-01-01

    The improvement in the orbit accuracy if high accuracy tracking data from a substantially larger number of ground stations is available was investigated. Observations from 20 ground stations indicate that 20 cm or better accuracy can be achieved for the horizontal coordinates of the GPS satellites. With this accuracy, the contribution to the error budget for determining 1000 km baselines by GPS geodetic receivers would be only about 1 cm. Previously announced in STAR as N83-14605
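
    The quoted numbers are consistent with a common rule of thumb in which the baseline error scales as the orbit error times the ratio of baseline length to satellite range; that proportionality is an assumption used here for illustration, not a statement from the abstract.

```python
# Back-of-the-envelope check of the quoted figures using the rule of thumb
# baseline_error ~ (baseline / satellite_range) * orbit_error (an assumption).
baseline_km = 1000.0
gps_range_km = 20200.0        # approximate GPS orbital altitude
orbit_error_cm = 20.0

baseline_error_cm = baseline_km / gps_range_km * orbit_error_cm
print(f"implied baseline error: {baseline_error_cm:.1f} cm")   # about 1 cm
```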

  7. Interactional Effects of Instructional Quality and Teacher Judgement Accuracy on Achievement.

    ERIC Educational Resources Information Center

    Helmke, Andreas; Schrader, Friedrich-Wilhelm

    1987-01-01

    Analysis of predictions of 32 teachers regarding 690 fifth-graders' scores on a mathematics achievement test found that the combination of high judgement accuracy with varied instructional techniques was particularly favorable to students in contrast to a combination of high diagnostic sensitivity with a low frequency of cues or individual…

  8. The relation between children's accuracy estimates of their physical competence and achievement-related characteristics.

    PubMed

    Weiss, M R; Horn, T S

    1990-09-01

    The relationship between perceptions of competence and control, achievement, and motivated behavior in youth sport has been a topic of considerable interest. The purpose of this study was to examine whether children who are under-, accurate, or overestimators of their physical competence differ in their achievement characteristics. Children (N = 133), 8 to 13 years of age, who were attending a summer sport program, completed a series of questionnaires designed to assess perceptions of competence and control, motivational orientation, and competitive trait anxiety. Measures of physical competence were obtained by teachers' ratings that paralleled the children's measure of perceived competence. Perceived competence and teachers' ratings were standardized by grade level, and an accuracy score was computed from the difference between these scores. Children were then categorized as underestimators, accurate raters, or overestimators according to upper and lower quartiles of this distribution. A 2 x 2 x 3 (age level by gender by accuracy) MANCOVA revealed a significant gender by accuracy interaction. Underestimating girls were lower in challenge motivation, higher in trait anxiety, and more external in their control perceptions than accurate or overestimators. Underestimating boys were higher in perceived unknown control than accurate and overestimating boys. It was concluded that children who seriously underestimate their perceived competence may be likely candidates for discontinuation of sport activities or low levels of physical achievement.

  9. Accuracy of Self-Reported College GPA: Gender-Moderated Differences by Achievement Level and Academic Self-Efficacy

    ERIC Educational Resources Information Center

    Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.

    2014-01-01

    Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…

  10. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J.; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; et al.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  11. Motivational Factors Contributing to Turkish High School Students' Achievement in Gases and Chemical Reactions

    ERIC Educational Resources Information Center

    Kadioglu, Cansel; Uzuntiryaki, Esen

    2008-01-01

    This study aimed to investigate the contribution of motivational factors to 10th grade students' achievement in gases and chemical reactions in chemistry. Three hundred fifty nine 10th grade students participated in the study. The Gases and Chemical Reactions Achievement Test and the Motivated Strategies for Learning Questionnaire were…

  12. Construction and accuracy of partial differential equation approximations to the chemical master equation.

    PubMed

    Grima, Ramon

    2011-11-01

    The mesoscopic description of chemical kinetics, the chemical master equation, can be exactly solved in only a few simple cases. The analytical intractability stems from the discrete character of the equation, and hence considerable effort has been invested in the development of Fokker-Planck equations, second-order partial differential equation approximations to the master equation. We here consider two different types of higher-order partial differential approximations, one derived from the system-size expansion and the other from the Kramers-Moyal expansion, and derive the accuracy of their predictions for chemical reactive networks composed of arbitrary numbers of unimolecular and bimolecular reactions. In particular, we show that the partial differential equation approximation of order Q from the Kramers-Moyal expansion leads to estimates of the mean number of molecules accurate to order Ω(-(2Q-3)/2), of the variance of the fluctuations in the number of molecules accurate to order Ω(-(2Q-5)/2), and of skewness accurate to order Ω(-(Q-2)). We also show that for large Q, the accuracy in the estimates can be matched only by a partial differential equation approximation from the system-size expansion of approximate order 2Q. Hence, we conclude that partial differential approximations based on the Kramers-Moyal expansion generally lead to considerably more accurate estimates in the mean, variance, and skewness than approximations of the same order derived from the system-size expansion.
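
    The stated convergence rates are easiest to read off numerically. The snippet below simply evaluates the quoted exponents of the system size Ω for a few Kramers-Moyal truncation orders Q; it adds nothing beyond the formulas in the abstract.

```python
# Evaluate the accuracy orders quoted above for Kramers-Moyal PDE approximations
# of order Q (exponents of the system size Omega).
for Q in (3, 4, 5):
    print(f"Q={Q}: mean O(Omega^{-(2 * Q - 3) / 2}), "
          f"variance O(Omega^{-(2 * Q - 5) / 2}), "
          f"skewness O(Omega^{-(Q - 2)}), "
          f"matched by system-size expansion of order ~{2 * Q}")
```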

  13. Chemical accuracy from quantum Monte Carlo for the benzene dimer.

    PubMed

    Azadi, Sam; Cohen, R E

    2015-09-14

    We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of -2.3(4) and -2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete basis set limit is -2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.

  14. Quantum chemical modeling of zeolite-catalyzed methylation reactions: toward chemical accuracy for barriers.

    PubMed

    Svelle, Stian; Tuma, Christian; Rozanska, Xavier; Kerber, Torsten; Sauer, Joachim

    2009-01-21

    The methylation of ethene, propene, and t-2-butene by methanol over the acidic microporous H-ZSM-5 catalyst has been investigated by a range of computational methods. Density functional theory (DFT) with periodic boundary conditions (PBE functional) fails to describe the experimentally determined decrease of apparent energy barriers with the alkene size due to inadequate description of dispersion forces. Adding a damped dispersion term expressed as a parametrized sum over atom pair C(6) contributions leads to uniformly underestimated barriers due to self-interaction errors. A hybrid MP2:DFT scheme is presented that combines MP2 energy calculations on a series of cluster models of increasing size with periodic DFT calculations, which allows extrapolation to the periodic MP2 limit. Additionally, errors caused by the use of finite basis sets, contributions of higher order correlation effects, zero-point vibrational energy, and thermal contributions to the enthalpy were evaluated and added to the "periodic" MP2 estimate. This multistep approach leads to enthalpy barriers at 623 K of 104, 77, and 48 kJ/mol for ethene, propene, and t-2-butene, respectively, which deviate from the experimentally measured values by 0, +13, and +8 kJ/mol. Hence, enthalpy barriers can be calculated with near chemical accuracy, which constitutes significant progress in the quantum chemical modeling of reactions in heterogeneous catalysis in general and microporous zeolites in particular.
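
    The hybrid MP2:DFT scheme amounts to adding, to the periodic DFT barrier, the MP2-minus-DFT difference extrapolated from clusters of increasing size. The sketch below shows only that bookkeeping step; every number in it is a made-up placeholder rather than a value from the paper.

```python
# Schematic hybrid MP2:DFT combination: periodic DFT barrier plus the MP2-DFT
# cluster correction extrapolated (here linearly in 1/N) to the periodic limit.
# All energies are invented placeholders.
import numpy as np

cluster_atoms = np.array([20, 40, 60, 80])             # hypothetical cluster sizes
e_mp2_cluster = np.array([95.0, 90.0, 87.5, 86.2])     # kJ/mol (MP2, clusters)
e_dft_cluster = np.array([104.0, 101.0, 99.8, 99.1])   # kJ/mol (DFT, same clusters)
e_dft_periodic = 98.0                                   # kJ/mol (periodic DFT)

delta = e_mp2_cluster - e_dft_cluster
slope, intercept = np.polyfit(1.0 / cluster_atoms, delta, 1)   # extrapolate to 1/N -> 0
e_hybrid = e_dft_periodic + intercept
print(f"extrapolated MP2 correction: {intercept:.1f} kJ/mol, hybrid barrier: {e_hybrid:.1f} kJ/mol")
```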

  15. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    PubMed

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53: 125-138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of (1)H-(15)N 2D HSQC NMR spectra acquired using precise protein-ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ~ 3,000 s(-1) in this work, the accuracy of classical line shape analysis was determined to be better than 5 % by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13 %, in agreement with the theoretical precision of 12 % from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s(-1). The validity of line shape analysis for k_off values approaching intermediate exchange (~100 s(-1)), may be facilitated by more accurate K_D measurements
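
    In the fast-exchange limit the observed shift is the population-weighted average of the free and bound shifts, which is what makes K_D accessible from a titration. The fit below uses that textbook 1:1 binding model with synthetic data; the protein concentration, shift values, and noise level are assumptions, and it is not the authors' co-variation protocol or line shape analysis.

```python
# Minimal 1:1 fast-exchange chemical shift titration fit on synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def delta_obs(L_tot, K_D, d_free, d_bound, P_tot=0.1):            # concentrations in mM
    b = P_tot + L_tot + K_D
    PL = (b - np.sqrt(np.maximum(b**2 - 4.0 * P_tot * L_tot, 0.0))) / 2.0
    return d_free + (d_bound - d_free) * PL / P_tot               # population-weighted shift

L_tot = np.linspace(0.0, 2.0, 15)                                 # ligand points (mM)
rng = np.random.default_rng(1)
data = delta_obs(L_tot, 0.25, 8.10, 8.45) + rng.normal(0.0, 0.002, L_tot.size)

popt, _ = curve_fit(delta_obs, L_tot, data, p0=[0.1, 8.0, 8.5])
print(f"fitted K_D = {popt[0] * 1e3:.0f} uM (true 250 uM)")
```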

  16. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  17. Student Misconceptions in Chemical Equilibrium as Related to Cognitive Level and Achievement.

    ERIC Educational Resources Information Center

    Wheeler, Alan E.; Kass, Heidi

    Reported is an investigation to determine the nature and extent of student misconceptions in chemical equilibrium and to ascertain the degree to which certain misconceptions are related to chemistry achievement and to performance on specific tasks involving cognitive transformations characteristic of the concrete and formal operational stages of…

  18. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection.

    PubMed

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-09-01

    Impression materials during impression procedure often get infected with various infectious diseases. Hence, disinfection of impression materials with various disinfectants is advised to protect the dental team. Disinfection can alter the dimensional accuracy of impression materials. The present study was aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfectants; autoclave, chemical, and microwave method. The impression materials used for the study were, dentsply aquasil (addition silicone polyvinylsiloxane syringe and putty), zetaplus (condensation silicone putty and light body), and impregum penta soft (polyether). All impressions were made according to manufacturer's instructions. Dimensional changes were measured before and after different disinfection procedures. Dentsply aquasil showed smallest dimensional change (-0.0046%) and impregum penta soft highest linear dimensional changes (-0.026%). All the tested elastomeric impression materials showed some degree of dimensional changes. The present study showed that all the disinfection procedures produce minor dimensional changes of impression material. However, it was within American Dental Association specification. Hence, steam autoclaving and microwave method can be used as an alternative method to chemical sterilization as an effective method.

  19. Chemical Achievers: The Human Face of the Chemical Sciences (by Mary Ellen Bowden)

    NASA Astrophysics Data System (ADS)

    Kauffman, George B.

    1999-02-01

    Chemical Heritage Foundation: Philadelphia, PA, 1997. viii + 180 pp. 21.6 x 27.8 cm. ISBN 0-941901-15-1. Paper. $20.00 ($10.00 for high school teachers who provide documentation). At a 1991 summer workshop sponsored by the Chemical Heritage Foundation and taught by Derek A. Davenport and William B. Jensen, high school and college teachers of introductory chemistry requested a source of pictorial material about famous chemical scientists suitable as a classroom aid. CHF responded by publishing this attractive, inexpensive paperback volume, which reflects the considerable research effort needed to locate appropriate images and to write the biographical essays. Printed on heavy, glossy paper and spiral bound to facilitate conversion to overhead transparencies, it contains 157 images from pictorial collections at CHF and many other institutions on two types of achievers: the historical "greats" most often referred to in introductory courses, and scientists who made contributions in areas of the chemical sciences that are of special relevance to modern life and the career choices students will make. The pictures are intended to provide the "human face" of the book's subtitle: "to point to the human beings who had the insights and made the major advances that [teachers] ask students to master." Thus, for example, Boyle's law becomes less cold and abstract if the student can connect it with the two portraits of the Irish scientist even if his face is topped with a wig. Marie Curie can be seen in the role of wife and mother as well as genius scientist in the photographs of her with her two daughters, one of whom also became a Nobel laureate. And students are reminded of the ubiquity of the contribution of the chemical scientists to all aspects of our everyday life by the stories and pictures of Wallace Hume Carothers' path to nylon, Percy Lavon Julian's work on hormones, and Charles F. Chandler and Rachel Carson's efforts to preserve the environment. In addition to portraits

  20. Comparative Evaluation of Dimensional Accuracy of Elastomeric Impression Materials when Treated with Autoclave, Microwave, and Chemical Disinfection

    PubMed Central

    Kamble, Suresh S; Khandeparker, Rakshit Vijay; Somasundaram, P; Raghav, Shweta; Babaji, Rashmi P; Varghese, T Joju

    2015-01-01

    Background: Impression materials during impression procedure often get infected with various infectious diseases. Hence, disinfection of impression materials with various disinfectants is advised to protect the dental team. Disinfection can alter the dimensional accuracy of impression materials. The present study was aimed to evaluate the dimensional accuracy of elastomeric impression materials when treated with different disinfectants; autoclave, chemical, and microwave method. Materials and Methods: The impression materials used for the study were, dentsply aquasil (addition silicone polyvinylsiloxane syringe and putty), zetaplus (condensation silicone putty and light body), and impregum penta soft (polyether). All impressions were made according to manufacturer’s instructions. Dimensional changes were measured before and after different disinfection procedures. Result: Dentsply aquasil showed smallest dimensional change (−0.0046%) and impregum penta soft highest linear dimensional changes (−0.026%). All the tested elastomeric impression materials showed some degree of dimensional changes. Conclusion: The present study showed that all the disinfection procedures produce minor dimensional changes of impression material. However, it was within American Dental Association specification. Hence, steam autoclaving and microwave method can be used as an alternative method to chemical sterilization as an effective method. PMID:26435611

  21. Estimating Achievable Accuracy for Global Imaging Spectroscopy Measurement of Non-Photosynthetic Vegetation Cover

    NASA Astrophysics Data System (ADS)

    Dennison, P. E.; Kokaly, R. F.; Daughtry, C. S. T.; Roberts, D. A.; Thompson, D. R.; Chambers, J. Q.; Nagler, P. L.; Okin, G. S.; Scarth, P.

    2016-12-01

    Terrestrial vegetation is dynamic, expressing seasonal, annual, and long-term changes in response to climate and disturbance. Phenology and disturbance (e.g. drought, insect attack, and wildfire) can result in a transition from photosynthesizing "green" vegetation to non-photosynthetic vegetation (NPV). NPV cover can include dead and senescent vegetation, plant litter, agricultural residues, and non-photosynthesizing stem tissue. NPV cover is poorly captured by conventional remote sensing vegetation indices, but it is readily separable from substrate cover based on spectral absorption features in the shortwave infrared. We will present past research motivating the need for global NPV measurements, establishing that mapping seasonal NPV cover is critical for improving our understanding of ecosystem function and carbon dynamics. We will also present new research that helps determine a best achievable accuracy for NPV cover estimation. To test the sensitivity of different NPV cover estimation methods, we simulated satellite imaging spectrometer data using field spectra collected over mixtures of NPV, green vegetation, and soil substrate. We incorporated atmospheric transmittance and modeled sensor noise to create simulated spectra with spectral resolutions ranging from 10 to 30 nm. We applied multiple methods of NPV estimation to the simulated spectra, including spectral indices, spectral feature analysis, multiple endmember spectral mixture analysis, and partial least squares regression, and compared the accuracy and bias of each method. These results prescribe sensor characteristics for an imaging spectrometer mission with NPV measurement capabilities, as well as a "Quantified Earth Science Objective" for global measurement of NPV cover. Copyright 2016, all rights reserved.
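
    Several of the estimation methods compared above reduce, at their simplest, to linear spectral unmixing of a measured spectrum against green-vegetation, NPV, and soil endmembers. The sketch below shows that baseline approach with invented endmember spectra; it is not the study's simulation pipeline.

```python
# Toy linear spectral mixture analysis: estimate GV / NPV / soil cover fractions
# from a mixed reflectance spectrum. Endmember spectra are invented placeholders.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(400.0, 2400.0, 50)                      # nm
gv = 0.05 + 0.40 * np.exp(-((wavelengths - 800.0) / 300.0) ** 2)
npv = 0.10 + 0.0002 * (wavelengths - 400.0)                       # bright, sloping litter-like
soil = 0.15 + 0.0001 * (wavelengths - 400.0)
E = np.column_stack([gv, npv, soil])                              # endmember matrix

true_fractions = np.array([0.3, 0.5, 0.2])
mixed = E @ true_fractions + np.random.default_rng(2).normal(0.0, 0.002, wavelengths.size)

fractions, _ = nnls(E, mixed)                                     # non-negative least squares
fractions /= fractions.sum()                                      # impose sum-to-one
print("estimated GV/NPV/soil fractions:", np.round(fractions, 2))
```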

  22. Integrative Chemical-Biological Read-Across Approach for Chemical Hazard Classification

    PubMed Central

    Low, Yen; Sedykh, Alexander; Fourches, Denis; Golbraikh, Alexander; Whelan, Maurice; Rusyn, Ivan; Tropsha, Alexander

    2013-01-01

    Traditional read-across approaches typically rely on the chemical similarity principle to predict chemical toxicity; however, the accuracy of such predictions is often inadequate due to the underlying complex mechanisms of toxicity. Here we report on the development of a hazard classification and visualization method that draws upon both chemical structural similarity and comparisons of biological responses to chemicals measured in multiple short-term assays (”biological” similarity). The Chemical-Biological Read-Across (CBRA) approach infers each compound's toxicity from those of both chemical and biological analogs whose similarities are determined by the Tanimoto coefficient. Classification accuracy of CBRA was compared to that of classical RA and other methods using chemical descriptors alone, or in combination with biological data. Different types of adverse effects (hepatotoxicity, hepatocarcinogenicity, mutagenicity, and acute lethality) were classified using several biological data types (gene expression profiling and cytotoxicity screening). CBRA-based hazard classification exhibited consistently high external classification accuracy and applicability to diverse chemicals. Transparency of the CBRA approach is aided by the use of radial plots that show the relative contribution of analogous chemical and biological neighbors. Identification of both chemical and biological features that give rise to the high accuracy of CBRA-based toxicity prediction facilitates mechanistic interpretation of the models. PMID:23848138
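
    The read-across step itself is simple: score a query compound against neighbors by Tanimoto similarity in both the chemical and biological domains and pool the neighbors' known labels. The sketch below shows that nearest-neighbor idea with random fingerprints; the equal weighting of the two domains and the bit lengths are assumptions, not the CBRA implementation.

```python
# Sketch of similarity-based read-across with Tanimoto similarity on binary
# fingerprints; fingerprints and labels are random placeholders.
import numpy as np

def tanimoto(a, b):
    both = np.logical_and(a, b).sum()
    either = np.logical_or(a, b).sum()
    return both / either if either else 0.0

rng = np.random.default_rng(3)
chem_fp = rng.integers(0, 2, size=(50, 128))    # hypothetical chemical fingerprints
bio_fp = rng.integers(0, 2, size=(50, 32))      # hypothetical bioassay "fingerprints"
toxic = rng.integers(0, 2, size=50)             # known hazard labels of the neighbors

def read_across(query_chem, query_bio, k=5):
    sims = np.array([(tanimoto(query_chem, c) + tanimoto(query_bio, b)) / 2.0
                     for c, b in zip(chem_fp, bio_fp)])
    nearest = np.argsort(sims)[-k:]             # k most similar neighbors
    return toxic[nearest].mean()                # fraction of toxic neighbors as the score

print(f"predicted hazard score: {read_across(rng.integers(0, 2, 128), rng.integers(0, 2, 32)):.2f}")
```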

  23. The effect of inquiry-flipped classroom model toward students' achievement on chemical reaction rate

    NASA Astrophysics Data System (ADS)

    Paristiowati, Maria; Fitriani, Ella; Aldi, Nurul Hanifah

    2017-08-01

    The aim of this research is to find out the effect of the Inquiry-Flipped Classroom Model on students' achievement on the chemical reaction rate topic. This study was conducted with eleventh graders at SMA Negeri 3 Tangerang. The quasi-experimental method with a non-equivalent control group design was implemented in this study. A sample of 72 students was selected by purposive sampling. Students in the experimental group learned through the inquiry-flipped classroom model, while students in the control group learned through the guided inquiry learning model. Based on the data analysis, there is a significant difference in the average achievement of the students. The average achievement of the students in the inquiry-flipped classroom model was 83.44 and the average achievement of the students in the guided inquiry learning model was 74.06. It can be concluded that students' achievement with the inquiry-flipped classroom was better than with guided inquiry. The difference in students' achievement was significant by t-test, with t_obs = 3.056 > t_table = 1.994 (α = 0.005).
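
    The reported critical value can be reproduced as a sanity check: assuming a two-tailed test at the 5% level with roughly 70 degrees of freedom (two groups of 36), scipy gives 1.994. Both the significance level and the degrees of freedom are assumptions made here for illustration; the abstract itself quotes α = 0.005.

```python
# Reproduce the quoted critical value under the assumptions stated above.
from scipy import stats

t_obs, df = 3.056, 70
t_crit = stats.t.ppf(1.0 - 0.05 / 2.0, df)      # two-tailed 5% cutoff
print(f"t_crit = {t_crit:.3f}; significant: {t_obs > t_crit}")
```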

  24. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  25. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

    The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal to noise ratio (SNR), the noise being determined by the LO power in the shot noise limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random walk like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a
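
    The practical consequence of "frequency deviations on the order of the inverse pulse duration" is easy to quantify. The numbers below (wavelength and pulse length) are assumed example values for a coherent Doppler lidar, not parameters from the abstract.

```python
# Rough single-shot velocity spread implied by a frequency spread of ~1/pulse
# duration; wavelength and pulse length are assumed example values.
wavelength_m = 2.0e-6          # hypothetical 2-um coherent Doppler lidar
pulse_duration_s = 0.5e-6      # hypothetical 0.5-us pulse

freq_spread_hz = 1.0 / pulse_duration_s                      # ~ inverse pulse duration
velocity_spread_ms = wavelength_m * freq_spread_hz / 2.0     # Doppler shift f = 2 v / lambda
print(f"single-shot velocity spread ~ {velocity_spread_ms:.1f} m/s")
```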

  26. FREE INVENTORY PLATFORM MANAGES CHEMICAL RISKS, ADDRESSES CHEMICAL ACCOUNTABILITY, AND MEASURES COST-EFFECTIVENESS

    PubMed Central

    D’Souza, Malcolm J.; Roeske, Kristopher P.; Neff, Lily S.

    2017-01-01

    To develop best practices for laboratory safety and for chemical product and supplies management accountability, the freely-available online platform, Quartzy, was integrated within an interdisciplinary science department at a small Mid-Atlantic liberal-arts college. This was done to ensure the accuracy of purchase records, the appropriate use of storage and handling protocols, and for a continually updated chemical inventory system. Quartzy also facilitated the digital tracking and dispersal of the College’s hazardous waste inventory. Since the implementation of the Quartzy platform, the science department achieved significant cost-savings during the procurement of laboratory supplies and equipment, and it developed a sense of ownership towards the common goal of lowering the College's environmental impact as it relates to its managing of laboratory-generated hazardous wastes. PMID:29251298

  27. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large- scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into the 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with relatively small number of elements both for isoviscous cases (i.e. 7492 AMR elements v.s. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements v.s. 65536 uniform elements). We further implement tracer-method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with much fewer elements and tracers compared with uniform-mesh models (i.e. 7552 AMR elements v.s. 16384 uniform elements, and ~83000 tracers v.s. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to the scales of a few kilometers for the Earth's mantle and the tracers are concentrated near the chemical boundaries to precisely trace the evolvement of the boundaries. It is thus very suitable for our AMR code to study the thermal-chemical convection problems which need high resolution to resolve the evolvement of chemical boundaries, such as the entrainment problems [Sleep, 1988].
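
    The refinement decision at the heart of AMR is local: flag only the cells where the solution varies sharply (here, the thermal or chemical boundary) and subdivide those. The toy criterion below illustrates that idea on a uniform 2-D grid; it is not the octree implementation of Tu et al. cited above.

```python
# Toy AMR-style refinement criterion: flag cells whose temperature gradient
# exceeds a threshold. Illustrative only; not the cited octree implementation.
import numpy as np

n = 64
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x)
# a sharp, wavy thermal boundary layer as a stand-in for a chemical/thermal interface
T = 0.5 * (1.0 + np.tanh((Y - 0.5 - 0.05 * np.sin(4.0 * np.pi * X)) / 0.02))

gy, gx = np.gradient(T, x, x)
grad_mag = np.hypot(gx, gy)
refine = grad_mag > 0.5 * grad_mag.max()
print(f"cells flagged for refinement: {refine.sum()} of {n * n} ({100.0 * refine.mean():.1f}%)")
```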

  28. NMRDSP: an accurate prediction of protein shape strings from NMR chemical shifts and sequence data.

    PubMed

    Mao, Wusong; Cong, Peisheng; Wang, Zhiheng; Lu, Longjian; Zhu, Zhongliang; Li, Tonghua

    2013-01-01

    Shape string is structural sequence and is an extremely important structure representation of protein backbone conformations. Nuclear magnetic resonance chemical shifts give a strong correlation with the local protein structure, and are exploited to predict protein structures in conjunction with computational approaches. Here we demonstrate a novel approach, NMRDSP, which can accurately predict the protein shape string based on nuclear magnetic resonance chemical shifts and structural profiles obtained from sequence data. The NMRDSP uses six chemical shifts (HA, H, N, CA, CB and C) and eight elements of structure profiles as features, a non-redundant set (1,003 entries) as the training set, and a conditional random field as a classification algorithm. For an independent testing set (203 entries), we achieved an accuracy of 75.8% for S8 (the eight states accuracy) and 87.8% for S3 (the three states accuracy). This is higher than only using chemical shifts or sequence data, and confirms that the chemical shift and the structure profile are significant features for shape string prediction and their combination prominently improves the accuracy of the predictor. We have constructed the NMRDSP web server and believe it could be employed to provide a solid platform to predict other protein structures and functions. The NMRDSP web server is freely available at http://cal.tongji.edu.cn/NMRDSP/index.jsp.

  29. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of missions report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  30. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; ...

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
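
    The "Bag of Bonds" representation groups pairwise Coulomb terms Z_i·Z_j/r_ij by element pair, sorts each bag, and zero-pads it to a fixed length before regression. The sketch below shows that featurization with random geometries and placeholder energies, and uses an off-the-shelf RBF kernel ridge model as a stand-in for the kernel machinery in the paper.

```python
# Schematic Bag-of-Bonds featurization plus kernel ridge regression.
# Geometries and target energies are random placeholders, not QM data.
import numpy as np
from itertools import combinations
from sklearn.kernel_ridge import KernelRidge

def bag_of_bonds(Z, R, bag_keys, bag_size=12):
    bags = {k: [] for k in bag_keys}
    for i, j in combinations(range(len(Z)), 2):
        key = tuple(sorted((Z[i], Z[j])))
        bags[key].append(Z[i] * Z[j] / np.linalg.norm(R[i] - R[j]))  # Coulomb term
    feats = []
    for k in bag_keys:
        v = sorted(bags[k], reverse=True)[:bag_size]
        feats.extend(v + [0.0] * (bag_size - len(v)))                # sort and zero-pad
    return np.array(feats)

rng = np.random.default_rng(4)
Z = [6, 6, 8, 1, 1, 1]                                               # toy fixed stoichiometry
bag_keys = sorted({tuple(sorted(p)) for p in combinations(Z, 2)})
mols = [rng.normal(scale=1.5, size=(len(Z), 3)) for _ in range(40)]  # random geometries
X = np.array([bag_of_bonds(Z, R, bag_keys) for R in mols])
y = rng.normal(size=40)                                              # placeholder energies

model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1e-3).fit(X[:30], y[:30])
print("feature matrix shape:", X.shape)
print("test predictions:", np.round(model.predict(X[30:35]), 2))
```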

  31. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  32. THE IMPORTANCE OF SPATIAL ACCURACY FOR CHEMICAL INFORMATION MANAGEMENT

    EPA Science Inventory

    Information about chemicals can be critical to making timely decisions. The results of these decisions may not be realized for many years. In order to increase the value of chemical information and to create and utilize meaningful environmental models, the Environmental Prote...

  33. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-10-09

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes were in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes were in a range of conditions—from like new, with cosmetic damage, to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
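
    Applying such a correction to a field reading is a one-line scaling; the example below uses a made-up correction of 0.03 ft per 100 ft (the order of error reported for the electric tapes), not a value from the USGS test data.

```python
# Apply a hypothetical per-100-ft tape correction to a water-level reading so an
# electric tape can meet the +/-0.01 ft recommendation. Example values only.
measured_depth_ft = 187.42
correction_per_100ft = -0.03      # tape reads long by 0.03 ft per 100 ft (hypothetical)

corrected_depth_ft = measured_depth_ft + correction_per_100ft * (measured_depth_ft / 100.0)
print(f"corrected depth to water: {corrected_depth_ft:.2f} ft")
```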

  34. Type I and II β-turns prediction using NMR chemical shifts.

    PubMed

    Wang, Ching-Cheng; Lai, Wen-Chung; Chuang, Woei-Jer

    2014-07-01

    A method for predicting type I and II β-turns using nuclear magnetic resonance (NMR) chemical shifts is proposed. Isolated β-turn chemical-shift data were collected from 1,798 protein chains. One-dimensional statistical analyses on chemical-shift data of three classes of β-turns (type I, II, and VIII) showed different distributions at four positions, (i) to (i + 3). Considering the central two residues of type I β-turns, the mean values of Cο, Cα, H(N), and N(H) chemical shifts were generally (i + 1) > (i + 2). The mean values of Cβ and Hα chemical shifts were (i + 1) < (i + 2). The distributions of the central two residues in type II and VIII β-turns were also distinguishable by trends of chemical shift values. Two-dimensional cluster analyses on chemical-shift data show positional distributions more clearly. Based on these propensities of chemical shift classified as a function of position, rules were derived using scoring matrices for four consecutive residues to predict type I and II β-turns. The proposed method achieves an overall prediction accuracy of 83.2 and 84.2% with the Matthews correlation coefficient values of 0.317 and 0.632 for type I and II β-turns, indicating higher accuracy for type II turn prediction. The results show that it is feasible to use NMR chemical shifts to predict the β-turn types in proteins. The proposed method can be incorporated into other chemical-shift based protein secondary structure prediction methods.
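
    A heavily simplified stand-in for the position-specific scoring idea is to score a four-residue window against per-position mean and standard-deviation shift values for each turn type and pick the best match. The statistics, the use of Cα shifts only, and the quadratic score below are all invented for illustration and are not the paper's scoring matrices.

```python
# Simplified position-specific scoring of a four-residue window against invented
# per-position CA shift statistics for two turn types. Illustration only.
import numpy as np

turn_stats = {
    "type I":  {"mean": np.array([56.0, 58.5, 57.0, 55.5]), "sigma": np.array([2.0, 1.8, 1.9, 2.1])},
    "type II": {"mean": np.array([55.5, 57.8, 45.5, 55.0]), "sigma": np.array([2.0, 1.8, 1.5, 2.1])},
}

def classify(ca_shifts):
    scores = {}
    for turn, s in turn_stats.items():
        z = (np.asarray(ca_shifts) - s["mean"]) / s["sigma"]
        scores[turn] = -np.sum(z ** 2)          # higher (less negative) = better match
    return max(scores, key=scores.get), scores

label, scores = classify([55.8, 58.0, 46.0, 54.7])      # shifts at positions i..i+3
print(label, {k: round(v, 2) for k, v in scores.items()})
```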

  35. Time averaging of NMR chemical shifts in the MLF peptide in the solid state.

    PubMed

    De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele

    2010-05-05

    Since experimental measurements of NMR chemical shifts provide time and ensemble averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the (13)C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that (13)C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales we sampled the rotameric states populated by the MLF peptides in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.
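
    The averaging effect itself is easy to visualize: an instantaneous shift that swings by tens of ppm on the femtosecond scale converges to a stable mean once a few hundred snapshots are accumulated. The trajectory below is synthetic (a sine plus noise), chosen only to mimic the quoted magnitudes.

```python
# Synthetic illustration of fast chemical-shift fluctuations averaging out
# within a few hundred femtoseconds. Numbers mimic the quoted magnitudes only.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0.0, 2000.0, 1.0)                       # 2 ps trajectory, 1 fs steps
instantaneous = 175.0 + 10.0 * np.sin(2.0 * np.pi * t / 20.0) + rng.normal(0.0, 5.0, t.size)

running_mean = np.cumsum(instantaneous) / np.arange(1, t.size + 1)
print(f"spread of instantaneous shifts: {np.ptp(instantaneous):.1f} ppm")
print(f"running mean after 200 fs: {running_mean[200]:.1f} ppm (long-time value 175.0)")
```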

  36. Achieving accuracy in first-principles calculations at extreme temperature and pressure

    NASA Astrophysics Data System (ADS)

    Mattsson, Ann; Wills, John

    2013-06-01

    First-principles calculations are increasingly used to provide EOS data at pressures and temperatures where experimental data is difficult or impossible to obtain. The lack of experimental data, however, also precludes validation of the calculations in those regimes. Factors influencing the accuracy of first-principles data include theoretical approximations, and computational approximations used in implementing and solving the underlying equations. The first category includes approximate exchange-correlation functionals and wave equations simplifying the Dirac equation. In the second category are, e.g., basis completeness and pseudo-potentials. While the first category is extremely hard to assess without experimental data, inaccuracies of the second type should be well controlled. We are using two rather different electronic structure methods (VASP and RSPt) to make explicit the requirements for accuracy of the second type. We will discuss the VASP Projector Augmented Wave potentials, with examples for Li and Mo. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  37. Graphene Nanoplatelet-Polymer Chemiresistive Sensor Arrays for the Detection and Discrimination of Chemical Warfare Agent Simulants.

    PubMed

    Wiederoder, Michael S; Nallon, Eric C; Weiss, Matt; McGraw, Shannon K; Schnee, Vincent P; Bright, Collin J; Polcha, Michael P; Paffenroth, Randy; Uzarski, Joshua R

    2017-11-22

    A cross-reactive array of semiselective chemiresistive sensors made of polymer-graphene nanoplatelet (GNP) composite coated electrodes was examined for detection and discrimination of chemical warfare agents (CWA). The arrays employ a set of chemically diverse polymers to generate a unique response signature for multiple CWA simulants and background interferents. The developed sensors' signal remains consistent after repeated exposures to multiple analytes for up to 5 days with a similar signal magnitude across different replicate sensors with the same polymer-GNP coating. An array of 12 sensors each coated with a different polymer-GNP mixture was exposed 100 times to a cycle of single analyte vapors consisting of 5 chemically similar CWA simulants and 8 common background interferents. The collected data was vector normalized to reduce concentration dependency, z-scored to account for baseline drift and signal-to-noise ratio, and Kalman filtered to reduce noise. The processed data was dimensionally reduced with principal component analysis and analyzed with four different machine learning algorithms to evaluate discrimination capabilities. For 5 similarly structured CWA simulants alone 100% classification accuracy was achieved. For all analytes tested 99% classification accuracy was achieved demonstrating the CWA discrimination capabilities of the developed system. The novel sensor fabrication methods and data processing techniques are attractive for development of sensor platforms for discrimination of CWA and other classes of chemical vapors.
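
    A minimal version of the described processing chain (vector normalization, standardization, PCA, then a classifier) can be assembled from standard tools; the data below are random stand-ins for the 12-sensor responses, the class signatures are invented, and the Kalman filtering step is omitted for brevity.

```python
# Minimal sketch of the sensor-array processing chain on synthetic responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n_exposures, n_sensors, n_classes = 300, 12, 13
class_patterns = rng.normal(size=(n_classes, n_sensors))          # unique signature per analyte
labels = rng.integers(0, n_classes, n_exposures)
responses = class_patterns[labels] + rng.normal(scale=0.3, size=(n_exposures, n_sensors))

responses /= np.linalg.norm(responses, axis=1, keepdims=True)     # vector normalization

clf = make_pipeline(StandardScaler(), PCA(n_components=5), SVC())
scores = cross_val_score(clf, responses, labels, cv=5)
print(f"cross-validated classification accuracy: {scores.mean():.2f}")
```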

  38. The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification

    PubMed Central

    Wang, Xueyi; Davidson, Nicholas J.

    2011-01-01

    Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that have been missed or misinterpreted in the previous literature. First we derive the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy even when the individual classifiers each have < 0.5 prediction accuracy. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify these results and show that the upper- and lower-bound accuracies are hard to achieve with random individual classifiers, so better algorithms need to be developed. PMID:21853162
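
    The counterintuitive result quoted above, that a majority-vote ensemble can exceed 0.5 accuracy even when every member is below 0.5, can be checked with a tiny construction. The example below is ours, not the paper's: five classifiers, each correct on only 2 of 5 items (accuracy 0.4), arranged so that the majority vote is right on 3 of 5 items (accuracy 0.6).

    ```python
    import numpy as np

    # rows = classifiers, columns = items; 1 = classifier is correct on that item
    correct = np.array([
        [1, 1, 0, 0, 0],
        [1, 1, 0, 0, 0],
        [1, 0, 1, 0, 0],
        [0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0],
    ])

    individual_acc = correct.mean(axis=1)       # each classifier: 0.4
    majority_right = correct.sum(axis=0) >= 3   # at least 3 of 5 votes correct
    ensemble_acc = majority_right.mean()        # 0.6
    print(individual_acc, ensemble_acc)
    ```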

  19. Large-Scale Chemical Similarity Networks for Target Profiling of Compounds Identified in Cell-Based Chemical Screens

    PubMed Central

    Lo, Yu-Chen; Senese, Silvia; Li, Chien-Ming; Hu, Qiyang; Huang, Yong; Damoiseaux, Robert; Torres, Jorge Z.

    2015-01-01

    Target identification is one of the most critical steps following cell-based phenotypic chemical screens aimed at identifying compounds with potential uses in cell biology and for developing novel disease therapies. Current in silico target identification methods, including chemical similarity database searches, are limited to single or sequential ligand analysis, which has limited capability for accurate deconvolution of a large number of compounds with diverse chemical structures. Here, we present CSNAP (Chemical Similarity Network Analysis Pulldown), a new computational target identification method that utilizes chemical similarity networks for large-scale chemotype (consensus chemical pattern) recognition and drug target profiling. Our benchmark study showed that CSNAP can achieve an overall higher accuracy (>80%) of target prediction with respect to representative chemotypes in large (>200) compound sets, in comparison to the SEA approach (60–70%). Additionally, CSNAP is capable of integrating with biological knowledge-based databases (Uniprot, GO) and high-throughput biology platforms (proteomic, genetic, etc.) for system-wide drug target validation. To demonstrate the utility of the CSNAP approach, we combined CSNAP's target prediction with experimental ligand evaluation to identify the major mitotic targets of hit compounds from a cell-based chemical screen, and we highlight novel compounds targeting microtubules, an important cancer therapeutic target. The CSNAP method is freely available and can be accessed from the CSNAP web server (http://services.mbi.ucla.edu/CSNAP/). PMID:25826798

  20. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
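
    The accuracy metric quoted above (absolute mean error plus 2 or 3 sigma) is straightforward to compute from measurement residuals. The sketch below uses invented residuals and is only meant to make the definition concrete; it is not GOES-R flight or analysis code.

    ```python
    import numpy as np

    def accuracy_metric(residuals_nT, k):
        """|mean error| + k * sigma of the residuals, in nT."""
        r = np.asarray(residuals_nT, dtype=float)
        return abs(r.mean()) + k * r.std(ddof=1)

    rng = np.random.default_rng(1)
    quiet = rng.normal(0.1, 0.4, 10_000)   # hypothetical residuals during quiet times (nT)
    storm = rng.normal(0.2, 0.7, 10_000)   # hypothetical residuals during storms (nT)
    print("quiet:", accuracy_metric(quiet, k=3), "nT")
    print("storm:", accuracy_metric(storm, k=2), "nT")   # compare to the 1.7 nT requirement
    ```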

  1. Achieving DFT accuracy with a machine-learning interatomic potential: Thermomechanics and defects in bcc ferromagnetic iron

    NASA Astrophysics Data System (ADS)

    Dragoni, Daniele; Daff, Thomas D.; Csányi, Gábor; Marzari, Nicola

    2018-01-01

    We show that the Gaussian Approximation Potential (GAP) machine-learning framework can describe complex magnetic potential energy surfaces, taking ferromagnetic iron as a paradigmatic challenging case. The training database includes total energies, forces, and stresses obtained from density-functional theory in the generalized-gradient approximation, and comprises approximately 150,000 local atomic environments, ranging from pristine and defected bulk configurations to surfaces and generalized stacking faults with different crystallographic orientations. We find the structural, vibrational, and thermodynamic properties of the GAP model to be in excellent agreement with those obtained directly from first-principles electronic-structure calculations. There is good transferability to quantities, such as Peierls energy barriers, which are determined to a large extent by atomic configurations that were not part of the training set. We observe the benefit and the need of using highly converged electronic-structure calculations to sample a target potential energy surface. The end result is a systematically improvable potential that can achieve the same accuracy as density-functional theory calculations, but at a fraction of the computational cost.

  2. Accuracy Analysis of a Low-Cost Platform for Positioning and Navigation

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Kuntzsch, C.; Schulze, M. J.; Eggert, D.; Sester, M.

    2012-07-01

    This paper presents an accuracy analysis of a platform based on low-cost components for landmark-based navigation intended for research and teaching purposes. The proposed platform includes a LEGO MINDSTORMS NXT 2.0 kit, an Android-based Smartphone as well as a compact laser scanner Hokuyo URG-04LX. The robot is used in a small indoor environment, where GNSS is not available. Therefore, a landmark map was produced in advance, with the landmark positions provided to the robot. All steps of the procedure to set up the platform are shown. The main focus of this paper is the reachable positioning accuracy, which was analyzed in this type of scenario depending on the accuracy of the reference landmarks and the directional and distance measuring accuracy of the laser scanner. Several experiments were carried out, demonstrating the practically achievable positioning accuracy. To evaluate the accuracy, ground truth was acquired using a total station. These results are compared to the theoretically achievable accuracies and the laser scanner's characteristics.
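
    The evaluation step described above, comparing estimated positions against total-station ground truth, amounts to a simple error statistic. A minimal sketch follows; the coordinates are placeholders, not the experiment's data.

    ```python
    import numpy as np

    estimated    = np.array([[1.02, 0.98], [2.51, 1.47], [3.98, 2.03]])  # platform estimates (m)
    ground_truth = np.array([[1.00, 1.00], [2.50, 1.50], [4.00, 2.00]])  # total-station positions (m)

    errors = np.linalg.norm(estimated - ground_truth, axis=1)            # 2-D error per check point
    rmse = np.sqrt(np.mean(errors ** 2))
    print(f"positioning RMSE: {1000 * rmse:.1f} mm")
    ```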

  3. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; ...

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  4. Achievable accuracy of hip screw holding power estimation by insertion torque measurement.

    PubMed

    Erani, Paolo; Baleani, Massimiliano

    2018-02-01

    To ensure stability of proximal femoral fractures, the hip screw must firmly engage into the femoral head. Some studies suggested that screw holding power in trabecular bone could be evaluated, intraoperatively, through measurement of the screw insertion torque. However, those studies used synthetic bone, instead of trabecular bone, as host material, or they did not evaluate the accuracy of predictions. We determined prediction accuracy, also assessing the impact of screw design and host material. We measured, under highly repeatable experimental conditions, disregarding clinical procedure complexities, insertion torque and pullout strength of four screw designs, in both 120 synthetic and 80 trabecular bone specimens of variable density. For both host materials, we calculated the root-mean-square error and the mean-absolute-percentage error of predictions based on the best-fitting model of torque-pullout data, for both single-screw and merged datasets. Predictions based on screw-specific regression models were the most accurate. Host material impacts prediction accuracy: the replacement of synthetic with trabecular bone decreased both root-mean-square errors, from 0.54–0.76 kN to 0.21–0.40 kN, and mean-absolute-percentage errors, from 14–21% to 10–12%. However, holding power predicted from low insertion torques remained inaccurate, with errors up to 40% for torques below 1 Nm. In poor-quality trabecular bone, tissue inhomogeneities likely affect pullout strength and insertion torque to different extents, limiting the predictive power of the latter. This bias decreases when the screw engages good-quality bone. Under this condition, predictions become more accurate, although this result must be confirmed by close in-vitro simulation of the clinical procedure. Copyright © 2018 Elsevier Ltd. All rights reserved.
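
    The error measures reported above can be reproduced in outline: fit a regression of pullout strength on insertion torque, then compute the root-mean-square error and mean-absolute-percentage error of its predictions. The sketch below uses synthetic data and a simple power-law fit, which may well differ from the paper's best-fitting model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    torque = rng.uniform(0.5, 6.0, 80)                                   # insertion torque (N*m)
    pullout = np.clip(0.9 * torque ** 0.8                                # kN, invented relation
                      + rng.normal(0.0, 0.15, 80), 0.05, None)

    # power-law fit via linear regression in log-log space
    slope, intercept = np.polyfit(np.log(torque), np.log(pullout), 1)
    pred = np.exp(intercept) * torque ** slope

    rmse = np.sqrt(np.mean((pred - pullout) ** 2))
    mape = 100.0 * np.mean(np.abs((pred - pullout) / pullout))
    print(f"RMSE = {rmse:.2f} kN, MAPE = {mape:.1f} %")
    ```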

  5. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  6. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  7. Teachers' Judgements of Students' Foreign-Language Achievement

    ERIC Educational Resources Information Center

    Zhu, Mingjing; Urhahne, Detlef

    2015-01-01

    Numerous studies have been conducted on the accuracy of teacher judgement in different educational areas such as mathematics, language arts and reading. Teacher judgement of students' foreign-language achievement, however, has been rarely investigated. The study aimed to examine the accuracy of teacher judgement of students' foreign-language…

  8. Critical thinking and accuracy of nurses' diagnoses.

    PubMed

    Lunney, Margaret

    2003-01-01

    Interpretations of patient data are complex and diverse, contributing to a risk of low accuracy nursing diagnoses. This risk is confirmed in research findings that accuracy of nurses' diagnoses varied widely from high to low. Highly accurate diagnoses are essential, however, to guide nursing interventions for the achievement of positive health outcomes. Development of critical thinking abilities is likely to improve accuracy of nurses' diagnoses. New views of critical thinking serve as a basis for critical thinking in nursing. Seven cognitive skills and ten habits of mind are identified as dimensions of critical thinking for use in the diagnostic process. Application of the cognitive skills of critical thinking illustrates the importance of using critical thinking for accuracy of nurses' diagnoses. Ten strategies are proposed for self-development of critical thinking abilities.

  9. Predicting ready biodegradability of premanufacture notice chemicals.

    PubMed

    Boethling, Robert S; Lynch, David G; Thom, Gary C

    2003-04-01

    Chemical substances other than pesticides, drugs, and food additives are regulated by the U.S. Environmental Protection Agency (U.S. EPA) under the Toxic Substances Control Act (TSCA), but the United States does not require that new substances be tested automatically for such critical properties as biodegradability. The resulting lack of submitted data has fostered the development of estimation methods, and the BioWIN models for predicting biodegradability from chemical structure have played a prominent role in premanufacture notice (PMN) review. Until now, validation efforts have used only the Japanese Ministry of International Trade and Industry (MITI) test data and have not included all models. To assess BioWIN performance with PMN substances, we assembled a database of PMNs for which ready biodegradation data had been submitted over the period 1995 through 2001. The 305 PMN structures are highly varied and pose major challenges to chemical property estimation. Despite the variability of ready biodegradation tests, the use of at least six different test methods, and widely varying quality of submitted data, accuracy of four of six BioWIN models (MITI linear, MITI nonlinear, survey ultimate, survey primary) was in the 80+% range for predicting ready biodegradability. Greater accuracy (>90%) can be achieved by using model estimates only when the four models agree (true for 3/4 of the PMNs). The BioWIN linear and nonlinear probability models did not perform as well even when classification criteria were optimized. The results suggest that the MITI and survey BioWIN models are suitable for use in screening-level applications.
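
    The consensus rule described above, using a model estimate only when the four BioWIN models agree, can be expressed in a few lines. This is an illustrative sketch, not the BioWIN/EPI Suite code; the function name and inputs are hypothetical.

    ```python
    from typing import Optional

    def consensus_call(predictions: list) -> Optional[bool]:
        """Return the shared ready-biodegradability call if all models agree, else None."""
        if all(predictions):
            return True
        if not any(predictions):
            return False
        return None  # models disagree: withhold a screening-level call

    # hypothetical outputs of (MITI linear, MITI nonlinear, survey ultimate, survey primary)
    print(consensus_call([True, True, True, True]))   # -> True  (use the estimate)
    print(consensus_call([True, False, True, True]))  # -> None  (no call)
    ```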

  10. High accuracy in short ISS missions

    NASA Astrophysics Data System (ADS)

    Rüeger, J. M.

    1986-06-01

    Traditionally Inertial Surveying Systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emanating from an increased need for survey control densification in urban areas often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between IMU and ground marks is investigated and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer has been tested in Sydney (Australia) in connection with a vehicle mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5 minute ZUPT intervals, mean offset distances 9 metres) accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.

  11. Achieving accuracy in first-principles calculations for EOS: basis completeness at high temperatures

    NASA Astrophysics Data System (ADS)

    Wills, John; Mattsson, Ann

    2013-06-01

    First-principles electronic structure calculations can provide EOS data in regimes of pressure and temperature where accurate experimental data is difficult or impossible to obtain. This lack, however, also precludes validation of calculations in those regimes. Factors that influence the accuracy of first-principles data include (1) theoretical approximations and (2) computational approximations used in implementing and solving the underlying equations. In the first category are the approximate exchange/correlation functionals and approximate wave equations simplifying the Dirac equation; in the second are basis completeness, series convergence, and truncation errors. We are using two rather different electronic structure methods (VASP and RSPt) to make explicit the requirements for accuracy of the second type, which are common to both. In this talk, we discuss requirements for converged calculations at high temperature and moderate pressure. At convergence we show that both methods give identical results. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Limits on the Accuracy of Linking. Research Report. ETS RR-10-22

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2010-01-01

    Sampling errors limit the accuracy with which forms can be linked. Limitations on accuracy are especially important in testing programs in which a very large number of forms are employed. Standard inequalities in mathematical statistics may be used to establish lower bounds on the achievable linking accuracy. To illustrate results, a variety of…

  13. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5mm, which is within the clinical accuracy requirements of 5mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  14. Sensitivity of chemical transport model simulations to the duration of chemical and transport operators: a case study with GEOS-Chem v10-01

    NASA Astrophysics Data System (ADS)

    Philip, S.; Martin, R. V.; Keller, C. A.

    2015-11-01

    Chemical transport models involve considerable computational expense. Fine temporal resolution offers accuracy at the expense of computation time. Assessment is needed of the sensitivity of simulation accuracy to the duration of chemical and transport operators. We conduct a series of simulations with the GEOS-Chem chemical transport model at different temporal and spatial resolutions to examine the sensitivity of simulated atmospheric composition to temporal resolution. Subsequently, we compare the tracers simulated with operator durations from 10 to 60 min as typically used by global chemical transport models, and identify the timesteps that optimize both computational expense and simulation accuracy. We found that longer transport timesteps increase concentrations of emitted species such as nitrogen oxides and carbon monoxide since a more homogeneous distribution reduces loss through chemical reactions and dry deposition. The increased concentrations of ozone precursors increase ozone production at longer transport timesteps. Longer chemical timesteps decrease sulfate and ammonium but increase nitrate due to feedbacks with in-cloud sulfur dioxide oxidation and aerosol thermodynamics. The simulation duration decreases by an order of magnitude from fine (5 min) to coarse (60 min) temporal resolution. We assess the change in simulation accuracy with resolution by comparing the root mean square difference in ground-level concentrations of nitrogen oxides, ozone, carbon monoxide and secondary inorganic aerosols with a finer temporal or spatial resolution taken as truth. Simulation error for these species increases by more than a factor of 5 from the shortest (5 min) to longest (60 min) temporal resolution. Chemical timesteps twice that of the transport timestep offer more simulation accuracy per unit computation. However, simulation error from coarser spatial resolution generally exceeds that from longer timesteps; e.g. degrading from 2° × 2.5° to 4° × 5
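
    The error metric described above, the root-mean-square difference of ground-level concentrations relative to a finer-resolution run taken as truth, is simple to compute once the two model fields are available. The arrays below are placeholders, not GEOS-Chem output.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    fine = rng.normal(30.0, 5.0, size=(46, 72))            # stand-in 2-D field of surface concentrations ("truth")
    coarse = fine + rng.normal(0.0, 1.5, size=fine.shape)  # stand-in run with longer operator timesteps

    rms_diff = np.sqrt(np.mean((coarse - fine) ** 2))
    print(f"RMS difference vs finer-resolution reference: {rms_diff:.2f}")
    ```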

  15. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  16. GIAO-DFT calculation of 15N NMR chemical shifts of Schiff bases: Accuracy factors and protonation effects.

    PubMed

    Semenov, Valentin A; Samultsev, Dmitry O; Krivdin, Leonid B

    2018-02-09

    15N NMR chemical shifts in a representative series of Schiff bases, together with their protonated forms, have been calculated at the density functional theory level and compared with available experiment. A number of functionals and basis sets have been tested in terms of better agreement with experiment. Complementary to the gas-phase results, two solvation models were examined: the classical Tomasi polarizable continuum model (PCM) and that model combined with the explicit inclusion of one solvent molecule in the calculation space to form a 1:1 supermolecule (SM + PCM). Best results are achieved with the PCM and SM + PCM models, resulting in mean absolute errors of the calculated 15N NMR chemical shifts across the whole series of neutral and protonated Schiff bases of 5.2 and 5.8 ppm, respectively, as compared with 15.2 ppm in the gas phase, over a range of about 200 ppm. Noticeable protonation effects (exceeding 100 ppm) in protonated Schiff bases are rationalized in terms of a general natural bond orbital approach. Copyright © 2018 John Wiley & Sons, Ltd.

  17. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

    Optical alignment machining is an effective method to ensure the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn is determined by the rotating accuracy of the lathe and the accuracy of the optical-centering judgment. When a rotating accuracy of 0.2 μm is achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, namely the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system combining the axis-determination tool and the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.

  18. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.

    PubMed

    Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J

    2018-06-12

    Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
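
    The fusion scheme described above, averaging the rating-based identity judgments of several examiners (optionally including an algorithm), reduces to a column-wise mean. The ratings below are invented, assuming a symmetric same/different scale; this is a sketch, not the study's analysis code.

    ```python
    import numpy as np

    # rows = raters (examiners and/or an algorithm), columns = face pairs;
    # invented ratings on a symmetric scale (positive = "same identity")
    ratings = np.array([
        [ 3,  1, -2,  2],
        [ 2,  0, -3,  3],
        [ 3, -1, -2,  2],
    ])

    fused = ratings.mean(axis=0)   # simple average fusion across raters
    decision = fused > 0           # call "same identity" when the fused score is positive
    print(fused, decision)
    ```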

  19. HIV-1 tropism testing in subjects achieving undetectable HIV-1 RNA: diagnostic accuracy, viral evolution and compartmentalization.

    PubMed

    Pou, Christian; Codoñer, Francisco M; Thielen, Alexander; Bellido, Rocío; Pérez-Álvarez, Susana; Cabrera, Cecilia; Dalmau, Judith; Curriu, Marta; Lie, Yolanda; Noguera-Julian, Marc; Puig, Jordi; Martínez-Picado, Javier; Blanco, Julià; Coakley, Eoin; Däumer, Martin; Clotet, Bonaventura; Paredes, Roger

    2013-01-01

    Technically, HIV-1 tropism can be evaluated in plasma or peripheral blood mononuclear cells (PBMCs). However, only tropism testing of plasma HIV-1 has been validated as a tool to predict virological response to CCR5 antagonists in clinical trials. The preferable tropism testing strategy in subjects with undetectable HIV-1 viremia, in whom plasma tropism testing is not feasible, remains uncertain. We designed a proof-of-concept study including 30 chronically HIV-1-infected individuals who achieved HIV-1 RNA <50 copies/mL during at least 2 years after first-line ART initiation. First, we determined the diagnostic accuracy of 454 and population sequencing of gp120 V3-loops in plasma and PBMCs, as well as of MT-2 assays before ART initiation. The Enhanced Sensitivity Trofile Assay (ESTA) was used as the technical reference standard. 454 sequencing of plasma viruses provided the highest agreement with ESTA. The accuracy of 454 sequencing decreased in PBMCs due to reduced specificity. Population sequencing in plasma and PBMCs was slightly less accurate than plasma 454 sequencing, being less sensitive but more specific. MT-2 assays had low sensitivity but 100% specificity. Then, we used optimized 454 sequence data to investigate viral evolution in PBMCs during viremia suppression and only found evolution of R5 viruses in one subject. No de novo CXCR4-using HIV-1 production was observed over time. Finally, Slatkin-Maddison tests suggested that plasma and cell-associated V3 forms were sometimes compartmentalized. The absence of tropism shifts during viremia suppression suggests that, when available, testing of stored plasma samples is generally safe and informative, provided that HIV-1 suppression is maintained. Tropism testing in PBMCs may not necessarily produce equivalent biological results to plasma, because the structure of viral populations and the diagnostic performance of tropism assays may sometimes vary between compartments. Thereby, proviral DNA tropism testing

  20. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
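
    The classifier described above, linear discriminant analysis regularized by principal component analysis, can be sketched as a two-step pipeline. The synthetic data below merely stands in for the fMRI volumes; the dimensions, component count, and separability of the fake classes are assumptions.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_stimuli, n_reps, n_voxels = 20, 8, 500        # placeholder sizes
    # each stimulus gets its own hypothetical spatial response pattern
    X = np.vstack([rng.normal(loc=rng.normal(size=n_voxels), scale=1.0,
                              size=(n_reps, n_voxels)) for _ in range(n_stimuli)])
    y = np.repeat(np.arange(n_stimuli), n_reps)     # stimulus label per volume

    clf = make_pipeline(PCA(n_components=40),       # PCA projection acts as the regularizer
                        LinearDiscriminantAnalysis())
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=4).mean())
    ```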

  1. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. For understanding the carrier noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
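
    One ingredient of the hybrid positioning described above is trilateration from estimated receiver-transmitter distances; the extended Kalman filtering and dead-reckoning fusion steps are omitted here. The sketch below solves the trilateration by nonlinear least squares, with invented transmitter positions and ranges.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # transmitter positions (m)
    true_pos = np.array([3.0, 4.0])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.1, len(anchors))

    def residuals(p):
        """Difference between distances implied by position p and the measured ranges."""
        return np.linalg.norm(anchors - p, axis=1) - ranges

    estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
    print("estimated position (m):", estimate)
    ```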

  2. COMPASS time synchronization and dissemination—Toward centimetre positioning accuracy

    NASA Astrophysics Data System (ADS)

    Wang, ZhengBo; Zhao, Lu; Wang, ShiGuang; Zhang, JianWei; Wang, Bo; Wang, LiJun

    2014-09-01

    In this paper we investigate methods to achieve highly accurate time synchronization among the satellites of the COMPASS global navigation satellite system (GNSS). Owing to the special design of COMPASS which implements several geo-stationary satellites (GEO), time synchronization can be highly accurate via microwave links between ground stations to the GEO satellites. Serving as space-borne relay stations, the GEO satellites can further disseminate time and frequency signals to other satellites such as the inclined geo-synchronous (IGSO) and mid-earth orbit (MEO) satellites within the system. It is shown that, because of the accuracy in clock synchronization, the theoretical accuracy of COMPASS positioning and navigation will surpass that of the GPS. In addition, the COMPASS system can function with its entire positioning, navigation, and time-dissemination services even without the ground link, thus making it much more robust and secure. We further show that time dissemination using the COMPASS-GEO satellites to earth-fixed stations can achieve very high accuracy, to reach 100 ps in time dissemination and 3 cm in positioning accuracy, respectively. In this paper, we also analyze two feasible synchronization plans. All special and general relativistic effects related to COMPASS clocks frequency and time shifts are given. We conclude that COMPASS can reach centimeter-level positioning accuracy and discuss potential applications.

  3. The Effects of Alcohol Intoxication on Accuracy and the Confidence–Accuracy Relationship in Photographic Simultaneous Line‐ups

    PubMed Central

    Colloff, Melissa F.; Karoğlu, Nilda; Zelek, Katarzyna; Ryder, Hannah; Humphries, Joyce E.; Takarangi, Melanie K.T.

    2017-01-01

    Acute alcohol intoxication during encoding can impair subsequent identification accuracy, but results across studies have been inconsistent, with studies often finding no effect. Little is also known about how alcohol intoxication affects the identification confidence–accuracy relationship. We randomly assigned women (N = 153) to consume alcohol (dosed to achieve a 0.08% blood alcohol content) or tonic water, controlling for alcohol expectancy. Women then participated in an interactive hypothetical sexual assault scenario and, 24 hours or 7 days later, attempted to identify the assailant from a perpetrator-present or perpetrator-absent simultaneous line-up and reported their decision confidence. Overall, levels of identification accuracy were similar across the alcohol and tonic water groups. However, women who had consumed tonic water as opposed to alcohol identified the assailant with higher confidence on average. Further, calibration analyses suggested that confidence is predictive of accuracy regardless of alcohol consumption. The theoretical and applied implications of our results are discussed. © 2017 The Authors Applied Cognitive Psychology Published by John Wiley & Sons Ltd. PMID:28781426

  4. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    PubMed

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Assistance of robotic systems in the operating room promises higher accuracy and, hence, demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and, thus, to an additional error source. The aim of this contribution is to examine whether an accuracy sufficient for demanding interventions can be achieved by such a system. The achievable accuracy of the robot-assisted process depends on each work-flow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.

  5. Experimental studies of high-accuracy RFID localization with channel impairments

    NASA Astrophysics Data System (ADS)

    Pauls, Eric; Zhang, Yimin D.

    2015-05-01

    Radio frequency identification (RFID) systems present an incredibly cost-effective and easy-to-implement solution to close-range localization. One of the important applications of a passive RFID system is to determine the reader position through multilateration based on the estimated distances between the reader and multiple distributed reference tags obtained from, e.g., the received signal strength indicator (RSSI) readings. In practice, the achievable accuracy of passive RFID reader localization suffers from many factors, such as the distorted RSSI reading due to channel impairments in terms of the susceptibility to reader antenna patterns and multipath propagation. Previous studies have shown that the accuracy of passive RFID localization can be significantly improved by properly modeling and compensating for such channel impairments. The objective of this paper is to report experimental study results that validate the effectiveness of such approaches for high-accuracy RFID localization. We also examine a number of practical issues arising in the underlying problem that limit the accuracy of reader-tag distance measurements and, therefore, the estimated reader localization. These issues include the variations in tag radiation characteristics for similar tags, effects of tag orientations, and reader RSS quantization and measurement errors. As such, this paper reveals valuable insights into the issues and solutions toward achieving high-accuracy passive RFID localization.
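
    Compensating the RSSI readings, as discussed above, usually starts from a log-distance path-loss model relating received power to reader-tag distance. The sketch below inverts such a model; the path-loss exponent and the reference power at 1 m are assumptions that would have to be fitted for a given environment.

    ```python
    import numpy as np

    def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, n=2.2):
        """Invert RSSI(d) = RSSI(1 m) - 10*n*log10(d) for the reader-tag distance d (m)."""
        return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

    for rssi in (-40.0, -55.0, -62.0):
        print(f"{rssi} dBm -> {rssi_to_distance(rssi):.2f} m")
    ```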

  6. Field Accuracy Test of RPAS Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that is was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and inputted these into our photo modelling software, Agisoft PhotoScan. The remaining GPS coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically and with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 ha. This finding has shown
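
    The check-point comparison described above reduces to differencing the photogrammetric coordinates against the RTK GPS coordinates and summarizing the horizontal and vertical errors. The sketch below uses placeholder coordinates, and the 41 mm / 68 mm figures only as example thresholds.

    ```python
    import numpy as np

    # hypothetical check-point coordinates (easting, northing, height) in metres
    photo = np.array([[100.012, 200.034, 50.041],
                      [101.498, 202.011, 50.120],
                      [103.021, 204.987, 49.962]])
    rtk   = np.array([[100.000, 200.000, 50.000],
                      [101.520, 202.000, 50.060],
                      [103.000, 205.000, 50.000]])

    d_h = np.linalg.norm(photo[:, :2] - rtk[:, :2], axis=1)   # horizontal error (m)
    d_v = np.abs(photo[:, 2] - rtk[:, 2])                     # vertical error (m)
    print("mean horizontal / vertical error (mm):", 1000 * d_h.mean(), 1000 * d_v.mean())
    print("share within 41 mm / 68 mm:", (d_h <= 0.041).mean(), (d_v <= 0.068).mean())
    ```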

  7. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    PubMed

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.

  8. The Effects of Student Characteristics on Teachers' Judgment Accuracy: Disentangling Ethnicity, Minority Status, and Achievement

    ERIC Educational Resources Information Center

    Kaiser, Johanna; Südkamp, Anna; Möller, Jens

    2017-01-01

    Teachers' judgments of students' academic achievement are affected not only by that achievement itself but also by several other characteristics such as ethnicity, gender, and minority status. In real-life classrooms, achievement and further characteristics are often confounded. We disentangled achievement, ethnicity and minority status and…

  9. An integrated multi-label classifier with chemical-chemical interactions for prediction of chemical toxicity effects.

    PubMed

    Liu, Tao; Chen, Lei; Pan, Xiaoyong

    2018-05-31

    Chemical toxicity is one of the major reasons for declining candidate drugs. Detecting the toxicity effects of all chemicals can accelerate the procedures of drug discovery. However, it is time-consuming and expensive to identify the toxicity effects of a given chemical through traditional experiments. Designing quick, reliable and non-animal-involved computational methods is an alternative way. In this study, a novel integrated multi-label classifier was proposed. First, based on five types of chemical-chemical interactions retrieved from STITCH, each of which is derived from one aspect of chemicals, five individual classifiers were built. Then, several integrated classifiers were built by integrating some or all of the individual classifiers. The integrated classifiers were tested on a dataset of chemicals with toxicity effects from the Accelrys Toxicity database together with non-toxic chemicals, with performance evaluated by the jackknife test; the optimal integrated classifier was selected as the proposed classifier, which provided quite high prediction accuracies and wide applicability. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. The accuracy of Genomic Selection in Norwegian red cattle assessed by cross-validation.

    PubMed

    Luan, Tu; Woolliams, John A; Lien, Sigbjørn; Kent, Matthew; Svendsen, Morten; Meuwissen, Theo H E

    2009-11-01

    Genomic Selection (GS) is a newly developed tool for the estimation of breeding values for quantitative traits through the use of dense markers covering the whole genome. For a successful application of GS, accuracy of the prediction of genomewide breeding value (GW-EBV) is a key issue to consider. Here we investigated the accuracy and possible bias of GW-EBV prediction, using real bovine SNP genotyping (18,991 SNPs) and phenotypic data of 500 Norwegian Red bulls. The study was performed on milk yield, fat yield, protein yield, first lactation mastitis traits, and calving ease. Three methods, best linear unbiased prediction (G-BLUP), Bayesian statistics (BayesB), and a mixture model approach (MIXTURE), were used to estimate marker effects, and their accuracy and bias were estimated by using cross-validation. The accuracies of the GW-EBV prediction were found to vary widely between 0.12 and 0.62. G-BLUP gave overall the highest accuracy. We observed a strong relationship between the accuracy of the prediction and the heritability of the trait. GW-EBV prediction for production traits with high heritability achieved higher accuracy and also lower bias than health traits with low heritability. To achieve a similar accuracy for the health traits, more records will probably be needed.

  11. Child Effortful Control, Teacher-student Relationships, and Achievement in Academically At-risk Children: Additive and Interactive Effects

    PubMed Central

    Liew, Jeffrey; Chen, Qi; Hughes, Jan N.

    2009-01-01

    The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children. PMID:20161421

  12. Child Effortful Control, Teacher-student Relationships, and Achievement in Academically At-risk Children: Additive and Interactive Effects.

    PubMed

    Liew, Jeffrey; Chen, Qi; Hughes, Jan N

    2010-01-01

    The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children.

  13. Quantum Dot and Polymer Composite Cross-Reactive Array for Chemical Vapor Detection.

    PubMed

    Bright, Collin J; Nallon, Eric C; Polcha, Michael P; Schnee, Vincent P

    2015-12-15

    A cross-reactive chemical sensing array was made from CdSe Quantum Dots (QDs) and five different organic polymers by inkjet printing to create segmented fluorescent composite regions on quartz substrates. The sensor array was challenged with exposures from two sets of analytes, including one set of 14 different functionalized benzenes and one set of 14 compounds related to security concerns, including the explosives trinitrotoluene (TNT) and ammonium nitrate. The array was broadly responsive to analytes with different chemical functionalities due to the multiple sensing mechanisms that altered the QDs' fluorescence. The sensor array displayed excellent discrimination between members within both sets. Classification accuracy of more than 93% was achieved, including the complete discrimination of very similar dinitrobenzene isomers and three halogenated, substituted benzene compounds. The simple fabrication, broad responsivity, and high discrimination capacity of this type of cross-reactive array are ideal qualities for the development of sensors with excellent sensitivity to chemical and explosive threats while maintaining low false alarm rates.

  14. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships among prediction accuracy and potential factors influencing prediction accuracy prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
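
    One example of the deterministic formulas such a simulator can visualize is the widely cited expectation commonly attributed to Daetwyler and colleagues, r = sqrt(N h² / (N h² + Me)), where N is the training-set size, h² the heritability and Me the number of independent chromosome segments. The sketch below simply evaluates it for arbitrary parameter values; it is not the ShinyGPAS source.

    ```python
    import numpy as np

    def prediction_accuracy(n_train, h2, m_e):
        """Expected genomic prediction accuracy, r = sqrt(N*h2 / (N*h2 + Me))."""
        return np.sqrt(n_train * h2 / (n_train * h2 + m_e))

    for n in (1_000, 5_000, 20_000):                 # arbitrary training-set sizes
        print(n, round(float(prediction_accuracy(n, h2=0.3, m_e=1_000)), 3))
    ```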

  15. The Credibility of Children's Testimony: Can Children Control the Accuracy of Their Memory Reports?

    ERIC Educational Resources Information Center

    Koriat, Asher; Goldsmith, Morris; Schneider, Wolfgang; Nakash-Dura, Michal

    2001-01-01

    Three experiments examined children's strategic regulation of memory accuracy. Found that younger (7 to 9 years) and older (10 to 12 years) children could enhance the accuracy of their testimony by screening out wrong answers under free-report conditions. Findings suggest a developmental trend in level of memory accuracy actually achieved.…

  16. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  17. Analytical verification of waterborne chemical treatment regimens in hatchery raceways

    USGS Publications Warehouse

    Rach, J.J.; Ramsay, R.T.

    2000-01-01

    Chemical therapy for control and prevention of fish diseases is a necessary and common practice in aquaculture. Many factors affect the accuracy of a chemical treatment application, such as the functioning of the chemical delivery system, calculation of chemical quantities to be delivered, water temperature, geometry of the culture unit, inlet-outlet structure, the influence of aerators, wind movement, and measurement of water volumes and flow rates. Three separate trials were conducted at the Osceola Fish Hatchery, a facility of the Wisconsin Department of Natural Resources, evaluating the accuracy of flow-through hydrogen peroxide treatments applied to 1, 3, or 9 raceways that were connected in series. Raceways were treated with 50 or 75 µL/L of hydrogen peroxide for 30 min. Chemical concentrations were determined titrimetrically. The target treatment regimen was not realized in any of the applications. Chemical concentrations dropped and exposure times increased with each additional raceway treated in series. Single introduction of a therapeutant to more than three raceways in series is not recommended. Factors that interfered with the accuracy of the treatments were culture unit configuration, aeration, and flow rates. Several treatment modifications were identified that would result in more accurate chemical treatments.

  18. Physics of a ballistic missile defense - The chemical laser boost-phase defense

    NASA Technical Reports Server (NTRS)

    Grabbe, Crockett L.

    1988-01-01

    The basic physics involved in proposals to use a satellite-based chemical laser for a boost-phase defense is investigated. After a brief consideration of simple physical conditions for the defense, an equation for the number of satellites needed is derived, along with typical values for possible future defense scenarios. Basic energy and power requirements for the defense are determined. A summary is made of the probable minimum conditions that must be achieved for laser power, targeting accuracy, number of satellites, and total power sources needed.

  19. Time needed to achieve completeness and accuracy in bedside lung ultrasound reporting in intensive care unit.

    PubMed

    Tutino, Lorenzo; Cianchi, Giovanni; Barbani, Francesco; Batacchi, Stefano; Cammelli, Rita; Peris, Adriano

    2010-08-12

    The use of lung ultrasound (LUS) in the ICU is increasing, but ultrasonographic patterns of the lung are often difficult for different operators to quantify consistently. The aim of this study was to evaluate the accuracy and quality of LUS reporting after the introduction of a standardized electronic recording sheet. Intensivists were trained in LUS following a teaching programme. From April 2008, an electronic sheet was designed and introduced into the ICU database in order to make LUS examination reporting uniform. A mark from 0 to 24 was given for each exam by two senior intensivists not involved in the survey. The mark was based on the completeness of a precise reporting scheme covering the main findings of LUS, with a cut-off of 15 considered sufficient. The study comprised 12 months of observations and a total of 637 LUS examinations. Initially, despite some improvement in report completeness, the accuracy and precision of examination reporting remained below 15. The time required to reach sufficient quality was 7 months, and a linear trend in the physicians' progress was observed. A uniform teaching programme and examination reporting system improves the completeness and accuracy of LUS reporting, helping physicians follow the evolution of lung pathology.

  20. Small Body Landing Accuracy Using In-Situ Navigation

    NASA Technical Reports Server (NTRS)

    Bhaskaran, Shyam; Nandi, Sumita; Broschart, Stephen; Wallace, Mark; Olson, Corwin; Cangahuala, L. Alberto

    2011-01-01

    Spacecraft landings on small bodies (asteroids and comets) can require target accuracies too stringent to be met using ground-based navigation alone, especially if specific landing site requirements must be met for safety or to meet science goals. In-situ optical observations coupled with onboard navigation processing can meet the tighter accuracy requirements to enable such missions. Recent developments in deep space navigation capability include a self-contained autonomous navigation system (used in flight on three missions) and a landmark tracking system (used experimentally on the Japanese Hayabusa mission). The merging of these two technologies forms a methodology to perform autonomous onboard navigation around small bodies. This paper presents an overview of these systems, as well as the results from Monte Carlo studies to quantify the achievable landing accuracies by using these methods. Sensitivity of the results to variations in spacecraft maneuver execution error, attitude control accuracy and unmodeled forces is examined. Cases for two bodies, a small asteroid and a mid-size comet, are presented.

  1. [Construction of chemical information database based on optical structure recognition technique].

    PubMed

    Lv, C Y; Li, M N; Zhang, L R; Liu, Z M

    2018-04-18

    articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD was built in accordance with this protocol, containing 3 262 molecules and 19 821 records. This data aggregation protocol greatly improves the accuracy, comprehensiveness and efficiency of chemical information database construction from original documents. The structured chemical information database can facilitate access to medical intelligence and accelerate the transformation of scientific research achievements.

  2. The achievement of good chemical status: an impossible mission for local water managers?

    NASA Astrophysics Data System (ADS)

    La Jeunesse, Isabelle; Jadas-Hécart, Alain; Landry, David

    2017-04-01

    The European Water Framework Directive (2000) required EU Member States to achieve good ecological and chemical status in their surface waters by 2015. For pesticides, this means ensuring that concentrations in rivers do not exceed 0.1 µg/L per molecule and 0.5 µg/L for the sum of the concentrations of the different molecules found. At the national scale, the EcoPhyto plan (2008) aimed to reduce pesticide use by 50% within 10 years. This plan has been revised and postponed to 2025, as observed pesticide use varies between years and concentrations in rivers did not decrease as expected. Although vineyards cover a small percentage of agricultural land, they account for 20% of national pesticide use. The presence of pesticides in rivers surrounding wine-growing territories is therefore a current environmental concern, and recovering water quality requires local action programs to reduce pesticide contamination in rivers. The Layon catchment is 13% vineyard and is therefore subject to an action program led by the local water committee, the SAGE Layon-Aubance-Louet, whose goal is to reduce pesticide concentrations to 1 µg/L by 2018 and 0.5 µg/L by 2027. In this context, one of the actions of the SAGE, with the assistance of the University of Angers, addresses the study of peaks in pesticide concentrations during runoff events in a small catchment covered by vineyards. Between 2009 and 2016, one of the two farmers converted to organic farming, with consequent decreases in pesticide inputs to the case study site, which thus complied with the EcoPhyto objectives. Results demonstrate, first, that the intensity of pesticide peaks in runoff waters is related to the date of application, with concentrations decreasing over time after treatment, and, second, a relation between peaks of SPM and pesticides. Transfer of pesticides in this catchment is strongly linked to runoff. Thus, even if the increase of grass surface within the vineyard improves the soil

  3. Efficient first-principles prediction of solid stability: Towards chemical accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Yubo; Kitchaev, Daniil A.; Yang, Julia; Chen, Tina; Dacek, Stephen T.; Sarmiento-Pérez, Rafael A.; Marques, Miguel A. L.; Peng, Haowei; Ceder, Gerbrand; Perdew, John P.; Sun, Jianwei

    2018-03-01

    The question of material stability is of fundamental importance to any analysis of system properties in condensed matter physics and materials science. The ability to evaluate chemical stability, i.e., whether a stoichiometry will persist in some chemical environment, and structure selection, i.e. what crystal structure a stoichiometry will adopt, is critical to the prediction of materials synthesis, reactivity and properties. Here, we demonstrate that density functional theory, with the recently developed strongly constrained and appropriately normed (SCAN) functional, has advanced to a point where both facets of the stability problem can be reliably and efficiently predicted for main group compounds, while transition metal compounds are improved but remain a challenge. SCAN therefore offers a robust model for a significant portion of the periodic table, presenting an opportunity for the development of novel materials and the study of fine phase transformations even in largely unexplored systems with little to no experimental data.

  4. Efficient first-principles prediction of solid stability: Towards chemical accuracy

    DOE PAGES

    Zhang, Yubo; Kitchaev, Daniil A.; Yang, Julia; ...

    2018-03-09

    The question of material stability is of fundamental importance to any analysis of system properties in condensed matter physics and materials science. The ability to evaluate chemical stability, i.e., whether a stoichiometry will persist in some chemical environment, and structure selection, i.e. what crystal structure a stoichiometry will adopt, is critical to the prediction of materials synthesis, reactivity and properties. In this paper, we demonstrate that density functional theory, with the recently developed strongly constrained and appropriately normed (SCAN) functional, has advanced to a point where both facets of the stability problem can be reliably and efficiently predicted for main group compounds, while transition metal compounds are improved but remain a challenge. SCAN therefore offers a robust model for a significant portion of the periodic table, presenting an opportunity for the development of novel materials and the study of fine phase transformations even in largely unexplored systems with little to no experimental data.

  5. Efficient first-principles prediction of solid stability: Towards chemical accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yubo; Kitchaev, Daniil A.; Yang, Julia

    The question of material stability is of fundamental importance to any analysis of system properties in condensed matter physics and materials science. The ability to evaluate chemical stability, i.e., whether a stoichiometry will persist in some chemical environment, and structure selection, i.e. what crystal structure a stoichiometry will adopt, is critical to the prediction of materials synthesis, reactivity and properties. In this paper, we demonstrate that density functional theory, with the recently developed strongly constrained and appropriately normed (SCAN) functional, has advanced to a point where both facets of the stability problem can be reliably and efficiently predicted for main group compounds, while transition metal compounds are improved but remain a challenge. SCAN therefore offers a robust model for a significant portion of the periodic table, presenting an opportunity for the development of novel materials and the study of fine phase transformations even in largely unexplored systems with little to no experimental data.

  6. Accuracy requirements and uncertainties in radiotherapy: a report of the International Atomic Energy Agency.

    PubMed

    van der Merwe, Debbie; Van Dyk, Jacob; Healy, Brendan; Zubizarreta, Eduardo; Izewska, Joanna; Mijnheer, Ben; Meghzifene, Ahmed

    2017-01-01

    Radiotherapy technology continues to advance and the expectation of improved outcomes requires greater accuracy in various radiotherapy steps. Different factors affect the overall accuracy of dose delivery. Institutional comprehensive quality assurance (QA) programs should ensure that uncertainties are maintained at acceptable levels. The International Atomic Energy Agency has recently developed a report summarizing the accuracy achievable and the suggested action levels, for each step in the radiotherapy process. Overview of the report: The report seeks to promote awareness and encourage quantification of uncertainties in order to promote safer and more effective patient treatments. The radiotherapy process and the radiobiological and clinical frameworks that define the need for accuracy are depicted. Factors that influence uncertainty are described for a range of techniques, technologies and systems. Methodologies for determining and combining uncertainties are presented, and strategies for reducing uncertainties through QA programs are suggested. The role of quality audits in providing international benchmarking of achievable accuracy and realistic action levels is also discussed. The report concludes with nine general recommendations: (1) Radiotherapy should be applied as accurately as reasonably achievable, technical and biological factors being taken into account. (2) For consistency in prescribing, reporting and recording, recommendations of the International Commission on Radiation Units and Measurements should be implemented. (3) Each institution should determine uncertainties for their treatment procedures. Sample data are tabulated for typical clinical scenarios with estimates of the levels of accuracy that are practically achievable and suggested action levels. (4) Independent dosimetry audits should be performed regularly. (5) Comprehensive quality assurance programs should be in place. (6) Professional staff should be appropriately

  7. High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.

    PubMed

    Song, Shiyu; Chandraker, Manmohan; Guest, Clark C

    2016-04-01

    We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.

  8. Achieving sub-pixel geolocation accuracy in support of MODIS land science

    USGS Publications Warehouse

    Wolfe, R.E.; Nishihama, M.; Fleig, A.J.; Kuyper, J.A.; Roy, David P.; Storey, James C.; Patt, F.S.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands—29 with 1 km, five with 500 m, and two with 250 m nadir pixel dimensions. The Terra satellite has on-board exterior orientation (position and attitude) measurement systems designed to enable geolocation of MODIS data to approximately 150 m (1σ) at nadir. A global network of ground control points is being used to determine biases and trends in the sensor orientation. Biases have been removed by updating models of the spacecraft and instrument orientation in the MODIS geolocation software several times since launch and have improved the MODIS geolocation to approximately 50 m (1σ) at nadir. This paper overviews the geolocation approach, summarizes the first year of geolocation analysis, and overviews future work. The approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications.

  9. High-accuracy user identification using EEG biometrics.

    PubMed

    Koike-Akino, Toshiaki; Mahajan, Ruhi; Marks, Tim K; Ye Wang; Watanabe, Shinji; Tuzel, Oncel; Orlik, Philip

    2016-08-01

    We analyze brain waves acquired through a consumer-grade EEG device to investigate its capabilities for user identification and authentication. First, we show the statistical significance of the P300 component in event-related potential (ERP) data from 14-channel EEGs across 25 subjects. We then apply a variety of machine learning techniques, comparing the user identification performance of various combinations of a dimensionality reduction technique followed by a classification algorithm. Experimental results show that an identification accuracy of 72% can be achieved using only a single 800 ms ERP epoch. In addition, we demonstrate that the user identification accuracy can be significantly improved to more than 96.7% by joint classification of multiple epochs.
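
    The abstract does not state how the per-epoch classifier outputs are fused into a single decision; one common scheme, sketched below under that assumption, is to average per-epoch class probabilities across epochs before selecting the most likely subject (array shapes and names are illustrative).

      # Hedged sketch: fuse several ERP epochs by averaging per-epoch
      # classifier probabilities, then pick the most likely subject.
      import numpy as np

      def joint_identify(epoch_probs):
          """epoch_probs: (n_epochs, n_subjects) per-epoch probabilities."""
          mean_probs = np.asarray(epoch_probs).mean(axis=0)  # fuse epochs
          return int(np.argmax(mean_probs))                  # predicted subject

      # e.g. three epochs scored over 25 candidate subjects
      probs = np.random.dirichlet(np.ones(25), size=3)
      print("identified subject:", joint_identify(probs))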

  10. You are so beautiful... to me: seeing beyond biases and achieving accuracy in romantic relationships.

    PubMed

    Solomon, Brittany C; Vazire, Simine

    2014-09-01

    Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically).

  11. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
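
    For reference, the accuracy measures named above follow directly from the confusion matrix; the sketch below computes overall, user's and producer's accuracy and the kappa coefficient, assuming reference classes in rows and mapped classes in columns, with purely illustrative counts.

      # Thematic-accuracy measures from a confusion matrix (illustrative counts).
      import numpy as np

      cm = np.array([[50,  2,  1],   # rows: reference classes
                     [ 3, 40,  5],   # columns: mapped classes
                     [ 2,  4, 45]])

      n = cm.sum()
      overall = np.trace(cm) / n
      producers = np.diag(cm) / cm.sum(axis=1)   # correct / reference-class totals
      users     = np.diag(cm) / cm.sum(axis=0)   # correct / mapped-class totals
      p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
      kappa = (overall - p_e) / (1 - p_e)
      print(overall, users, producers, kappa)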

  12. Accuracy increase of self-compensator

    NASA Astrophysics Data System (ADS)

    Zhambalova, S. Ts; Vinogradova, A. A.

    2018-03-01

    In this paper, the authors consider a self-compensation system and a method for increasing its accuracy without violating the conditions imposed by the information theory of measuring devices. The result can be achieved by pulse control of the tracking system in the dead zone (the zone of the proportional section of the amplifier's characteristic). Pulse control makes it possible to increase the control power even though the input signal of the amplifier is infinitesimal; to do this, the authors use a conversion scheme for the input quantity. It is also possible to reduce the dead band, but the system then becomes unstable, and correcting circuits that increase the amount of information received from the instrument complicate the system and, by dramatically reducing the feedback coefficient, reduce its speed. In this way, without compromising the measurement condition, the authors increase the accuracy of the self-compensation system. The implementation technique allows the power of the input signal to be increased by many orders of magnitude.

  13. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  14. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed

    Bratholm, Lars A; Jensen, Jan H

    2017-03-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force field geometry optimized X-ray structures as starting points: simulated annealing of the starting structure and constant temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1-0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid-specific offset is included, the ProCS15-predicted chemical shifts have RMSD values relative to experiments that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large structural
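
    A minimal sketch of the kind of Metropolis-style acceptance step implied by relating predicted shieldings to experimental shifts probabilistically is given below; the real ProCS15/MCMC energy model is considerably more elaborate, and the helper names (predict_shifts, sigma) are hypothetical placeholders.

      # Hedged sketch: accept or reject a proposed structure based on a
      # Gaussian agreement term between predicted and experimental shifts.
      import numpy as np

      def neg_log_likelihood(pred, exp, sigma=1.0):
          return 0.5 * np.sum(((np.asarray(pred) - np.asarray(exp)) / sigma) ** 2)

      def metropolis_step(current, proposal, exp_shifts, predict_shifts, temperature=1.0):
          e_curr = neg_log_likelihood(predict_shifts(current), exp_shifts)
          e_prop = neg_log_likelihood(predict_shifts(proposal), exp_shifts)
          accept = e_prop <= e_curr or np.random.rand() < np.exp(-(e_prop - e_curr) / temperature)
          return proposal if accept else current   # refined structure for the next step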

  15. Accuracy versus transparency in pharmacoeconomic modelling: finding the right balance.

    PubMed

    Eddy, David M

    2006-01-01

    As modellers push to make their models more accurate, the ability of others to understand the models can decrease, causing the models to lose transparency. When this type of conflict between accuracy and transparency occurs, the question arises, "Where do we want to operate on that spectrum?" This paper argues that in such cases we should give absolute priority to accuracy: push for whatever degree of accuracy is needed to answer the question being asked, try to maximise transparency within that constraint, and find other ways to replace what we wanted to get from transparency. There are several reasons. The fundamental purpose of a model is to help us get the right answer to a question and, by any measure, the expected value of a model is proportional to its accuracy. Ironically, we use transparency as a way to judge accuracy. But transparency is not a very powerful or useful way to do this. It rarely enables us to actually replicate the model's results and, even if we could, replication would not tell us the model's accuracy. Transparency rarely provides even face validity; from the content expert's perspective, the simplifications that modellers have to make usually raise more questions than they answer. Transparency does enable modellers to alert users to weaknesses in their models, but that can be achieved simply by listing the model's limitations and does not get us any closer to real accuracy. Sensitivity analysis tests the importance of uncertainty about the variables in a model, but does not tell us about the variables that were omitted or the structure of the model. What people really want to know is whether a model actually works. Transparency by itself can't answer this; only demonstrations that the model accurately calculates or predicts real events can. Rigorous simulations of clinical trials are a good place to start. This is the type of empirical validation we need to provide if the potential of mathematical models in pharmacoeconomics is to be

  16. Accuracy Assessment of Professional Grade Unmanned Systems for High Precision Airborne Mapping

    NASA Astrophysics Data System (ADS)

    Mostafa, M. M. R.

    2017-08-01

    Recently, sophisticated multi-sensor systems have been implemented on-board modern Unmanned Aerial Systems. This allows for producing a variety of mapping products for different mapping applications. The resulting accuracies match those of traditional, well-engineered manned systems. This paper presents the results of a geometric accuracy assessment project for unmanned systems equipped with multi-sensor systems for direct georeferencing purposes. There are a number of parameters that either individually or collectively affect the quality and accuracy of a final airborne mapping product. This paper focuses on identifying and explaining these parameters and their mutual interaction and correlation. An accuracy assessment of the final ground-object positioning is presented through eight real-world flight missions flown in Quebec, Canada. The achievable precision of map production is addressed in some detail.

  17. Quantifying chemical uncertainties in simulations of the ISM

    NASA Astrophysics Data System (ADS)

    Glover, Simon

    2018-06-01

    The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically-structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data is most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.
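
    One common way to propagate such rate-coefficient uncertainties, sketched below with purely illustrative rate values and uncertainty factors, is to sample each coefficient from a log-normal distribution around its recommended value and examine the resulting spread in a derived quantity.

      # Hedged sketch: Monte Carlo propagation of rate-coefficient uncertainties.
      import numpy as np

      rng = np.random.default_rng(0)
      k_nominal = {"R1": 1.0e-9, "R2": 3.0e-10}   # illustrative rate coefficients (cm^3 s^-1)
      f_uncert  = {"R1": 1.3,    "R2": 2.0}       # illustrative uncertainty factors

      def sample_rates():
          # k = k0 * F**N(0,1) gives a log-normal spread of roughly a factor F
          return {r: k * f_uncert[r] ** rng.normal() for r, k in k_nominal.items()}

      samples = []
      for _ in range(1000):
          k = sample_rates()
          samples.append(k["R1"] / k["R2"])       # spread in a derived quantity

      print("median:", np.median(samples), "68% interval:", np.percentile(samples, [16, 84]))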

  18. Accuracy comparison among different machine learning techniques for detecting malicious codes

    NASA Astrophysics Data System (ADS)

    Narang, Komal

    2016-03-01

    In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracy of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code has been compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving 95% sensitivity and 82.8% specificity.
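
    The three-classifier comparison described above can be reproduced in outline with scikit-learn, as in the hedged sketch below; the synthetic feature matrix stands in for the opcode features used in the paper, and all classifier settings are illustrative rather than the authors' configuration.

      # Hedged sketch: cross-validated comparison of the three classifiers.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.svm import SVC
      from sklearn.neural_network import MLPClassifier

      X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

      for name, clf in [("Naive Bayes", GaussianNB()),
                        ("SVM", SVC()),
                        ("Neural network", MLPClassifier(max_iter=500, random_state=0))]:
          scores = cross_val_score(clf, X, y, cv=5)
          print(f"{name}: mean accuracy {scores.mean():.3f}")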

  19. Prediction of the true digestible amino acid contents from the chemical composition of sorghum grain for poultry.

    PubMed

    Ebadi, M R; Sedghi, M; Golian, A; Ahmadi, H

    2011-10-01

    Accurate knowledge of true digestible amino acid (TDAA) contents of feedstuffs is necessary to accurately formulate poultry diets for profitable production. Several experimental approaches that are highly expensive and time consuming have been used to determine available amino acids. Prediction of the nutritive value of a feed ingredient from its chemical composition via regression methodology has been attempted for many years. The artificial neural network (ANN) model is a powerful method that may describe the relationship between digestible amino acid contents and chemical composition. Therefore, multiple linear regressions (MLR) and ANN models were developed for predicting the TDAA contents of sorghum grain based on chemical composition. A precision-fed assay trial using cecectomized roosters was performed to determine the TDAA contents in 48 sorghum samples from 12 sorghum varieties differing in chemical composition. The input variables for both MLR and ANN models were CP, ash, crude fiber, ether extract, and total phenols whereas the output variable was each individual TDAA for every sample. The results of this study revealed that it is possible to satisfactorily estimate the TDAA of sorghum grain through its chemical composition. The chemical composition of sorghum grain seems to highly influence the TDAA contents when considering components such as CP, crude fiber, ether extract, ash and total phenols. It is also possible to estimate the TDAA contents through multiple regression equations with reasonable accuracy depending on composition. However, a more satisfactory prediction may be achieved via ANN for all amino acids. The R(2) values for the ANN model corresponding to testing and training parameters showed a higher accuracy of prediction than equations established by the MLR method. In addition, the current data confirmed that chemical composition, often considered in total amino acid prediction, could also be a useful predictor of true digestible values
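
    The sketch below illustrates, on synthetic placeholder data, how a multiple linear regression and a small neural network can both be fit to the five chemical-composition inputs named above and compared by R2; it is not the authors' model specification, and the coefficients and network size are arbitrary.

      # Hedged sketch: MLR versus a small ANN on synthetic composition data.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(0)
      # columns: CP, ash, crude fiber, ether extract, total phenols
      X = rng.normal(size=(48, 5))
      y = X @ np.array([0.6, -0.1, -0.3, 0.2, -0.4]) + rng.normal(scale=0.1, size=48)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      mlr = LinearRegression().fit(X_tr, y_tr)
      ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)

      print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
      print("ANN R2:", r2_score(y_te, ann.predict(X_te)))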

  20. Toward chemical accuracy in the description of ion-water interactions through many-body representations. Alkali-water dimer potential energy surfaces

    NASA Astrophysics Data System (ADS)

    Riera, Marc; Mardirossian, Narbe; Bajaj, Pushp; Götz, Andreas W.; Paesani, Francesco

    2017-10-01

    This study presents the extension of the MB-nrg (Many-Body energy) theoretical/computational framework of transferable potential energy functions (PEFs) for molecular simulations of alkali metal ion-water systems. The MB-nrg PEFs are built upon the many-body expansion of the total energy and include the explicit treatment of one-body, two-body, and three-body interactions, with all higher-order contributions described by classical induction. This study focuses on the MB-nrg two-body terms describing the full-dimensional potential energy surfaces of the M+(H2O) dimers, where M+ = Li+, Na+, K+, Rb+, and Cs+. The MB-nrg PEFs are derived entirely from "first principles" calculations carried out at the explicitly correlated coupled-cluster level including single, double, and perturbative triple excitations [CCSD(T)-F12b] for Li+ and Na+ and at the CCSD(T) level for K+, Rb+, and Cs+. The accuracy of the MB-nrg PEFs is systematically assessed through an extensive analysis of interaction energies, structures, and harmonic frequencies for all five M+(H2O) dimers. In all cases, the MB-nrg PEFs are shown to be superior to both polarizable force fields and ab initio models based on density functional theory. As previously demonstrated for halide-water dimers, the MB-nrg PEFs achieve higher accuracy by correctly describing short-range quantum-mechanical effects associated with electron density overlap as well as long-range electrostatic many-body interactions.
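
    For reference, the many-body expansion on which the MB-nrg PEFs are built can be written as follows (in MB-nrg the explicit terms stop at the three-body level, with higher-order contributions described by classical induction):

      E(1,\dots,N) = \sum_{i} E^{\mathrm{1B}}(i)
                   + \sum_{i<j} \Delta E^{\mathrm{2B}}(i,j)
                   + \sum_{i<j<k} \Delta E^{\mathrm{3B}}(i,j,k) + \cdots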

  1. A note on the accuracy of spectral method applied to nonlinear conservation laws

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Wong, Peter S.

    1994-01-01

    Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.

  2. Application of mass spectrometry in the characterization of chemicals in coal-derived liquids.

    PubMed

    Liu, Fang-Jing; Fan, Maohong; Wei, Xian-Yong; Zong, Zhi-Min

    2017-07-01

    Coal-derived liquids (CDLs) are primarily generated from pyrolysis, carbonization, gasification, direct liquefaction, low-temperature extraction, thermal dissolution, and mild oxidation. CDLs are important feedstocks for producing value-added chemicals and clean liquid fuels as well as high performance carbon materials. Accordingly, the compositional characterization of chemicals in CDLs at the molecular level with advanced analytical techniques is significant for the efficient utilization of CDLs. Although reviews of these advances have rarely been reported, great progress has been achieved in this area by using gas chromatography/mass spectrometry (GC/MS), two-dimensional GC-time of flight mass spectrometry (GC × GC-TOFMS), and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). This review focuses on characterizing hydrocarbon, oxygen-containing, nitrogen-containing, sulfur-containing, and halogen-containing chemicals in various CDLs with these three mass spectrometry techniques. Small molecular (< 500 u), volatile and semi-volatile, and less polar chemicals in CDLs have been identified with GC/MS and GC × GC-TOFMS. Equipped with two-dimensional GC separation, GC × GC-TOFMS can achieve a clear chromatographic separation of complex chemicals in CDLs without prior fractionation, and thus can overcome the disadvantages of co-elution and serious peak overlap in GC/MS analysis, providing much more compositional information. With ultrahigh resolving power and mass accuracy, FT-ICR MS reveals a huge number of compositionally distinct compounds assigned to various chemical classes in CDLs. It shows excellent performance in resolving and characterizing higher-molecular-mass, less volatile, and polar chemicals that cannot be detected by GC/MS and GC × GC-TOFMS. The application of GC × GC-TOFMS and FT-ICR MS to chemical characterization of CDLs is not as prevalent as that of petroleum and largely remains to be developed in many respects

  3. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a…

  4. Muscular and Aerobic Fitness, Working Memory, and Academic Achievement in Children.

    PubMed

    Kao, Shih-Chun; Westfall, Daniel R; Parks, Andrew C; Pontifex, Matthew B; Hillman, Charles H

    2017-03-01

    This study investigated the relationship between aerobic and muscular fitness with working memory and academic achievement in preadolescent children. Seventy-nine 9- to 11-yr-old children completed an aerobic fitness assessment using a graded exercise test; a muscular fitness assessment consisting of upper body, lower body, and core exercises; a serial n-back task to assess working memory; and an academic achievement test of mathematics and reading. Hierarchical regression analyses indicated that after controlling for demographic variables (age, sex, grade, IQ, socioeconomic status), aerobic fitness was associated with greater response accuracy and d' in the 2-back condition and increased mathematic performance in algebraic functions. Muscular fitness was associated with increased response accuracy and d', and longer reaction time in the 2-back condition. Further, the associations of muscular fitness with response accuracy and d' in the 2-back condition were independent of aerobic fitness. The current findings suggest the differential relationships between the aerobic and the muscular aspects of physical fitness with working memory and academic achievement. With the majority of research focusing on childhood health benefits of aerobic fitness, this study suggests the importance of muscular fitness to cognitive health during preadolescence.

  5. Electron Probe MicroAnalysis (EPMA) Standards. Issues Related to Measurement and Accuracy Evaluation in EPMA

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul

    2003-01-01

    Electron-probe microanalysis standards and issues related to measurement and accuracy of microanalysis will be discussed. Critical evaluation of standards based on homogeneity and comparison with wet-chemical analysis will be made. Measurement problems such as spectrometer dead-time will be discussed. Analytical accuracy issues will be evaluated for systems by alpha-factor analysis and comparison with experimental k-ratio databases.

  6. A two-dimensional numerical simulation of a supersonic, chemically reacting mixing layer

    NASA Technical Reports Server (NTRS)

    Drummond, J. Philip

    1988-01-01

    Research has been undertaken to achieve an improved understanding of physical phenomena present when a supersonic flow undergoes chemical reaction. A detailed understanding of supersonic reacting flows is necessary to successfully develop advanced propulsion systems now planned for use late in this century and beyond. In order to explore such flows, a study was begun to create appropriate physical models for describing supersonic combustion, and to develop accurate and efficient numerical techniques for solving the governing equations that result from these models. From this work, two computer programs were written to study reacting flows. Both programs were constructed to consider the multicomponent diffusion and convection of important chemical species, the finite rate reaction of these species, and the resulting interaction of the fluid mechanics and the chemistry. The first program employed a finite difference scheme for integrating the governing equations, whereas the second used a hybrid Chebyshev pseudospectral technique for improved accuracy.

  7. Application of kernel functions for accurate similarity search in large chemical databases.

    PubMed

    Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H

    2010-04-29

    Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening among others. It is widely believed that structure based methods provide an efficient way to do the query. Recently various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions cannot be applied to large chemical compound databases due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel function and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure similarity of graph represented chemicals. In our method, we utilize a hash table to support new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure are scalable to large chemical databases with smaller indexing size, and faster query processing time as compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing for large chemical databases is challenging since we need to balance running time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. Experimental study validates the utility of G-hash in chemical databases.

  8. Parts-Per-Billion Mass Measurement Accuracy Achieved through the Combination of Multiple Linear Regression and Automatic Gain Control in a Fourier Transform Ion Cyclotron Resonance Mass Spectrometer

    PubMed Central

    Williams, D. Keith; Muddiman, David C.

    2008-01-01

    Fourier transform ion cyclotron resonance mass spectrometry has the ability to achieve unprecedented mass measurement accuracy (MMA); MMA is one of the most significant attributes of mass spectrometric measurements as it affords extraordinary molecular specificity. However, due to space-charge effects, the achievable MMA significantly depends on the total number of ions trapped in the ICR cell for a particular measurement. Even through the use of automatic gain control (AGC), the total ion population is not constant between spectra. Multiple linear regression calibration in conjunction with AGC is utilized in these experiments to formally account for the differences in total ion population in the ICR cell between the external calibration spectra and experimental spectra. This ability allows for the extension of dynamic range of the instrument while allowing mean MMA values to remain less than 1 ppm. In addition, multiple linear regression calibration is used to account for both differences in total ion population in the ICR cell as well as relative ion abundance of a given species, which also affords mean MMA values at the parts-per-billion level. PMID:17539605
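
    A hedged sketch of a multiple-linear-regression mass calibration that carries a total-ion-population term is given below; the specific functional form m/z = A/f + B/f^2 + C*TIC/f^2 is an illustrative assumption, not the exact set of terms used by the authors.

      # Hedged sketch: least-squares mass calibration with a space-charge term.
      import numpy as np

      def design_matrix(freqs, tic):
          freqs = np.asarray(freqs, dtype=float)
          tic = np.asarray(tic, dtype=float)
          return np.column_stack([1.0 / freqs, 1.0 / freqs**2, tic / freqs**2])

      def fit_calibration(freqs, tic, known_mz):
          # fit A, B, C from calibrant ions with known m/z
          coeffs, *_ = np.linalg.lstsq(design_matrix(freqs, tic),
                                       np.asarray(known_mz, dtype=float), rcond=None)
          return coeffs

      def apply_calibration(coeffs, freqs, tic):
          return design_matrix(freqs, tic) @ coeffs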

  9. Chemical element transport in stellar evolution models

    PubMed Central

    Cassisi, Santi

    2017-01-01

    Stellar evolution computations provide the foundation of several methods applied to study the evolutionary properties of stars and stellar populations, both Galactic and extragalactic. The accuracy of the results obtained with these techniques is linked to the accuracy of the stellar models, and in this context the correct treatment of the transport of chemical elements is crucial. Unfortunately, in many respects calculations of the evolution of the chemical abundance profiles in stars are still affected by sometimes sizable uncertainties. Here, we review the various mechanisms of element transport included in the current generation of stellar evolution calculations, how they are implemented, the free parameters and uncertainties involved, the impact on the models and the observational constraints. PMID:28878972

  10. Chemical element transport in stellar evolution models.

    PubMed

    Salaris, Maurizio; Cassisi, Santi

    2017-08-01

    Stellar evolution computations provide the foundation of several methods applied to study the evolutionary properties of stars and stellar populations, both Galactic and extragalactic. The accuracy of the results obtained with these techniques is linked to the accuracy of the stellar models, and in this context the correct treatment of the transport of chemical elements is crucial. Unfortunately, in many respects calculations of the evolution of the chemical abundance profiles in stars are still affected by sometimes sizable uncertainties. Here, we review the various mechanisms of element transport included in the current generation of stellar evolution calculations, how they are implemented, the free parameters and uncertainties involved, the impact on the models and the observational constraints.

  11. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  12. Assessment of Weighted Quantile Sum Regression for Modeling Chemical Mixtures and Cancer Risk

    PubMed Central

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case–control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome. PMID:26005323
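
    A minimal sketch of the WQS index construction described above is shown below: each exposure is scored into quantiles and the scores are combined with non-negative weights constrained to sum to one; the weight estimation itself (in WQS regression, a constrained fit over bootstrap samples) is not implemented here, and the example data and weights are illustrative.

      # Hedged sketch: building a weighted quantile sum index from exposures.
      import numpy as np
      import pandas as pd

      def quantile_scores(exposures, q=4):
          # convert each concentration into an integer quantile score 0..q-1
          return exposures.apply(lambda col: pd.qcut(col, q, labels=False, duplicates="drop"))

      def wqs_index(exposures, weights, q=4):
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()                    # enforce the sum-to-one constraint
          return quantile_scores(exposures, q) @ w

      rng = np.random.default_rng(0)
      chem = pd.DataFrame(rng.lognormal(size=(200, 3)), columns=["chem_a", "chem_b", "chem_c"])
      print(wqs_index(chem, weights=[0.5, 0.3, 0.2]).head())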

  13. Assessment of weighted quantile sum regression for modeling chemical mixtures and cancer risk.

    PubMed

    Czarnota, Jenna; Gennings, Chris; Wheeler, David C

    2015-01-01

    In evaluation of cancer risk related to environmental chemical exposures, the effect of many chemicals on disease is ultimately of interest. However, because of potentially strong correlations among chemicals that occur together, traditional regression methods suffer from collinearity effects, including regression coefficient sign reversal and variance inflation. In addition, penalized regression methods designed to remediate collinearity may have limitations in selecting the truly bad actors among many correlated components. The recently proposed method of weighted quantile sum (WQS) regression attempts to overcome these problems by estimating a body burden index, which identifies important chemicals in a mixture of correlated environmental chemicals. Our focus was on assessing through simulation studies the accuracy of WQS regression in detecting subsets of chemicals associated with health outcomes (binary and continuous) in site-specific analyses and in non-site-specific analyses. We also evaluated the performance of the penalized regression methods of lasso, adaptive lasso, and elastic net in correctly classifying chemicals as bad actors or unrelated to the outcome. We based the simulation study on data from the National Cancer Institute Surveillance Epidemiology and End Results Program (NCI-SEER) case-control study of non-Hodgkin lymphoma (NHL) to achieve realistic exposure situations. Our results showed that WQS regression had good sensitivity and specificity across a variety of conditions considered in this study. The shrinkage methods had a tendency to incorrectly identify a large number of components, especially in the case of strong association with the outcome.

  14. Statistical algorithms improve accuracy of gene fusion detection

    PubMed Central

    Hsieh, Gillian; Bierman, Rob; Szabo, Linda; Lee, Alex Gia; Freeman, Donald E.; Watson, Nathaniel; Sweet-Cordero, E. Alejandro

    2017-01-01

    Gene fusions are known to play critical roles in tumor pathogenesis. Yet, sensitive and specific algorithms to detect gene fusions in cancer do not currently exist. In this paper, we present a new statistical algorithm, MACHETE (Mismatched Alignment CHimEra Tracking Engine), which achieves highly sensitive and specific detection of gene fusions from RNA-Seq data, including the highest Positive Predictive Value (PPV) compared to the current state-of-the-art, as assessed in simulated data. We show that the best performing published algorithms either find large numbers of fusions in negative control data or suffer from low sensitivity detecting known driving fusions in gold standard settings, such as EWSR1-FLI1. As proof of principle that MACHETE discovers novel gene fusions with high accuracy in vivo, we mined public data to discover and subsequently PCR validate novel gene fusions missed by other algorithms in the ovarian cancer cell line OVCAR3. These results highlight the gains in accuracy achieved by introducing statistical models into fusion detection, and pave the way for unbiased discovery of potentially driving and druggable gene fusions in primary tumors. PMID:28541529

  15. Martial arts striking hand peak acceleration, accuracy and consistency.

    PubMed

    Neto, Osmar Pinto; Marzullo, Ana Carolina De Miranda; Bolander, Richard P; Bir, Cynthia A

    2013-01-01

    The goal of this paper was to investigate the possible trade-off between peak hand acceleration and accuracy and consistency of hand strikes performed by martial artists of different training experiences. Ten male martial artists with training experience ranging from one to nine years volunteered to participate in the experiment. Each participant performed 12 maximum effort goal-directed strikes. Hand acceleration during the strikes was obtained using a tri-axial accelerometer block. A pressure sensor matrix was used to determine the accuracy and consistency of the strikes. Accuracy was estimated by the radial distance between the centroid of each subject's 12 strikes and the target, whereas consistency was estimated by the square root of the 12 strikes mean squared distance from their centroid. We found that training experience was significantly correlated to hand peak acceleration prior to impact (r(2)=0.456, p =0.032) and accuracy (r(2)=0. 621, p=0.012). These correlations suggest that more experienced participants exhibited higher hand peak accelerations and at the same time were more accurate. Training experience, however, was not correlated to consistency (r(2)=0.085, p=0.413). Overall, our results suggest that martial arts training may lead practitioners to achieve higher striking hand accelerations with better accuracy and no change in striking consistency.
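
    The two outcome measures defined above translate directly into a short computation; the sketch below takes accuracy as the distance from the centroid of a subject's strikes to the target and consistency as the root-mean-square distance of the strikes from their own centroid, using illustrative coordinates.

      # Sketch of the accuracy and consistency measures from strike coordinates.
      import numpy as np

      def accuracy_and_consistency(strikes, target):
          strikes = np.asarray(strikes, dtype=float)   # shape (n_strikes, 2)
          centroid = strikes.mean(axis=0)
          accuracy = np.linalg.norm(centroid - np.asarray(target, dtype=float))
          consistency = np.sqrt(np.mean(np.sum((strikes - centroid) ** 2, axis=1)))
          return accuracy, consistency

      hits = [(1.0, 0.5), (0.8, 0.7), (1.2, 0.4)]      # illustrative strike positions
      print(accuracy_and_consistency(hits, target=(1.0, 0.6)))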

  16. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc  sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  17. Performance of search strategies to retrieve systematic reviews of diagnostic test accuracy from the Cochrane Library.

    PubMed

    Huang, Yuansheng; Yang, Zhirong; Wang, Jing; Zhuo, Lin; Li, Zhixia; Zhan, Siyan

    2016-05-06

    To compare the performance of search strategies for retrieving systematic reviews of diagnostic test accuracy from the Cochrane Library, the CDSR and DARE databases were searched for systematic reviews of diagnostic test accuracy published between 2008 and 2012 using nine search strategies. Each strategy consisted of one group, or a combination of groups, of search filters related to diagnostic test accuracy; four groups of diagnostic filters were used. The strategy combining all the filters was used as the reference to determine the sensitivity, precision, and sensitivity × precision product of the other eight strategies. The reference strategy retrieved 8029 records, of which 832 were eligible. The strategy composed only of MeSH terms about "accuracy measures" achieved the highest values in both precision (69.71%) and product (52.45%) with a moderate sensitivity (75.24%). The combination of MeSH terms and free-text words about "accuracy measures" contributed little to increasing the sensitivity. Strategies composed of filters about "diagnosis" had similar sensitivity but lower precision and product than those composed of filters about "accuracy measures". The MeSH term "exp 'diagnosis'" achieved the lowest precision (9.78%) and product (7.91%), while its hyponym retrieved only half the number of records at the expense of missing 53 target articles. Precision was negatively correlated with sensitivity among the nine strategies. Compared with the "diagnosis" filters, the "accuracy measures" filters achieved similar sensitivities but higher precision, and combining both types of filters markedly increased sensitivity; combining MeSH terms and free-text words for the same concept, however, did little to enhance it. This article is protected by copyright. All rights reserved.
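
    The three figures of merit are simple ratios taken against the reference strategy. A sketch, using hypothetical retrieval counts chosen to reproduce the reported values for the "accuracy measures" MeSH strategy (832 eligible reviews in the reference set):

      def strategy_metrics(retrieved, eligible_retrieved, eligible_total):
          sensitivity = eligible_retrieved / eligible_total   # share of all eligible reviews found
          precision = eligible_retrieved / retrieved          # share of retrieved records that are eligible
          return sensitivity, precision, sensitivity * precision

      sens, prec, product = strategy_metrics(retrieved=898, eligible_retrieved=626, eligible_total=832)
      print(f"sensitivity={sens:.2%}, precision={prec:.2%}, product={product:.2%}")
      # -> roughly 75.2%, 69.7%, 52.4%, matching the values reported above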

  18. An optical lattice clock with accuracy and stability at the 10(-18) level.

    PubMed

    Bloom, B J; Nicholson, T L; Williams, J R; Campbell, S L; Bishof, M; Zhang, X; Zhang, W; Bromley, S L; Ye, J

    2014-02-06

    Progress in atomic, optical and quantum science has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks, their accuracy has remained 16 times worse. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10(-18), which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard-stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units, the search for time variation of fundamental constants, clock-based geodesy and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering (such as spin squeezing) to advance measurement precision beyond the standard quantum limit.

  19. Effects of accuracy motivation and anchoring on metacomprehension judgment and accuracy.

    PubMed

    Zhao, Qin

    2012-01-01

    The current research investigates how accuracy motivation impacts anchoring and adjustment in metacomprehension judgment and how accuracy motivation and anchoring affect metacomprehension accuracy. Participants were randomly assigned to one of six conditions produced by the between-subjects factorial design involving accuracy motivation (incentive or no) and peer performance anchor (95%, 55%, or no). Two studies showed that accuracy motivation did not impact anchoring bias, but the adjustment-from-anchor process occurred. Accuracy incentive increased anchor-judgment gap for the 95% anchor but not for the 55% anchor, which induced less certainty about the direction of adjustment. The findings offer support to the integrative theory of anchoring. Additionally, the two studies revealed a "power struggle" between accuracy motivation and anchoring in influencing metacomprehension accuracy. Accuracy motivation could improve metacomprehension accuracy in spite of anchoring effect, but if anchoring effect is too strong, it could overpower the motivation effect. The implications of the findings were discussed.

  20. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
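
    The record does not spell out the global error measure, so the following is only one plausible reading of a "mean integrated root-mean-square error": the RMS of the relative errors over all species and temperature at each output time, averaged over the integration interval.

      import numpy as np

      def mean_integrated_rms_error(computed, reference):
          """computed, reference: arrays of shape (n_times, n_vars) holding species
          mole fractions and temperature at each output time of a test problem."""
          rel_err = (computed - reference) / reference
          rms_per_time = np.sqrt(np.mean(rel_err**2, axis=1))   # RMS over variables at each time
          return rms_per_time.mean()                            # average over the output times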

  1. Vibrationally averaged post Born-Oppenheimer isotopic dipole moment calculations approaching spectroscopic accuracy.

    PubMed

    Arapiraca, A F C; Jonsson, Dan; Mohallem, J R

    2011-12-28

    We report an upgrade of the Dalton code to include post Born-Oppenheimer nuclear mass corrections in the calculations of (ro-)vibrational averages of molecular properties. These corrections are necessary to achieve an accuracy of 10(-4) debye in the calculations of isotopic dipole moments. Calculations on the self-consistent field level present this accuracy, while numerical instabilities compromise correlated calculations. Applications to HD, ethane, and ethylene isotopologues are implemented, all of them approaching the experimental values.

  2. Investigating Pharmacological Similarity by Charting Chemical Space.

    PubMed

    Buonfiglio, Rosa; Engkvist, Ola; Várkonyi, Péter; Henz, Astrid; Vikeved, Elisabet; Backlund, Anders; Kogej, Thierry

    2015-11-23

    In this study, biologically relevant areas of the chemical space were analyzed using ChemGPS-NP. This application enables comparing groups of ligands within a multidimensional space based on principal components derived from physicochemical descriptors. Also, 3D visualization of the ChemGPS-NP global map can be used to conveniently evaluate bioactive compound similarity and visually distinguish between different types or groups of compounds. To further establish ChemGPS-NP as a method to accurately represent the chemical space, a comparison with structure-based fingerprints has been performed. Interesting complementarities between the two descriptions of molecules were observed. It has been shown that the accuracy of describing molecules with physicochemical descriptors, as in ChemGPS-NP, is similar to the accuracy of structural fingerprints in retrieving bioactive molecules. Lastly, pharmacological similarity of structurally diverse compounds has been investigated in ChemGPS-NP space. These results further strengthen the case for using ChemGPS-NP as a tool to explore and visualize chemical space.
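
    The mapping itself is, at heart, a principal component projection of a physicochemical descriptor matrix. A generic sketch (not the actual ChemGPS-NP implementation or descriptor set) of how such a chemical-space map can be built:

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA

      # Hypothetical descriptor matrix: one row per compound, one column per
      # physicochemical descriptor (size, lipophilicity, polarizability, ...).
      descriptors = np.random.rand(500, 35)

      scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(descriptors))
      # Each row of 'scores' places a compound in a 3D chemical-space map, in which
      # groups of ligands can be compared or visualized, as described above.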

  3. Constructing better classifier ensemble based on weighted accuracy and diversity measure.

    PubMed

    Zeng, Xiaodong; Wong, Derek F; Chao, Lidia S

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases.
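
    The abstract describes the final score as a weighted harmonic mean of accuracy and diversity. One plausible form (the exact weighting scheme is defined in the paper) is:

      def wad_score(accuracy, diversity, w_acc=0.5, w_div=0.5):
          """Weighted harmonic mean of ensemble accuracy and diversity, both in (0, 1].
          The two weights, assumed here to sum to 1, balance the relative emphasis."""
          return 1.0 / (w_acc / accuracy + w_div / diversity)

      print(wad_score(0.90, 0.40))            # equal emphasis on accuracy and diversity
      print(wad_score(0.90, 0.40, 0.7, 0.3))  # emphasize accuracy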

  4. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    PubMed Central

    Chao, Lidia S.

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of the classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis; that is, a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutual restraint factors; that is, an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms, genetic algorithm, and forward hill-climbing algorithm, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to others in most cases. PMID:24672402

  5. A New Three-Dimensional High-Accuracy Automatic Alignment System For Single-Mode Fibers

    NASA Astrophysics Data System (ADS)

    Yun-jiang, Rao; Shang-lian, Huang; Ping, Li; Yu-mei, Wen; Jun, Tang

    1990-02-01

    In order to achieve low-loss splices of single-mode fibers, a new three-dimensional high-accuracy automatic alignment system for single-mode fibers has been developed. It includes a new three-dimensional high-resolution microdisplacement servo stage driven by piezoelectric elements, a new high-accuracy measurement system for the misalignment error of the fiber core axis, and a dedicated single-chip microcomputer processing system. The experimental results show that an alignment accuracy of ±0.1 μm with a movable stroke of ±20 μm has been obtained. This new system offers more advantages than those previously reported.

  6. Extension of a hybrid particle-continuum method for a mixture of chemical species

    NASA Astrophysics Data System (ADS)

    Verhoff, Ashley M.; Boyd, Iain D.

    2012-11-01

    Due to the physical accuracy and numerical efficiency achieved by analyzing transitional, hypersonic flow fields with hybrid particle-continuum methods, this paper describes a Modular Particle-Continuum (MPC) method and its extension to include multiple chemical species. Considerations that are specific to a hybrid approach for simulating gas mixtures are addressed, including a discussion of the Chapman-Enskog velocity distribution function (VDF) for near-equilibrium flows, and consistent viscosity models for the individual CFD and DSMC modules of the MPC method. Representative results for a hypersonic blunt-body flow are then presented, where the flow field properties, surface properties, and computational performance are compared for simulations employing full CFD, full DSMC, and the MPC method.

  7. The influence of achievement goals on the constructive activity of low achievers during collaborative problem solving.

    PubMed

    Gabriele, Anthony J

    2007-03-01

    Previous research on small-group learning has found that level of constructive activity (solving or explaining how to solve problems using ideas stated or implied in the explanation provided by a partner) was a better predictor of post-test achievement than either a student's prior achievement or the quality of help received (Webb, Troper, & Fall, 1995). The purpose of this study was to extend this research by examining the influence of additional factors, in particular, achievement goals and comprehension monitoring, on low achieving students' constructive activity after receiving help from a high achieving peer. Thirty-two low achieving upper elementary students from an urban school district in the mid-west of the United States were paired with high achieving partners. Videotape data from a previously reported study on peer collaboration were transcribed and reanalyzed. In that study, dyads were randomly assigned instructions designed to induce either a learning or performance goal and were videotaped as they worked together to solve a set of mathematical word problems. The following day, students were individually post-tested on problems similar to the ones worked on in pairs. Consistent with previous research, low achieving students' level of constructive activity predicted post-test performance. In addition, constructive activity was found to mediate the relationship between achievement goals and learning. However, achievement goals were not related to low achievers constructive use of help. Instead, achievement goals were related to low achievers' relative accuracy in comprehension monitoring, which in turn was related to level of constructive activity. The meaning of these results for understanding the processes by which low achievers learn from peer help and implications for classroom practice are discussed.

  8. Spectroscopy of H3+ based on a new high-accuracy global potential energy surface.

    PubMed

    Polyansky, Oleg L; Alijah, Alexander; Zobov, Nikolai F; Mizus, Irina I; Ovsyannikov, Roman I; Tennyson, Jonathan; Lodi, Lorenzo; Szidarovszky, Tamás; Császár, Attila G

    2012-11-13

    The molecular ion H(3)(+) is the simplest polyatomic and poly-electronic molecular system, and its spectrum constitutes an important benchmark for which precise answers can be obtained ab initio from the equations of quantum mechanics. Significant progress in the computation of the ro-vibrational spectrum of H(3)(+) is discussed. A new, global potential energy surface (PES) based on ab initio points computed with an average accuracy of 0.01 cm(-1) relative to the non-relativistic limit has recently been constructed. An analytical representation of these points is provided, exhibiting a standard deviation of 0.097 cm(-1). Problems with earlier fits are discussed. The new PES is used for the computation of transition frequencies. Recently measured lines at visible wavelengths combined with previously determined infrared ro-vibrational data show that an accuracy of the order of 0.1 cm(-1) is achieved by these computations. In order to achieve this degree of accuracy, relativistic, adiabatic and non-adiabatic effects must be properly accounted for. The accuracy of these calculations facilitates the reassignment of some measured lines, further reducing the standard deviation between experiment and theory.

  9. Applying industrial symbiosis to chemical industry: A literature review

    NASA Astrophysics Data System (ADS)

    Cui, Hua; Liu, Changhao

    2017-08-01

    The chemical industry plays an important role in the development of the global economy and human society. However, the negative effects of chemical production cannot be ignored, as it often entails serious resource consumption and environmental pollution, so achieving sustainable development is essential for the chemical industry. Industrial symbiosis, one of the key topics in industrial ecology and the circular economy, has been identified as a creative path toward sustainability. Based on an extensive search of the literature linking industrial symbiosis with the chemical industry, this paper reviews three aspects: (1) the economic and environmental benefits achieved by the chemical industry through implementing industrial symbiosis, (2) chemical eco-industrial parks, and (3) safety issues in the chemical industry. An outlook is also provided. The paper concludes that: (1) the chemical industry can achieve both economic and environmental benefits by implementing industrial symbiosis, (2) establishing eco-industrial parks is essential for the chemical industry to implement and improve industrial symbiosis, and (3) there is a close relationship between industrial symbiosis and safety issues in the chemical industry.

  10. The Effect of Moderate and High-Intensity Fatigue on Groundstroke Accuracy in Expert and Non-Expert Tennis Players

    PubMed Central

    Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan

    2013-01-01

    Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and player’s achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high-intensities (90%) set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine if this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVA’s revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players however, maintained better groundstroke accuracy across all conditions compared to the novice players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender interactions were found. Fatigue effects were also equivalent regardless of player’s achievement goal indicators. Future research is required to explore the effects of fatigue on performance in

  11. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

    Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility designed to replace conventional chemical assay in industrial applications. Copyright © 2017. Published by Elsevier Ltd.

  12. The accuracy of the ATLAS muon X-ray tomograph

    NASA Astrophysics Data System (ADS)

    Avramidou, R.; Berbiers, J.; Boudineau, C.; Dechelette, C.; Drakoulakos, D.; Fabjan, C.; Grau, S.; Gschwendtner, E.; Maugain, J.-M.; Rieder, H.; Rangod, S.; Rohrbach, F.; Sbrissa, E.; Sedykh, E.; Sedykh, I.; Smirnov, Y.; Vertogradov, L.; Vichou, I.

    2003-01-01

    A gigantic detector, the ATLAS project, is under construction at CERN for particle physics research at the Large Hadron Collider which is to be ready by 2006. An X-ray tomograph has been developed, designed and constructed at CERN in order to control the mechanical quality of the ATLAS muon chambers. We reached a measurement accuracy of 2 μm systematic and 2 μm statistical uncertainties in the horizontal and vertical directions in the working area 220 cm (horizontal)×60 cm (vertical). Here we describe in detail the fundamental approach of the basic principle chosen to achieve such good accuracy. In order to crosscheck our precision, key results of measurements are presented.

  13. Role of optics in the accuracy of depth-from-defocus systems: comment.

    PubMed

    Blendowske, Ralf

    2007-10-01

    In their paper "Role of optics in the accuracy of depth-from-defocus systems" [J. Opt. Soc. Am. A24, 967 (2007)] the authors Blayvas, Kimmel, and Rivlin discuss the effect of optics on the depth reconstruction accuracy. To this end they applied an approach in Fourier space. An alternative derivation of their result in the spatial domain, based on geometrical optics, is presented and compared with their outcome. Better agreement with the experimental data is achieved once some ambiguities are resolved.

  14. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  15. OrganoRelease - A framework for modeling the release of organic chemicals from the use and post-use of consumer products.

    PubMed

    Tao, Mengya; Li, Dingsheng; Song, Runsheng; Suh, Sangwon; Keller, Arturo A

    2018-03-01

    Chemicals in consumer products have become the focus of recent regulatory developments including California's Safer Consumer Products Act. However, quantifying the amount of chemicals released during the use and post-use phases of consumer products is challenging, limiting the ability to understand their impacts. Here we present a comprehensive framework, OrganoRelease, for estimating the release of organic chemicals from the use and post-use of consumer products given limited information. First, a novel Chemical Functional Use Classifier estimates functional uses based on chemical structure. Second, the quantity of chemicals entering different product streams is estimated based on market share data of the chemical functional uses. Third, chemical releases are estimated based on either chemical product categories or functional uses by using the Specific Environmental Release Categories and EU Technological Guidance Documents. OrganoRelease connects 19 unique functional uses and 14 product categories across 4 data sources and provides multiple pathways for chemical release estimation. Available user information can be incorporated in the framework at various stages. The Chemical Functional Use Classifier achieved an average accuracy above 84% for nine functional uses, which enables OrganoRelease to provide release estimates for the chemical, mostly using only the molecular structure. The results can be used as input for methods estimating environmental fate and exposure. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. The use of low density high accuracy (LDHA) data for correction of high density low accuracy (HDLA) point cloud

    NASA Astrophysics Data System (ADS)

    Rak, Michal Bartosz; Wozniak, Adam; Mayer, J. R. R.

    2016-06-01

    Coordinate measuring techniques rely on computer processing of coordinate values of points gathered from physical surfaces using contact or non-contact methods. Contact measurements are characterized by low density and high accuracy. On the other hand, optical methods gather high density data of the whole object in a short time but with accuracy at least one order of magnitude lower than for contact measurements. Thus the drawback of contact methods is low density of data, while for non-contact methods it is low accuracy. In this paper, a method for fusion of data from two measurements of fundamentally different nature, high density low accuracy (HDLA) and low density high accuracy (LDHA), is presented to overcome the limitations of both measuring methods. In the proposed method the concept of virtual markers is used to find a representation of pairs of corresponding characteristic points in both sets of data. In each pair the coordinates of the point from contact measurements are treated as a reference for the corresponding point from non-contact measurement. A transformation enabling displacement of characteristic points from optical measurement to their match from contact measurements is determined and applied to the whole point cloud. The efficiency of the proposed algorithm was evaluated by comparison with data from a coordinate measuring machine (CMM). Three surfaces were used for this evaluation: plane, turbine blade and engine cover. For the planar surface the achieved improvement was around 200 μm. Similar results were obtained for the turbine blade but for the engine cover the improvement was smaller. For both freeform surfaces the improvement was higher for raw data than for data after creation of a mesh of triangles.
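
    The transformation step can be illustrated with a standard least-squares rigid-body (Kabsch) fit between the corresponding characteristic points, applied afterwards to the whole HDLA cloud; this is a sketch of the idea, not necessarily the exact transformation model used by the authors.

      import numpy as np

      def rigid_fit(src, dst):
          """Least-squares rotation R and translation t mapping src points onto dst points (N x 3 arrays)."""
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = dst_c - R @ src_c
          return R, t

      # optical_markers, contact_markers: matched characteristic points (hypothetical arrays)
      # R, t = rigid_fit(optical_markers, contact_markers)
      # corrected_cloud = (R @ optical_cloud.T).T + t   # correct the whole high-density cloud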

  17. Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps

    DOE PAGES

    Isotalo, Aarno; Pusa, Maria

    2016-05-01

    The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
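
    The point about reusing LU factorizations can be sketched as follows. CRAM approximates the matrix exponential by a partial-fraction sum over complex poles; with equidistant substeps the shifted matrices A·h − θ·I are identical on every substep, so their factorizations are formed once. The CRAM constants (alpha0, alphas, thetas) are tabulated values assumed to be supplied here; this is an illustrative sketch, not the authors' production implementation.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import splu

      def cram_with_substeps(A, n0, dt, n_sub, alpha0, alphas, thetas):
          """Advance dn/dt = A n over dt using n_sub equidistant CRAM substeps (A sparse, rates constant)."""
          h = dt / n_sub
          Ah = sp.csc_matrix(A * h, dtype=complex)
          I = sp.identity(Ah.shape[0], format="csc", dtype=complex)
          lus = [splu(Ah - theta * I) for theta in thetas]   # factored once, reused on every substep
          n = np.asarray(n0, dtype=float)
          for _ in range(n_sub):
              corr = sum(a * lu.solve(n.astype(complex)) for a, lu in zip(alphas, lus))
              n = alpha0 * n + 2.0 * corr.real
          return n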

  18. Camera sensor arrangement for crop/weed detection accuracy in agronomic images.

    PubMed

    Romeo, Juan; Guerrero, José Miguel; Montalvo, Martín; Emmi, Luis; Guijarro, María; Gonzalez-de-Santos, Pablo; Pajares, Gonzalo

    2013-04-02

    In Precision Agriculture, images coming from camera-based sensors are commonly used for weed identification and crop line detection, either to apply specific treatments or for vehicle guidance purposes. Accuracy of identification and detection is an important issue to be addressed in image processing. There are two main types of parameters affecting the accuracy of the images, namely: (a) extrinsic, related to the sensor's positioning in the tractor; (b) intrinsic, related to the sensor specifications, such as CCD resolution, focal length or iris aperture, among others. Moreover, in agricultural applications, the uncontrolled illumination, existing in outdoor environments, is also an important factor affecting the image accuracy. This paper is exclusively focused on two main issues, always with the goal to achieve the highest image accuracy in Precision Agriculture applications, making the following two main contributions: (a) camera sensor arrangement, to adjust extrinsic parameters and (b) design of strategies for controlling the adverse illumination effects.

  19. Adaptive sensor-based ultra-high accuracy solar concentrator tracker

    NASA Astrophysics Data System (ADS)

    Brinkley, Jordyn; Hassanzadeh, Ali

    2017-09-01

    Conventional solar trackers use information of the sun's position, either by direct sensing or by GPS. Our method uses the shading of the receiver. This, coupled with nonimaging optics design allows us to achieve ultra-high concentration. Incorporating a sensor based shadow tracking method with a two stage concentration solar hybrid parabolic trough allows the system to maintain high concentration with acute accuracy.

  20. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.

    PubMed

    Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P

    2011-09-01

    A new pulsed neutron gamma density (NGD) logging has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes studies of near and far density measurement accuracy of NGD logging at two spacings and the borehole influences using Monte-Carlo simulation. The results show that the accuracy of near density is not as good as far density. It is difficult to correct this for borehole effects by using conventional methods because both near and far density measurement is significantly sensitive to standoffs and mud properties. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Sampling Molecular Conformers in Solution with Quantum Mechanical Accuracy at a Nearly Molecular-Mechanics Cost.

    PubMed

    Rosa, Marta; Micciarelli, Marco; Laio, Alessandro; Baroni, Stefano

    2016-09-13

    We introduce a method to evaluate the relative populations of different conformers of molecular species in solution, aiming at quantum mechanical accuracy, while keeping the computational cost at a nearly molecular-mechanics level. This goal is achieved by combining long classical molecular-dynamics simulations to sample the free-energy landscape of the system, advanced clustering techniques to identify the most relevant conformers, and thermodynamic perturbation theory to correct the resulting populations, using quantum-mechanical energies from density functional theory. A quantitative criterion for assessing the accuracy thus achieved is proposed. The resulting methodology is demonstrated in the specific case of cyanin (cyanidin-3-glucoside) in water solution.
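
    The reweighting step can be written compactly: the classical populations of the clustered conformers are corrected by Boltzmann factors of the QM-MM energy differences. A schematic, single-point version (the paper's estimator may differ in detail), with hypothetical numbers:

      import numpy as np

      kT = 0.0259  # eV, roughly room temperature (assumed)

      def reweight(p_mm, e_qm, e_mm):
          """p_mm: MM populations of the conformers; e_qm, e_mm: representative QM and MM
          energies of each conformer (same units as kT). Returns corrected populations."""
          delta = np.asarray(e_qm) - np.asarray(e_mm)
          w = np.asarray(p_mm) * np.exp(-(delta - delta.min()) / kT)
          return w / w.sum()

      print(reweight([0.6, 0.3, 0.1], e_qm=[0.00, 0.05, 0.02], e_mm=[0.00, 0.03, 0.06]))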

  2. Modeling the Biodegradability of Chemical Compounds Using the Online CHEmical Modeling Environment (OCHEM)

    PubMed Central

    Vorberg, Susann

    2013-01-01

    Abstract Biodegradability describes the capacity of substances to be mineralized by free‐living bacteria. It is a crucial property in estimating a compound’s long‐term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. PMID:27485201

  3. Achieving Chemical Equilibrium: The Role of Imposed Conditions in the Ammonia Formation Reaction

    ERIC Educational Resources Information Center

    Tellinghuisen, Joel

    2006-01-01

    Under conditions of constant temperature T and pressure P, chemical equilibrium occurs in a closed system (fixed mass) when the Gibbs free energy G of the reaction mixture is minimized. However, when chemical reactions occur under other conditions, other thermodynamic functions are minimized or maximized. For processes at constant T and volume V,…
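
    The entry's point can be summarized with the standard thermodynamic relations (a textbook statement, not taken from the article itself): which potential is extremized at equilibrium depends on which variables are held fixed.

      \begin{align*}
        dG &= -S\,dT + V\,dP + \textstyle\sum_i \mu_i\,dn_i  &&\Rightarrow\; G \text{ is minimized at constant } T,\,P,\\
        dA &= -S\,dT - P\,dV + \textstyle\sum_i \mu_i\,dn_i  &&\Rightarrow\; A = U - TS \text{ is minimized at constant } T,\,V,\\
        dS &= \tfrac{1}{T}\,dU + \tfrac{P}{T}\,dV - \textstyle\sum_i \tfrac{\mu_i}{T}\,dn_i  &&\Rightarrow\; S \text{ is maximized at constant } U,\,V.
      \end{align*}

    In each case the equilibrium composition of the reaction satisfies \sum_i \nu_i \mu_i = 0, with \nu_i the stoichiometric coefficients.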

  4. Prediction of soil properties using imaging spectroscopy: Considering fractional vegetation cover to improve accuracy

    NASA Astrophysics Data System (ADS)

    Franceschini, M. H. D.; Demattê, J. A. M.; da Silva Terra, F.; Vicente, L. E.; Bartholomeus, H.; de Souza Filho, C. R.

    2015-06-01

    Spectroscopic techniques have become attractive to assess soil properties because they are fast, require little labor and may reduce the amount of laboratory waste produced when compared to conventional methods. Imaging spectroscopy (IS) can have further advantages compared to laboratory or field proximal spectroscopic approaches such as providing spatially continuous information with a high density. However, the accuracy of IS derived predictions decreases when the spectral mixture of soil with other targets occurs. This paper evaluates the use of spectral data obtained by an airborne hyperspectral sensor (ProSpecTIR-VS - Aisa dual sensor) for prediction of physical and chemical properties of Brazilian highly weathered soils (i.e., Oxisols). A methodology to assess the soil spectral mixture is adapted and a progressive spectral dataset selection procedure, based on bare soil fractional cover, is proposed and tested. Satisfactory performances are obtained specially for the quantification of clay, sand and CEC using airborne sensor data (R2 of 0.77, 0.79 and 0.54; RPD of 2.14, 2.22 and 1.50, respectively), after spectral data selection is performed; although results obtained for laboratory data are more accurate (R2 of 0.92, 0.85 and 0.75; RPD of 3.52, 2.62 and 2.04, for clay, sand and CEC, respectively). Most importantly, predictions based on airborne-derived spectra for which the bare soil fractional cover is not taken into account show considerable lower accuracy, for example for clay, sand and CEC (RPD of 1.52, 1.64 and 1.16, respectively). Therefore, hyperspectral remotely sensed data can be used to predict topsoil properties of highly weathered soils, although spectral mixture of bare soil with vegetation must be considered in order to achieve an improved prediction accuracy.
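
    The two figures of merit quoted above are straightforward to compute from reference and predicted values; RPD is taken here as the ratio of the standard deviation of the reference data to the prediction error (the conventional definition, assumed rather than stated in the record).

      import numpy as np

      def r2_rpd(reference, predicted):
          reference, predicted = np.asarray(reference, float), np.asarray(predicted, float)
          residuals = reference - predicted
          r2 = 1.0 - np.sum(residuals**2) / np.sum((reference - reference.mean())**2)
          rmsep = np.sqrt(np.mean(residuals**2))
          rpd = reference.std(ddof=1) / rmsep   # ratio of performance to deviation
          return r2, rpd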

  5. ACCURACY AND COST CONSIDERATIONS IN CHOOSING A CHEMICAL MECHANISM FOR OPERATIONAL USE IN AQ MODELS

    EPA Science Inventory

    There are several contemporary chemical kinetic mechanisms available for use in tropospheric air quality simulation models, with varying degrees of condensation of the chemical reaction pathways. Likewise, there are several different numerical solution methods available to use w...

  6. Correlation of chemical shifts predicted by molecular dynamics simulations for partially disordered proteins.

    PubMed

    Karp, Jerome M; Eryilmaz, Ertan; Cowburn, David

    2015-01-01

    There has been a longstanding interest in being able to accurately predict NMR chemical shifts from structural data. Recent studies have focused on using molecular dynamics (MD) simulation data as input for improved prediction. Here we examine the accuracy of chemical shift prediction for intein systems, which have regions of intrinsic disorder. We find that using MD simulation data as input for chemical shift prediction does not consistently improve prediction accuracy over use of a static X-ray crystal structure. This appears to result from the complex conformational ensemble of the disordered protein segments. We show that using accelerated molecular dynamics (aMD) simulations improves chemical shift prediction, suggesting that methods which better sample the conformational ensemble like aMD are more appropriate tools for use in chemical shift prediction for proteins with disordered regions. Moreover, our study suggests that data accurately reflecting protein dynamics must be used as input for chemical shift prediction in order to correctly predict chemical shifts in systems with disorder.

  7. The Role of Feedback on Studying, Achievement and Calibration.

    ERIC Educational Resources Information Center

    Chu, Stephanie T. L.; Jamieson-Noel, Dianne L.; Winne, Philip H.

    One set of hypotheses examined in this study was that various types of feedback (outcome, process, and corrective) supply different information about performance and have different effects on studying processes and on achievement. Another set of hypotheses concerned students' calibration, their accuracy in predicting and postdicting achievement…

  8. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm by use of one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10(-6) in order to faithfully simulate entropy, vortical, and acoustical waves.

  9. Test battery with the human cell line activation test, direct peptide reactivity assay and DEREK based on a 139 chemical data set for predicting skin sensitizing potential and potency of chemicals.

    PubMed

    Takenouchi, Osamu; Fukui, Shiho; Okamoto, Kenji; Kurotani, Satoru; Imai, Noriyasu; Fujishiro, Miyuki; Kyotani, Daiki; Kato, Yoshinao; Kasahara, Toshihiko; Fujita, Masaharu; Toyoda, Akemi; Sekiya, Daisuke; Watanabe, Shinichi; Seto, Hirokazu; Hirota, Morihiko; Ashikaga, Takao; Miyazawa, Masaaki

    2015-11-01

    To develop a testing strategy incorporating the human cell line activation test (h-CLAT), direct peptide reactivity assay (DPRA) and DEREK, we created an expanded data set of 139 chemicals (102 sensitizers and 37 non-sensitizers) by combining the existing data set of 101 chemicals through the collaborative projects of Japan Cosmetic Industry Association. Of the additional 38 chemicals, 15 chemicals with relatively low water solubility (log Kow > 3.5) were selected to clarify the limitation of testing strategies regarding the lipophilic chemicals. Predictivities of the h-CLAT, DPRA and DEREK, and the combinations thereof were evaluated by comparison to results of the local lymph node assay. When evaluating 139 chemicals using combinations of three methods based on integrated testing strategy (ITS) concept (ITS-based test battery) and a sequential testing strategy (STS) weighing the predictive performance of the h-CLAT and DPRA, overall similar predictivities were found as before on the 101 chemical data set. An analysis of false negative chemicals suggested a major limitation of our strategies was the testing of low water-soluble chemicals. When excluded the negative results for chemicals with log Kow > 3.5, the sensitivity and accuracy of ITS improved to 97% (91 of 94 chemicals) and 89% (114 of 128). Likewise, the sensitivity and accuracy of STS to 98% (92 of 94) and 85% (111 of 129). Moreover, the ITS and STS also showed good correlation with local lymph node assay on three potency classifications, yielding accuracies of 74% (ITS) and 73% (STS). Thus, the inclusion of log Kow in analysis could give both strategies a higher predictive performance. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Chemical peels.

    PubMed

    Jackson, Adrianna

    2014-02-01

    Chemical peels are a method of resurfacing with a long-standing history of safety in the treatment of various skin conditions. This article reviews the classification of different chemical agents based on their depth of injury. The level of injury facilitates cell turnover, epidermal thickening, skin lightening, and new collagen formation. Preprocedural, periprocedural, and postprocedural skin care are briefly discussed. To select the appropriate chemical peel, the provider should evaluate the patient's expectations, medical history, skin type, and possible complications to determine the best chemical peel to achieve the desired results. Patients with Fitzpatrick skin types IV to VI have increased risk of dyspigmentation, hypertrophic, and keloid scarring. These individuals respond well to superficial and medium-depth chemical peels. Advances in the use of combination peels allow greater options for skin rejuvenation with less risk of complications.

  11. Localization accuracy of sphere fiducials in computed tomography images

    NASA Astrophysics Data System (ADS)

    Kobler, Jan-Philipp; Díaz Díaz, Jesus; Fitzpatrick, J. Michael; Lexow, G. Jakob; Majdani, Omid; Ortmaier, Tobias

    2014-03-01

    In recent years, bone-attached robots and microstereotactic frames have attracted increasing interest due to the promising targeting accuracy they provide. Such devices attach to a patient's skull via bone anchors, which are used as landmarks during intervention planning as well. However, as simulation results reveal, the performance of such mechanisms is limited by errors occurring during the localization of their bone anchors in preoperatively acquired computed tomography images. Therefore, it is desirable to identify the most suitable fiducials as well as the most accurate method for fiducial localization. We present experimental results of a study focusing on the fiducial localization error (FLE) of spheres. Two phantoms equipped with fiducials made from ferromagnetic steel and titanium, respectively, are used to compare two clinically available imaging modalities (multi-slice CT (MSCT) and cone-beam CT (CBCT)), three localization algorithms as well as two methods for approximating the FLE. Furthermore, the impact of cubic interpolation applied to the images is investigated. Results reveal that, generally, the achievable localization accuracy in CBCT image data is significantly higher compared to MSCT imaging. The lowest FLEs (approx. 40 μm) are obtained using spheres made from titanium, CBCT imaging, template matching based on cross correlation for localization, and interpolating the images by a factor of sixteen. Nevertheless, the achievable localization accuracy of spheres made from steel is only slightly inferior. The outcomes of the presented study will be valuable considering the optimization of future microstereotactic frame prototypes as well as the operative workflow.
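
    A simplified sketch of the template-matching idea named above: cross-correlate a zero-mean sphere template with the CT volume, take the correlation peak, and refine the position to sub-voxel accuracy with an intensity centroid around the peak. This is illustrative only and omits the interpolation step and the alternative localization algorithms compared in the study.

      import numpy as np
      from scipy import ndimage

      def localize_sphere(volume, template, window=5):
          """Coarse peak from cross-correlation, then sub-voxel refinement by intensity centroid."""
          score = ndimage.correlate(volume.astype(float), template - template.mean())
          peak = np.unravel_index(np.argmax(score), score.shape)
          lo = [max(p - window, 0) for p in peak]
          patch = volume[lo[0]:peak[0] + window + 1,
                         lo[1]:peak[1] + window + 1,
                         lo[2]:peak[2] + window + 1].astype(float)
          offset = ndimage.center_of_mass(patch - patch.min())
          return tuple(l + o for l, o in zip(lo, offset))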

  12. Can Providing Rubrics for Writing Tasks Improve Developing Writers' Calibration Accuracy?

    ERIC Educational Resources Information Center

    Hawthorne, Katrice A.; Bol, Linda; Pribesh, Shana

    2017-01-01

    Rubric-referenced calibration and the interaction between writing achievement and calibration, a measure of the relationship between one's performance and the accuracy of one's judgments, were investigated. Undergraduate students (N = 596) were assigned to one of three calibration conditions: (a) global, (b) global and general criteria, or (c)…

  13. A corpuscular picture of electrons in chemical bond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ando, Koji

    We introduce a theory of chemical bond with a corpuscular picture of electrons. It employs a minimal set of localized electron wave packets with “floating and breathing” degrees of freedom and the spin-coupling of non-orthogonal valence-bond theory. Its accuracy for describing potential energy curves of chemical bonds in ground and excited states of spin singlet and triplet is examined.

  14. Application of Mensuration Technology to Improve the Accuracy of Field Artillery Firing Unit Location

    DTIC Science & Technology

    2013-12-13

    Only fragments of this report survive extraction (table-of-contents entries and partial definitions): U.S. Army Field Artillery Operations; Geodesy; a note that experts in this field of study have a full working knowledge of geodesy and the theory that allows mensuration to surpass the level of accuracy otherwise achieved; the quoted definition "(2) Fire that is intended to achieve the desired result on target."; and a partial definition of geodesy as "that branch of applied mathematics which determines by observation ...".

  15. Super-resolution and super-localization microscopy: A novel tool for imaging chemical and biological processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Bin

    2015-01-01

    Optical microscopy imaging of single molecules and single particles is an essential method for studying fundamental biological and chemical processes at the molecular and nanometer scale. The best spatial resolution (~ λ/2) achievable in traditional optical microscopy is governed by the diffraction of light. However, single molecule-based super-localization and super-resolution microscopy imaging techniques have emerged in the past decade. Individual molecules can be localized with nanometer-scale accuracy and precision for studying biological and chemical processes. This work uncovered the heterogeneous properties of the pore structures. In this dissertation, the coupling of molecular transport and catalytic reaction at the single molecule and single particle level in multilayer mesoporous nanocatalysts was elucidated. Most previous studies dealt with these two important phenomena separately. A fluorogenic oxidation reaction of non-fluorescent amplex red to highly fluorescent resorufin was tested. The diffusion behavior of single resorufin molecules in aligned nanopores was studied using total internal reflection fluorescence microscopy (TIRFM).

  16. High-accuracy and high-sensitivity spectroscopic measurement of dinitrogen pentoxide (N2O5) in an atmospheric simulation chamber using a quantum cascade laser.

    PubMed

    Yi, Hongming; Wu, Tao; Lauraguais, Amélie; Semenov, Vladimir; Coeur, Cecile; Cassez, Andy; Fertein, Eric; Gao, Xiaoming; Chen, Weidong

    2017-12-04

    A spectroscopic instrument based on a mid-infrared external cavity quantum cascade laser (EC-QCL) was developed for high-accuracy measurements of dinitrogen pentoxide (N2O5) at the ppbv level. A specific concentration retrieval algorithm was developed to remove, from the broadband absorption spectrum of N2O5, both etalon fringes resulting from the EC-QCL intrinsic structure and spectral interference lines of H2O vapour absorption, which led to a significant improvement in measurement accuracy and detection sensitivity (by a factor of 10) compared to using a traditional algorithm for gas concentration retrieval. The developed EC-QCL-based N2O5 sensing platform was evaluated by real-time tracking of the N2O5 concentration in its most important nocturnal tropospheric chemical reaction, NO3 + NO2 ↔ N2O5, in an atmospheric simulation chamber. Based on an optical absorption path-length of Leff = 70 m, a minimum detection limit of 15 ppbv was achieved with a 25 s integration time, and it was down to 3 ppbv in 400 s. The equilibrium constant Keq of the above chemical reaction was determined from direct concentration measurements using the developed EC-QCL sensing platform and was in good agreement with the theoretical value deduced from a referenced empirical formula under well controlled experimental conditions. The present work demonstrates the potential and the unique advantage of a modern external cavity quantum cascade laser for direct quantitative measurement of broadband absorption of key molecular species involved in chemical kinetics and climate-change related tropospheric chemistry.
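
    Two of the quoted numbers follow from simple relations: the equilibrium constant is the concentration ratio for NO3 + NO2 ↔ N2O5, and for white-noise-limited averaging the detection limit improves roughly as the square root of the integration time. A small check (units and scaling assumed, not taken from the paper's analysis):

      import math

      def k_eq(n2o5, no3, no2):
          """Equilibrium constant for NO3 + NO2 <-> N2O5 from measured concentrations (same units)."""
          return n2o5 / (no3 * no2)

      mdl_25s = 15.0  # ppbv at 25 s integration (reported)
      print(mdl_25s * math.sqrt(25.0 / 400.0))  # ~3.8 ppbv expected at 400 s; ~3 ppbv was reported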

  17. Halo abundance matching: accuracy and conditions for numerical convergence

    NASA Astrophysics Data System (ADS)

    Klypin, Anatoly; Prada, Francisco; Yepes, Gustavo; Heß, Steffen; Gottlöber, Stefan

    2015-03-01

    Accurate predictions of the abundance and clustering of dark matter haloes play a key role in testing the standard cosmological model. Here, we investigate the accuracy of one of the leading methods of connecting the simulated dark matter haloes with observed galaxies- the halo abundance matching (HAM) technique. We show how to choose the optimal values of the mass and force resolution in large volume N-body simulations so that they provide accurate estimates for correlation functions and circular velocities for haloes and their subhaloes - crucial ingredients of the HAM method. At the 10 per cent accuracy, results converge for ˜50 particles for haloes and ˜150 particles for progenitors of subhaloes. In order to achieve this level of accuracy a number of conditions should be satisfied. The force resolution for the smallest resolved (sub)haloes should be in the range (0.1-0.3)rs, where rs is the scale radius of (sub)haloes. The number of particles for progenitors of subhaloes should be ˜150. We also demonstrate that the two-body scattering plays a minor role for the accuracy of N-body simulations thanks to the relatively small number of crossing-times of dark matter in haloes, and the limited force resolution of cosmological simulations.
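
    The essential HAM step is a rank-order match between (sub)halo circular velocities and galaxy luminosities (or stellar masses) at equal cumulative number density. A scatter-free sketch, assuming both samples describe the same volume:

      import numpy as np

      def abundance_match(halo_vcirc, galaxy_lum):
          """Assign the n-th most luminous galaxy to the halo with the n-th highest circular velocity."""
          assigned = np.empty(len(halo_vcirc), dtype=float)
          assigned[np.argsort(halo_vcirc)[::-1]] = np.sort(galaxy_lum)[::-1]
          return assigned

    The convergence criteria quoted above (particle counts and force resolution) determine how reliably the circular velocities entering this ranking are measured in the simulation.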

  18. Increase in the Accuracy of Calculating Length of Horizontal Cable SCS in Civil Engineering

    NASA Astrophysics Data System (ADS)

    Semenov, A.

    2017-11-01

    A modification of the method for calculating the horizontal cable consumption of SCS installed at civil engineering facilities is proposed. The proposed procedure preserves the simplicity of the prototype while increasing accuracy by 5 percent. The achieved accuracy values are justified and shown to be consistent with the practice of real projects. The method is brought to the level of an engineering algorithm and formalized as the 12/70 rule.

  19. Quantum Hall resistance standards from graphene grown by chemical vapour deposition on silicon carbide

    NASA Astrophysics Data System (ADS)

    Lafont, F.; Ribeiro-Palau, R.; Kazazis, D.; Michon, A.; Couturaud, O.; Consejo, C.; Chassagne, T.; Zielinski, M.; Portail, M.; Jouault, B.; Schopfer, F.; Poirier, W.

    2015-04-01

    Replacing GaAs by graphene to realize more practical quantum Hall resistance standards (QHRS), accurate to within 10-9 in relative value, but operating at lower magnetic fields than 10 T, is an ongoing goal in metrology. To date, the required accuracy has been reported, only few times, in graphene grown on SiC by Si sublimation, under higher magnetic fields. Here, we report on a graphene device grown by chemical vapour deposition on SiC, which demonstrates such accuracies of the Hall resistance from 10 T up to 19 T at 1.4 K. This is explained by a quantum Hall effect with low dissipation, resulting from strongly localized bulk states at the magnetic length scale, over a wide magnetic field range. Our results show that graphene-based QHRS can replace their GaAs counterparts by operating in as-convenient cryomagnetic conditions, but over an extended magnetic field range. They rely on a promising hybrid and scalable growth method and a fabrication process achieving low-electron-density devices.

  20. Quantum Hall resistance standards from graphene grown by chemical vapour deposition on silicon carbide

    PubMed Central

    Lafont, F.; Ribeiro-Palau, R.; Kazazis, D.; Michon, A.; Couturaud, O.; Consejo, C.; Chassagne, T.; Zielinski, M.; Portail, M.; Jouault, B.; Schopfer, F.; Poirier, W.

    2015-01-01

    Replacing GaAs by graphene to realize more practical quantum Hall resistance standards (QHRS), accurate to within 10−9 in relative value, but operating at lower magnetic fields than 10 T, is an ongoing goal in metrology. To date, the required accuracy has been reported, only few times, in graphene grown on SiC by Si sublimation, under higher magnetic fields. Here, we report on a graphene device grown by chemical vapour deposition on SiC, which demonstrates such accuracies of the Hall resistance from 10 T up to 19 T at 1.4 K. This is explained by a quantum Hall effect with low dissipation, resulting from strongly localized bulk states at the magnetic length scale, over a wide magnetic field range. Our results show that graphene-based QHRS can replace their GaAs counterparts by operating in as-convenient cryomagnetic conditions, but over an extended magnetic field range. They rely on a promising hybrid and scalable growth method and a fabrication process achieving low-electron-density devices. PMID:25891533

  1. A real-time freehand ultrasound calibration system with automatic accuracy feedback and control.

    PubMed

    Chen, Thomas Kuiran; Thurston, Adrian D; Ellis, Randy E; Abolmaesumi, Purang

    2009-01-01

    This article describes a fully automatic, real-time, freehand ultrasound calibration system. The system was designed to be simple and sterilizable, intended for operating-room usage. The calibration system employed an automatic-error-retrieval and accuracy-control mechanism based on a set of ground-truth data. Extensive validations were conducted on a data set of 10,000 images in 50 independent calibration trials to thoroughly investigate the accuracy, robustness, and performance of the calibration system. On average, the calibration accuracy (measured in three-dimensional reconstruction error against a known ground truth) of all 50 trials was 0.66 mm. In addition, the calibration errors converged to submillimeter in 98% of all trials within 12.5 s on average. Overall, the calibration system was able to consistently, efficiently and robustly achieve high calibration accuracy with real-time performance.

  2. On the Accuracy of Language Trees

    PubMed Central

    Pompei, Simone; Loreto, Vittorio; Tria, Francesca

    2011-01-01

    Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages: a set of parallel corpora whose compilation represents a paramount achievement in linguistics. From this perspective the reconstruction of language trees is an example of an inverse problem: starting from present-day information, which is incomplete and often noisy, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is the most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here by conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases. In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performances of the distance-based algorithms considered. Further, we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally, we draw some conclusions about where the accuracy of the reconstructions in historical linguistics stands and about the leading directions to improve it. PMID:21674034

  3. Chemical-text hybrid search engines.

    PubMed

    Zhou, Yingyao; Zhou, Bin; Jiang, Shumei; King, Frederick J

    2010-01-01

    As the amount of chemical literature increases, it is critical that researchers be enabled to accurately locate documents related to a particular aspect of a given compound. Existing solutions, based on text and chemical search engines alone, suffer from the inclusion of "false negative" and "false positive" results, and cannot accommodate the diverse repertoire of formats currently available for chemical documents. To address these concerns, we developed an approach called Entity-Canonical Keyword Indexing (ECKI), which converts a chemical entity embedded in a data source into its canonical keyword representation prior to being indexed by text search engines. We implemented ECKI using Microsoft Office SharePoint Server Search, and the resultant hybrid search engine not only supported complex mixed chemical and keyword queries but also was applied to both intranet and Internet environments. We envision that the adoption of ECKI will empower researchers to pose more complex search questions that were not readily attainable previously and to obtain answers with much improved speed and accuracy.

  4. On the tungsten single crystal coatings achieved by chemical vapor transportation deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, J.Q.; Shen, Y.B.; Yao, S.Y.

    2016-12-15

    The tungsten single crystal has many excellent properties, such as a high melting point and high creep resistance. Chemical vapor transportation deposition (CVTD) is a possible approach to achieve large-sized W single crystals for high-temperature applications such as the cathode of a thermionic energy converter. In this work, CVTD W coatings were deposited on a monocrystalline molybdenum substrate (a tube with < 111 > axial crystalline orientation) using WCl6 as a transport medium. The microstructures of the coatings were investigated by scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD). The as-deposited coatings are hexagonal prisms: rough surfaces perpendicular to with alternating hill-like bulges and pits at the side edges of the prisms, and flat surfaces perpendicular to < 112 > with arc-shaped terraces at the side faces. This can be explained by a two-dimensional nucleation-mediated lateral growth model. Some parts of the coatings contain hillocks of an exotic morphology (noted as "abnormal growth"). The authors hypothesize that the abnormal growth is likely caused by defects of the Mo substrate, which provide W nucleation sites, cause orientation differences, and may even form boundaries in the coatings. A dislocation density of 10^6 to 10^7 counts/cm^2 was revealed by an etch-pit method and synchrotron X-ray diffraction. As the deposition temperature rises, the dislocation density decreases, and no sub-boundaries are found in samples deposited above 1300 °C, as a result of atom diffusion and dislocation climbing. Highlights: • The varied growth rate causes the different morphologies of different planes. • The W coating is a single crystal when only single hillocks appear. • The (110) plane tends to have the lowest dislocation density. • The dislocation density tends to decrease as the temperature increases.

  5. Real-Time Tropospheric Product Establishment and Accuracy Assessment in China

    NASA Astrophysics Data System (ADS)

    Chen, M.; Guo, J.; Wu, J.; Song, W.; Zhang, D.

    2018-04-01

    Tropospheric delay has always been an important issue in Global Navigation Satellite System (GNSS) processing. Empirical tropospheric delay models have difficulty representing complex and rapidly changing atmospheric conditions, which results in poor model accuracy and makes it difficult to meet the demands of precise positioning. In recent years, some scholars have proposed establishing real-time tropospheric products from real-time or near-real-time GNSS observations in a small region, and have achieved good results. This paper uses real-time observation data from 210 Chinese national GNSS reference stations to estimate the tropospheric delay, and establishes a nationwide zenith wet delay (ZWD) grid model. In order to analyze the influence of the tropospheric grid product on wide-area real-time PPP, this paper compares the method of using the ZWD grid product as a constraint with the model correction method. The results show that the ZWD grid product estimated from the national reference stations can improve PPP accuracy and convergence speed. The accuracy in the north (N), east (E) and up (U) directions increases by 31.8 %, 15.6 % and 38.3 %, respectively. As with convergence speed, the accuracy in the U direction shows the greatest improvement.
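
    A minimal sketch, not from the paper, of how a gridded ZWD product might be interpolated to a user station before being applied as a PPP constraint; the function name, grid layout and units are assumptions:

      import numpy as np

      def interpolate_zwd(grid, lats, lons, lat, lon):
          """Bilinearly interpolate a zenith wet delay (ZWD) grid to a station.

          grid       : 2-D array of ZWD values [m], indexed as grid[i_lat, i_lon]
          lats, lons : 1-D arrays of regular grid node coordinates (ascending)
          lat, lon   : station coordinates
          """
          i = np.clip(np.searchsorted(lats, lat) - 1, 0, len(lats) - 2)
          j = np.clip(np.searchsorted(lons, lon) - 1, 0, len(lons) - 2)
          # fractional position of the station inside the grid cell
          t = (lat - lats[i]) / (lats[i + 1] - lats[i])
          u = (lon - lons[j]) / (lons[j + 1] - lons[j])
          return ((1 - t) * (1 - u) * grid[i, j] + t * (1 - u) * grid[i + 1, j]
                  + (1 - t) * u * grid[i, j + 1] + t * u * grid[i + 1, j + 1])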

  6. Positioning accuracy in a registration-free CT-based navigation system

    NASA Astrophysics Data System (ADS)

    Brandenberger, D.; Birkfellner, W.; Baumann, B.; Messmer, P.; Huegli, R. W.; Regazzoni, P.; Jacob, A. L.

    2007-12-01

    In order to maintain overall navigation accuracy established by a calibration procedure in our CT-based registration-free navigation system, the CT scanner has to repeatedly generate identical volume images of a target at the same coordinates. We tested the positioning accuracy of the prototype of an advanced workplace for image-guided surgery (AWIGS) which features an operating table capable of direct patient transfer into a CT scanner. Volume images (N = 154) of a specialized phantom were analysed for translational shifting after various table translations. Variables included added weight and phantom position on the table. The navigation system's calibration accuracy was determined (bias 2.1 mm, precision ± 0.7 mm, N = 12). In repeated use, a bias of 3.0 mm and a precision of ± 0.9 mm (N = 10) were maintainable. Instances of translational image shifting were related to the table-to-CT scanner docking mechanism. A distance scaling error when altering the table's height was detected. Initial prototype problems visible in our study causing systematic errors were resolved by repeated system calibrations between interventions. We conclude that the accuracy achieved is sufficient for a wide range of clinical applications in surgery and interventional radiology.

  7. Effects of using the developing nurses' thinking model on nursing students' diagnostic accuracy.

    PubMed

    Tesoro, Mary Gay

    2012-08-01

    This quasi-experimental study tested the effectiveness of an educational model, Developing Nurses' Thinking (DNT), on nursing students' clinical reasoning to achieve patient safety. Teaching nursing students to develop effective thinking habits that promote positive patient outcomes and patient safety is a challenging endeavor. Positive patient outcomes and safety are achieved when nurses accurately interpret data and subsequently implement appropriate plans of care. This study's pretest-posttest design determined whether use of the DNT model during 2 weeks of clinical postconferences improved nursing students' (N = 83) diagnostic accuracy. The DNT model helps students to integrate four constructs-patient safety, domain knowledge, critical thinking processes, and repeated practice-to guide their thinking when interpreting patient data and developing effective plans of care. The posttest scores of students from the intervention group showed statistically significant improvement in accuracy. Copyright 2012, SLACK Incorporated.

  8. Hydrodynamic and Chemical Modeling of a Chemical Vapor Deposition Reactor for Zirconia Deposition

    NASA Astrophysics Data System (ADS)

    Belmonte, T.; Gavillet, J.; Czerwiec, T.; Ablitzer, D.; Michel, H.

    1997-09-01

    Zirconia is deposited on cylindrical substrates by flowing post-discharge enhanced chemical vapor deposition. In this paper, a two-dimensional hydrodynamic and chemical model of the reactor is described for given plasma characteristics. It helps determine the rate constants of the zirconia synthesis reaction, ZrCl4 hydrolysis, in the gas phase and on the substrate. Calculated deposition rate profiles are obtained by modeling under various conditions and fit the experimental results with satisfactory accuracy. The role of transport processes and the conditions under which the excited gases mix with the remaining gases are studied. The influence of gas-phase reactions on the growth rate is also discussed.

  9. Automated novel high-accuracy miniaturized positioning system for use in analytical instrumentation

    NASA Astrophysics Data System (ADS)

    Siomos, Konstadinos; Kaliakatsos, John; Apostolakis, Manolis; Lianakis, John; Duenow, Peter

    1996-01-01

    The development of three-dimensional automated devices (micro-robots) for applications in analytical instrumentation, clinical chemical diagnostics and advanced laser optics depends strongly on the ability of such a device: firstly, to be positioned with high accuracy, reliability, and automatically, by means of user-friendly interface techniques; secondly, to be compact; and thirdly, to operate under vacuum conditions, free of most of the problems connected with conventional micropositioners using stepping-motor gear techniques. The objective of this paper is to develop and construct a mechanically compact, computer-based micropositioning system for coordinated motion in the X-Y-Z directions with: (1) a positioning accuracy of less than 1 micrometer (the accuracy of the end position of the system is controlled by a hardware/software assembly using a self-constructed optical encoder); (2) a heat-free propulsion mechanism for vacuum operation; and (3) synchronized X-Y motion.

  10. Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services

    ERIC Educational Resources Information Center

    Wang, Guoquan

    2013-01-01

    High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…

  11. BIOMONITORING TO ACHIEVE CONTROL OF TOXIC EFFLUENTS

    EPA Science Inventory

    This 48-page Technology Transfer Report provides a case study of how water quality-based toxicity control procedures can be combined with chemical analyses and biological stream surveys to achieve more effective water pollution control. It describes how regulatory agencies used ...

  12. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy at the design and manufacturing stage, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a greater effect on the end-effector accuracy. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. Using a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  13. Accuracy evaluation of 3D lidar data from small UAV

    NASA Astrophysics Data System (ADS)

    Tulldahl, H. M.; Bissmarck, Fredrik; Larsson, Håkan; Grönwall, Christina; Tolt, Gustav

    2015-10-01

    A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.
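
    A minimal sketch, not from the paper, of one way the "local surface smoothness on planar surfaces" metric could be computed for a lidar point neighbourhood, using a classical PCA plane fit:

      import numpy as np

      def local_smoothness(points):
          """RMS of point-to-plane residuals for a neighbourhood of lidar points.

          points : (N, 3) array of x, y, z coordinates on a nominally planar patch.
          The best-fit plane normal is the eigenvector of the covariance matrix
          with the smallest eigenvalue.
          """
          pts = np.asarray(points, dtype=float)
          centred = pts - pts.mean(axis=0)
          cov = centred.T @ centred / len(pts)
          eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
          normal = eigvecs[:, 0]                   # normal of the best-fit plane
          residuals = centred @ normal             # signed point-to-plane distances
          return np.sqrt(np.mean(residuals ** 2))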

  14. An angle encoder for super-high resolution and super-high accuracy using SelfA

    NASA Astrophysics Data System (ADS)

    Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko

    2014-06-01

    Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after the installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high accuracy angular verification. We apply these technologies for the development of a rotary encoder not only to meet the requirement of super-high accuracy but also to meet that of super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), that is, corresponding to a 0.62″ signal period, achieved by the combination of a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by The National Institute of Advanced Industrial Science & Technology (AIST). In addition, this paper introduces the development of a rotary encoder to guarantee ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, that is, corresponding to a 0.0015″ signal period after

  15. Students' Conceptions of Chemical Change.

    ERIC Educational Resources Information Center

    Hesse, Joseph J., III; Anderson, Charles W.

    1992-01-01

    Presents results of intensive clinical interviews with 11 high school chemistry students representing a broad range of achievement levels as selected from 180 students who completed a written test upon completion of an instructional unit on chemical change. Results indicate that students commonly experience difficulties in chemical knowledge,…

  16. Decision Accuracy and the Role of Spatial Interaction in Opinion Dynamics

    NASA Astrophysics Data System (ADS)

    Torney, Colin J.; Levin, Simon A.; Couzin, Iain D.

    2013-04-01

    The opinions and actions of individuals within interacting groups are frequently determined by both social and personal information. When sociality (or the pressure to conform) is strong and individual preferences are weak, groups will remain cohesive until a consensus decision is reached. When group decisions are subject to a bias, representing for example private information known by some members of the population or imperfect information known by all, then the accuracy achieved for a fixed level of bias will increase with population size. In this work we determine how the scaling between accuracy and group size can be related to the microscopic properties of the decision-making process. By simulating a spatial model of opinion dynamics we show that the relationship between the instantaneous fraction of leaders in the population (L), system size (N), and accuracy depends on the frequency of individual opinion switches and the level of population viscosity. When social mixing is slow, and individual opinion changes are frequent, accuracy is determined by the absolute number of informed individuals. As mixing rates increase, or the rate of opinion updates decrease, a transition occurs to a regime where accuracy is determined by the value of L√N. We investigate the transition between different scaling regimes analytically by examining a well-mixed limit.
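
    A minimal sketch, not from the paper, of the L√N scaling in a crude well-mixed stand-in for the spatial model: informed agents vote for the correct option, the rest vote at random, and group accuracy is the probability that the majority is correct. Function and parameter names are illustrative.

      import numpy as np

      rng = np.random.default_rng(0)

      def decision_accuracy(n_agents, frac_informed, n_trials=20000):
          """Monte Carlo estimate of group accuracy in a well-mixed vote."""
          n_informed = int(round(frac_informed * n_agents))
          n_random = n_agents - n_informed
          # uninformed voters contribute +1 or -1 with equal probability
          random_votes = rng.choice([-1, 1], size=(n_trials, n_random)).sum(axis=1)
          total = n_informed + random_votes
          return np.mean(total > 0)

      for n in (100, 400, 1600):
          # keep L*sqrt(N) fixed: the estimated accuracy stays roughly constant
          print(n, decision_accuracy(n, 0.4 / np.sqrt(n)))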

  17. A design of optical modulation system with pixel-level modulation accuracy

    NASA Astrophysics Data System (ADS)

    Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu

    2018-01-01

    Vision measurement has been widely used in the field of dimensional measurement and surface metrology. However, traditional vision measurement methods have many limitations, such as low dynamic range and poor reconfigurability. Optical modulation before image formation offers high dynamic range, high accuracy and more flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the digital micromirror device, a CCD camera and a lens. First, we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels by means of moiré fringes and an image-processing step of sampling and interpolation. Then we established three coordinate systems and calculated the mathematical relationship between the coordinates of the digital micromirrors and the CCD pixels using a checkerboard pattern. A verification experiment shows that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of the system meets the requirements of modulation. Furthermore, the highly reflective edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.
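
    A minimal sketch, not from the paper, of how a DMD-to-CCD coordinate relationship could be estimated from checkerboard correspondences; an affine model is assumed here, which may be simpler than the mapping actually used:

      import numpy as np

      def fit_affine(dmd_pts, ccd_pts):
          """Least-squares affine map from DMD mirror coordinates to CCD pixels.

          dmd_pts, ccd_pts : (N, 2) arrays of corresponding points, e.g. corners
          detected in a checkerboard pattern displayed on the DMD.
          Returns a 2x3 matrix A such that ccd ≈ A @ [x, y, 1].
          """
          dmd = np.asarray(dmd_pts, float)
          ccd = np.asarray(ccd_pts, float)
          design = np.hstack([dmd, np.ones((len(dmd), 1))])      # (N, 3)
          # solve for the two rows of the affine matrix at once
          coeffs, *_ = np.linalg.lstsq(design, ccd, rcond=None)  # (3, 2)
          return coeffs.T                                        # (2, 3)

      def map_to_ccd(affine, dmd_xy):
          x, y = dmd_xy
          return affine @ np.array([x, y, 1.0])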

  18. Novel naïve Bayes classification models for predicting the carcinogenicity of chemicals.

    PubMed

    Zhang, Hui; Cao, Zhi-Xing; Li, Meng; Li, Yu-Zhi; Peng, Cheng

    2016-11-01

    The carcinogenicity prediction has become a significant issue for the pharmaceutical industry. The purpose of this investigation was to develop a novel prediction model of the carcinogenicity of chemicals by using a naïve Bayes classifier. The established model was validated by internal 5-fold cross validation and an external test set. The naïve Bayes classifier gave an average overall prediction accuracy of 90 ± 0.8% for the training set and 68 ± 1.9% for the external test set. Moreover, five simple molecular descriptors (e.g., AlogP, molecular weight (MW), No. of H donors, Apol and Wiener) considered important for the carcinogenicity of chemicals were identified, and some substructures related to carcinogenicity were obtained. Thus, we hope the established naïve Bayes prediction model could be applied to screen early-stage molecules for this potential carcinogenicity adverse effect; and the identified five simple molecular descriptors and substructures of carcinogens would give a better understanding of the carcinogenicity of chemicals, and further provide guidance for medicinal chemists in the design of new candidate drugs and lead optimization, ultimately reducing the attrition rate in later stages of drug development. Copyright © 2016 Elsevier Ltd. All rights reserved.
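
    A minimal sketch of the classification setup described above, assuming a Gaussian naïve Bayes implementation (the study's own software and descriptor pipeline are not reproduced); the file names and data layout are hypothetical:

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      # Hypothetical descriptor matrix: AlogP, molecular weight, No. of H donors,
      # Apol, Wiener index (one row per chemical) and binary carcinogenicity labels.
      X = np.loadtxt("descriptors.csv", delimiter=",")   # shape (n_chemicals, 5), assumed file
      y = np.loadtxt("labels.csv", delimiter=",")        # 1 = carcinogen, 0 = non-carcinogen

      model = GaussianNB()
      # 5-fold cross validation, mirroring the internal validation described above
      scores = cross_val_score(model, X, y, cv=5)
      print("mean CV accuracy: %.2f" % scores.mean())
      model.fit(X, y)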

  19. Accuracy analysis of the space shuttle solid rocket motor profile measuring device

    NASA Technical Reports Server (NTRS)

    Estler, W. Tyler

    1989-01-01

    The Profile Measuring Device (PMD) was developed at the George C. Marshall Space Flight Center following the loss of the Space Shuttle Challenger. It is a rotating gauge used to measure the absolute diameters of mating features of redesigned Solid Rocket Motor field joints. Diameter tolerances of these features are typically + or - 0.005 inches, and it is required that the PMD absolute measurement uncertainty be within this tolerance. In this analysis, the absolute accuracy of these measurements was found to be + or - 0.00375 inches, worst case, with a potential accuracy of + or - 0.0021 inches achievable through improved temperature control.

  20. Fusion with Language Models Improves Spelling Accuracy for ERP-based Brain Computer Interface Spellers

    PubMed Central

    Orhan, Umut; Erdogmus, Deniz; Roark, Brian; Purwar, Shalini; Hild, Kenneth E.; Oken, Barry; Nezamfar, Hooman; Fried-Oken, Melanie

    2013-01-01

    Event related potentials (ERP) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless, a BCI typewriter that depends only on EEG responses will generally not be sufficiently accurate for single-trial operation, and existing systems use many-trial schemes to achieve accuracy at the cost of speed. Hence, incorporating a language-model-based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as number of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of the words and using single-trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI-based typing system. PMID:22255652
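
    A minimal sketch, not from the paper, of the Bayesian fusion idea: the ERP detector supplies a likelihood per candidate letter, the n-gram model supplies a context-dependent prior, and the decision is the letter with the highest combined log score. All numbers and names below are made up for illustration.

      import numpy as np

      def fuse_letter_evidence(erp_log_likelihoods, lm_log_prior):
          """Combine ERP classifier evidence with an n-gram language model prior.

          erp_log_likelihoods : dict letter -> log P(EEG features | letter),
                                summed over the ERP trials collected so far
          lm_log_prior        : dict letter -> log P(letter | preceding context)
          Returns the letter with the highest (unnormalised) log posterior.
          """
          letters = list(erp_log_likelihoods)
          log_post = np.array([erp_log_likelihoods[c] + lm_log_prior.get(c, -np.inf)
                               for c in letters])
          return letters[int(np.argmax(log_post))]

      # EEG evidence slightly favours 'q', but after "th" the language model
      # makes 'e' the fused decision.
      erp = {"e": -2.1, "q": -1.9, "x": -4.0}
      lm = {"e": np.log(0.45), "q": np.log(0.001), "x": np.log(0.002)}
      print(fuse_letter_evidence(erp, lm))   # 'e'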

  1. Configuration optimization and experimental accuracy evaluation of a bone-attached, parallel robot for skull surgery.

    PubMed

    Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias

    2016-03-01

    Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to patient specifically optimize the configuration of the mechanism with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimen. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimen.

  2. Frapid: achieving full automation of FRAP for chemical probe validation

    PubMed Central

    Yapp, Clarence; Rogers, Catherine; Savitsky, Pavel; Philpott, Martin; Müller, Susanne

    2016-01-01

    Fluorescence Recovery After Photobleaching (FRAP) is an established method for validating chemical probes against the chromatin reading bromodomains, but so far requires constant human supervision. Here, we present Frapid, an automated open source code implementation of FRAP that fully handles cell identification through fuzzy logic analysis, drug dispensing with a custom-built fluid handler, image acquisition & analysis, and reporting. We successfully tested Frapid on 3 bromodomains as well as on spindlin1 (SPIN1), a methyl lysine binder, for the first time. PMID:26977352

  3. High-accuracy Aspheric X-ray Mirror Metrology Using Software Configurable Optical Test System/deflectometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Run; Su, Peng; Burge, James H.

    The Software Configurable Optical Test System (SCOTS) uses deflectometry to measure surface slopes of general optical shapes without the need for additional null optics. Careful alignment of the test geometry and calibration of inherent system error improve the accuracy of SCOTS to a level where it competes with interferometry. We report a SCOTS surface measurement of an off-axis superpolished elliptical x-ray mirror that achieves <1 nm root-mean-square accuracy for the surface measurement with low-order terms included.

  4. Ambient Concentrations of Metabolic Disrupting Chemicals and Children's Academic Achievement in El Paso, Texas.

    PubMed

    Clark-Reyna, Stephanie E; Grineski, Sara E; Collins, Timothy W

    2016-09-01

    Concerns about children's weight have steadily risen alongside the manufacture and use of myriad chemicals in the US. One class of chemicals, known as metabolic disruptors, interfere with human endocrine and metabolic functioning and are of specific concern to children's health and development. This article examines the effect of residential concentrations of metabolic disrupting chemicals on children's school performance for the first time. Census tract-level ambient concentrations for known metabolic disruptors come from the US Environmental Protection Agency's National Air Toxics Assessment. Other measures were drawn from a survey of primary caretakers of 4th and 5th grade children in El Paso Independent School District (El Paso, TX, USA). A mediation model is employed to examine two hypothetical pathways through which the ambient level of metabolic disruptors at a child's home might affect grade point average. Results indicate that concentrations of metabolic disruptors are statistically significantly associated with lower grade point averages directly and indirectly through body mass index. Findings from this study have practical implications for environmental justice research and chemical policy reform in the US.

  5. Effect of X-Word Grammar and Traditional Grammar Instruction on Grammatical Accuracy

    ERIC Educational Resources Information Center

    Livingston, Sue; Toce, Andi; Casey, Toce; Montoya, Fernando; Hart, Bonny R.; O'Flaherty, Carmela

    2018-01-01

    This study first briefly describes an instructional approach to teaching grammar known as X-Word Grammar and then compares its effectiveness in assisting students in achieving grammatical accuracy with traditionally taught grammar. Two groups of L2 pre-college students were taught using curricula and practice procedures in two different grammar…

  6. Identifying geochemical processes using End Member Mixing Analysis to decouple chemical components for mixing ratio calculations

    NASA Astrophysics Data System (ADS)

    Pelizardi, Flavia; Bea, Sergio A.; Carrera, Jesús; Vives, Luis

    2017-07-01

    Mixing calculations (i.e., the calculation of the proportions in which end-members are mixed in a sample) are essential for hydrological research and water management. However, they typically require the use of conservative species, a condition that may be difficult to meet due to chemical reactions. Mixing calculations also require identifying end-member waters, which is usually achieved through End Member Mixing Analysis (EMMA). We present a methodology to help identify both the end-members and such reactions, so as to improve mixing ratio calculations. The proposed approach consists of: (1) identifying the potential chemical reactions with the help of EMMA; (2) defining decoupled conservative chemical components consistent with those reactions; (3) repeating EMMA with the decoupled (i.e., conservative) components, so as to identify end-member waters; and (4) computing mixing ratios using the new set of components and end-members. The approach is illustrated by application to two synthetic mixing examples involving mineral dissolution and cation exchange reactions. Results confirm that the methodology can be successfully used to identify geochemical processes affecting the mixtures, thus improving the accuracy of mixing ratio calculations and relaxing the need for conservative species.
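
    A minimal sketch, not from the paper, of step (4): solving for non-negative mixing proportions that sum to one, given decoupled conservative component concentrations for each end-member. The soft sum-to-one constraint via an appended weighted row is a common trick when using plain non-negative least squares.

      import numpy as np
      from scipy.optimize import nnls

      def mixing_ratios(end_members, sample, weight=1e3):
          """Non-negative mixing proportions of end-member waters for one sample.

          end_members : (n_components, n_end_members) matrix of conservative
                        (decoupled) component concentrations for each end-member
          sample      : (n_components,) concentrations of the mixed sample
          """
          A = np.vstack([end_members, weight * np.ones(end_members.shape[1])])
          b = np.append(sample, weight)
          x, _ = nnls(A, b)
          return x / x.sum()   # renormalise so the proportions sum exactly to 1

      # Two end-members, two conservative components (made-up values)
      E = np.array([[10.0, 2.0],
                    [ 1.0, 5.0]])
      print(mixing_ratios(E, np.array([6.0, 3.0])))  # roughly [0.5, 0.5]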

  7. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    NASA Astrophysics Data System (ADS)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.

  8. Speed, Dissipation, and Accuracy in Early T-cell Recognition

    NASA Astrophysics Data System (ADS)

    Cui, Wenping; Mehta, Pankaj

    In the immune system, T cells can perform self-foreign discrimination with great sensitivity to foreign ligands, high decision speed and low energy cost. There is significant evidence that T cells achieve such performance through a mechanism known as kinetic proofreading (KPR). KPR-based mechanisms actively consume energy to increase the specificity of T-cell recognition. An important theoretical question arises: how should one understand the trade-offs and fundamental limits on accuracy, speed, and dissipation (energy consumption)? Recent theoretical work suggests that it is always possible to reduce the error of KPR-based mechanisms by waiting longer and/or consuming more energy. Surprisingly, we find that this is not the case and that there actually exists an optimal point in the speed-energy-accuracy plane for KPR and its generalizations. This work was supported by NIH R35 and a Simons MMLS Grant.

  9. Effects of chemical disinfectant solutions on the stability and accuracy of the dental impression complex.

    PubMed

    Rios, M P; Morgano, S M; Stein, R S; Rose, L

    1996-10-01

    Currently available impression materials were not designed for disinfection or sterilization, and it is conceivable that disinfectants may adversely affect impressions. This study evaluated the accuracy and dimensional stability of polyether (Permadyne/Impregum) and polyvinyl siloxane (Express) impression materials retained by their adhesives in two different acrylic resin tray designs (perforated and nonperforated) when the materials were immersed for either 30 or 60 minutes in three high-level disinfectants. Distilled water and no solution served as controls. A stainless steel test analog similar to ADA specification No. 19 was used. A total of 400 impressions were made with all combinations of impression materials, tray designs, disinfectant, and soaking times. Samples were evaluated microscopically before and after immersion and 48 hours after soaking. Results indicated that these two impression materials were dimensionally stable. Because the results emphasized the stability and accuracy of the impression complex under various conditions, dentists can perform disinfection procedures similar to the protocol of this study without concern for clinically significant distortion of the impression.

  10. Use of in Vitro HTS-Derived Concentration–Response Data as Biological Descriptors Improves the Accuracy of QSAR Models of in Vivo Toxicity

    PubMed Central

    Sedykh, Alexander; Zhu, Hao; Tang, Hao; Zhang, Liying; Richard, Ann; Rusyn, Ivan; Tropsha, Alexander

    2011-01-01

    Background Quantitative high-throughput screening (qHTS) assays are increasingly being used to inform chemical hazard identification. Hundreds of chemicals have been tested in dozens of cell lines across extensive concentration ranges by the National Toxicology Program in collaboration with the National Institutes of Health Chemical Genomics Center. Objectives Our goal was to test a hypothesis that dose–response data points of the qHTS assays can serve as biological descriptors of assayed chemicals and, when combined with conventional chemical descriptors, improve the accuracy of quantitative structure–activity relationship (QSAR) models applied to prediction of in vivo toxicity end points. Methods We obtained cell viability qHTS concentration–response data for 1,408 substances assayed in 13 cell lines from PubChem; for a subset of these compounds, rodent acute toxicity half-maximal lethal dose (LD50) data were also available. We used the k nearest neighbor classification and random forest QSAR methods to model LD50 data using chemical descriptors either alone (conventional models) or combined with biological descriptors derived from the concentration–response qHTS data (hybrid models). Critical to our approach was the use of a novel noise-filtering algorithm to treat qHTS data. Results Both the external classification accuracy and coverage (i.e., fraction of compounds in the external set that fall within the applicability domain) of the hybrid QSAR models were superior to conventional models. Conclusions Concentration–response qHTS data may serve as informative biological descriptors of molecules that, when combined with conventional chemical descriptors, may considerably improve the accuracy and utility of computational approaches for predicting in vivo animal toxicity end points. PMID:20980217
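
    A minimal sketch, not from the paper, of the conventional-versus-hybrid comparison described above, using a random forest classifier on concatenated chemical and qHTS-derived biological descriptors. The noise-filtering step applied to the qHTS data in the study is not reproduced, and the file names below are hypothetical placeholders.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      # Hypothetical inputs: conventional chemical descriptors, and biological
      # descriptors built from qHTS concentration-response points (e.g. normalised
      # viability at each tested concentration in each cell line).
      chem = np.load("chemical_descriptors.npy")    # (n_compounds, n_chem), assumed file
      bio = np.load("qhts_response_points.npy")     # (n_compounds, n_conc * n_cell_lines)
      y = np.load("ld50_toxicity_class.npy")        # binary toxicity class from LD50

      hybrid = np.hstack([chem, bio])               # hybrid = chemical + biological

      rf = RandomForestClassifier(n_estimators=500, random_state=0)
      for name, X in [("conventional", chem), ("hybrid", hybrid)]:
          acc = cross_val_score(rf, X, y, cv=5).mean()
          print(f"{name} model CV accuracy: {acc:.2f}")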

  11. Two high accuracy digital integrators for Rogowski current transducers.

    PubMed

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is limited mainly by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, instead of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied to improve the Al-Alaoui integrator, changing its DC response and obtaining an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.
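
    A minimal sketch, not from the paper, of a discrete Al-Alaoui integrator with a simple leak against DC drift; the published design adds a PID feedback loop and its own attenuation scheme, which are not reproduced here.

      import numpy as np

      def al_alaoui_integrate(x, dt, attenuation=0.9999):
          """Digital integration of a Rogowski coil signal (sketch).

          Al-Alaoui integrator: y[n] = y[n-1] + (7*dt/8)*x[n] + (dt/8)*x[n-1].
          The attenuation (leak) coefficient on the previous output keeps a
          residual DC component in x from accumulating into output drift.
          """
          y = np.zeros_like(x, dtype=float)
          for n in range(1, len(x)):
              y[n] = attenuation * y[n - 1] + (7 * dt / 8) * x[n] + (dt / 8) * x[n - 1]
          return y

      # 50 Hz test signal: the coil output is proportional to di/dt, so integrating
      # it should recover a scaled version of the sinusoidal current.
      t = np.arange(0, 0.2, 1e-5)
      didt = 2 * np.pi * 50 * np.cos(2 * np.pi * 50 * t) + 0.01   # small DC offset
      i_rec = al_alaoui_integrate(didt, dt=1e-5)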

  12. Two high accuracy digital integrators for Rogowski current transducers

    NASA Astrophysics Data System (ADS)

    Luo, Pan-dian; Li, Hong-bin; Li, Zhen-hua

    2014-01-01

    Rogowski current transducers have been widely used in AC current measurement, but their accuracy is limited mainly by the analog integrators, which have typical problems such as poor long-term stability and susceptibility to environmental conditions. Digital integrators can be another choice, but they cannot produce a stable and accurate output because the DC component in the original signal accumulates, leading to output DC drift. Unknown initial conditions can also result in an integral output DC offset. This paper proposes two improved digital integrators for use in Rogowski current transducers, instead of traditional analog integrators, for high measuring accuracy. A proportional-integral-derivative (PID) feedback controller and an attenuation coefficient have been applied to improve the Al-Alaoui integrator, changing its DC response and obtaining an ideal frequency response. Owing to this dedicated digital signal processing design, the improved digital integrators perform better than analog integrators. Simulation models are built for verification and comparison. The experiments prove that the designed integrators can achieve higher accuracy than analog integrators in steady-state response, transient-state response, and under changing temperature conditions.

  13. Cognitive accuracy and intelligent executive function in the brain and in business.

    PubMed

    Bailey, Charles E

    2007-11-01

    This article reviews research on cognition, language, organizational culture, brain, behavior, and evolution to posit the value of operating with a stable reference point based on cognitive accuracy and a rational bias. Drawing on rational-emotive behavioral science, social neuroscience, and cognitive organizational science on the one hand and a general model of brain and frontal lobe executive function on the other, I suggest implications for organizational success. Cognitive thought processes depend on specific brain structures functioning as effectively as possible under conditions of cognitive accuracy. However, typical cognitive processes in hierarchical business structures promote the adoption and application of subjective organizational beliefs and, thus, cognitive inaccuracies. Applying informed frontal lobe executive functioning to cognition, emotion, and organizational behavior helps minimize the negative effects of indiscriminate application of personal and cultural belief systems to business. Doing so enhances cognitive accuracy and improves communication and cooperation. Organizations operating with cognitive accuracy will tend to respond more nimbly to market pressures and achieve an overall higher level of performance and employee satisfaction.

  14. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  15. Evaluation of registration accuracy between Sentinel-2 and Landsat 8

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia

    2016-08-01

    Starting from June 2015, Sentinel-2A is delivering high resolution optical images (ground resolution up to 10 meters) to provide a global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required; these include agriculture, water bodies and natural hazards, to name a few. However, the combined use of multi-temporal images requires an accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometric model. Results demonstrate that sub-pixel accuracy was achieved between 10 m resolution Sentinel-2 bands (band 3) and 15 m resolution panchromatic Landsat images (band 8).
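
    A minimal sketch, not from the paper, of one building block for such a registration check: estimating the integer-pixel translation between two co-located image chips by phase correlation. The full analysis above fits an affine model, of which this recovers only the shift; sub-pixel accuracy would additionally require interpolation around the correlation peak.

      import numpy as np

      def phase_correlation_shift(img_a, img_b):
          """Integer-pixel translation of img_a relative to img_b."""
          A = np.fft.fft2(img_a)
          B = np.fft.fft2(img_b)
          cross_power = A * np.conj(B)
          cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
          corr = np.abs(np.fft.ifft2(cross_power))
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # convert the wrapped peak position to signed shifts
          shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
          return tuple(shifts)   # (row shift, column shift)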

  16. Excess chemical potential of small solutes across water--membrane and water--hexane interfaces

    NASA Technical Reports Server (NTRS)

    Pohorille, A.; Wilson, M. A.

    1996-01-01

    The excess chemical potentials of five small, structurally related solutes, CH4, CH3F, CH2F2, CHF3, and CF4, across the water-glycerol 1-monooleate bilayer and water-hexane interfaces were calculated at 300, 310, and 340 K using the particle insertion method. The excess chemical potentials of nonpolar molecules (CH4 and CF4) decrease monotonically or nearly monotonically from water to a nonpolar phase. In contrast, for molecules that possess permanent dipole moments (CH3F, CH2F2, and CHF3), the excess chemical potentials exhibit an interfacial minimum that arises from superposition of two monotonically and oppositely changing contributions: electrostatic and nonelectrostatic. The nonelectrostatic term, dominated by the reversible work of creating a cavity that accommodates the solute, decreases, whereas the electrostatic term increases across the interface from water to the membrane interior. In water, the dependence of this term on the dipole moment is accurately described by second order perturbation theory. To achieve the same accuracy at the interface, third order terms must also be included. In the interfacial region, the molecular structure of the solvent influences both the excess chemical potential and solute orientations. The excess chemical potential across the interface increases with temperature, but this effect is rather small. Our analysis indicates that a broad range of small, moderately polar molecules should be surface active at the water-membrane and water-oil interfaces. The biological and medical significance of this result, especially in relation to the mechanism of anesthetic action, is discussed.
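
    A minimal sketch, not from the paper, of the particle (Widom) insertion estimate of the excess chemical potential, mu_ex = -kT ln<exp(-dU/kT)>, from trial insertion energies; units and the toy data are assumptions.

      import numpy as np

      def widom_excess_mu(insertion_energies, temperature, k_b=0.0019872):
          """Excess chemical potential from the particle insertion method.

          insertion_energies : interaction energies dU of random trial insertions
                               of the solute into sampled solvent configurations
                               (kcal/mol here, with k_b in kcal/(mol K))
          """
          beta = 1.0 / (k_b * temperature)
          boltz = np.exp(-beta * np.asarray(insertion_energies))
          return -np.log(boltz.mean()) / beta

      # Toy example: trial energies drawn around a weakly favourable mean
      rng = np.random.default_rng(1)
      du = rng.normal(loc=-0.5, scale=1.0, size=100000)   # kcal/mol, made up
      print(widom_excess_mu(du, temperature=300.0))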

  17. How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy

    PubMed Central

    Weaver, Steven A.; Ucar, Zennure; Bettinger, Pete; Merry, Krista

    2015-01-01

    understanding of antenna positioning within the receiver to achieve the greatest accuracy during data collection. PMID:25923667

  18. How a GNSS Receiver Is Held May Affect Static Horizontal Position Accuracy.

    PubMed

    Weaver, Steven A; Ucar, Zennure; Bettinger, Pete; Merry, Krista

    2015-01-01

    understanding of antenna positioning within the receiver to achieve the greatest accuracy during data collection.

  19. Automated extraction of chemical structure information from digital raster images

    PubMed Central

    Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro

    2009-01-01

    Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews for these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be run independently, in sequence, from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research

  20. An automated method for the evaluation of the pointing accuracy of sun-tracking devices

    NASA Astrophysics Data System (ADS)

    Baumgartner, Dietmar J.; Rieder, Harald E.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz

    2016-04-01

    The accuracy of measurements of solar radiation (direct and diffuse radiation) depends significantly on the accuracy of the operational sun-tracking device. Thus rigid targets for instrument performance and operation are specified for international monitoring networks, such as e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices fulfilling these accuracy targets are available from various instrument manufacturers, however none of the commercially available systems comprises a secondary accuracy control system, allowing platform operators to independently validate the pointing accuracy of sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system independent and cost-effective method for evaluating the pointing accuracy of sun-tracking devices. We detail the monitoring system setup, its design and specifications and results from its application to the sun-tracking system operated at the Austrian RADiation network (ARAD) site Kanzelhöhe Observatory (KSO). Results from KSO-STREAMS (for mid-March to mid-June 2015) show that the tracking accuracy of the device operated at KSO lies well within BSRN specifications (i.e. 0.1 degree accuracy). We contrast results during clear-sky and partly cloudy conditions documenting sun-tracking performance at manufacturer specified accuracies for active tracking (0.02 degrees) and highlight accuracies achieved during passive tracking i.e. periods with less than 300 W m-2 direct radiation. Furthermore we detail limitations to tracking surveillance during overcast conditions and periods of partial solar limb coverage by clouds.

  1. Improving the accuracy of Møller-Plesset perturbation theory with neural networks

    NASA Astrophysics Data System (ADS)

    McGibbon, Robert T.; Taube, Andrew G.; Donchev, Alexander G.; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L.; Shaw, David E.

    2017-10-01

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.
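
    A schematic, linear analogue of the reweighting idea described above (not the SNS-MP2 architecture itself): fit global scaling coefficients for the opposite-spin and same-spin MP2 correlation contributions against reference coupled-cluster interaction energies. SNS-MP2 instead makes such coefficients depend on features of each complex through a neural network; the file names below are hypothetical placeholders.

      import numpy as np

      e_hf = np.load("hf_interaction.npy")        # (n_dimers,) HF interaction energies, assumed file
      e_os = np.load("mp2_os_interaction.npy")    # opposite-spin MP2 correlation part
      e_ss = np.load("mp2_ss_interaction.npy")    # same-spin MP2 correlation part
      e_ref = np.load("ccsd_t_cbs_reference.npy") # reference CBS coupled-cluster energies

      # least-squares fit of the two scaling coefficients
      design = np.column_stack([e_os, e_ss])
      coeffs, *_ = np.linalg.lstsq(design, e_ref - e_hf, rcond=None)
      c_os, c_ss = coeffs
      e_scaled = e_hf + c_os * e_os + c_ss * e_ss
      print("fitted coefficients:", c_os, c_ss)
      print("MAE vs reference: %.3f" % np.mean(np.abs(e_scaled - e_ref)))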

  2. Improving the accuracy of Møller-Plesset perturbation theory with neural networks.

    PubMed

    McGibbon, Robert T; Taube, Andrew G; Donchev, Alexander G; Siva, Karthik; Hernández, Felipe; Hargus, Cory; Law, Ka-Hei; Klepeis, John L; Shaw, David E

    2017-10-28

    Noncovalent interactions are of fundamental importance across the disciplines of chemistry, materials science, and biology. Quantum chemical calculations on noncovalently bound complexes, which allow for the quantification of properties such as binding energies and geometries, play an essential role in advancing our understanding of, and building models for, a vast array of complex processes involving molecular association or self-assembly. Because of its relatively modest computational cost, second-order Møller-Plesset perturbation (MP2) theory is one of the most widely used methods in quantum chemistry for studying noncovalent interactions. MP2 is, however, plagued by serious errors due to its incomplete treatment of electron correlation, especially when modeling van der Waals interactions and π-stacked complexes. Here we present spin-network-scaled MP2 (SNS-MP2), a new semi-empirical MP2-based method for dimer interaction-energy calculations. To correct for errors in MP2, SNS-MP2 uses quantum chemical features of the complex under study in conjunction with a neural network to reweight terms appearing in the total MP2 interaction energy. The method has been trained on a new data set consisting of over 200 000 complete basis set (CBS)-extrapolated coupled-cluster interaction energies, which are considered the gold standard for chemical accuracy. SNS-MP2 predicts gold-standard binding energies of unseen test compounds with a mean absolute error of 0.04 kcal mol-1 (root-mean-square error 0.09 kcal mol-1), a 6- to 7-fold improvement over MP2. To the best of our knowledge, its accuracy exceeds that of all extant density functional theory- and wavefunction-based methods of similar computational cost, and is very close to the intrinsic accuracy of our benchmark coupled-cluster methodology itself. Furthermore, SNS-MP2 provides reliable per-conformation confidence intervals on the predicted interaction energies, a feature not available from any alternative method.

  3. Predicting carcinogenicity of diverse chemicals using probabilistic neural network modeling approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Kunwar P., E-mail: kpsingh_52@yahoo.com; Environmental Chemistry Division, CSIR-Indian Institute of Toxicology Research, Post Box 80, Mahatma Gandhi Marg, Lucknow 226 001; Gupta, Shikha

    Robust global models capable of discriminating positive and non-positive carcinogens and predicting the carcinogenic potency of chemicals in rodents were developed. The dataset of 834 structurally diverse chemicals extracted from the Carcinogenic Potency Database (CPDB) was used, which contained 466 positive and 368 non-positive carcinogens. Twelve non-quantum mechanical molecular descriptors were derived. Structural diversity of the chemicals and nonlinearity in the data were evaluated using the Tanimoto similarity index and Brock–Dechert–Scheinkman statistics. Probabilistic neural network (PNN) and generalized regression neural network (GRNN) models were constructed for classification and function optimization problems using the carcinogenicity end point in rat. Validation of the models was performed using the internal and external procedures employing a wide series of statistical checks. PNN constructed using five descriptors rendered classification accuracy of 92.09% in complete rat data. The PNN model rendered classification accuracies of 91.77%, 80.70% and 92.08% in mouse, hamster and pesticide data, respectively. The GRNN constructed with nine descriptors yielded a correlation coefficient of 0.896 between the measured and predicted carcinogenic potency with mean squared error (MSE) of 0.44 in complete rat data. The rat carcinogenicity model (GRNN) applied to the mouse and hamster data yielded correlation coefficient and MSE of 0.758, 0.71 and 0.760, 0.46, respectively. The results suggest wide applicability of the inter-species models in predicting carcinogenic potency of chemicals. Both the PNN and GRNN (inter-species) models constructed here can be useful tools in predicting the carcinogenicity of new chemicals for regulatory purposes. - Graphical abstract: Figure (a) shows classification accuracies (positive and non-positive carcinogens) in rat, mouse, hamster, and pesticide data yielded by the optimal PNN model.

  4. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  5. Screening Chemicals for Estrogen Receptor Bioactivity Using a Computational Model.

    PubMed

    Browne, Patience; Judson, Richard S; Casey, Warren M; Kleinstreuer, Nicole C; Thomas, Russell S

    2015-07-21

    The U.S. Environmental Protection Agency (EPA) is considering high-throughput and computational methods to evaluate the endocrine bioactivity of environmental chemicals. Here we describe a multistep, performance-based validation of new methods and demonstrate that these new tools are sufficiently robust to be used in the Endocrine Disruptor Screening Program (EDSP). Results from 18 estrogen receptor (ER) ToxCast high-throughput screening assays were integrated into a computational model that can discriminate bioactivity from assay-specific interference and cytotoxicity. Model scores range from 0 (no activity) to 1 (bioactivity of 17β-estradiol). ToxCast ER model performance was evaluated for reference chemicals, as well as results of EDSP Tier 1 screening assays in current practice. The ToxCast ER model accuracy was 86% to 93% when compared to reference chemicals and predicted results of EDSP Tier 1 guideline and other uterotrophic studies with 84% to 100% accuracy. The performance of high-throughput assays and ToxCast ER model predictions demonstrates that these methods correctly identify active and inactive reference chemicals, provide a measure of relative ER bioactivity, and rapidly identify chemicals with potential endocrine bioactivities for additional screening and testing. EPA is accepting ToxCast ER model data for 1812 chemicals as alternatives for EDSP Tier 1 ER binding, ER transactivation, and uterotrophic assays.

  6. Use of Cell Viability Assay Data Improves the Prediction Accuracy of Conventional Quantitative Structure–Activity Relationship Models of Animal Carcinogenicity

    PubMed Central

    Zhu, Hao; Rusyn, Ivan; Richard, Ann; Tropsha, Alexander

    2008-01-01

    Background: To develop efficient approaches for rapid evaluation of chemical toxicity and human health risk of environmental compounds, the National Toxicology Program (NTP) in collaboration with the National Center for Chemical Genomics has initiated a project on high-throughput screening (HTS) of environmental chemicals. The first HTS results for a set of 1,408 compounds tested for their effects on cell viability in six different cell lines have recently become available via PubChem. Objectives: We have explored these data in terms of their utility for predicting adverse health effects of the environmental agents. Methods and results: Initially, the classification k nearest neighbor (kNN) quantitative structure–activity relationship (QSAR) modeling method was applied to the HTS data only, for a curated data set of 384 compounds. The resulting models had prediction accuracies for training, test (containing 275 compounds together), and external validation (109 compounds) sets as high as 89%, 71%, and 74%, respectively. We then asked if HTS results could be of value in predicting rodent carcinogenicity. We identified 383 compounds for which data were available from both the Berkeley Carcinogenic Potency Database and NTP–HTS studies. We found that compounds classified by HTS as “actives” in at least one cell line were likely to be rodent carcinogens (sensitivity 77%); however, HTS “inactives” were far less informative (specificity 46%). Using chemical descriptors only, kNN QSAR modeling resulted in 62.3% prediction accuracy for rodent carcinogenicity applied to this data set. Importantly, the prediction accuracy of the model was significantly improved (72.7%) when chemical descriptors were augmented by HTS data, which were regarded as biological descriptors. Conclusions: Our studies suggest that combining NTP–HTS profiles with conventional chemical descriptors could considerably improve the predictive power of computational approaches in toxicology.
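
    A minimal sketch of augmenting chemical descriptors with HTS-derived biological descriptors in a kNN classifier is shown below. The descriptor matrices and labels are random placeholders standing in for the curated CPDB/NTP-HTS data, so the printed accuracies carry no meaning beyond illustrating the workflow.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n = 383                                   # compounds with both CPDB and HTS data
    chem = rng.normal(size=(n, 30))           # chemical descriptors (placeholder values)
    hts = rng.normal(size=(n, 6))             # HTS cell-viability profiles (placeholder values)
    y = rng.integers(0, 2, size=n)            # rodent carcinogenicity labels (placeholder)

    knn = KNeighborsClassifier(n_neighbors=5)
    acc_chem = cross_val_score(knn, chem, y, cv=5).mean()
    acc_both = cross_val_score(knn, np.hstack([chem, hts]), y, cv=5).mean()
    print(f"chemical only: {acc_chem:.2f}, chemical + HTS: {acc_both:.2f}")
    ```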

  7. CON4EI: EpiOcular™ Eye Irritation Test (EpiOcular™ EIT) for hazard identification and labelling of eye irritating chemicals.

    PubMed

    Kandarova, H; Letasiova, S; Adriaens, E; Guest, R; Willoughby, J A; Drzewiecka, A; Gruszka, K; Alépée, Nathalie; Verstraelen, Sandra; Van Rompay, An R

    2018-06-01

    Assessment of the acute eye irritation potential is part of the international regulatory requirements for testing of chemicals. The objective of the CON4EI project was to develop tiered testing strategies for eye irritation assessment. A set of 80 reference chemicals (38 liquids and 42 solids) was tested with eight different methods. Here, the results obtained with the EpiOcular™ Eye Irritation Test (EIT), adopted as OECD TG 492, are shown. The primary aim of this study was to evaluate the performance of the test method in discriminating between chemicals not requiring classification for serious eye damage/eye irritancy (No Category) and chemicals requiring classification and labelling. In addition, the predictive capacity in terms of in vivo drivers of classification (i.e. corneal opacity, conjunctival redness and persistence at day 21) was investigated. EpiOcular™ EIT achieved a sensitivity of 97%, a specificity of 87% and an accuracy of 95%, and also confirmed its excellent reproducibility (100%) from the original validation. The assay was applicable to all chemical categories tested in this project and its performance was not limited to a particular driver of classification. In addition to the existing prediction model for dichotomous categorization, a new prediction model for Cat 1 is suggested. Copyright © 2017. Published by Elsevier Ltd.
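
    For reference, sensitivity, specificity and accuracy follow directly from confusion-matrix counts; the short sketch below shows the arithmetic with illustrative counts, not the study's actual tabulation.

    ```python
    def classification_metrics(tp, fn, tn, fp):
        """Sensitivity, specificity and accuracy from confusion-matrix counts."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / (tp + fn + tn + fp)
        return sensitivity, specificity, accuracy

    # illustrative counts for an 80-chemical set (not the reported confusion matrix)
    print(classification_metrics(tp=57, fn=2, tn=18, fp=3))
    ```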

  8. Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures

    NASA Astrophysics Data System (ADS)

    Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar

    2017-02-01

    A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measures against a real sample (a phantom), because true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image, since one is blind to the true position accuracy, i.e. the deviation of the position measurement from the true positions. We have solved this issue by imaging nanorods fabricated with DNA-origami. The nanorods used are designed to have emitters attached at each end at a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super resolution microscopy methods, STED and STORM.

  9. Estimating biodegradation half-lives for use in chemical screening.

    PubMed

    Aronson, Dallas; Boethling, Robert; Howard, Philip; Stiteler, William

    2006-06-01

    Biodegradation half-lives are needed for many applications in chemical screening, but these data are not available for most chemicals. To address this, in phase one of this work we correlated the much more abundant ready and inherent biodegradation test data with measured half-lives for water and soil. In phase two, we explored the utility of the BIOWIN models (in EPI Suite) and molecular fragments for predicting half-lives. BIOWIN model output was correlated directly with measured half-lives, and new models were developed by re-regressing the BIOWIN fragments against the half-lives. All of these approaches gave the best results when used for binary (fast/slow) classification of half-lives, with accuracy generally in the 70-80% range. In the last phase, we used the collected half-life data to examine the default half-lives assigned by EPI Suite and the PBT Profiler for use as input to their level III multimedia models. It is concluded that estimated half-lives should not be used for purposes other than binning or prioritizing chemicals unless accuracy improves significantly.

  10. BitterSweetForest: A Random Forest Based Binary Classifier to Predict Bitterness and Sweetness of Chemical Compounds

    PubMed Central

    Banerjee, Priyanka; Preissner, Robert

    2018-01-01

    The taste of a chemical compound present in food stimulates us to take in nutrients and avoid poisons. However, the perception of taste greatly depends on genetic as well as evolutionary perspectives. The aim of this work was the development and validation of a machine learning model based on molecular fingerprints to discriminate between sweet and bitter taste of molecules. BitterSweetForest is the first open access model based on a KNIME workflow that provides a platform for prediction of bitter and sweet taste of chemical compounds using molecular fingerprints and a Random Forest based classifier. The constructed model yielded an accuracy of 95% and an AUC of 0.98 in cross-validation. On an independent test set, BitterSweetForest achieved an accuracy of 96% and an AUC of 0.98 for bitter and sweet taste prediction. The constructed model was further applied to predict the bitter and sweet taste of natural compounds and approved drugs, as well as on an acute toxicity compound data set. BitterSweetForest predicts 70% of the natural product space as bitter and 10% of the natural product space as sweet with a confidence score of 0.60 and above. 77% of the approved drug set was predicted as bitter and 2% as sweet with a confidence score of 0.75 and above. Similarly, 75% of the total compounds from the acute oral toxicity class were predicted only as bitter with a minimum confidence score of 0.75, revealing that toxic compounds are mostly bitter. Furthermore, we applied a Bayesian based feature analysis method to discriminate the most frequently occurring chemical features between sweet and bitter compounds using the feature space of a circular fingerprint. PMID:29696137
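
    A minimal sketch of the general workflow (circular fingerprints plus a Random Forest classifier) is shown below, assuming RDKit and scikit-learn are available; the SMILES strings and bitter/sweet labels are placeholders, not the curated training data behind BitterSweetForest.

    ```python
    import numpy as np
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem
    from sklearn.ensemble import RandomForestClassifier

    def morgan_bits(smiles, radius=2, n_bits=2048):
        """Circular (Morgan) fingerprint as a numpy bit vector."""
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        arr = np.zeros((n_bits,))
        DataStructs.ConvertToNumpyArray(fp, arr)
        return arr

    # tiny illustrative set; labels are assumed, not curated taste data
    smiles = ["CC(=O)OC1=CC=CC=C1C(=O)O",          # aspirin
              "C(C1C(C(C(C(O1)O)O)O)O)O",          # glucose
              "CN1C=NC2=C1C(=O)N(C(=O)N2C)C",      # caffeine
              "OC(=O)C1=CC=CC=C1O"]                # salicylic acid
    labels = [1, 0, 1, 1]                          # 1 = bitter, 0 = sweet (placeholder)

    X = np.array([morgan_bits(s) for s in smiles])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    print(clf.predict_proba(X)[:, 1])              # per-compound confidence scores
    ```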

  11. BitterSweetForest: A random forest based binary classifier to predict bitterness and sweetness of chemical compounds

    NASA Astrophysics Data System (ADS)

    Banerjee, Priyanka; Preissner, Robert

    2018-04-01

    The taste of a chemical compound present in food stimulates us to take in nutrients and avoid poisons. However, the perception of taste greatly depends on genetic as well as evolutionary perspectives. The aim of this work was the development and validation of a machine learning model based on molecular fingerprints to discriminate between sweet and bitter taste of molecules. BitterSweetForest is the first open access model based on a KNIME workflow that provides a platform for prediction of bitter and sweet taste of chemical compounds using molecular fingerprints and a Random Forest based classifier. The constructed model yielded an accuracy of 95% and an AUC of 0.98 in cross-validation. On an independent test set, BitterSweetForest achieved an accuracy of 96% and an AUC of 0.98 for bitter and sweet taste prediction. The constructed model was further applied to predict the bitter and sweet taste of natural compounds and approved drugs, as well as on an acute toxicity compound data set. BitterSweetForest predicts 70% of the natural product space as bitter and 10% of the natural product space as sweet with a confidence score of 0.60 and above. 77% of the approved drug set was predicted as bitter and 2% as sweet with a confidence score of 0.75 and above. Similarly, 75% of the total compounds from the acute oral toxicity class were predicted only as bitter with a minimum confidence score of 0.75, revealing that toxic compounds are mostly bitter. Furthermore, we applied a Bayesian based feature analysis method to discriminate the most frequently occurring chemical features between sweet and bitter compounds from the feature space of a circular fingerprint.

  12. Toxic fables: the advertising and marketing of agricultural chemicals in the great plains, 1945-1985.

    PubMed

    Vail, David D

    2012-12-01

    This paper examines how pesticides and their technologies were sold to farmers and pilots throughout the mid-twentieth century. It principally considers how marketing rhetoric and advertisement strategies used by chemical companies and aerial spraying firms influenced the practices and perspectives of farm producers in the Great Plains. In order to convince landowners and agricultural leaders to buy their pesticides, chemical companies generated advertisements that championed local crop health, mixture accuracy, livestock safety and a chemical-farming 'way of life' that kept fields healthy and productive. Combining notions of safety, accuracy and professionalism with pest eradication messages reinforced the standards that landowners, pilots and agriculturalists would hold regarding toxicity and risk when spraying their fields. As the politics of health changed in the aftermath of Rachel Carson's Silent Spring, these companies and aerial spraying outfits responded by keeping to a vision of agricultural health that required poisons for protection through technological accuracy. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Collection of quantitative chemical release field data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demirgian, J.; Macha, S.; Loyola Univ.

    1999-01-01

    Detection and quantitation of chemicals in the environment requires Fourier-transform infrared (FTIR) instruments that are properly calibrated and tested. This calibration and testing requires field testing using matrices that are representative of actual instrument use conditions. Three methods commonly used for developing calibration files and training sets in the field are a closed optical cell or chamber, a large-scale chemical release, and a small-scale chemical release. There is no best method. The advantages and limitations of each method should be considered in evaluating field results. Proper calibration characterizes the sensitivity of an instrument, its ability to detect a component in different matrices, and the quantitative accuracy and precision of the results.

  14. Chemical leasing business models: a contribution to the effective risk management of chemical substances.

    PubMed

    Ohl, Cornelia; Moser, Frank

    2007-08-01

    Chemicals indisputably contribute greatly to the well-being of modern societies. Apart from such benefits, however, chemicals often pose serious threats to human health and the environment when improperly handled. Therefore, the European Commission has proposed a regulatory framework for the Registration, Evaluation and Authorization of Chemicals (REACH) that requires companies using chemicals to gather pertinent information on the properties of these substances. In this article, we argue that the crucial aspect of this information management may be the honesty and accuracy of the transfer of relevant knowledge from the producer of a chemical to its user. This may be particularly true if the application of potentially hazardous chemicals is not part of the user's core competency. Against this background, we maintain that the traditional sales concept provides no incentives for transferring this knowledge. The reason is that increased user knowledge of a chemical's properties may raise the efficiency of its application. That is, excessive and unnecessary usage will be eliminated. This, in turn, would lower the amount of chemicals sold and in competitive markets directly decrease profits of the producer. Through the introduction of chemical leasing business models, we attempt to present a strategy to overcome the incentive structure of classical sales models, which is counterproductive for the transfer of knowledge. By introducing two models (a Model A that differs least and a Model B that differs most from traditional sales concepts), we demonstrate that chemical leasing business models are capable of accomplishing the goal of Registration, Evaluation and Authorization of Chemicals: to effectively manage the risk of chemicals by reducing the total quantity of chemicals used, either by a transfer of applicable knowledge from the lessor to the lessee (Model A) or by efficient application of the chemical by the lessor him/herself (Model B).

  15. Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.

    2010-01-01

    A new computational analysis tool, the downscaling test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second and third order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using the DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to any order scheme.
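
    The convergence rates discussed here are typically quantified by computing an observed order of accuracy from errors measured on two grids of different spacing; a minimal sketch of that calculation (not the downscaling-test implementation itself) follows.

    ```python
    import math

    def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
        """Observed convergence order from discretization errors measured on two
        grids whose spacing differs by `refinement_ratio`."""
        return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

    # errors that drop by a factor of 4 under 2x refinement indicate second order
    print(observed_order(4.0e-3, 1.0e-3))      # ~2.0
    ```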

  16. Accuracy of stream habitat interpolations across spatial scales

    USGS Publications Warehouse

    Sheehan, Kenneth R.; Welsh, Stuart A.

    2013-01-01

    Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale stream habitat dataset applied at varying scales, we examined the influence that data scaling had on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 meter cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. Proportional accuracy of the interpolated models (r2) decreased, but was maintained up to 78% as the interpolation scale moved from 0.11 m2 to 16 m2. Results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
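
    The accuracy measures used in this comparison (RMSE and proportional accuracy, r2, between interpolated and baseline values) can be computed as in the short sketch below; the depth values are made up for illustration.

    ```python
    import numpy as np

    def rmse(pred, obs):
        return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

    def r_squared(pred, obs):
        pred, obs = np.asarray(pred), np.asarray(obs)
        ss_res = np.sum((obs - pred) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return float(1.0 - ss_res / ss_tot)

    # interpolated depths at a coarser scale vs. the fine baseline grid (made-up values)
    baseline = np.array([0.42, 0.55, 0.61, 0.38, 0.47])
    coarse = np.array([0.45, 0.52, 0.58, 0.41, 0.50])
    print(rmse(coarse, baseline), r_squared(coarse, baseline))
    ```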

  17. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the measurement circuit, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measurement accuracy by almost one order of magnitude and reduce the root-mean-square error by 3.75 times under the same measurement conditions. Experiments show that the novel method significantly improves the measurement accuracy of resistance without increasing system cost or circuit complexity, which makes it valuable for application in electronic instruments.
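
    The benefit of oversampling can be illustrated with a minimal simulation: averaging N noisy readings reduces the RMS error roughly as 1/sqrt(N). The voltage and noise level below are assumed values, not the parameters of the paper's measurement circuit.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    true_voltage = 1.000                          # volts across the unknown resistor (assumed)
    noise_rms = 0.010                             # ADC noise, 10 mV RMS (assumed)

    def measure(n_samples):
        """Average n oversampled readings; RMS error shrinks roughly as 1/sqrt(n)."""
        samples = true_voltage + rng.normal(0.0, noise_rms, n_samples)
        return samples.mean()

    for n in (1, 16, 256):
        errs = [measure(n) - true_voltage for _ in range(2000)]
        print(n, np.sqrt(np.mean(np.square(errs))))
    ```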

  18. Effect of weathering on accuracy of fuel-moisture-indicator sticks in the Pacific Northwest.

    Treesearch

    William G. Morris

    1959-01-01

    How much does weathering affect accuracy of fuel-moisture indicator stick readings in different sections of Oregon and Washington? If unpainted lumber is exposed to weather for a few years, its color changes and the grain shows as much erosion as if it were sandblasted. According to the Forest Products Laboratory, chemical as well as physical changes produce these...

  19. Identification and classification of chemicals using terahertz reflective spectroscopic focal-plane imaging system.

    PubMed

    Zhong, Hua; Redo-Sanchez, Albert; Zhang, X-C

    2006-10-02

    We present terahertz (THz) reflective spectroscopic focal-plane imaging of four explosive and bio-chemical materials (2,4-DNT, theophylline, RDX and glutamic acid) at a standoff imaging distance of 0.4 m. The two-dimensional (2-D) nature of this technique enables a fast acquisition time and is very close to camera-like operation, compared with the most commonly used point emission-detection and raster-scanning configuration. The samples are identified by their absorption peaks extracted from the negative derivative of the reflection coefficient with respect to frequency (-dr/dv) of each pixel. Classification of the samples is achieved by using minimum distance classifier and neural network methods, with an accuracy above 80% and a false alarm rate below 8%. This result supports the future application of THz time-domain spectroscopy (TDS) in standoff sensing, imaging, and identification.
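
    A minimum distance classifier of the kind used here simply assigns each pixel to the class whose template feature vector is nearest; a toy sketch follows, with placeholder feature vectors rather than measured -dr/dv spectra.

    ```python
    import numpy as np

    def minimum_distance_classify(spectrum, class_templates):
        """Assign a pixel spectrum to the class whose template (mean feature
        vector, e.g. -dr/dv sampled at known absorption peaks) is nearest."""
        names = list(class_templates)
        dists = [np.linalg.norm(spectrum - class_templates[k]) for k in names]
        return names[int(np.argmin(dists))]

    templates = {                                  # placeholder feature vectors
        "RDX": np.array([0.8, 0.1, 0.3]),
        "2,4-DNT": np.array([0.2, 0.7, 0.4]),
    }
    print(minimum_distance_classify(np.array([0.75, 0.15, 0.35]), templates))
    ```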

  20. Accuracy of refractive outcomes in myopic and hyperopic laser in situ keratomileusis: Manifest versus aberrometric refraction.

    PubMed

    Reinstein, Dan Z; Morral, Merce; Gobbe, Marine; Archer, Timothy J

    2012-11-01

    To compare the achieved refractive accuracy of laser in situ keratomileusis (LASIK) performed based on manifest refraction with the predicted accuracy that would have been achieved using WASCA aberrometric refraction with and without Seidel correction factor for sphere. London Vision Clinic, London, United Kingdom. Comparative case series. Myopic eyes and hyperopic eyes had LASIK based on manifest refraction. Two aberrometric refractions were obtained preoperatively: Seidel, which includes spherical aberration in the sphere calculation, and non-Seidel. Bland-Altman plots were used to show the agreement between aberrometric and manifest refractions. Predicted LASIK outcomes had aberrometric refraction been used were modeled by shifting the postoperative manifest refraction by the vector difference between the preoperative manifest and aberrometric refractions. This study included 869 myopic eyes and 413 hyperopic eyes. The mean differences (manifest minus aberrometric) in spherical equivalent were +0.03 diopters (D) ± 0.48 (SD) (Seidel aberrometric) and +0.45 ± 0.42 D (non-Seidel aberrometric) for myopia and -0.20 ± 0.39 D and +0.39 ± 0.34 D, respectively, for hyperopia. The mean differences in cylinder magnitude were -0.10 ± 0.27 D and 0.00 ± 0.25 D, respectively. The percentage of eyes within ±0.50 D of the attempted correction was 81% (manifest), 70% (Seidel), and 67% (non-Seidel) for myopia and 71% (manifest), 61% (Seidel), and 64% (non-Seidel) for hyperopia. The achieved refractive accuracy by manifest refraction was better than the predicted accuracy had Seidel or non-Seidel aberrometric refractions been used for surgical planning. Using the Seidel method improved the accuracy in myopic eyes but not in hyperopic eyes. Dr. Reinstein is a consultant to Carl Zeiss Meditec AG and has a proprietary interest in the Artemis technology (Arcscan Inc., Morrison, Colorado, USA) through patents administered by the Cornell Center for Technology Enterprise and

  1. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE PAGES

    Rosewater, David; Ferreira, Summer; Schoenwald, David; ...

    2018-01-25

    Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts enabling better control over BESS charge/discharge schedules.
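
    As a rough illustration of what an SoC forecasting model looks like (not the paper's reformulated models), the sketch below propagates a charge/discharge schedule through a coulomb-counting-style model with an efficiency parameter and scores the forecast with an RMS error, the kind of accuracy metric such a framework would quantify. All numbers are hypothetical.

    ```python
    import numpy as np

    def forecast_soc(soc0, power_kw, dt_h, capacity_kwh, efficiency=0.95):
        """Coulomb-counting-style SoC forecast for a schedule of charge (+) /
        discharge (-) powers. The efficiency parameter stands in for the kind
        of model parameter that would be fit from operational data."""
        soc = [soc0]
        for p in power_kw:
            energy = p * dt_h * (efficiency if p > 0 else 1.0 / efficiency)
            soc.append(float(np.clip(soc[-1] + energy / capacity_kwh, 0.0, 1.0)))
        return np.array(soc)

    schedule = [50, 50, -30, -60, 0, 20]           # kW per interval, hypothetical
    pred = forecast_soc(0.5, schedule, dt_h=0.25, capacity_kwh=250)
    measured = pred + np.random.default_rng(3).normal(0, 0.01, pred.size)
    print("RMS SoC forecast error:", np.sqrt(np.mean((pred - measured) ** 2)))
    ```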

  2. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of deconvolution-based CTP imaging systems and of how their quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
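
    A minimal sketch of deconvolution-based perfusion estimation with Tikhonov regularization is given below to make the role of the regularization strength concrete; the arterial input function, tissue curve, and regularization value are synthetic and illustrative only.

    ```python
    import numpy as np

    def deconvolve_residue(aif, tissue_curve, dt, reg=0.1):
        """Estimate the flow-scaled residue function from C_tissue = dt * (AIF * R)
        by Tikhonov-regularized least squares; `reg` plays the role of the
        regularization strength discussed above (value is illustrative)."""
        n = len(aif)
        A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                           for i in range(n)])
        lam = reg * np.linalg.svd(A, compute_uv=False).max()
        k = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ tissue_curve)
        return k          # CBF is proportional to max(k); CBV to its integral

    # synthetic example: boxcar residue convolved with a gamma-variate-like AIF
    dt, t = 1.0, np.arange(40.0)
    aif = (t / 5.0) ** 3 * np.exp(-t / 2.0)
    true_k = 0.02 * (t < 8)
    tissue = dt * np.convolve(aif, true_k)[:40]
    print(deconvolve_residue(aif, tissue, dt, reg=0.05).max())
    ```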

  3. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosewater, David; Ferreira, Summer; Schoenwald, David

    Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts enabling better control over BESS charge/discharge schedules.

  4. CON4EI: Short Time Exposure (STE) test method for hazard identification and labelling of eye irritating chemicals.

    PubMed

    Adriaens, E; Willoughby, J A; Meyer, B R; Blakeman, L C; Alépée, N; Fochtman, P; Guest, R; Kandarova, H; Verstraelen, S; Van Rompay, A R

    2018-06-01

    Assessment of ocular irritancy is an international regulatory requirement in the safety evaluation of industrial and consumer products. Although many in vitro ocular irritation assays exist, alone they are incapable of fully categorizing chemicals. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI consortium was developed to assess the reliability of eight in vitro test methods and establish an optimal tiered-testing strategy. One assay selected was the Short Time Exposure (STE) assay. This assay measures the viability of SIRC rabbit corneal cells after 5 min exposure to 5% and 0.05% solutions of test material, and is capable of categorizing Category 1 and No Category chemicals. The accuracy of the STE test method to identify Cat 1 chemicals was 61.3%, with 23.7% sensitivity and 95.2% specificity. If non-soluble chemicals and unqualified results were excluded, the performance to identify Cat 1 chemicals remained similar (accuracy 62.2%, with 22.7% sensitivity and 100% specificity). The accuracy of the STE test method to identify No Cat chemicals was 72.5%, with 66.2% sensitivity and 100% specificity. Excluding highly volatile chemicals, non-surfactant solids and non-qualified results resulted in a marked improvement in the performance of the STE test method (accuracy 96.2%, with 81.8% sensitivity and 100% specificity). Furthermore, solids appear to be more difficult to test in the STE: 71.4% of the solids gave unqualified results (solubility issues and/or high variation between independent runs), whereas for liquids 13.2% of the results were not qualified, supporting the restriction of the test method regarding the testing of solids. Copyright © 2017. Published by Elsevier Ltd.

  5. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  6. Accuracy and Resolution in Micro-earthquake Tomographic Inversion Studies

    NASA Astrophysics Data System (ADS)

    Hutchings, L. J.; Ryan, J.

    2010-12-01

    Accuracy and resolution are complementary properties necessary to interpret the results of earthquake location and tomography studies. Accuracy is how close an answer is to the “real world”, and resolution is how small a node spacing or earthquake error ellipse one can achieve. We have modified SimulPS (Thurber, 1986) in several ways to provide a tool for evaluating the accuracy and resolution of potential micro-earthquake networks. First, we provide synthetic travel times from synthetic three-dimensional geologic models and earthquake locations. We use this to calculate errors in earthquake location and velocity inversion results when we perturb these models and try to invert to obtain these models. We create as many stations as desired and can create a synthetic velocity model with any desired node spacing. We apply this study to SimulPS and TomoDD inversion studies. “Real” travel times are perturbed with noise and hypocenters are perturbed to replicate a starting location away from the “true” location, and inversion is performed by each program. We establish travel times with the pseudo-bending ray tracer and use the same ray tracer in the inversion codes. This, of course, limits our ability to test the accuracy of the ray tracer. We developed relationships for the accuracy and resolution expected as a function of the number of earthquakes and recording stations for typical tomographic inversion studies. Velocity grid spacing started at 1 km, then was decreased to 500 m, 100 m, 50 m and finally 10 m to see if resolution with decent accuracy at that scale was possible. We considered accuracy to be good when we could invert a velocity model perturbed by 50% back to within 5% of the original model, and resolution to be the size of the grid spacing. We found that 100 m resolution could be obtained by using 120 stations with 500 events, but this is our current limit. The limiting factors are the size of computers needed for the large arrays in the inversion and a

  7. Silver Coating for High-Mass-Accuracy Imaging Mass Spectrometry of Fingerprints on Nanostructured Silicon.

    PubMed

    Guinan, Taryn M; Gustafsson, Ove J R; McPhee, Gordon; Kobus, Hilton; Voelcker, Nicolas H

    2015-11-17

    Nanostructure imaging mass spectrometry (NIMS) using porous silicon (pSi) is a key technique for molecular imaging of exogenous and endogenous low molecular weight compounds from fingerprints. However, high-mass-accuracy NIMS can be difficult to achieve as time-of-flight (ToF) mass analyzers, which dominate the field, cannot sufficiently compensate for shifts in measured m/z values. Here, we show internal recalibration using a thin layer of silver (Ag) sputter-coated onto functionalized pSi substrates. NIMS peaks for several previously reported fingerprint components were selected and mass accuracy was compared to theoretical values. Mass accuracy was improved by more than an order of magnitude in several cases. This straightforward method should form part of the standard guidelines for NIMS studies for spatial characterization of small molecules.
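
    The recalibration step can be pictured as a simple one-point rescaling of the m/z axis so that the sputtered silver reference peak lands on its theoretical mass; the sketch below is a simplified stand-in for the procedure described here, and the reference mass and peak list are assumed values.

    ```python
    import numpy as np

    AG_107 = 106.9051            # approximate monoisotopic mass of 107Ag (illustrative value)

    def recalibrate(mz_observed, ag_peak_observed, ag_theoretical=AG_107):
        """One-point internal recalibration: rescale the m/z axis so the silver
        reference peak lands on its theoretical mass (simplified illustration)."""
        return np.asarray(mz_observed) * (ag_theoretical / ag_peak_observed)

    def mass_error_ppm(measured, theoretical):
        return 1e6 * (measured - theoretical) / theoretical

    peaks = np.array([106.93, 184.12, 369.36])     # hypothetical ToF peak list
    corrected = recalibrate(peaks, ag_peak_observed=106.93)
    print(mass_error_ppm(corrected[0], AG_107))    # ~0 after recalibration
    ```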

  8. Climate Change Accuracy: Requirements and Economic Value

    NASA Astrophysics Data System (ADS)

    Wielicki, B. A.; Cooke, R.; Mlynczak, M. G.; Lukashin, C.; Thome, K. J.; Baize, R. R.

    2014-12-01

    Higher than normal accuracy is required to rigorously observe decadal climate change. But what level is needed? How can this be quantified? This presentation will summarize a new, more rigorous and quantitative approach to determining the required accuracy for climate change observations (Wielicki et al., 2013, BAMS). Most current global satellite observations cannot meet this accuracy level. A proposed new satellite mission to resolve this challenge is CLARREO (Climate Absolute Radiance and Refractivity Observatory). CLARREO is designed to achieve advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra (Wielicki et al., Oct. 2013 BAMS). The CLARREO spectrometers are designed to serve as SI-traceable benchmarks for the Global Satellite Intercalibration System (GSICS) and to greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar passive satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, SPOT, etc.). Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A study has been carried out to quantify the economic benefits of such an advance as part of a rigorous and complete climate observing system. The study concludes that the economic value is $12 trillion (U.S. dollars) in net present value for a nominal discount rate of 3% (Cooke et al. 2013, J. Env. Sys. Dec.). A brief summary of these two studies and their implications for the future of climate science will be presented.

  9. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing offers limited measurement performance and a relatively large error, and can no longer meet the demands of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. Firstly, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal several times. Then, the timing information is sampled, and the timing points are fitted using algorithms implemented in MATLAB. Finally, the minimum timing error is calculated from the fitting function. Thereby, the timing error of the signal received by the lidar is reduced and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by amplifying the received signal and fitting the parameters, and a timing accuracy of 4.63 ps is achieved.
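
    Leading-edge discrimination itself is straightforward: the timing point is the first threshold crossing of the received pulse, refined by interpolation between samples, and amplifying the signal moves that crossing earlier, as described above. The sketch below illustrates this with a synthetic Gaussian pulse; the pulse shape, threshold, and gains are assumed values.

    ```python
    import numpy as np

    def leading_edge_time(t, signal, threshold):
        """Leading-edge discrimination: first threshold crossing, refined by
        linear interpolation between the bracketing samples."""
        idx = int(np.argmax(signal >= threshold))
        if idx == 0:
            return t[0]
        t0, t1 = t[idx - 1], t[idx]
        s0, s1 = signal[idx - 1], signal[idx]
        return t0 + (threshold - s0) * (t1 - t0) / (s1 - s0)

    # Gaussian return pulse sampled at 100 ps; amplifying the signal shifts the
    # fixed-threshold crossing earlier, which is the effect exploited above.
    t = np.arange(0.0, 20.0, 0.1)                  # ns
    pulse = np.exp(-((t - 10.0) / 1.5) ** 2)
    for gain in (1.0, 2.0, 4.0):
        print(gain, leading_edge_time(t, gain * pulse, threshold=0.5))
    ```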

  10. New generation of docking programs: Supercomputer validation of force fields and quantum-chemical methods for docking.

    PubMed

    Sulimov, Alexey V; Kutov, Danil C; Katkova, Ekaterina V; Ilin, Ivan S; Sulimov, Vladimir B

    2017-11-01

    Discovery of new inhibitors of the protein associated with a given disease is the initial and most important stage of the whole process of the rational development of new pharmaceutical substances. New inhibitors block the active site of the target protein and the disease is cured. Computer-aided molecular modeling can considerably increase the effectiveness of new inhibitor development. Reliable prediction of target protein inhibition by a small molecule (ligand) is determined by the accuracy of docking programs. Such programs position a ligand in the target protein and estimate the protein-ligand binding energy. The positioning accuracy of modern docking programs is satisfactory. However, the accuracy of binding energy calculations is too low to predict good inhibitors. For effective application of docking programs to new inhibitor development, the accuracy of binding energy calculations should be better than 1 kcal/mol. Reasons for the limited accuracy of modern docking programs are discussed. One of the most important factors limiting this accuracy is the imperfection of protein-ligand energy calculations. Results of supercomputer validation of several force fields and quantum-chemical methods for docking are presented. The validation was performed by quasi-docking as follows. First, the low-energy minima spectra of 16 protein-ligand complexes were found by exhaustive minima search in the MMFF94 force field. Second, the energies of the lowest 8192 minima were recalculated with the CHARMM force field and the PM6-D3H4X and PM7 quantum-chemical methods for each complex. The analysis of the minima energies reveals that the docking positioning accuracies of the PM7 and PM6-D3H4X quantum-chemical methods and the CHARMM force field are close to one another and better than the positioning accuracy of the MMFF94 force field. Copyright © 2017 Elsevier Inc. All rights reserved.
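
    The quasi-docking comparison essentially amounts to re-scoring the same set of minima with different energy models and asking whether the lowest-energy minimum lies near the native ligand pose; the toy sketch below illustrates that check with invented energies and RMSD values.

    ```python
    import numpy as np

    def positioning_success(energies, rmsd_to_native, cutoff=2.0):
        """Quasi-docking-style check: is the lowest-energy minimum under a given
        energy model within `cutoff` angstroms RMSD of the native ligand pose?"""
        best = int(np.argmin(energies))
        return rmsd_to_native[best] <= cutoff

    # toy minima list: energies re-scored with two models, RMSDs to the crystal pose
    rmsd = np.array([0.8, 3.5, 6.2, 1.5])
    e_model_a = np.array([-42.0, -45.1, -40.3, -41.0])   # lowest minimum far from native
    e_model_b = np.array([-51.2, -48.7, -44.0, -50.1])   # lowest minimum near-native
    print(positioning_success(e_model_a, rmsd), positioning_success(e_model_b, rmsd))
    ```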

  11. Hazardous chemical tracking system (HAZ-TRAC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramlette, J D; Ewart, S M; Jones, C E

    Westinghouse Idaho Nuclear Company, Inc. (WINCO) developed and implemented a computerized hazardous chemical tracking system, referred to as Haz-Trac, for use at the Idaho Chemical Processing Plant (ICPP). Haz-Trac is designed to provide a means to improve the accuracy and reliability of chemical information, which enhances the overall quality and safety of ICPP operations. The system tracks all chemicals and chemical components from the time they enter the ICPP until the chemical changes form, is used, or becomes a waste. The system runs on a Hewlett-Packard (HP) 3000 Series 70 computer. The system is written in COBOL and uses VIEW/3000, TurboIMAGE/DBMS 3000, OMNIDEX, and SPEEDWARE. The HP 3000 may be accessed throughout the ICPP, and from remote locations, using data communication lines. Haz-Trac went into production in October 1989. Currently, over 1910 chemicals and chemical components are tracked on the system. More than 2500 personnel hours were saved during the first six months of operation. Cost savings have been realized by reducing the time needed to collect and compile reporting information, identifying and disposing of unneeded chemicals, and eliminating duplicate inventories. Haz-Trac maintains information required by the Superfund Amendments and Reauthorization Act (SARA), the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA), and the Occupational Safety and Health Administration (OSHA).

  12. Patient-specific instrument can achieve same accuracy with less resection time than navigation assistance in periacetabular pelvic tumor surgery: a cadaveric study.

    PubMed

    Wong, Kwok-Chuen; Sze, Kwan-Yik; Wong, Irene Oi-Ling; Wong, Chung-Ming; Kumta, Shekhar-Madhukar

    2016-02-01

    Inaccurate resection in pelvic tumors can result in compromised margins with increased local recurrence. Navigation-assisted and patient-specific instrument (PSI) techniques have recently been reported to assist pelvic tumor surgery, with a tendency toward improved surgical accuracy. We examined and compared the accuracy of transferring a virtual pelvic resection plan to actual surgery using the navigation-assisted or PSI technique in a cadaver study. We performed CT scans of twelve cadaveric bodies including whole pelvic bones. Either a supraacetabular or a partial acetabular resection was virtually planned in a hemipelvis using engineering software. The virtual resection plan was transferred to a CT-based navigation system or was used for the design and fabrication of a PSI. Pelvic resections were performed using navigation assistance in six cadavers and PSI in another six. Post-resection images were co-registered with the preoperative planning for comparative analysis of resection accuracy of the two techniques. The mean average deviation error from the planned resection did not differ between the navigation and PSI groups: 1.9 versus 1.4 mm, respectively. The mean time required for the bone resection was greater for the navigation group than for the PSI group: 16.2 versus 1.1 min, respectively. In simulated periacetabular pelvic tumor resections, the PSI technique enabled surgeons to reproduce the virtual surgical plan with similar accuracy but with less bone resection time when compared with navigation assistance. Further studies are required to investigate the clinical benefits of the PSI technique in pelvic tumor surgery.

  13. New perspectives for high accuracy SLR with second generation geodesic satellites

    NASA Technical Reports Server (NTRS)

    Lund, Glenn

    1993-01-01

    This paper reports on the accuracy limitations imposed by geodesic satellite signatures, and on the potential for achieving millimetric performance by means of alternative satellite concepts and an optimized 2-color system tradeoff. Long distance laser ranging, when performed between a ground (emitter/receiver) station and a distant geodesic satellite, is now reputed to enable short arc trajectory determinations to be achieved with an accuracy of 1 to 2 centimeters. This state-of-the-art accuracy is limited principally by the uncertainties inherent to single-color atmospheric path length correction. Motivated by the study of phenomena such as postglacial rebound, and the detailed analysis of small-scale volcanic and strain deformations, the drive towards millimetric accuracies will inevitably be felt. With the advent of short pulse (less than 50 ps) dual wavelength ranging, combined with adequate detection equipment (such as a fast-scanning streak camera or ultra-fast solid-state detectors), the atmospheric uncertainty could potentially be reduced to the level of a few millimeters, thus exposing other less significant error contributions, of which by far the most significant will then be the morphology of the retroreflector satellites themselves. Existing geodesic satellites are simply dense spheres, several tens of centimeters in diameter, encrusted with a large number (426 in the case of LAGEOS) of small cube-corner reflectors. A single incident pulse thus results in a significant number of randomly phased, quasi-simultaneous return pulses. These combine coherently at the receiver to produce a convolved interference waveform which cannot, on a shot-to-shot basis, be accurately and unambiguously correlated to the satellite center of mass. This paper proposes alternative geodesic satellite concepts, based on the use of a very small number of cube-corner retroreflectors, in which the above difficulties are eliminated while ensuring, for a given emitted pulse, the return

  14. Playing Chemical Plant Environmental Protection Games with Historical Monitoring Data

    PubMed Central

    Reniers, Genserik; Zhang, Laobing; Qiu, Xiaogang

    2017-01-01

    The chemical industry is very important for the world economy and this industrial sector represents a substantial income source for developing countries. However, existing regulations on controlling atmospheric pollutants, and the enforcement of these regulations, often are insufficient in such countries. As a result, the deterioration of surrounding ecosystems and a quality decrease of the atmospheric environment can be observed. Previous works in this domain fail to generate executable and pragmatic solutions for inspection agencies due to practical challenges. In addressing these challenges, we introduce a so-called Chemical Plant Environment Protection Game (CPEP) to generate reasonable schedules of high-accuracy air quality monitoring stations (i.e., daily management plans) for inspection agencies. First, so-called Stackelberg Security Games (SSGs) in conjunction with source estimation methods are applied into this research. Second, high-accuracy air quality monitoring stations as well as gas sensor modules are modeled in the CPEP game. Third, simplified data analysis on the regularly discharging of chemical plants is utilized to construct the CPEP game. Finally, an illustrative case study is used to investigate the effectiveness of the CPEP game, and a realistic case study is conducted to illustrate how the models and algorithms being proposed in this paper, work in daily practice. Results show that playing a CPEP game can reduce operational costs of high-accuracy air quality monitoring stations. Moreover, evidence suggests that playing the game leads to more compliance from the chemical plants towards the inspection agencies. Therefore, the CPEP game is able to assist the environmental protection authorities in daily management work and reduce the potential risks of gaseous pollutants dispersion incidents. PMID:28961188

  15. Playing Chemical Plant Environmental Protection Games with Historical Monitoring Data.

    PubMed

    Zhu, Zhengqiu; Chen, Bin; Reniers, Genserik; Zhang, Laobing; Qiu, Sihang; Qiu, Xiaogang

    2017-09-29

    The chemical industry is very important for the world economy and this industrial sector represents a substantial income source for developing countries. However, existing regulations on controlling atmospheric pollutants, and the enforcement of these regulations, often are insufficient in such countries. As a result, the deterioration of surrounding ecosystems and a quality decrease of the atmospheric environment can be observed. Previous works in this domain fail to generate executable and pragmatic solutions for inspection agencies due to practical challenges. In addressing these challenges, we introduce a so-called Chemical Plant Environment Protection Game (CPEP) to generate reasonable schedules of high-accuracy air quality monitoring stations (i.e., daily management plans) for inspection agencies. First, so-called Stackelberg Security Games (SSGs) in conjunction with source estimation methods are applied into this research. Second, high-accuracy air quality monitoring stations as well as gas sensor modules are modeled in the CPEP game. Third, simplified data analysis on the regularly discharging of chemical plants is utilized to construct the CPEP game. Finally, an illustrative case study is used to investigate the effectiveness of the CPEP game, and a realistic case study is conducted to illustrate how the models and algorithms being proposed in this paper, work in daily practice. Results show that playing a CPEP game can reduce operational costs of high-accuracy air quality monitoring stations. Moreover, evidence suggests that playing the game leads to more compliance from the chemical plants towards the inspection agencies. Therefore, the CPEP game is able to assist the environmental protection authorities in daily management work and reduce the potential risks of gaseous pollutants dispersion incidents.

  16. Modeling the Biodegradability of Chemical Compounds Using the Online CHEmical Modeling Environment (OCHEM).

    PubMed

    Vorberg, Susann; Tetko, Igor V

    2014-01-01

    Biodegradability describes the capacity of substances to be mineralized by free-living bacteria. It is a crucial property in estimating a compound's long-term impact on the environment. The ability to reliably predict biodegradability would reduce the need for laborious experimental testing. However, this endpoint is difficult to model due to unavailability or inconsistency of experimental data. Our approach makes use of the Online Chemical Modeling Environment (OCHEM) and its rich supply of machine learning methods and descriptor sets to build classification models for ready biodegradability. These models were analyzed to determine the relationship between characteristic structural properties and biodegradation activity. The distinguishing feature of the developed models is their ability to estimate the accuracy of prediction for each individual compound. The models developed using seven individual descriptor sets were combined in a consensus model, which provided the highest accuracy. The identified overrepresented structural fragments can be used by chemists to improve the biodegradability of new chemical compounds. The consensus model, the datasets used, and the calculated structural fragments are publicly available at http://ochem.eu/article/31660. © 2014 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  17. Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Zuo, C.; Xiao, X.; Hou, Q.; Li, B.

    2018-05-01

    WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which can be used for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1 : 2000 scale topographic mapping with few control points. On the basis of the stereo orientation results, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared with reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model of the WorldView-3 image with a small number of GCPs could satisfy the requirements of the Chinese Surveying and Mapping regulations for 1 : 2000 scale topographic maps. The point cloud obtained through WorldView-3 stereo image matching also had high elevation accuracy: the RMS elevation error for bare ground areas was 0.45 m, while for buildings the accuracy was close to 1 m.

  18. Predictive performance of the human Cell Line Activation Test (h-CLAT) for lipophilic chemicals with high octanol-water partition coefficients.

    PubMed

    Takenouchi, Osamu; Miyazawa, Masaaki; Saito, Kazutoshi; Ashikaga, Takao; Sakaguchi, Hitoshi

    2013-01-01

    To meet the urgent need for a reliable alternative test for predicting the skin sensitizing potential of many chemicals, we have developed a cell-based in vitro test, the human Cell Line Activation Test (h-CLAT). However, the predictive performance of the h-CLAT for lipophilic chemicals remains relatively unknown. Moreover, it has been suggested that low water solubility of chemicals might induce false-negative outcomes. Thus, in this study, we tested 37 chemicals of relatively low water solubility, with log Kow values above and below 3.5, in the h-CLAT. This small-scale assessment resulted in nine false-negative outcomes for chemicals with log Kow values greater than 3.5. We then created a dataset of 143 chemicals by combining these with an existing dataset of 106 chemicals and examined the predictive performance of the h-CLAT for chemicals with a log Kow of less than 3.5; a total of 112 of the 143 chemicals in the dataset. The sensitivity and overall accuracy for the 143 chemicals were 83% and 80%, respectively. In contrast, the sensitivity and overall accuracy for the 112 chemicals with log Kow values below 3.5 improved to 94% and 88%, respectively. These data suggested that the h-CLAT could successfully detect sensitizers with log Kow values up to 3.5. When chemicals with log Kow values greater than 3.5 that were deemed positive by the h-CLAT were included with the 112 chemicals, the sensitivity and accuracy for the resulting applicable 128 of the 143 chemicals became 95% and 88%, respectively. Using log Kow values thus gave the h-CLAT a higher predictive performance. Our results, based on a large-scale chemical dataset, demonstrated that the h-CLAT could predict the sensitizing potential of various chemicals, including lipophilic ones.
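
    The sensitivity and accuracy figures above are ordinary confusion-matrix statistics recomputed inside and outside a log Kow applicability domain. The toy computation below reproduces that bookkeeping on synthetic labels; the chemicals, labels, and false-negative rate are invented, not the study's data.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 143
      log_kow = rng.uniform(-1, 6, n)
      truth = rng.integers(0, 2, n)                   # 1 = sensitizer (hypothetical labels)
      pred = truth.copy()
      # assume lipophilic sensitizers are often missed (false negatives)
      miss = (truth == 1) & (log_kow > 3.5) & (rng.random(n) < 0.6)
      pred[miss] = 0

      def metrics(y, yhat):
          tp = np.sum((y == 1) & (yhat == 1))
          fn = np.sum((y == 1) & (yhat == 0))
          sens = tp / (tp + fn)                       # sensitivity = TP / (TP + FN)
          acc = np.mean(y == yhat)                    # overall accuracy
          return sens, acc

      print("all chemicals:       sens %.2f acc %.2f" % metrics(truth, pred))
      keep = log_kow <= 3.5                           # restrict to the applicability domain
      print("log Kow <= 3.5 only: sens %.2f acc %.2f" % metrics(truth[keep], pred[keep]))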

  19. Chemical contamination remote sensing

    NASA Technical Reports Server (NTRS)

    Carrico, J. P.; Phelps, K. R.; Webb, E. N.; Mackay, R. A.; Murray, E. R.

    1986-01-01

    A ground mobile laser test bed system was assembled to assess the feasibility of detection of various types of chemical contamination using Differential Scattering (DISC) and Differential Absorption (DIAL) Lidar techniques. Field experiments with the test bed system using chemical simulants were performed. Topographic reflection and range resolved DIAL detection of vapors as well as DISC detection of aerosols and surface contamination were achieved. Review of detection principles, design of the test bed system, and results of the experiments are discussed.

  20. Study of Intelligent Secure Chemical Inventory Management System

    NASA Astrophysics Data System (ADS)

    Shukran, Mohd Afizi Mohd; Naim Abdullah, Muhammad; Nazri Ismail, Mohd; Maskat, Kamaruzaman; Isa, Mohd Rizal Mohd; Shahfee Ishak, Muhammad; Adib Khairuddin, Muhamad

    2017-08-01

    Chemical inventory management has been undergoing a revolution from traditional, manual inventory systems to automated inventory management systems. In this paper, a review of classic and modern approaches to chemical inventory management systems is presented. This paper also describes both types of inventory management. After a comparative analysis of the traditional and automated methods, it can be said that both methods have some distinctive characteristics. Moreover, the automated inventory management method has higher calculation accuracy because the calculations are handled by software, eliminating possible errors and saving time. The automated inventory system also allows users and administrators to track the availability, location and consumption of chemicals. This study can provide a solid review to support chemical inventory management related research.

  1. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., measurement accuracy, and cut-off. 53.53 Section 53.53 Protection of Environment ENVIRONMENTAL PROTECTION..., measurement accuracy, and cut-off. (a) Overview. This test procedure is designed to evaluate a candidate... measurement accuracy, coefficient of variability measurement accuracy, and the flow rate cut-off function. The...

  2. Prognostic accuracy of five simple scales in childhood bacterial meningitis.

    PubMed

    Pelkonen, Tuula; Roine, Irmeli; Monteiro, Lurdes; Cruzeiro, Manuel Leite; Pitkäranta, Anne; Kataja, Matti; Peltola, Heikki

    2012-08-01

    In childhood acute bacterial meningitis, the level of consciousness, measured with the Glasgow coma scale (GCS) or the Blantyre coma scale (BCS), is the most important predictor of outcome. The Herson-Todd scale (HTS) was developed for Haemophilus influenzae meningitis. Our objective was to identify prognostic factors, to form a simple scale, and to compare the predictive accuracy of these scales. Seven hundred and twenty-three children with bacterial meningitis in Luanda were scored by GCS, BCS, and HTS. The simple Luanda scale (SLS), based on our entire database, comprised domestic electricity, days of illness, convulsions, consciousness, and dyspnoea at presentation. The Bayesian Luanda scale (BLS) added blood glucose concentration. The accuracy of the 5 scales was determined for 491 children without an underlying condition, against the outcomes of death, severe neurological sequelae or death, or a poor outcome (severe neurological sequelae, death, or deafness), at hospital discharge. The highest accuracy was achieved with the BLS, whose area under the curve (AUC) for death was 0.83, for severe neurological sequelae or death was 0.84, and for poor outcome was 0.82. Overall, the AUCs for SLS were ≥0.79, for GCS were ≥0.76, for BCS were ≥0.74, and for HTS were ≥0.68. Adding laboratory parameters to a simple scoring system, such as the SLS, improves the prognostic accuracy only slightly in bacterial meningitis.
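
    The AUC values quoted above can be read as rank statistics: the probability that a randomly chosen child with the outcome scores worse than one without it. A small sketch of that computation, on synthetic scores rather than the Luanda cohort, is given below.

      import numpy as np

      def auc(scores, outcomes):
          """Rank-based (Mann-Whitney) AUC: probability a random positive case
          scores higher than a random negative case; ties count as 0.5."""
          pos = scores[outcomes == 1]
          neg = scores[outcomes == 0]
          wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
          return wins / (len(pos) * len(neg))

      rng = np.random.default_rng(1)
      outcome = rng.integers(0, 2, 200)                    # 1 = poor outcome (hypothetical)
      score = outcome * 2 + rng.normal(0, 1.5, 200)        # higher score -> worse prognosis
      print("AUC = %.2f" % auc(score, outcome))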

  3. Diagnostic accuracy of routine blood examinations and CSF lactate level for post-neurosurgical bacterial meningitis.

    PubMed

    Zhang, Yang; Xiao, Xiong; Zhang, Junting; Gao, Zhixian; Ji, Nan; Zhang, Liwei

    2017-06-01

    To evaluate the diagnostic accuracy of routine blood examinations and Cerebrospinal Fluid (CSF) lactate level for Post-neurosurgical Bacterial Meningitis (PBM) in a large sample of post-neurosurgical patients. The diagnostic accuracies of routine blood examinations and CSF lactate level to distinguish between PAM and PBM were evaluated with the values of the Area Under the Curve of the Receiver Operating Characteristic (AUC-ROC) by retrospectively analyzing the datasets of post-neurosurgical patients in the clinical information databases. The diagnostic accuracy of routine blood examinations was relatively low (AUC-ROC < 0.7). The CSF lactate level achieved rather high diagnostic accuracy (AUC-ROC = 0.891; 95% CI, 0.852-0.922). The variables of patient age, operation duration, surgical diagnosis and postoperative days (the interval in days between the neurosurgery and the examinations) were shown to affect the diagnostic accuracy of these examinations. These variables were integrated with the routine blood examinations and the CSF lactate level by Fisher discriminant analysis to improve their diagnostic accuracy. As a result, the diagnostic accuracy of the blood examinations and the CSF lactate level was significantly improved, with AUC-ROC values of 0.760 (95% CI, 0.737-0.782) and 0.921 (95% CI, 0.887-0.948), respectively. The PBM diagnostic accuracy of routine blood examinations was relatively low, whereas the accuracy of CSF lactate level was high. Some variables that are involved in the incidence of PBM can also affect the diagnostic accuracy for PBM. Taking into account the effects of these variables significantly improves the diagnostic accuracies of routine blood examinations and CSF lactate level. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
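
    The improvement reported above comes from folding clinical covariates into a Fisher (linear) discriminant together with the laboratory value. The sketch below shows that step with scikit-learn on simulated data; the variable names, distributions, and effect sizes are assumptions, not the study's dataset.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(2)
      n = 500
      pbm = rng.integers(0, 2, n)                            # 1 = bacterial meningitis (hypothetical)
      lactate = 2.0 + 2.5 * pbm + rng.normal(0, 1.0, n)      # CSF lactate, mmol/L (assumed effect)
      age = rng.uniform(18, 80, n)
      op_duration = rng.uniform(1, 8, n) + 0.5 * pbm         # hours (assumed weak association)
      postop_days = rng.integers(1, 15, n)

      print("lactate alone AUC: %.3f" % roc_auc_score(pbm, lactate))
      X = np.column_stack([lactate, age, op_duration, postop_days])
      lda = LinearDiscriminantAnalysis().fit(X, pbm)         # Fisher discriminant combining covariates
      print("lactate + covariates (LDA) AUC: %.3f" % roc_auc_score(pbm, lda.decision_function(X)))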

  4. Why relevant chemical information cannot be exchanged without disclosing structures

    NASA Astrophysics Data System (ADS)

    Filimonov, Dmitry; Poroikov, Vladimir

    2005-09-01

    Both society and industry are interested in increasing the safety of pharmaceuticals. Potentially dangerous compounds could be filtered out at early stages of R&D by computer prediction of biological activity and ADMET characteristics. The accuracy of such predictions strongly depends on the quality and quantity of information contained in a training set. The suggestion that some relevant chemical information could be added to such training sets without disclosing chemical structures was put forward at a recent ACS Symposium. We presented arguments that such a safe exchange of relevant chemical information is impossible. Any relevant information about chemical structures can be used to search for either the particular compound itself or its close analogues. The risk of identifying such structures is enough to prevent the pharmaceutical industry from exchanging relevant chemical information.

  5. Enhancing Visual Perception and Motor Accuracy among School Children through a Mindfulness and Compassion Program

    PubMed Central

    Tarrasch, Ricardo; Margalit-Shalom, Lilach; Berger, Rony

    2017-01-01

    The present study assessed the effects of the mindfulness/compassion cultivation program “Call to Care-Israel” on performance in visual perception (VP) and motor accuracy, as well as on anxiety levels and self-reported mindfulness, among 4th and 5th grade students. One hundred and thirty-eight children participated in the program for 24 weekly sessions, while 78 children served as controls. Repeated measures ANOVAs yielded significant interactions between time of measurement and group for VP, motor accuracy, reported mindfulness, and anxiety. Post hoc tests revealed significant improvements in the four aforementioned measures in the experimental group only. In addition, significant correlations were obtained between the improvement in motor accuracy and the reduction in anxiety and the increase in mindfulness. Since VP and motor accuracy are basic skills associated with quantifiable academic characteristics, such as reading and mathematical abilities, the results may suggest that mindfulness practice has the ability to improve academic achievements. PMID:28286492

  6. Graphene-Based Chemical Vapor Sensors for Electronic Nose Applications

    NASA Astrophysics Data System (ADS)

    Nallon, Eric C.

    chemiresistor device and used as a chemical sensor, where its resistance is temporarily modified while exposed to chemical compounds. The inherent, broadly selective nature of graphene is demonstrated by testing a sensor against a diverse set of volatile organic compounds and also against a set of chemically similar compounds. The sensor exhibits excellent selectivity and is capable of achieving high classification accuracies. The kinetics of the sensor's response are further investigated, revealing a relationship between the transient behavior of the response curve and physicochemical properties of the compounds, such as molar mass and vapor pressure. This kinetic signature is also shown to provide useful input for further pattern recognition and classification, which is demonstrated by increased classification accuracy for very similar compounds. Covalent modification of the graphene surface is demonstrated by means of plasma treatment and free-radical exchange, and the sensing performance is compared to that of an unmodified graphene sensor. Finally, the first example of a graphene-based, cross-reactive chemical sensor array is demonstrated by applying various polymers as coatings over an array of graphene sensors. The sensor array is tested against a variety of compounds, including the complex odor of Scotch whiskies, where it is capable of perfect classification of 10 Scotch whisky variations.

  7. Comparative Diagnostic Accuracy of the ACE-III, MIS, MMSE, MoCA, and RUDAS for Screening of Alzheimer Disease.

    PubMed

    Matías-Guiu, Jordi A; Valles-Salgado, María; Rognoni, Teresa; Hamre-Gil, Frank; Moreno-Ramos, Teresa; Matías-Guiu, Jorge

    2017-01-01

    Our aim was to evaluate and compare the diagnostic properties of 5 screening tests for the diagnosis of mild Alzheimer disease (AD). We conducted a prospective and cross-sectional study of 92 patients with mild AD and of 68 healthy controls from our Department of Neurology. The diagnostic properties of the following tests were compared: Mini-Mental State Examination (MMSE), Addenbrooke's Cognitive Examination III (ACE-III), Memory Impairment Screen (MIS), Montreal Cognitive Assessment (MoCA), and Rowland Universal Dementia Assessment Scale (RUDAS). All tests yielded high diagnostic accuracy, with the ACE-III achieving the best diagnostic properties. The area under the curve was 0.897 for the ACE-III, 0.889 for the RUDAS, 0.874 for the MMSE, 0.866 for the MIS, and 0.856 for the MoCA. The Mini-ACE score from the ACE-III showed the highest diagnostic capacity (area under the curve 0.939). Memory scores of the ACE-III and of the RUDAS showed a better diagnostic accuracy than those of the MMSE and of the MoCA. All tests, especially the ACE-III, conveyed a higher diagnostic accuracy in patients with full primary education than in the less educated group. Implementing normative data improved the diagnostic accuracy of the ACE-III but not that of the other tests. The ACE-III achieved the highest diagnostic accuracy. This better discrimination was more evident in the more educated group. © 2017 S. Karger AG, Basel.

  8. Accuracy of genomic selection models in a large population of open-pollinated families in white spruce

    PubMed Central

    Beaulieu, J; Doerksen, T; Clément, S; MacKay, J; Bousquet, J

    2014-01-01

    Genomic selection (GS) is of interest in breeding because of its potential for predicting the genetic value of individuals and increasing genetic gains per unit of time. To date, very few studies have reported empirical results of GS potential in the context of large population sizes and long breeding cycles such as for boreal trees. In this study, we assessed the effectiveness of marker-aided selection in an undomesticated white spruce (Picea glauca (Moench) Voss) population of large effective size using a GS approach. A discovery population of 1694 trees representative of 214 open-pollinated families from 43 natural populations was phenotyped for 12 wood and growth traits and genotyped for 6385 single-nucleotide polymorphisms (SNPs) mined in 2660 gene sequences. GS models were built to predict estimated breeding values using all the available SNPs or SNP subsets of the largest absolute effects, and they were validated using various cross-validation schemes. The accuracy of genomic estimated breeding values (GEBVs) varied from 0.327 to 0.435 when the training and validation data sets shared half-sibs, which was on average 90% of the accuracies achieved through traditionally estimated breeding values. The trend was also the same for validation across sites. As expected, the accuracy of GEBVs obtained after cross-validation with individuals of unknown relatedness was lower, at about half the accuracy achieved when half-sibs were present. We showed that with the marker densities used in the current study, predictions with low to moderate accuracy could be obtained within a large undomesticated population of related individuals, potentially resulting in larger gains per unit of time with GS than with the traditional approach. PMID:24781808
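
    A compact way to see what the GEBV accuracy above measures is a ridge-regression (rrBLUP-style) toy example: SNP effects are shrunk jointly, GEBVs are predicted for held-out trees, and accuracy is the correlation between predicted and true breeding values. The marker counts, heritability, and effect sizes below are simulated assumptions, not the white spruce data.

      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(3)
      n_trees, n_snps = 1000, 2000
      geno = rng.integers(0, 3, (n_trees, n_snps)).astype(float)   # 0/1/2 allele counts
      effects = rng.normal(0, 0.05, n_snps)                        # small additive SNP effects
      tbv = geno @ effects                                         # true breeding values
      pheno = tbv + rng.normal(0, tbv.std() * 1.5, n_trees)        # noisy phenotype (moderate h2)

      train = rng.random(n_trees) < 0.8                            # simple hold-out "cross-validation"
      model = Ridge(alpha=n_snps).fit(geno[train], pheno[train])
      gebv = model.predict(geno[~train])
      print("GEBV accuracy (r with true BV): %.2f" % np.corrcoef(gebv, tbv[~train])[0, 1])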

  9. Kinematic Visual Biofeedback Improves Accuracy of Learning a Swallowing Maneuver and Accuracy of Clinician Cues During Training.

    PubMed

    Azola, Alba M; Sunday, Kirstyn L; Humbert, Ianessa A

    2017-02-01

    Submental surface electromyography (ssEMG) visual biofeedback is widely used to train swallowing maneuvers. This study compares the effect of ssEMG and videofluoroscopy (VF) visual biofeedback on hyo-laryngeal accuracy when training a swallowing maneuver. Furthermore, it examines the clinician's ability to provide accurate verbal cues during swallowing maneuver training. Thirty healthy adults performed the volitional laryngeal vestibule closure maneuver (vLVC), which involves swallowing and sustaining closure of the laryngeal vestibule for 2 s. The study included two stages: (1) a first accurate demonstration of the vLVC maneuver, followed by (2) training, consisting of 20 vLVC training swallows. Participants were randomized into three groups: (a) ssEMG biofeedback only, (b) VF biofeedback only, and (c) mixed biofeedback (VF for the stage of achieving the first accurate demonstration and ssEMG for the training stage). Participants' performances were verbally critiqued or reinforced in real time while both the clinician and the participant were observing the assigned visual biofeedback. VF and ssEMG were continuously recorded for all participants. Results show that the accuracy of both vLVC performance and clinician cues was greater with VF biofeedback than with either ssEMG or mixed biofeedback (p < 0.001). Using ssEMG to provide real-time biofeedback during training could lead to errors while learning and training a swallowing maneuver.

  10. The accuracy of quantum chemical methods for large noncovalent complexes

    PubMed Central

    Pitoňák, Michal; Řezáč, Jan; Pulay, Peter

    2013-01-01

    We evaluate the performance of the most widely used wavefunction, density functional theory, and semiempirical methods for the description of noncovalent interactions in a set of larger, mostly dispersion-stabilized noncovalent complexes (the L7 data set). The methods tested include MP2, MP3, SCS-MP2, SCS(MI)-MP2, MP2.5, MP2.X, MP2C, DFT-D, DFT-D3 (B3-LYP-D3, B-LYP-D3, TPSS-D3, PW6B95-D3, M06-2X-D3) and M06-2X, and semiempirical methods augmented with dispersion and hydrogen bonding corrections: SCC-DFTB-D, PM6-D, PM6-DH2 and PM6-D3H4. The test complexes are the octadecane dimer, the guanine trimer, the circumcoronene…adenine dimer, the coronene dimer, the guanine-cytosine dimer, the circumcoronene…guanine-cytosine dimer, and an amyloid fragment trimer containing phenylalanine residues. The best performing method is MP2.5 with relative root mean square deviation (rRMSD) of 4 %. It can thus be recommended as an alternative to the CCSD(T)/CBS (alternatively QCISD(T)/CBS) benchmark for molecular systems which exceed current computational capacity. The second best non-DFT method is MP2C with rRMSD of 8 %. A method with the most favorable “accuracy/cost” ratio belongs to the DFT family: BLYP-D3, with an rRMSD of 8 %. Semiempirical methods deliver less accurate results (the rRMSD exceeds 25 %). Nevertheless, their absolute errors are close to some much more expensive methods such as M06-2X, MP2 or SCS(MI)-MP2, and thus their price/performance ratio is excellent. PMID:24098094
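
    The rRMSD statistic used above to rank methods is simply the RMSD of a method's interaction energies against the benchmark, normalized by the magnitude of the benchmark values; one common normalization is shown below. The numbers in the sketch are invented, not the L7 energies.

      import numpy as np

      # Hypothetical interaction energies in kcal/mol (benchmark vs. an approximate method)
      benchmark = np.array([-11.0, -2.4, -18.2, -20.0, -28.8, -31.3, -16.6])
      method = np.array([-10.6, -2.6, -17.5, -19.1, -28.0, -30.1, -15.9])

      # relative RMSD: RMSD of the errors divided by the RMS magnitude of the benchmark values
      rrmsd = np.sqrt(np.mean((method - benchmark) ** 2)) / np.sqrt(np.mean(benchmark ** 2))
      print("rRMSD = %.1f %%" % (100 * rrmsd))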

  11. High accuracy position response calibration method for a micro-channel plate ion detector

    NASA Astrophysics Data System (ADS)

    Hong, R.; Leredde, A.; Bagdasarova, Y.; Fléchard, X.; García, A.; Müller, P.; Knecht, A.; Liénard, E.; Kossin, M.; Sternberg, M. G.; Swanson, H. E.; Zumwalt, D. W.

    2016-11-01

    We have developed a position response calibration method for a micro-channel plate (MCP) detector with a delay-line anode position readout scheme. Using an in situ calibration mask, an accuracy of 8 μm and a resolution of 85 μm (FWHM) have been achieved for MeV-scale α particles and ions with energies of ∼10 keV. At this level of accuracy, the difference between the MCP position responses to high-energy α particles and low-energy ions is significant. The improved performance of the MCP detector can find applications in many fields of AMO and nuclear physics. In our case, it helps reduce systematic uncertainties in a high-precision nuclear β-decay experiment.

  12. Design and preliminary accuracy studies of an MRI-guided transrectal prostate intervention system.

    PubMed

    Krieger, Axel; Csoma, Csaba; Iordachital, Iulian I; Guion, Peter; Singh, Anurag K; Fichtinger, Gabor; Whitcomb, Louis L

    2007-01-01

    This paper reports a novel system for magnetic resonance imaging (MRI) guided transrectal prostate interventions, such as needle biopsy, fiducial marker placement, and therapy delivery. The system utilizes a hybrid tracking method, comprised of passive fiducial tracking for initial registration and subsequent incremental motion measurement along the degrees of freedom using fiber-optical encoders and mechanical scales. Targeting accuracy of the system is evaluated in prostate phantom experiments. Achieved targeting accuracy and procedure times were found to compare favorably with existing systems using passive and active tracking methods. Moreover, the portable design of the system using only standard MRI image sequences and minimal custom scanner interfacing allows the system to be easily used on different MRI scanners.

  13. Cumulative detection probabilities and range accuracy of a pulsed Geiger-mode avalanche photodiode laser ranging system

    NASA Astrophysics Data System (ADS)

    Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan

    2017-10-01

    Cumulative pulse detection with an appropriate number of cumulative pulses and an appropriate threshold can improve the detection performance of a pulsed laser ranging system with a GM-APD. In this paper, based on Poisson statistics and the multi-pulse cumulative process, the cumulative detection probabilities and the factors influencing them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting the range accuracy and precision are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. Range accuracy and precision are further important parameters for evaluating detection performance; echo intensity and pulse width are the main factors influencing them, and higher range accuracy and precision are obtained with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
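
    The cumulative-pulses statistics described above reduce to a Poisson firing probability per pulse and a binomial tail over N pulses. The sketch below shows that calculation; the mean photoelectron counts, pulse number, and threshold are assumed values, not the paper's parameters.

      from math import comb, exp

      def fire_prob(mean_pe):
          """Probability the GM-APD fires in a bin with 'mean_pe' mean primary electrons (Poisson)."""
          return 1.0 - exp(-mean_pe)

      def cumulative_prob(p_single, n_pulses, k_threshold):
          """P(bin fires in >= k of n pulses), i.e. the binomial tail probability."""
          return sum(comb(n_pulses, j) * p_single**j * (1 - p_single)**(n_pulses - j)
                     for j in range(k_threshold, n_pulses + 1))

      noise_pe, signal_pe = 0.05, 0.8      # hypothetical mean counts per pulse per bin
      N, k = 20, 4                         # assumed cumulative pulse number and threshold
      print("detection prob.:   %.3f" % cumulative_prob(fire_prob(noise_pe + signal_pe), N, k))
      print("false-alarm prob.: %.3g" % cumulative_prob(fire_prob(noise_pe), N, k))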

  14. Accuracy of WAAS-Enabled GPS-RF Warning Signals When Crossing a Terrestrial Geofence

    PubMed Central

    Grayson, Lindsay M.; Keefe, Robert F.; Tinkham, Wade T.; Eitel, Jan U. H.; Saralecos, Jarred D.; Smith, Alistair M. S.; Zimbelman, Eloise G.

    2016-01-01

    Geofences are virtual boundaries based on geographic coordinates. When combined with global positioning system (GPS), or more generally global navigation satellite system (GNSS) transmitters, geofences provide a powerful tool for monitoring the location and movements of objects of interest through proximity alarms. However, the accuracy of geofence alarms in GNSS-radio frequency (GNSS-RF) transmitter-receiver systems has not been tested. To achieve these goals, a cart with a GNSS-RF locator was run on a straight path in a balanced factorial experiment with three levels of cart speed, three angles of geofence intersection, three receiver distances from the track, and three replicates. Locator speed, receiver distance and geofence intersection angle all affected geofence alarm accuracy in an analysis of variance (p = 0.013, p = 2.58 × 10⁻⁸, and p = 0.0006, respectively), as did all treatment interactions (p < 0.0001). Slower locator speed, acute geofence intersection angle, and closest receiver distance were associated with reduced accuracy of geofence alerts. PMID:27322287

  15. Accuracy of WAAS-Enabled GPS-RF Warning Signals When Crossing a Terrestrial Geofence.

    PubMed

    Grayson, Lindsay M; Keefe, Robert F; Tinkham, Wade T; Eitel, Jan U H; Saralecos, Jarred D; Smith, Alistair M S; Zimbelman, Eloise G

    2016-06-18

    Geofences are virtual boundaries based on geographic coordinates. When combined with global positioning system (GPS), or more generally global navigation satellite system (GNSS) transmitters, geofences provide a powerful tool for monitoring the location and movements of objects of interest through proximity alarms. However, the accuracy of geofence alarms in GNSS-radio frequency (GNSS-RF) transmitter-receiver systems has not been tested. To achieve these goals, a cart with a GNSS-RF locator was run on a straight path in a balanced factorial experiment with three levels of cart speed, three angles of geofence intersection, three receiver distances from the track, and three replicates. Locator speed, receiver distance and geofence intersection angle all affected geofence alarm accuracy in an analysis of variance (p = 0.013, p = 2.58 × 10(-8), and p = 0.0006, respectively), as did all treatment interactions (p < 0.0001). Slower locator speed, acute geofence intersection angle, and closest receiver distance were associated with reduced accuracy of geofence alerts.
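
    One way to picture why speed, crossing angle, and GNSS noise all affect alarm accuracy is to simulate a locator approaching a straight fence with discrete, noisy position fixes and to record how far from the true fence it is when the first fix triggers the alert. The update rate, noise level, and geometry below are assumptions for illustration, not the study's field protocol.

      import numpy as np

      def mean_alert_error(speed_mps, angle_deg, dt=1.0, gnss_sigma=2.0, n_runs=2000, seed=4):
          """Average distance from the true fence at the moment the alert fires."""
          rng = np.random.default_rng(seed)
          v_normal = speed_mps * np.sin(np.radians(angle_deg))    # speed component toward the fence
          errs = []
          for _ in range(n_runs):
              # fix times relative to the true crossing instant (some fixes before, some after)
              t = rng.uniform(0, dt) - 3 * dt
              while True:
                  x_true = v_normal * t                           # signed distance past the fence
                  if x_true + rng.normal(0, gnss_sigma) > 0:      # measured fix reported past the fence
                      errs.append(abs(x_true))
                      break
                  t += dt
          return float(np.mean(errs))

      for v, a in [(1, 90), (3, 90), (3, 30)]:
          print("speed %d m/s, angle %d deg -> mean alert error %.1f m" % (v, a, mean_alert_error(v, a)))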

  16. Programmable chemical controllers made from DNA.

    PubMed

    Chen, Yuan-Jyue; Dalchau, Neil; Srinivas, Niranjan; Phillips, Andrew; Cardelli, Luca; Soloveichik, David; Seelig, Georg

    2013-10-01

    Biological organisms use complex molecular networks to navigate their environment and regulate their internal state. The development of synthetic systems with similar capabilities could lead to applications such as smart therapeutics or fabrication methods based on self-organization. To achieve this, molecular control circuits need to be engineered to perform integrated sensing, computation and actuation. Here we report a DNA-based technology for implementing the computational core of such controllers. We use the formalism of chemical reaction networks as a 'programming language' and our DNA architecture can, in principle, implement any behaviour that can be mathematically expressed as such. Unlike logic circuits, our formulation naturally allows complex signal processing of intrinsically analogue biological and chemical inputs. Controller components can be derived from biologically synthesized (plasmid) DNA, which reduces errors associated with chemically synthesized DNA. We implement several building-block reaction types and then combine them into a network that realizes, at the molecular level, an algorithm used in distributed control systems for achieving consensus between multiple agents.

  17. Programmable chemical controllers made from DNA

    NASA Astrophysics Data System (ADS)

    Chen, Yuan-Jyue; Dalchau, Neil; Srinivas, Niranjan; Phillips, Andrew; Cardelli, Luca; Soloveichik, David; Seelig, Georg

    2013-10-01

    Biological organisms use complex molecular networks to navigate their environment and regulate their internal state. The development of synthetic systems with similar capabilities could lead to applications such as smart therapeutics or fabrication methods based on self-organization. To achieve this, molecular control circuits need to be engineered to perform integrated sensing, computation and actuation. Here we report a DNA-based technology for implementing the computational core of such controllers. We use the formalism of chemical reaction networks as a 'programming language' and our DNA architecture can, in principle, implement any behaviour that can be mathematically expressed as such. Unlike logic circuits, our formulation naturally allows complex signal processing of intrinsically analogue biological and chemical inputs. Controller components can be derived from biologically synthesized (plasmid) DNA, which reduces errors associated with chemically synthesized DNA. We implement several building-block reaction types and then combine them into a network that realizes, at the molecular level, an algorithm used in distributed control systems for achieving consensus between multiple agents.
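
    The consensus behaviour described in the two records above is commonly formulated as an approximate-majority chemical reaction network; whether this exact three-reaction scheme matches the paper's DNA implementation is not claimed here. The sketch below simulates one standard version with a simple Gillespie algorithm; the rate constant and molecule counts are arbitrary.

      import numpy as np

      def gillespie_consensus(x, y, b=0, k=1.0, t_end=50.0, seed=5):
          """Simulate X+Y->2B, B+X->2X, B+Y->2Y; the initial majority species takes over."""
          rng = np.random.default_rng(seed)
          t = 0.0
          while t < t_end:
              a = np.array([k * x * y, k * b * x, k * b * y], dtype=float)   # reaction propensities
              a_tot = a.sum()
              if a_tot == 0:                       # absorbing state reached (consensus)
                  break
              t += rng.exponential(1.0 / a_tot)    # time to next reaction
              r = rng.choice(3, p=a / a_tot)       # which reaction fires
              if r == 0:
                  x, y, b = x - 1, y - 1, b + 2
              elif r == 1:
                  x, b = x + 1, b - 1
              else:
                  y, b = y + 1, b - 1
          return x, y, b

      print(gillespie_consensus(x=60, y=40))       # X starts in the majority; expect roughly (100, 0, 0)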

  18. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  19. Comparing position and orientation accuracy of different electromagnetic sensors for tracking during interventions.

    PubMed

    Nijkamp, Jasper; Schermers, Bram; Schmitz, Sander; de Jonge, Sofieke; Kuhlmann, Koert; van der Heijden, Ferdinand; Sonke, Jan-Jakob; Ruers, Theo

    2016-08-01

    To compare the position and orientation accuracy between using one 6-degree of freedom (DOF) electromagnetic (EM) sensor, or the position information of three 5DOF sensors within the scope of tumor tracking. The position accuracy of Northern Digital Inc Aurora 5DOF and 6DOF sensors was determined for a table-top field generator (TTFG) up to a distance of 52 cm. For each sensor 716 positions were measured for 10 s at 15 Hz. Orientation accuracy was determined for each of the orthogonal axis at the TTFG distances of 17, 27, 37 and 47 cm. For the 6DOF sensors, orientation was determined for sensors in-line with the orientation axis, and perpendicular. 5DOF orientation accuracy was determined for a theoretical 4 cm tumor. An optical tracking system was used as reference. Position RMSE and jitter were comparable between the sensors and increasing with distance. Jitter was within 0.1 cm SD within 45 cm distance to the TTFG. Position RMSE was approximately 0.1 cm up to 32 cm distance, increasing to 0.4 cm at 52 cm distance. Orientation accuracy of the 6DOF sensor was within 1°, except when the sensor was in-line with the rotation axis perpendicular to the TTFG plane (4° errors at 47 cm). Orientation accuracy using 5DOF positions was within 1° up to 37 cm and 2° at 47 cm. The position and orientation accuracy of a 6DOF sensor was comparable with a sensor configuration consisting of three 5DOF sensors. To achieve tracking accuracy within 1 mm and 1°, the distance to the TTFG should be limited to approximately 30 cm.
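
    The comparison above between one 6DOF sensor and three 5DOF sensors relies on the fact that three rigidly mounted position-only sensors define a plane whose normal recovers orientation. The sketch below illustrates that reconstruction and its sensitivity to position jitter; the sensor spacing and noise level are assumptions, not the measured Aurora values.

      import numpy as np

      def normal_from_points(p1, p2, p3):
          """Unit normal of the plane through three 3D points."""
          n = np.cross(p2 - p1, p3 - p1)
          return n / np.linalg.norm(n)

      rng = np.random.default_rng(6)
      # three 5DOF sensors assumed 2 cm apart on a rigid mount (metres); true normal = +z
      pts = np.array([[0.0, 0.0, 0.0], [0.02, 0.0, 0.0], [0.0, 0.02, 0.0]])
      true_n = np.array([0.0, 0.0, 1.0])

      sigma = 0.0005          # assumed 0.5 mm position jitter per axis
      angles = []
      for _ in range(1000):
          noisy = pts + rng.normal(0, sigma, pts.shape)
          n = normal_from_points(*noisy)
          cos_err = np.clip(abs(np.dot(n, true_n)), 0.0, 1.0)
          angles.append(np.degrees(np.arccos(cos_err)))
      print("mean orientation error: %.2f deg" % np.mean(angles))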

  20. Predictive performance of the Vitrigel‐eye irritancy test method using 118 chemicals

    PubMed Central

    Yamaguchi, Hiroyuki; Kojima, Hajime

    2015-01-01

    We recently developed a novel Vitrigel-eye irritancy test (EIT) method. The Vitrigel-EIT method is composed of two parts, i.e., the construction of a human corneal epithelium (HCE) model in a collagen vitrigel membrane chamber and the prediction of eye irritancy by analyzing the time-dependent profile of transepithelial electrical resistance values for 3 min after exposing a chemical to the HCE model. In this study, we estimated the predictive performance of the Vitrigel-EIT method by testing a total of 118 chemicals. The category determined by the Vitrigel-EIT method in comparison to the globally harmonized system classification revealed that the sensitivity, specificity and accuracy were 90.1%, 65.9% and 80.5%, respectively. Here, five of seven false-negative chemicals were acidic chemicals inducing an irregular rise of transepithelial electrical resistance values. When test chemical solutions showing pH 5 or lower were eliminated, the sensitivity, specificity and accuracy improved to 96.8%, 67.4% and 84.4%, respectively. Meanwhile, nine of 16 false-positive chemicals were classified as irritants by the US Environmental Protection Agency. In addition, the disappearance of ZO-1, a tight junction-associated protein, and MUC1, a cell membrane-spanning mucin, was immunohistologically confirmed in the HCE models after exposure not only to eye irritant chemicals but also to false-positive chemicals, suggesting that such false-positive chemicals have an eye irritant potential. These data demonstrated that the Vitrigel-EIT method could provide excellent predictive performance in judging widespread eye irritancy, including very mild irritant chemicals. We hope that the Vitrigel-EIT method contributes to the development of safe commodity chemicals. Copyright © 2015 The Authors. Journal of Applied Toxicology published by John Wiley & Sons Ltd. PMID:26472347

  1. Predictive performance of the Vitrigel-eye irritancy test method using 118 chemicals.

    PubMed

    Yamaguchi, Hiroyuki; Kojima, Hajime; Takezawa, Toshiaki

    2016-08-01

    We recently developed a novel Vitrigel-eye irritancy test (EIT) method. The Vitrigel-EIT method is composed of two parts, i.e., the construction of a human corneal epithelium (HCE) model in a collagen vitrigel membrane chamber and the prediction of eye irritancy by analyzing the time-dependent profile of transepithelial electrical resistance values for 3 min after exposing a chemical to the HCE model. In this study, we estimated the predictive performance of the Vitrigel-EIT method by testing a total of 118 chemicals. The category determined by the Vitrigel-EIT method in comparison to the globally harmonized system classification revealed that the sensitivity, specificity and accuracy were 90.1%, 65.9% and 80.5%, respectively. Here, five of seven false-negative chemicals were acidic chemicals inducing an irregular rise of transepithelial electrical resistance values. When test chemical solutions showing pH 5 or lower were eliminated, the sensitivity, specificity and accuracy improved to 96.8%, 67.4% and 84.4%, respectively. Meanwhile, nine of 16 false-positive chemicals were classified as irritants by the US Environmental Protection Agency. In addition, the disappearance of ZO-1, a tight junction-associated protein, and MUC1, a cell membrane-spanning mucin, was immunohistologically confirmed in the HCE models after exposure not only to eye irritant chemicals but also to false-positive chemicals, suggesting that such false-positive chemicals have an eye irritant potential. These data demonstrated that the Vitrigel-EIT method could provide excellent predictive performance in judging widespread eye irritancy, including very mild irritant chemicals. We hope that the Vitrigel-EIT method contributes to the development of safe commodity chemicals. Copyright © 2015 The Authors. Journal of Applied Toxicology published by John Wiley & Sons Ltd.

  2. Development and Validation of Decision Forest Model for Estrogen Receptor Binding Prediction of Chemicals Using Large Data Sets.

    PubMed

    Ng, Hui Wen; Doughty, Stephen W; Luo, Heng; Ye, Hao; Ge, Weigong; Tong, Weida; Hong, Huixiao

    2015-12-21

    Some chemicals in the environment possess the potential to interact with the endocrine system in the human body. Multiple receptors are involved in the endocrine system; estrogen receptor α (ERα) plays very important roles in endocrine activity and is the most studied receptor. Understanding and predicting estrogenic activity of chemicals facilitates the evaluation of their endocrine activity. Hence, we have developed a decision forest classification model to predict chemical binding to ERα using a large training data set of 3308 chemicals obtained from the U.S. Food and Drug Administration's Estrogenic Activity Database. We tested the model using cross validations and external data sets of 1641 chemicals obtained from the U.S. Environmental Protection Agency's ToxCast project. The model showed good performance in both internal (92% accuracy) and external validations (∼ 70-89% relative balanced accuracies), where the latter involved the validations of the model across different ER pathway-related assays in ToxCast. The important features that contribute to the prediction ability of the model were identified through informative descriptor analysis and were related to current knowledge of ER binding. Prediction confidence analysis revealed that the model had both high prediction confidence and accuracy for most predicted chemicals. The results demonstrated that the model constructed based on the large training data set is more accurate and robust for predicting ER binding of chemicals than the published models that have been developed using much smaller data sets. The model could be useful for the evaluation of ERα-mediated endocrine activity potential of environmental chemicals.

  3. A cross-sectional study of mathematics achievement, estimation skills, and academic self-perception in students of varying ability.

    PubMed

    Montague, Marjorie; van Garderen, Delinda

    2003-01-01

    This study investigated students' mathematics achievement, estimation ability, use of estimation strategies, and academic self-perception. Students with learning disabilities (LD), average achievers, and intellectually gifted students (N = 135) in fourth, sixth, and eighth grade participated in the study. They were assessed to determine their mathematics achievement, ability to estimate discrete quantities, knowledge and use of estimation strategies, and perception of academic competence. The results indicated that the students with LD performed significantly lower than their peers on the math achievement measures, as expected, but viewed themselves to be as academically competent as the average achievers did. Students with LD and average achievers scored significantly lower than gifted students on all estimation measures, but they differed significantly from one another only on the estimation strategy use measure. Interestingly, even gifted students did not seem to have a well-developed understanding of estimation and, like the other students, did poorly on the first estimation measure. The accuracy of their estimates seemed to improve, however, when students were asked open-ended questions about the strategies they used to arrive at their estimates. Although students with LD did not differ from average achievers in their estimation accuracy, they used significantly fewer effective estimation strategies. Implications for instruction are discussed.

  4. Understanding dental CAD/CAM for restorations--accuracy from a mechanical engineering viewpoint.

    PubMed

    Tapie, Laurent; Lebon, Nicolas; Mawussi, Bernardin; Fron-Chabouis, Hélène; Duret, Francois; Attal, Jean-Pierre

    2015-01-01

    As is the case in the field of medicine, as well as in most areas of daily life, digital technology is increasingly being introduced into dental practice. Computer-aided design/computer-aided manufacturing (CAD/CAM) solutions are available not only for chairside practice but also for creating inlays, crowns, fixed partial dentures (FPDs), implant abutments, and other dental prostheses. CAD/CAM dental practice can be considered the use of devices and software for the almost automatic design and creation of dental restorations. However, dentists who want to use dental CAD/CAM systems often do not have enough information to understand the variations offered by this technology in practice. Knowledge of the random and systematic errors in accuracy with CAD/CAM systems can help to achieve successful restorations with this technology, and can help with the purchasing of a CAD/CAM system that meets the clinical needs of restoration. This article provides a mechanical engineering viewpoint on the accuracy of CAD/CAM systems, to help dentists understand the impact of this technology on restoration accuracy.

  5. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    PubMed Central

    Sun, Ting; Xing, Fei; You, Zheng

    2013-01-01

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationships of the star tracker and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers. PMID:23567527

  6. Improving the accuracy of effect-directed analysis: the role of bioavailability.

    PubMed

    You, Jing; Li, Huizhen

    2017-12-13

    Aquatic ecosystems have been suffering from contamination by multiple stressors. Traditional chemical-based risk assessment usually fails to explain the toxicity contributions from contaminants that are not regularly monitored or that have an unknown identity. Diagnosing the causes of noted adverse outcomes in the environment is of great importance in ecological risk assessment and in this regard effect-directed analysis (EDA) has been designed to fulfill this purpose. The EDA approach is now increasingly used in aquatic risk assessment owing to its specialty in achieving effect-directed nontarget analysis; however, a lack of environmental relevance makes conventional EDA less favorable. In particular, ignoring the bioavailability in EDA may cause a biased and even erroneous identification of causative toxicants in a mixture. Taking bioavailability into consideration is therefore of great importance to improve the accuracy of EDA diagnosis. The present article reviews the current status and applications of EDA practices that incorporate bioavailability. The use of biological samples is the most obvious way to include bioavailability into EDA applications, but its development is limited due to the small sample size and lack of evidence for metabolizable compounds. Bioavailability/bioaccessibility-based extraction (bioaccessibility-directed and partitioning-based extraction) and passive-dosing techniques are recommended to be used to integrate bioavailability into EDA diagnosis in abiotic samples. Lastly, the future perspectives of expanding and standardizing the use of biological samples and bioavailability-based techniques in EDA are discussed.

  7. Assessment of Delivery Accuracy in an Operational-Like Environment

    NASA Technical Reports Server (NTRS)

    Sharma, Shivanjli; Wynnyk, Mitch

    2016-01-01

    In order to enable arrival management concepts and solutions in a Next Generation Air Transportation System (NextGen) environment, ground-based sequencing and scheduling functions were developed to support metering operations in the National Airspace System. These sequencing and scheduling tools are designed to assist air traffic controllers in developing an overall arrival strategy, from enroute down to the terminal area boundary. NASA developed a ground system concept and prototype capability called Terminal Sequencing and Spacing (TSAS) to extend metering operations into the terminal area to the runway. To demonstrate the use of these scheduling and spacing tools in an operational-like environment, the FAA, NASA, and MITRE conducted an Operational Integration Assessment (OIA) of a prototype TSAS system at the FAA's William J. Hughes Technical Center (WJHTC). This paper presents an analysis of the arrival management strategies utilized and the delivery accuracy achieved during the OIA. The analysis demonstrates how en route preconditioning, in various forms, and schedule disruptions impact delivery accuracy. As the simulation spanned both enroute and terminal airspace, the use of Ground Interval Management - Spacing (GIM-S) enroute speed advisories was investigated. Delivery accuracy was measured as the difference between the Scheduled Time of Arrival (STA) and the Actual Time of Arrival (ATA). The delivery accuracy was computed across all runs conducted during the OIA, which included deviations from nominal operations that are known to commonly occur in real operations, such as schedule changes and missed approaches. Overall, 83% of all flights were delivered into the terminal airspace within +/- 30 seconds of their STA and 94% of flights were delivered within +/- 60 seconds. The meter fix delivery accuracy standard deviation was found to be between 36 and 55 seconds across all arrival procedures. The data also showed that, when schedule disruptions were excluded, the
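
    The delivery-accuracy metric above is simply the STA-minus-ATA difference summarized as the share of flights inside fixed time windows. A minimal sketch of that summary, using synthetic errors rather than OIA data, follows.

      import numpy as np

      rng = np.random.default_rng(7)
      delivery_error = rng.normal(0, 40, 500)          # seconds, ATA - STA (hypothetical flights)

      for window in (30, 60):
          frac = np.mean(np.abs(delivery_error) <= window)
          print("within +/-%d s: %.0f%%" % (window, 100 * frac))
      print("std dev: %.0f s" % delivery_error.std())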

  8. Clinical evaluation and treatment accuracy in diabetic macular edema using navigated laser photocoagulator NAVILAS.

    PubMed

    Kozak, Igor; Oster, Stephen F; Cortes, Marco A; Dowell, Dennis; Hartmann, Kathrin; Kim, Jae Suk; Freeman, William R

    2011-06-01

    To evaluate the clinical use and accuracy of a new retinal navigating laser technology that integrates a scanning slit fundus camera system with fluorescein angiography (FA), color, red-free, and infrared imaging capabilities with a computer steerable therapeutic 532-nm laser. Interventional case series. Eighty-six eyes of 61 patients with diabetic retinopathy and macular edema treated by NAVILAS. The imaging included digital color fundus photographs and FA. The planning included graphically marking future treatment sites (microaneurysms for single-spot focal treatment and areas of diffuse leakage for grid pattern photocoagulation) on the acquired images. The preplanned treatment was visible and overlaid on the live fundus image during the actual photocoagulation. The NAVILAS automatically advances the aiming beam location from one planned treatment site to the next after each photocoagulation spot until all sites are treated. Aiming beam stabilization compensated for patient's eye movements. The pretreatment FA with the treatment plan was overlaid on top of the posttreatment color fundus images with the actual laser burns. This allowed treatment accuracy to be calculated. Independent observers evaluated the images to determine if the retinal opacification after treatment overlapped the targeted microaneurysm. Safety and accuracy of laser photocoagulation. The images were of very good quality compared with standard fundus cameras, allowing careful delineation of target areas on FA. Toggling from infrared, to monochromatic, to color view allowed evaluation and adjustment of burn intensity during treatment. There were no complications during or after photocoagulation treatment. An analysis of accuracy of 400 random focal targeted spots found that the NAVILAS achieved a microaneurysm hit rate of 92% when the placement of the treatment circle was centered by the operating surgeon on the microaneurysm. The accuracy for the control group analyzing 100 focal spots was

  9. TH-A-9A-05: Initial Setup Accuracy Comparison Between Frame-Based and Frameless Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tseng, T; Sheu, R; Todorov, B

    2014-06-15

    Purpose: To evaluate initial setup accuracy for stereotactic radiosurgery (SRS) between the Brainlab frame-based and frameless immobilization systems, and to discern the magnitude of the effect the frameless system has on setup parameters. Methods: The correction shifts from the original setup were compared for a total of 157 SRS cranial treatments (69 frame-based vs. 88 frameless). All treatments were performed on a Novalis linac with the ExacTrac positioning system. A localization box with isocenter overlay was used for initial setup, and the correction shift was determined by ExacTrac 6D auto-fusion to achieve submillimeter accuracy for treatment. For frameless treatments, the mean time interval between simulation and treatment was 5.7 days (range 0–13). Pearson Chi-Square was used for univariate analysis. Results: The correctional radial shifts (mean±STD, median) for the frame-based and frameless systems measured by ExacTrac were 1.2±1.2mm, 1.1mm and 3.1±3.3mm, 2.0mm, respectively. Treatments with the frameless system had a radial shift >2mm more often than those with frames (51.1% vs. 2.9%; p<.0001). To achieve submillimeter accuracy, 85.5% of frame-based treatments did not require a shift, while only 23.9% of frameless treatments succeeded with the initial setup. There was no statistically significant system offset observed in any direction for either system. For frameless treatments, those treated ≥ 3 days from simulation had statistically higher rates of radial shifts between 1–2mm and >2mm compared to patients treated in a shorter amount of time from simulation (34.3% and 56.7% vs. 28.6% and 33.3%, respectively; p=0.006). Conclusion: Although an image-guided positioning system can also achieve submillimeter accuracy for the frameless system, users should be cautious regarding the inherent uncertainty of its immobilization capability. A proper quality assurance procedure for frameless mask manufacturing and a protocol for intra-fraction imaging verification will be crucial for the frameless system. Time interval

  10. Making High Accuracy Null Depth Measurements for the LBTI Exozodi Survey

    NASA Technical Reports Server (NTRS)

    Mennesson, Bertrand; Defrere, Denis; Nowak, Matthias; Hinz, Philip; Millan-Gabet, Rafael; Absil, Oliver; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William C.; Kennedy, Grant M.

    2016-01-01

    The characterization of exozodiacal light emission is important both for the understanding of planetary system evolution and for the preparation of future space missions aiming to characterize low-mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over the current state of the art, measuring dust emission levels down to a typical accuracy of 12 zodis per star, for a representative ensemble of 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the target sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e., visibilities, at the few-100-ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.

  11. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
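
    Element mismatch as discussed above is quantified by the integral non-linearity (INL): the deviation of each actual output level from a straight-line fit through the levels, expressed in LSB. The sketch below computes INL for a hypothetical 8-bit converter with randomly perturbed levels; the mismatch magnitude is an assumption, not a measured value.

      import numpy as np

      rng = np.random.default_rng(8)
      n_bits = 8
      codes = np.arange(2 ** n_bits)
      ideal = codes * 1.0                               # ideal levels in LSB units
      actual = ideal + rng.normal(0, 0.3, codes.size)   # element mismatch (assumed 0.3 LSB rms)

      # least-squares straight-line fit, then INL = residual in LSB
      fit = np.polyval(np.polyfit(codes, actual, 1), codes)
      inl = actual - fit
      print("INL: max %.2f LSB, rms %.2f LSB" % (np.max(np.abs(inl)), inl.std()))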

  12. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully determine the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate different dynamical models of the Galaxy.
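
    The parameter-estimation step can be illustrated with a toy Metropolis sampler. The sketch below (not the authors' code) invents 500 sources on circular orbits in a flat rotation curve, "observes" their line-of-sight velocities with noise, and recovers R0 and Θ0; the simple kinematic model, noise level, priors and step sizes are all assumptions made purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_R0, true_Th0, sig_v = 8.0, 240.0, 5.0            # kpc, km/s, velocity noise
    n = 500
    l = rng.uniform(0.3, 2 * np.pi - 0.3, n)              # Galactic longitude [rad]
    d = rng.uniform(1.0, 10.0, n)                          # heliocentric distance (from parallax) [kpc]

    def v_los(R0, Th0):
        R = np.sqrt(R0**2 + d**2 - 2 * R0 * d * np.cos(l))    # Galactocentric radius
        return Th0 * (R0 / R - 1.0) * np.sin(l)               # flat rotation curve, circular orbits

    v_obs = v_los(true_R0, true_Th0) + rng.normal(0, sig_v, n)

    def log_post(R0, Th0):
        if not (5 < R0 < 12 and 150 < Th0 < 320):
            return -np.inf                                     # flat priors
        return -0.5 * np.sum(((v_obs - v_los(R0, Th0)) / sig_v)**2)

    theta, chain = np.array([7.0, 200.0]), []
    for _ in range(20000):                                     # Metropolis random walk
        prop = theta + rng.normal(0, [0.05, 1.0])
        if np.log(rng.random()) < log_post(*prop) - log_post(*theta):
            theta = prop
        chain.append(theta.copy())
    chain = np.array(chain[5000:])                             # discard burn-in
    print("R0, Theta0 posterior means:", chain.mean(axis=0), "+/-", chain.std(axis=0))
    ```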

  13. Fast Atomic-Scale Elemental Mapping of Crystalline Materials by STEM Energy-Dispersive X-Ray Spectroscopy Achieved with Thin Specimens [Fast Atomic-Scale Chemical Imaging of Crystalline Materials by STEM Energy-Dispersive X-ray Spectroscopy Achieved with Thin Specimens].

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Ping; Yuan, Renliang; Zuo, Jian Min

    Elemental mapping at the atomic scale by scanning transmission electron microscopy (STEM) using energy-dispersive X-ray spectroscopy (EDS) provides a powerful real-space approach to chemical characterization of crystal structures. However, applications of this powerful technique have been limited by inefficient X-ray emission and collection, which require long acquisition times. Recently, using a lattice-vector translation method, we have shown that rapid atomic-scale elemental mapping using STEM-EDS can be achieved. This method provides atomic-scale elemental maps averaged over crystal areas of a few tens of nm² with an acquisition time of ~2 s or less. Here we report the details of this method and, in particular, investigate the experimental conditions necessary for achieving it. It is shown that, in addition to the usual conditions required for atomic-scale imaging, a thin specimen is essential for the technique to be successful. Phenomenological modeling shows that the localization of X-ray signals to atomic columns is a key reason. The effect of specimen thickness on the signal delocalization is studied by multislice image simulations. The results show that X-ray localization can be achieved by choosing a thin specimen, and a thickness of less than about 22 nm is preferred for SrTiO3 in [001] projection with 200 keV electrons.

  14. Fast Atomic-Scale Elemental Mapping of Crystalline Materials by STEM Energy-Dispersive X-Ray Spectroscopy Achieved with Thin Specimens [Fast Atomic-Scale Chemical Imaging of Crystalline Materials by STEM Energy-Dispersive X-ray Spectroscopy Achieved with Thin Specimens].

    DOE PAGES

    Lu, Ping; Yuan, Renliang; Zuo, Jian Min

    2017-02-23

    Elemental mapping at the atomic scale by scanning transmission electron microscopy (STEM) using energy-dispersive X-ray spectroscopy (EDS) provides a powerful real-space approach to chemical characterization of crystal structures. However, applications of this powerful technique have been limited by inefficient X-ray emission and collection, which require long acquisition times. Recently, using a lattice-vector translation method, we have shown that rapid atomic-scale elemental mapping using STEM-EDS can be achieved. This method provides atomic-scale elemental maps averaged over crystal areas of a few tens of nm² with an acquisition time of ~2 s or less. Here we report the details of this method and, in particular, investigate the experimental conditions necessary for achieving it. It is shown that, in addition to the usual conditions required for atomic-scale imaging, a thin specimen is essential for the technique to be successful. Phenomenological modeling shows that the localization of X-ray signals to atomic columns is a key reason. The effect of specimen thickness on the signal delocalization is studied by multislice image simulations. The results show that X-ray localization can be achieved by choosing a thin specimen, and a thickness of less than about 22 nm is preferred for SrTiO3 in [001] projection with 200 keV electrons.

  15. Sustainability, synthetic chemicals, and human exposure.

    PubMed

    Podein, Rian J; Hernke, Michael T; Fortney, Luke W; Rakel, David P

    2010-01-01

    Public concerns regarding exposures to synthetic chemicals are increasing. Globally, there are increasing concentrations of many synthetic chemicals within the environment. The ubiquitous extent of some chemicals makes human exposure unavoidable. Biomonitoring has emerged as the optimal method for assessing exposures. The extent of human exposure and contamination occurs throughout the life cycle and is widespread. Although there is limited information on health risks for the majority of chemicals within our environment, and those identified with biomonitoring, many are known or suspected to cause human harm. Continued global and national unsustainable development regarding synthetic chemicals will increase the extent of environmental and human contamination unless precautionary action is implemented. Precautionary legislation may protect ecological and public health until societal sustainability is achieved.

  16. Innovations in Undergraduate Chemical Biology Education.

    PubMed

    Van Dyke, Aaron R; Gatazka, Daniel H; Hanania, Mariah M

    2018-01-19

    Chemical biology derives intellectual vitality from its scientific interface: applying chemical strategies and perspectives to biological questions. There is a growing need for chemical biologists to synergistically integrate their research programs with their educational activities to become holistic teacher-scholars. This review examines how course-based undergraduate research experiences (CUREs) are an innovative method to achieve this integration. Because CUREs are course-based, the review first offers strategies for creating a student-centered learning environment, which can improve students' outcomes. Exemplars of CUREs in chemical biology are then presented and organized to illustrate the five defining characteristics of CUREs: significance, scientific practices, discovery, collaboration, and iteration. Finally, strategies to overcome common barriers in CUREs are considered as well as future innovations in chemical biology education.

  17. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement both in the positioning accuracy achieved and in the convergence time needed to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time.

  18. Prospective memory mediated by interoceptive accuracy: a psychophysiological approach.

    PubMed

    Umeda, Satoshi; Tochizawa, Saiko; Shibata, Midori; Terasawa, Yuri

    2016-11-19

    Previous studies on prospective memory (PM), defined as memory for future intentions, suggest that psychological stress enhances successful PM retrieval. However, the mechanisms underlying this notion remain poorly understood. We hypothesized that PM retrieval is achieved through interaction with autonomic nervous activity, which is mediated by the individual accuracy of interoceptive awareness, as measured by the heartbeat detection task. In this study, the relationship between cardiac reactivity and retrieval of delayed intentions was evaluated using the event-based PM task. Participants were required to detect PM target letters while engaged in an ongoing 2-back working memory task. The results demonstrated that individuals with higher PM task performance had a greater increase in heart rate on PM target presentation. Also, higher interoceptive perceivers showed better PM task performance. This pattern was not observed for working memory task performance. These findings suggest that cardiac afferent signals enhance PM retrieval, which is mediated by individual levels of interoceptive accuracy. This article is part of the themed issue 'Interoception beyond homeostasis: affect, cognition and mental health'. © 2016 The Authors.

  19. Integrative Approaches for Predicting in vivo Effects of Chemicals from their Structural Descriptors and the Results of Short-term Biological Assays

    PubMed Central

    Low, Yen S.; Sedykh, Alexander; Rusyn, Ivan; Tropsha, Alexander

    2017-01-01

    Cheminformatics approaches such as Quantitative Structure Activity Relationship (QSAR) modeling have been used traditionally for predicting chemical toxicity. In recent years, high throughput biological assays have been increasingly employed to elucidate mechanisms of chemical toxicity and predict toxic effects of chemicals in vivo. The data generated in such assays can be considered as biological descriptors of chemicals that can be combined with molecular descriptors and employed in QSAR modeling to improve the accuracy of toxicity prediction. In this review, we discuss several approaches for integrating chemical and biological data for predicting biological effects of chemicals in vivo and compare their performance across several data sets. We conclude that while no method consistently shows superior performance, the integrative approaches rank consistently among the best yet offer enriched interpretation of models over those built with either chemical or biological data alone. We discuss the outlook for such interdisciplinary methods and offer recommendations to further improve the accuracy and interpretability of computational models that predict chemical toxicity. PMID:24805064
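
    A schematic sketch of the hybrid-modeling idea described above is shown below: chemical descriptors and short-term bioassay readouts are concatenated into one feature matrix before fitting a classifier, and cross-validated accuracy is compared against models built on either block alone. The data here are synthetic and the feature counts, label generator and model choice are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 400
    X_chem = rng.normal(size=(n, 50))                    # e.g. computed molecular descriptors
    X_bio = rng.normal(size=(n, 20))                     # e.g. in vitro assay readouts
    # Toy toxicity label depending on both descriptor blocks.
    y = ((X_chem[:, 0] + 2 * X_bio[:, 0] + rng.normal(0, 0.5, n)) > 0).astype(int)

    for name, X in [("chemical only", X_chem), ("biological only", X_bio),
                    ("hybrid", np.hstack([X_chem, X_bio]))]:
        acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                              X, y, cv=5, scoring="balanced_accuracy").mean()
        print(f"{name:16s} balanced accuracy = {acc:.2f}")
    ```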

  20. Electronic spectra from TDDFT and machine learning in chemical space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Raghunathan; Hartmann, Mia; Tapavicza, Enrico

    Due to its favorable computational efficiency, time-dependent (TD) density functional theory (DFT) enables the prediction of electronic spectra in a high-throughput manner across chemical space. Its predictions, however, can be quite inaccurate. We resolve this issue with machine learning models trained on deviations of reference second-order approximate coupled-cluster (CC2) singles and doubles spectra from TDDFT counterparts, or even from the DFT gap. We applied this approach to low-lying singlet-singlet vertical electronic spectra of over 20 000 synthetically feasible small organic molecules with up to eight CONF atoms. The prediction errors decay monotonically as a function of training set size. For a training set of 10 000 molecules, CC2 excitation energies can be reproduced to within ±0.1 eV for the remaining molecules. Analysis of our spectral database via chromophore counting suggests that even higher accuracies can be achieved. Based on the evidence collected, we discuss open challenges associated with data-driven modeling of high-lying spectra and transition intensities.
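
    The delta-learning idea described above can be sketched in a few lines: a regressor is trained on the deviation between a reference method ("CC2") and a cheap method ("TDDFT"), and the cheap prediction is then corrected by the learned deviation. The data, descriptors and kernel parameters below are invented and are not the authors' model.

    ```python
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 30))                       # synthetic molecular descriptors
    e_ref = 4.0 + X[:, 0] - 0.5 * X[:, 1]              # "CC2" excitation energies [eV]
    e_cheap = e_ref + 0.3 + 0.2 * np.tanh(X[:, 2])     # "TDDFT", systematically shifted

    # Train on the deviation (reference minus cheap), test on held-out molecules.
    Xtr, Xte, dtr, dte, ctr, cte, rtr, rte = train_test_split(
        X, e_ref - e_cheap, e_cheap, e_ref, test_size=0.5, random_state=0)

    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.01).fit(Xtr, dtr)
    e_corrected = cte + model.predict(Xte)             # cheap prediction + learned correction
    print("MAE TDDFT vs CC2     :", np.mean(np.abs(cte - rte)))
    print("MAE corrected vs CC2 :", np.mean(np.abs(e_corrected - rte)))
    ```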

  1. Reaction Decoder Tool (RDT): extracting features from chemical reactions.

    PubMed

    Rahman, Syed Asad; Torrance, Gilliean; Baldacci, Lorenzo; Martínez Cuesta, Sergio; Fenninger, Franz; Gopal, Nimish; Choudhary, Saket; May, John W; Holliday, Gemma L; Steinbeck, Christoph; Thornton, Janet M

    2016-07-01

    Extracting chemical features like Atom-Atom Mapping (AAM), Bond Changes (BCs) and Reaction Centres from biochemical reactions helps us understand the chemical composition of enzymatic reactions. Reaction Decoder is a robust command line tool which performs this task with high accuracy. It supports standard chemical input/output exchange formats (i.e., RXN/SMILES), computes AAM, highlights BCs and creates images of the mapped reaction. This aids in the analysis of metabolic pathways and the ability to perform comparative studies of chemical reactions based on these features. This software is implemented in Java, supported on Windows, Linux and Mac OSX, and freely available at https://github.com/asad/ReactionDecoder. Contact: asad@ebi.ac.uk or s9asad@gmail.com. © The Author 2016. Published by Oxford University Press.

  2. Validation of minicams for measuring concentrations of chemical agent in environmental air

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menton, R.G.; Hayes, T.L.; Chou, Y.L.

    1993-05-13

    Environmental monitoring for chemical agents is necessary to ensure that notification and appropriate action will be taken in the event that there is a release of such agents, exceeding control limits, into the workplace outside of engineering controls. Prior to implementing new analytical procedures for environmental monitoring, precision and accuracy (PA) tests are conducted to ensure that an agent monitoring system performs according to specified accuracy, precision, and sensitivity requirements. This testing not only establishes the accuracy and precision of the method, but also determines what factors can affect the method's performance. Performance measures that are particularly important in agent monitoring include the Detection Limit (DL), Decision Limit (DC), Found Action Level (FAL), and the Target Action Level (TAL). PA experiments were performed at Battelle's Medical Research and Evaluation Facility (MREF) to validate the use of the miniature chemical agent monitoring system (MINICAMS) for measuring environmental air concentrations of sulfur mustard (HD). This presentation discusses the experimental and statistical approaches for characterizing the performance of MINICAMS for measuring HD in air.

  3. The Impact of Pushed Output on Accuracy and Fluency of Iranian EFL Learners' Speaking

    ERIC Educational Resources Information Center

    Sadeghi Beniss, Aram Reza; Edalati Bazzaz, Vahid

    2014-01-01

    The current study attempted to establish baseline quantitative data on the impacts of pushed output on two components of speaking (i.e., accuracy and fluency). To achieve this purpose, 30 female EFL learners were selected from a whole population pool of 50 based on the standard test of IELTS interview and were randomly assigned into an…

  4. Biomimetic chemical sensors using bioengineered olfactory and taste cells.

    PubMed

    Du, Liping; Zou, Ling; Zhao, Luhang; Wang, Ping; Wu, Chunsheng

    2014-01-01

    Biological olfactory and taste systems are natural chemical sensing systems with unique performance for the detection of environmental chemical signals. With advances in the understanding of olfactory and taste transduction mechanisms, biomimetic chemical sensors have achieved significant progress due to their promising prospects and potential applications. Biomimetic chemical sensors exploit the unique capability of biological functional components for chemical sensing, which are often sourced from sensing units of biological olfactory or taste systems at the tissue, cellular, or molecular level. Specifically, at the cellular level, mainly two categories of cells have been employed for the development of biomimetic chemical sensors: natural cells and bioengineered cells. Natural cells are directly isolated from biological olfactory and taste systems, which makes them convenient to obtain. However, natural cells often suffer from undefined sensing properties and a limited number of identical cells. On the other hand, bioengineered cells have shown decisive advantages for the development of biomimetic chemical sensors due to the powerful biotechnology available for tailoring their sensing properties. Here, we briefly summarize the most recent advances in biomimetic chemical sensors using bioengineered olfactory and taste cells. The development challenges and future trends are discussed as well.

  5. CON4EI: Bovine Corneal Opacity and Permeability (BCOP) test for hazard identification and labelling of eye irritating chemicals.

    PubMed

    Verstraelen, Sandra; Maglennon, Gareth; Hollanders, Karen; Boonen, Francis; Adriaens, Els; Alépée, Nathalie; Drzewiecka, Agnieszka; Gruszka, Katarzyna; Kandarova, Helena; Willoughby, Jamin A; Guest, Robert; Schofield, Jane; Van Rompay, An R

    2017-10-01

    Assessment of ocular irritation potential is an international regulatory requirement in the safety evaluation of industrial and consumer products. No in vitro ocular irritation assay is capable of fully categorizing chemicals as a stand-alone method. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI consortium assessed the reliability of eight in vitro test methods and computational models and established a tiered-testing strategy. One of the selected assays was the Bovine Corneal Opacity and Permeability (BCOP) test. In this project, the same corneas were used for measurement of opacity using the OP-KIT and the Laser Light-Based Opacitometer (LLBO), and for histopathological analysis. The results show that the accuracy of the BCOP OP-KIT in identifying Cat 1 chemicals was 73.8%, while the accuracy was 86.3% for No Cat chemicals. BCOP OP-KIT false negative results were often related to an in vivo classification driven by conjunctival effects only. For the BCOP LLBO, the accuracy in identifying Cat 1 chemicals was 74.4% versus 88.8% for No Cat chemicals. The BCOP LLBO seems very promising for the identification of No Cat liquids but less so for the identification of solids. Histopathology as an additional endpoint to the BCOP test method does not substantially reduce the false negative rate for in vivo Cat 1 chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Variability of Diabetes Alert Dog Accuracy in a Real-World Setting

    PubMed Central

    Gonder-Frederick, Linda A.; Grabman, Jesse H.; Shepard, Jaclyn A.; Tripathi, Anand V.; Ducar, Dallas M.; McElgunn, Zachary R.

    2017-01-01

    Background: Diabetes alert dogs (DADs) are growing in popularity as an alternative method of glucose monitoring for individuals with type 1 diabetes (T1D). Only a few empirical studies have assessed DAD accuracy, with inconsistent results. The present study examined DAD accuracy and variability in performance in real-world conditions using a convenience sample of owner-report diaries. Method: Eighteen DAD owners (44.4% female; 77.8% youth) with T1D completed diaries of DAD alerts during the first year after placement. Diary entries included daily BG readings and DAD alerts. For each DAD, percentage hits (alert with BG ≤ 5.0 or ≥ 11.1 mmol/L; ≤90 or ≥200 mg/dl), percentage misses (no alert with BG out of range), and percentage false alarms (alert with BG in range) were computed. Sensitivity, specificity, positive likelihood ratio (PLR), and true positive rates were also calculated. Results: Overall comparison of DAD Hits to Misses yielded significantly more Hits for both low and high BG. Total sensitivity was 57.0%, with increased sensitivity to low BG (59.2%) compared to high BG (56.1%). Total specificity was 49.3% and PLR = 1.12. However, high variability in accuracy was observed across DADs, with low BG sensitivity ranging from 33% to 100%. Number of DADs achieving ≥ 60%, 65% and 70% true positive rates was 71%, 50% and 44%, respectively. Conclusions: DADs may be able to detect out-of-range BG, but variability across DADs is evident. Larger trials are needed to further assess DAD accuracy and to identify factors influencing the complexity of DAD accuracy in BG detection. PMID:28627305
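
    The accuracy metrics reported above (sensitivity, specificity and positive likelihood ratio) can be computed directly from a single DAD's alert diary, as in the short sketch below. The counts are invented; a BG value is "out of range" when it is at or below 90 mg/dl or at or above 200 mg/dl.

    ```python
    def dad_metrics(hits, misses, false_alarms, correct_rejections):
        """Diary-based accuracy metrics for one diabetes alert dog."""
        sensitivity = hits / (hits + misses)                        # alert given an out-of-range BG
        specificity = correct_rejections / (correct_rejections + false_alarms)
        plr = sensitivity / (1 - specificity)                       # positive likelihood ratio
        return sensitivity, specificity, plr

    sens, spec, plr = dad_metrics(hits=40, misses=30, false_alarms=55, correct_rejections=50)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PLR={plr:.2f}")
    ```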

  7. STTR Phase I: Low-Cost, High-Accuracy, Whole-Building Carbon Dioxide Monitoring for Demand Control Ventilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hallstrom, Jason O.; Ni, Zheng Richard

    This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or “gateway”) to direct digital control services was also explored at an early stage. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor’s accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5% and acquisition accuracy within 1.5% across three orders of magnitude of variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.

  8. Understanding Possibilities and Limitations of Abstract Chemical Representations for Achieving Conceptual Understanding

    ERIC Educational Resources Information Center

    Corradi, David M. J.; Elen, Jan; Schraepen, Beno; Clarebout, Geraldine

    2014-01-01

    When learning with abstract and scientific multiple external representations (MERs), low prior knowledge learners are said to have difficulties in using these MERs to achieve conceptual understanding. Yet little is known about what these limitations precisely entail. In order to understand this, we presented 101 learners with low prior knowledge…

  9. Three Decades of Precision Orbit Determination Progress, Achievements, Future Challenges and its Vital Contribution to Oceanography and Climate Research

    NASA Technical Reports Server (NTRS)

    Luthcke, Scott; Rowlands, David; Lemoine, Frank; Zelensky, Nikita; Beckley, Brian; Klosko, Steve; Chinn, Doug

    2006-01-01

    Although satellite altimetry has been around for thirty years, the last fifteen, beginning with the launch of TOPEX/Poseidon (TP), have yielded an abundance of significant results including: monitoring of ENSO events, detection of internal tides, determination of accurate global tides, unambiguous delineation of Rossby waves and their propagation characteristics, accurate determination of geostrophic currents, and a multi-decadal time series of mean sea level trend and dynamic ocean topography variability. While the high level of accuracy being achieved is a result of both instrument maturity and the quality of models and correction algorithms applied to the data, improving the quality of the Climate Data Records produced from altimetry is highly dependent on concurrent progress being made in fields such as orbit determination. The precision orbits form the reference frame from which the radar altimeter observations are made. Therefore, the accuracy of the altimetric mapping is limited to a great extent by the accuracy to which a satellite orbit can be computed. The TP mission represents the first time that the radial component of an altimeter orbit was routinely computed with an accuracy of 2 cm. Recently it has been demonstrated that it is possible to compute the radial component of Jason orbits with an accuracy of better than 1 cm. Additionally, still further improvements in TP orbits are being achieved with new techniques and algorithms largely developed from combined Jason and TP data analysis. While these recent POD achievements are impressive, the new accuracies are now revealing subtle systematic orbit errors that manifest as both intra- and inter-annual ocean topography errors. Additionally, the construction of inter-decadal time series of climate data records requires the removal of systematic differences across multiple missions. Current and future efforts must focus on the understanding and reduction of these errors in order to generate a complete and

  10. Accelerated Monte Carlo Simulation on the Chemical Stage in Water Radiolysis using GPU

    PubMed Central

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2018-01-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2. PMID:28323637

  11. Encoding of Fundamental Chemical Entities of Organic Reactivity Interest using chemical ontology and XML.

    PubMed

    Durairaj, Vijayasarathi; Punnaivanam, Sankar

    2015-09-01

    Fundamental chemical entities are identified in the context of organic reactivity and classified into appropriate concept classes, namely ElectronEntity, AtomEntity, AtomGroupEntity, FunctionalGroupEntity and MolecularEntity. The entity classes and their subclasses are organized into a chemical ontology named "ChemEnt" for the purpose of assertion, restriction and modification of properties through entity relations. Individual instances of entity classes are defined and encoded as a library of chemical entities in XML. The instances of entity classes are distinguished with a unique notation and identification values in order to map them to the ontology definitions. A model GUI named Entity Table is created to view graphical representations of all the entity instances. The detection of chemical entities in chemical structures is achieved through suitable algorithms. The possibility of assigning properties to the entities at different levels and the mechanism of property flow within the hierarchical entity levels are outlined. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. A method for assessing the accuracy of surgical technique in the correction of astigmatism.

    PubMed

    Kaye, S B; Campbell, S H; Davey, K; Patterson, A

    1992-12-01

    Surgical results can be assessed as a function of what was aimed for, what was done, and what was achieved. One of the aims of refractive surgery is to reduce astigmatism; the smaller the postoperative astigmatism, the better the result. What was done, that is, the surgical effect, can be calculated from the preoperative and postoperative astigmatism. A simplified formulation is described which facilitates the calculation of the magnitude and direction of this surgical effect. In addition, an expression for surgical accuracy is described, as a function of what was aimed for and what was achieved.
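
    A hedged sketch of the standard double-angle vector approach to computing the astigmatic change induced by surgery is shown below. This is the common textbook decomposition (cylinder magnitude and doubled axis treated as a 2D vector), not necessarily the exact simplified formulation proposed in the paper; the example values are invented.

    ```python
    import math

    def to_vector(cyl, axis_deg):
        """Represent a cylinder (magnitude, axis) as a double-angle 2D vector."""
        a = math.radians(2 * axis_deg)
        return (cyl * math.cos(a), cyl * math.sin(a))

    def surgical_effect(pre_cyl, pre_axis, post_cyl, post_axis):
        """Magnitude and axis (degrees) of the astigmatic change induced by surgery."""
        dx = to_vector(post_cyl, post_axis)[0] - to_vector(pre_cyl, pre_axis)[0]
        dy = to_vector(post_cyl, post_axis)[1] - to_vector(pre_cyl, pre_axis)[1]
        magnitude = math.hypot(dx, dy)
        axis = math.degrees(math.atan2(dy, dx)) / 2.0 % 180    # back to 0-180 degree axis notation
        return magnitude, axis

    print(surgical_effect(pre_cyl=3.0, pre_axis=90, post_cyl=1.0, post_axis=85))
    ```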

  13. Accuracy of torque-limiting devices: A comparative evaluation.

    PubMed

    Albayrak, Haydar; Gumus, Hasan Onder; Tursun, Funda; Kocaagaoglu, Hasan Huseyin; Kilinc, Halil Ibrahim

    2017-01-01

    To prevent the loosening of implant screws, clinicians should be aware of the output torque values needed to achieve the desired preload. Accurate torque-control devices are crucial in this regard; however, little information is currently available comparing the accuracy of mechanical with that of electronic torque-control devices. The purpose of this in vitro study was to identify and compare the accuracy of different types of torque-control devices. Devices from 5 different dental implant manufacturers were evaluated, including 2 spring-type (Straumann, Implance) mechanical torque-limiting devices (MTLDs), 2 friction-type (Biohorizons, Dyna) MTLDs, and 1 (Megagen) electronic torque-control device (ETLD). For each manufacturer, 5 devices were tested 5 times with a digital torque tester, and the average for each device was calculated and recorded. The percentages of absolute deviation from the target torque values (PERDEV) were calculated and compared by using 1-way ANOVA. A 1-sample t test was used to evaluate the ability of each device to achieve its target torque value within a 95% confidence interval for the true population mean of measured values (α=.05 for all statistical analyses). One-way ANOVA revealed statistically significant differences among torque-control devices (P<.001). The ETLD showed higher PERDEVs (28.33 ±9.53) than the MTLDs (P<.05), whereas PERDEVs of friction-type (7.56 ±3.64) and spring-type (10.85 ±4.11) MTLDs did not differ significantly. In addition, devices produced by Megagen had a significantly higher (P<.05) PERDEV (28.33 ±9.53) than the other devices, whereas no differences were found among devices manufactured by Biohorizons (7.31 ±5.34), Dyna (7.82 ±1.08), Implance (8.43 ±4.77), and Straumann (13.26 ±0.79). However, 1-sample t tests showed that none of the torque-control devices evaluated in this study was capable of achieving its target torque value (P<.05). Within the limitations of this in vitro study, MTLDs were shown to be significantly more accurate
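
    The statistical checks described above can be reproduced in outline with standard tools: percentage deviations from the target torque compared across device types with a one-way ANOVA, and a one-sample t-test asking whether a device's measured torques differ from the target. The measurements and target below are invented for illustration and are not the study's data.

    ```python
    import numpy as np
    from scipy import stats

    target = 30.0  # target torque (Ncm), invented
    measured = {   # five repeated measurements per device type (invented numbers)
        "friction-type": np.array([31.9, 32.4, 31.2, 32.8, 31.5]),
        "spring-type":   np.array([33.1, 33.9, 32.6, 34.2, 33.4]),
        "electronic":    np.array([37.8, 39.5, 38.7, 40.2, 38.1]),
    }

    # Percentage absolute deviation from the target torque (PERDEV), per device type.
    perdev = {k: 100 * np.abs(v - target) / target for k, v in measured.items()}
    f, p = stats.f_oneway(*perdev.values())
    print(f"one-way ANOVA on PERDEV: F = {f:.1f}, p = {p:.3g}")

    for name, vals in measured.items():
        t, p = stats.ttest_1samp(vals, popmean=target)
        print(f"{name:14s}: one-sample t-test vs target, p = {p:.3g}")
    ```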

  14. Enhanced NMR Discrimination of Pharmaceutically Relevant Molecular Crystal Forms through Fragment-Based Ab Initio Chemical Shift Predictions.

    PubMed

    Hartman, Joshua D; Day, Graeme M; Beran, Gregory J O

    2016-11-02

    Chemical shift prediction plays an important role in the determination or validation of crystal structures with solid-state nuclear magnetic resonance (NMR) spectroscopy. One of the fundamental theoretical challenges lies in discriminating variations in chemical shifts resulting from different crystallographic environments. Fragment-based electronic structure methods provide an alternative to the widely used plane wave gauge-including projector augmented wave (GIPAW) density functional technique for chemical shift prediction. Fragment methods allow hybrid density functionals to be employed routinely in chemical shift prediction, and we have recently demonstrated appreciable improvements in the accuracy of the predicted shifts when using the hybrid PBE0 functional instead of generalized gradient approximation (GGA) functionals like PBE. Here, we investigate the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone. We demonstrate that the use of the hybrid density functional instead of a GGA provides both higher accuracy in the chemical shifts and increased discrimination among the different crystallographic environments. Finally, these results also provide compelling evidence for the transferability of the linear regression parameters mapping predicted chemical shieldings to chemical shifts that were derived in an earlier study.
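
    The linear regression step mentioned at the end maps computed isotropic shieldings (sigma, ppm) to observable chemical shifts (delta, ppm) via delta = a*sigma + b. The sketch below shows that step with made-up shielding/shift pairs; the slope coming out close to -1 is the usual expectation, not a result from the paper.

    ```python
    import numpy as np

    sigma = np.array([160.2, 120.5, 95.3, 60.1, 30.8])        # computed 13C shieldings (ppm), invented
    delta_exp = np.array([18.0, 55.2, 78.9, 112.4, 140.7])    # measured 13C shifts (ppm), invented

    a, b = np.polyfit(sigma, delta_exp, 1)                    # linear map: delta = a*sigma + b
    delta_pred = a * sigma + b
    rmsd = np.sqrt(np.mean((delta_pred - delta_exp)**2))
    print(f"delta = {a:.3f} * sigma + {b:.1f}, RMSD = {rmsd:.2f} ppm")
    ```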

  15. Enhanced NMR Discrimination of Pharmaceutically Relevant Molecular Crystal Forms through Fragment-Based Ab Initio Chemical Shift Predictions

    PubMed Central

    2016-01-01

    Chemical shift prediction plays an important role in the determination or validation of crystal structures with solid-state nuclear magnetic resonance (NMR) spectroscopy. One of the fundamental theoretical challenges lies in discriminating variations in chemical shifts resulting from different crystallographic environments. Fragment-based electronic structure methods provide an alternative to the widely used plane wave gauge-including projector augmented wave (GIPAW) density functional technique for chemical shift prediction. Fragment methods allow hybrid density functionals to be employed routinely in chemical shift prediction, and we have recently demonstrated appreciable improvements in the accuracy of the predicted shifts when using the hybrid PBE0 functional instead of generalized gradient approximation (GGA) functionals like PBE. Here, we investigate the solid-state 13C and 15N NMR spectra for multiple crystal forms of acetaminophen, phenobarbital, and testosterone. We demonstrate that the use of the hybrid density functional instead of a GGA provides both higher accuracy in the chemical shifts and increased discrimination among the different crystallographic environments. Finally, these results also provide compelling evidence for the transferability of the linear regression parameters mapping predicted chemical shieldings to chemical shifts that were derived in an earlier study. PMID:27829821

  16. Toward Quantitative Small Animal Pinhole SPECT: Assessment of Quantitation Accuracy Prior to Image Compensations

    PubMed Central

    Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J. S.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose We assessed the quantitation accuracy of small animal pinhole single photon emission computed tomography (SPECT) under the current preclinical settings, where image compensations are not routinely applied. Procedures The effects of several common image-degrading factors and imaging parameters on quantitation accuracy were evaluated using Monte-Carlo simulation methods. Typical preclinical imaging configurations were modeled, and quantitative analyses were performed based on image reconstructions without compensating for attenuation, scatter, and limited system resolution. Results Using mouse-sized phantom studies as examples, attenuation effects alone degraded quantitation accuracy by up to −18% (Tc-99m or In-111) or −41% (I-125). The inclusion of scatter effects changed the above numbers to −12% (Tc-99m or In-111) and −21% (I-125), respectively, indicating the significance of scatter in quantitative I-125 imaging. Region-of-interest (ROI) definitions have greater impacts on regional quantitation accuracy for small sphere sources as compared to attenuation and scatter effects. For the same ROI, SPECT acquisitions using pinhole apertures of different sizes could significantly affect the outcome, whereas the use of different radii-of-rotation yielded negligible differences in quantitation accuracy for the imaging configurations simulated. Conclusions We have systematically quantified the influence of several factors affecting the quantitation accuracy of small animal pinhole SPECT. In order to consistently achieve accurate quantitation within 5% of the truth, comprehensive image compensation methods are needed. PMID:19048346

  17. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    NASA Astrophysics Data System (ADS)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.

  18. Ultrasound indoor positioning system based on a low-power wireless sensor network providing sub-centimeter accuracy.

    PubMed

    Medina, Carlos; Segura, José Carlos; De la Torre, Ángel

    2013-03-13

    This paper describes the TELIAMADE system, a new indoor positioning system based on the time-of-flight (TOF) of an ultrasonic signal to estimate the distance between a receiver node and a transmitter node. The TELIAMADE system consists of a set of wireless nodes equipped with a radio module for communication and a module for the transmission and reception of ultrasound. Access to the ultrasonic channel is managed by applying a synchronization algorithm based on a time-division multiplexing (TDMA) scheme. The ultrasonic signal is transmitted using a carrier frequency of 40 kHz, and the TOF is estimated by applying a quadrature detector to the signal obtained at the A/D converter output. Low sampling frequencies of 17.78 kHz or even 12.31 kHz are possible using quadrature sampling, in order to optimize memory requirements and to reduce the computational cost of signal processing. The distance is calculated from the TOF taking into account the speed of sound. Excellent accuracy in the estimation of the TOF is achieved using parabolic interpolation to detect the maximum of the signal envelope at the matched filter output. The signal phase information is also used to enhance the TOF measurement accuracy. Experimental results show a root mean square error (rmse) of less than 2 mm and a standard deviation of less than 0.3 mm for pseudorange measurements in the range of distances between 2 and 6 m. The system location accuracy is also evaluated by applying multilateration. A sub-centimeter location accuracy is achieved with an average rmse of 9.6 mm.
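
    Two of the steps described above are easy to sketch: sub-sample localization of the envelope peak at the matched-filter output by three-point parabolic interpolation, and conversion of the resulting TOF to a distance using the speed of sound. The code below is an illustrative toy (synthetic Gaussian envelope), not the TELIAMADE implementation, and the peak width and TOF are invented.

    ```python
    import numpy as np

    def parabolic_peak(y, k):
        """Refine the index of a local maximum y[k] with a 3-point parabolic fit."""
        denom = y[k - 1] - 2 * y[k] + y[k + 1]
        offset = 0.5 * (y[k - 1] - y[k + 1]) / denom
        return k + offset

    fs = 17780.0                                   # sampling rate after quadrature detection [Hz]
    c = 343.0                                      # speed of sound [m/s], temperature dependent

    # Synthetic matched-filter envelope with a peak falling between two samples.
    t = np.arange(200) / fs
    true_tof = 0.00731                             # 7.31 ms, i.e. roughly 2.5 m
    envelope = np.exp(-((t - true_tof) * fs / 3.0)**2)

    k = int(np.argmax(envelope))
    tof = parabolic_peak(envelope, k) / fs
    print(f"estimated distance = {tof * c:.4f} m (true {true_tof * c:.4f} m)")
    ```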

  19. [Study on high accuracy detection of multi-component gas in oil-immerse power transformer].

    PubMed

    Fan, Jie; Chen, Xiao; Huang, Qi-Feng; Zhou, Yu; Chen, Gang

    2013-12-01

    In order to solve the problems of low accuracy and mutual interference in multi-component gas detection, a multi-component gas detection network with high accuracy was designed. A narrow-bandwidth semiconductor laser was utilized as the light source, and a novel long-path gas cell was also used in this system. By using a single sine signal to modulate the laser spectrum and applying space division multiplexing (SDM) and time division multiplexing (TDM) techniques, detection of multi-component gas was achieved. The experiments indicate that the linearity correlation coefficient is 0.99 and the measurement relative error is less than 4%. The system dynamic response time, measured by gradually filling the gas cell with the multi-component gas, is less than 15 s. The system has the advantages of high accuracy and quick response, and can be used for real-time on-line monitoring of fault gases in power transformers.

  20. Continuous Glucose Monitoring and Trend Accuracy

    PubMed Central

    Gottlieb, Rebecca; Le Compte, Aaron; Chase, J. Geoffrey

    2014-01-01

    Continuous glucose monitoring (CGM) devices are being increasingly used to monitor glycemia in people with diabetes. One advantage with CGM is the ability to monitor the trend of sensor glucose (SG) over time. However, there are few metrics available for assessing the trend accuracy of CGM devices. The aim of this study was to develop an easy to interpret tool for assessing trend accuracy of CGM data. SG data from CGM were compared to hourly blood glucose (BG) measurements and trend accuracy was quantified using the dot product. Trend accuracy results are displayed on the Trend Compass, which depicts trend accuracy as a function of BG. A trend performance table and Trend Index (TI) metric are also proposed. The Trend Compass was tested using simulated CGM data with varying levels of error and variability, as well as real clinical CGM data. The results show that the Trend Compass is an effective tool for differentiating good trend accuracy from poor trend accuracy, independent of glycemic variability. Furthermore, the real clinical data show that the Trend Compass assesses trend accuracy independent of point bias error. Finally, the importance of assessing trend accuracy as a function of BG level is highlighted in a case example of low and falling BG data, with corresponding rising SG data. This study developed a simple to use tool for quantifying trend accuracy. The resulting trend accuracy is easily interpreted on the Trend Compass plot, and if required, performance table and TI metric. PMID:24876437
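
    As an illustration of the dot-product idea mentioned above, the sensor-glucose (SG) and reference blood-glucose (BG) trends over the same interval can be treated as vectors in the (time, glucose) plane and compared via the cosine of the angle between them; a value near 1 indicates the sensor is trending in the same direction and at a similar rate. This sketch follows the general idea only; the exact normalisation and the Trend Compass construction in the paper may differ.

    ```python
    import numpy as np

    def trend_agreement(bg_rate, sg_rate, dt=1.0):
        """Cosine similarity between reference and sensor trend vectors (1 = same direction)."""
        v_bg = np.array([dt, bg_rate])
        v_sg = np.array([dt, sg_rate])
        return float(v_bg @ v_sg / (np.linalg.norm(v_bg) * np.linalg.norm(v_sg)))

    # BG falling at 2 mmol/L/h while SG reports a 1 mmol/L/h rise: poor trend agreement.
    print(trend_agreement(bg_rate=-2.0, sg_rate=+1.0))
    # Both falling at similar rates: good trend agreement.
    print(trend_agreement(bg_rate=-2.0, sg_rate=-1.6))
    ```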

  1. Trait Perception Accuracy and Acquaintance Within Groups: Tracking Accuracy Development.

    PubMed

    Brown, Jill A; Bernieri, Frank

    2017-05-01

    Previous work on trait perception has evaluated accuracy at discrete stages of relationships (e.g., strangers, best friends). A relatively limited body of literature has investigated changes in accuracy as acquaintance within a dyad or group increases. Small groups of initially unacquainted individuals spent more than 30 hr participating in a wide range of activities designed to represent common interpersonal contexts (e.g., eating, traveling). We calculated how accurately each participant judged others in their group on the big five traits across three distinct points within the acquaintance process: zero acquaintance, after a getting-to-know-you conversation, and after 10 weeks of interaction and activity. Judgments of all five traits exhibited accuracy above chance levels after 10 weeks. An examination of the trait rating stability revealed that much of the revision in judgments occurred not over the course of the 10-week relationship as suspected, but between zero acquaintance and the getting-to-know-you conversation.

  2. Extraction of CYP chemical interactions from biomedical literature using natural language processing methods.

    PubMed

    Jiao, Dazhi; Wild, David J

    2009-02-01

    This paper proposes a system that automatically extracts CYP protein and chemical interactions from journal article abstracts, using natural language processing (NLP) and text mining methods. In our system, we employ a maximum entropy based learning method, using results from syntactic, semantic, and lexical analysis of texts. We first present our system architecture and then discuss the data set for training our machine learning based models and the methods in building components in our system, such as part of speech (POS) tagging, Named Entity Recognition (NER), dependency parsing, and relation extraction. An evaluation of the system is conducted at the end, yielding very promising results: The POS, dependency parsing, and NER components in our system have achieved a very high level of accuracy as measured by precision, ranging from 85.9% to 98.5%, and the precision and the recall of the interaction extraction component are 76.0% and 82.6%, and for the overall system are 68.4% and 72.2%, respectively.

  3. Cavity ring-down spectroscopy of Doppler-broadened absorption line with sub-MHz absolute frequency accuracy.

    PubMed

    Cheng, C-F; Sun, Y R; Pan, H; Lu, Y; Li, X-F; Wang, J; Liu, A-W; Hu, S-M

    2012-04-23

    A continuous-wave cavity ring-down spectrometer has been built for precise determination of the absolute frequencies of Doppler-broadened absorption lines. Using a thermo-stabilized Fabry-Pérot interferometer and Rb frequency references at 780 nm and 795 nm, an absolute frequency accuracy of 0.1-0.6 MHz has been achieved in the 775-800 nm region. A water absorption line at 12579 cm(-1) is studied to test the performance of the spectrometer. The line position in the zero-pressure limit is determined with an uncertainty of 0.3 MHz (relative accuracy of 0.8 × 10(-9)). © 2012 Optical Society of America

  4. Nuclear, Biological, Chemical (NBC) Reconnaissance Vehicles: Current Achievements and Technology Trends at SPA

    DTIC Science & Technology

    2004-10-25

    Servicios y Proyectos Avanzados, S.A. (SPA) was founded in June 1991 by a small group of Spanish... permit the measurement of radiation and dangerous substances, the performance of chemical and biochemical analyses and also micro-biological... For detectors, micro-fluidics as well as hyper-spectral technologies and a whole bunch of nanotechnologies show substantial promise; they are

  5. Programmable chemical controllers made from DNA

    PubMed Central

    Chen, Yuan-Jyue; Dalchau, Neil; Srinivas, Niranjan; Phillips, Andrew; Cardelli, Luca; Soloveichik, David; Seelig, Georg

    2014-01-01

    Biological organisms use complex molecular networks to navigate their environment and regulate their internal state. The development of synthetic systems with similar capabilities could lead to applications such as smart therapeutics or fabrication methods based on self-organization. To achieve this, molecular control circuits need to be engineered to perform integrated sensing, computation and actuation. Here we report a DNA-based technology for implementing the computational core of such controllers. We use the formalism of chemical reaction networks as a 'programming language', and our DNA architecture can, in principle, implement any behaviour that can be mathematically expressed as such. Unlike logic circuits, our formulation naturally allows complex signal processing of intrinsically analogue biological and chemical inputs. Controller components can be derived from biologically synthesized (plasmid) DNA, which reduces errors associated with chemically synthesized DNA. We implement several building-block reaction types and then combine them into a network that realizes, at the molecular level, an algorithm used in distributed control systems for achieving consensus between multiple agents. PMID:24077029
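
    The consensus behaviour referred to above can be illustrated with a mass-action simulation of the textbook approximate-majority reaction network (X + Y -> 2B, X + B -> 2X, Y + B -> 2Y), a standard molecular formulation of distributed consensus. The sketch below is only an illustration of that class of network in its deterministic (ODE) limit; the rate constants and initial fractions are arbitrary, and it is not the authors' DNA implementation.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k = 1.0  # common rate constant (arbitrary units)

    def rhs(t, z):
        x, y, b = z
        r1, r2, r3 = k * x * y, k * x * b, k * y * b   # X+Y->2B, X+B->2X, Y+B->2Y
        return [-r1 + r2, -r1 + r3, 2 * r1 - r2 - r3]

    # Start with a slight excess of X over Y; the network amplifies it toward unanimity.
    sol = solve_ivp(rhs, (0, 50), [0.55, 0.45, 0.0])
    x, y, b = sol.y[:, -1]
    print(f"final fractions: X={x:.3f}, Y={y:.3f}, B={b:.3f}")
    ```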

  6. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  7. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…

  8. Cognitive Predictors of Achievement Growth in Mathematics: A Five Year Longitudinal Study

    PubMed Central

    Geary, David C.

    2011-01-01

    The study's goal was to identify the beginning of first grade quantitative competencies that predict mathematics achievement start point and growth through fifth grade. Measures of number, counting, and arithmetic competencies were administered in early first grade and used to predict mathematics achievement through fifth grade (n = 177), while controlling for intelligence, working memory, and processing speed. Multilevel models revealed that intelligence, processing speed, and the central executive component of working memory predicted achievement or achievement growth in mathematics and, as a contrast domain, word reading. The phonological loop was uniquely predictive of word reading and the visuospatial sketch pad of mathematics. Early fluency in processing and manipulating numerical set size and Arabic numerals, accurate use of sophisticated counting procedures for solving addition problems, and accuracy in making placements on a mathematical number line were uniquely predictive of mathematics achievement. Use of memory-based processes to solve addition problems predicted mathematics and reading achievement, but in different ways. The results identify the early quantitative competencies that uniquely contribute to mathematics learning. PMID:21942667

  9. The effect of clock, media, and station location errors on Doppler measurement accuracy

    NASA Technical Reports Server (NTRS)

    Miller, J. K.

    1993-01-01

    Doppler tracking by the Deep Space Network (DSN) is the primary radio metric data type used by navigation to determine the orbit of a spacecraft. The accuracy normally attributed to orbits determined exclusively with Doppler data is about 0.5 microradians in geocentric angle. Recently, the Doppler measurement system has evolved to a high degree of precision primarily because of tracking at X-band frequencies (7.2 to 8.5 GHz). However, the orbit determination system has not been able to fully utilize this improved measurement accuracy because of calibration errors associated with transmission media, the location of tracking stations on the Earth's surface, the orientation of the Earth as an observing platform, and timekeeping. With the introduction of Global Positioning System (GPS) data, it may be possible to remove a significant error associated with the troposphere. In this article, the effect of various calibration errors associated with transmission media, Earth platform parameters, and clocks are examined. With the introduction of GPS calibrations, it is predicted that a Doppler tracking accuracy of 0.05 microradians is achievable.

  10. Use of a control test to aid pH assessment of chemical eye injuries.

    PubMed

    Connor, A J; Severn, P

    2009-11-01

    Chemical burns of the eye represent 7.0%-9.9% of all ocular trauma. The initial management of ocular chemical injuries is irrigation of the eye and conjunctival sac until neutralisation of the tear surface pH is achieved. We present a case of alkali injury in which the raised tear film pH seemed to be unresponsive to irrigation treatment. Suspicion was raised about the accuracy of the litmus paper used to test the tear film pH. The error was confirmed by use of a control litmus pH test on the examining doctor's eyes. Errors in litmus paper pH measurement can occur because of difficulty in matching the paper with scale colours and drying of the paper, which produces a darker colour. A small tear film sample can also create difficulty in colour matching, whereas too large a sample can wash away pigment from the litmus paper. Samples measured too quickly after irrigation can result in a falsely neutral pH measurement. Use of faulty or inappropriate materials can also result in errors. We advocate the use of a control litmus pH test in all patients. This would highlight errors in pH measurements and aid in the detection of the end point of irrigation.

  11. Modeling Student Test-Taking Motivation in the Context of an Adaptive Achievement Test

    ERIC Educational Resources Information Center

    Wise, Steven L.; Kingsbury, G. Gage

    2016-01-01

    This study examined the utility of response time-based analyses in understanding the behavior of unmotivated test takers. For the data from an adaptive achievement test, patterns of observed rapid-guessing behavior and item response accuracy were compared to the behavior expected under several types of models that have been proposed to represent…

  12. Predictive Modeling of Chemical Hazard by Integrating Numerical Descriptors of Chemical Structures and Short-term Toxicity Assay Data

    PubMed Central

    Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander

    2012-01-01

    Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746

  13. The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images.

    PubMed

    Mitry, Danny; Zutis, Kris; Dhillon, Baljean; Peto, Tunde; Hayat, Shabina; Khaw, Kay-Tee; Morgan, James E; Moncur, Wendy; Trucco, Emanuele; Foster, Paul J

    2016-09-01

    Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool, comparing it to expert classification. We used 100 retinal fundus photograph images with predetermined disease criteria selected by two experts from a large cohort study. The Amazon Mechanical Turk Web platform was used to drive traffic to our site so anonymous workers could perform a classification and annotation task of the fundus photographs in our dataset after a short training exercise. Three groups were assessed: masters only, nonmasters only and nonmasters with compulsory training. We calculated the sensitivity, specificity, and area under the curve (AUC) of receiver operating characteristic (ROC) plots for all classifications compared to expert grading, and used the Dice coefficient and consensus threshold to assess annotation accuracy. In total, we received 5389 annotations for 84 images (excluding 16 training images) in 2 weeks. A specificity and sensitivity of 71% (95% confidence interval [CI], 69%-74%) and 87% (95% CI, 86%-88%) was achieved for all classifications. The AUC in this study for all classifications combined was 0.93 (95% CI, 0.91-0.96). For image annotation, a maximal Dice coefficient (∼0.6) was achieved with a consensus threshold of 0.25. This study supports the hypothesis that annotation of abnormalities in retinal images by ophthalmologically naive individuals is comparable to expert annotation. The highest AUC and agreement with expert annotation was achieved in the nonmasters with compulsory training group. The use of crowdsourcing as a technique for retinal image analysis may be comparable to expert graders and has the potential to deliver timely, accurate, and cost-effective image analysis.
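
    A minimal sketch of how annotation agreement of this kind can be scored, assuming boolean pixel masks for each worker and for the expert reference; it illustrates the Dice coefficient at a consensus threshold on synthetic masks and is not the study's actual pipeline.

      # Illustration only: score crowd annotations against an expert mask with the
      # Dice coefficient, after forming a consensus mask from per-pixel vote fractions.
      import numpy as np

      def crowd_dice(worker_masks, expert_mask, consensus_threshold=0.25):
          """worker_masks: (n_workers, H, W) boolean; expert_mask: (H, W) boolean."""
          vote_fraction = worker_masks.mean(axis=0)          # per-pixel agreement
          crowd_mask = vote_fraction >= consensus_threshold  # consensus annotation
          intersection = np.logical_and(crowd_mask, expert_mask).sum()
          denom = crowd_mask.sum() + expert_mask.sum()
          return 2.0 * intersection / denom if denom else 1.0

      rng = np.random.default_rng(1)
      expert = np.zeros((64, 64), dtype=bool)
      expert[20:40, 20:40] = True                            # toy expert annotation
      workers = np.array([expert ^ (rng.random((64, 64)) < 0.05) for _ in range(10)])
      print("Dice at 0.25 consensus:", round(crowd_dice(workers, expert), 3))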

  14. Making High Accuracy Null Depth Measurements for the LBTI ExoZodi Survey

    NASA Technical Reports Server (NTRS)

    Mennesson, Bertrand; Defrere, Denis; Nowak, Matthew; Hinz, Philip; Millan-Gabet, Rafael; Absil, Olivier; Bailey, Vanessa; Bryden, Geoffrey; Danchi, William; Kennedy, Grant M.; hide

    2016-01-01

    The characterization of exozodiacal light emission is both important for the understanding of planetary systems evolution and for the preparation of future space missions aiming to characterize low mass planets in the habitable zone of nearby main sequence stars. The Large Binocular Telescope Interferometer (LBTI) exozodi survey aims at providing a ten-fold improvement over current state of the art, measuring dust emission levels down to a typical accuracy of approximately 12 zodis per star, for a representative ensemble of approximately 30+ high priority targets. Such measurements promise to yield a final accuracy of about 2 zodis on the median exozodi level of the targets sample. Reaching a 1 sigma measurement uncertainty of 12 zodis per star corresponds to measuring interferometric cancellation (null) levels, i.e visibilities at the few 100 ppm uncertainty level. We discuss here the challenges posed by making such high accuracy mid-infrared visibility measurements from the ground and present the methodology we developed for achieving current best levels of 500 ppm or so. We also discuss current limitations and plans for enhanced exozodi observations over the next few years at LBTI.

  15. Chemical entity recognition in patents by combining dictionary-based and statistical approaches

    PubMed Central

    Akhondi, Saber A.; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F.H.; Hettne, Kristina M.; van Mulligen, Erik M.; Kors, Jan A.

    2016-01-01

    We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents PMID:27141091

  16. Temporal Control over Transient Chemical Systems using Structurally Diverse Chemical Fuels.

    PubMed

    Chen, Jack L-Y; Maiti, Subhabrata; Fortunati, Ilaria; Ferrante, Camilla; Prins, Leonard J

    2017-08-25

    The next generation of adaptive, intelligent chemical systems will rely on a continuous supply of energy to maintain the functional state. Such systems will require chemical methodology that provides precise control over the energy dissipation process, and thus, the lifetime of the transiently activated function. This manuscript reports on the use of structurally diverse chemical fuels to control the lifetime of two different systems under dissipative conditions: transient signal generation and the transient formation of self-assembled aggregates. The energy stored in the fuels is dissipated at different rates by an enzyme, which installs a dependence of the lifetime of the active system on the chemical structure of the fuel. In the case of transient signal generation, it is shown that different chemical fuels can be used to generate a vast range of signal profiles, allowing temporal control over two orders of magnitude. Regarding self-assembly under dissipative conditions, the ability to control the lifetime using different fuels turns out to be particularly important as stable aggregates are formed only at well-defined surfactant/fuel ratios, meaning that temporal control cannot be achieved by simply changing the fuel concentration. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Accuracy assessment of linear spectral mixture model due to terrain undulation

    NASA Astrophysics Data System (ADS)

    Wang, Tianxing; Chen, Songlin; Ma, Ya

    2008-12-01

    Mixture spectra are common in remote sensing due to the limitations of spatial resolution and the heterogeneity of the land surface. During the past 30 years, many subpixel models have been developed to investigate the information within mixed pixels. The linear spectral mixture model (LSMM) is a simpler and more general subpixel model. LSMM, also known as spectral mixture analysis, is a widely used procedure to determine the proportion of endmembers (constituent materials) within a pixel based on the endmembers' spectral characteristics. The unmixing accuracy of LSMM is restricted by a variety of factors, but research on LSMM has mostly focused on appraisal of nonlinear effects inherent to the model and on techniques used to select endmembers; unfortunately, the environmental conditions of the study area that can sway unmixing accuracy, such as atmospheric scattering and terrain undulation, have not been studied. This paper probes emphatically into the accuracy uncertainty of LSMM resulting from terrain undulation. An ASTER dataset was chosen and the C terrain correction algorithm was applied to it. Based on this, fractional abundances for different cover types were extracted from both pre- and post-C terrain illumination corrected ASTER images using LSMM. Simultaneously, regression analyses and an IKONOS image were introduced to assess the unmixing accuracy. Results showed that terrain undulation could dramatically constrain the application of LSMM in mountain areas. Specifically, for vegetation abundances, an improvement in unmixing accuracy of 17.6% (regression against NDVI) and 18.6% (regression against MVI) in R2 was achieved by removing terrain undulation. This study indicated in a quantitative way that effective removal or minimization of terrain illumination effects is essential for applying LSMM. This paper could also provide a new instance for LSMM applications in mountainous areas. In addition, the methods employed in this study could be
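
    For readers unfamiliar with LSMM, the following sketch (illustrative only, with synthetic endmembers and a synthetic mixed pixel) shows the basic per-pixel unmixing step: abundances are estimated by non-negative least squares against an endmember matrix and then normalised to sum to one.

      # Illustrative per-pixel linear spectral unmixing (not the paper's code):
      # abundances via non-negative least squares, then normalised to sum to one.
      import numpy as np
      from scipy.optimize import nnls

      def unmix_pixel(pixel_spectrum, endmembers):
          """endmembers: (n_bands, n_endmembers); pixel_spectrum: (n_bands,)."""
          abundances, _ = nnls(endmembers, pixel_spectrum)   # non-negativity constraint
          total = abundances.sum()
          return abundances / total if total > 0 else abundances  # approximate sum-to-one

      rng = np.random.default_rng(0)
      E = rng.uniform(0.0, 1.0, size=(6, 3))                 # 3 synthetic endmembers, 6 bands
      true_abundances = np.array([0.5, 0.3, 0.2])
      pixel = E @ true_abundances + rng.normal(0.0, 0.005, size=6)
      print("estimated abundances:", np.round(unmix_pixel(pixel, E), 3))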

  18. High-accuracy reference standards for two-photon absorption in the 680–1050 nm wavelength range

    PubMed Central

    de Reguardati, Sophie; Pahapill, Juri; Mikhailov, Alexander; Stepanenko, Yuriy; Rebane, Aleksander

    2016-01-01

    Degenerate two-photon absorption (2PA) of a series of organic fluorophores is measured using a femtosecond fluorescence excitation method in the wavelength range λ2PA = 680–1050 nm at a ~100 MHz pulse repetition rate. The relative 2PA spectral shape function is obtained with an estimated accuracy of 5%, and the absolute 2PA cross section is measured at selected wavelengths with an accuracy of 8%. Significant improvement of the accuracy is achieved by means of rigorous evaluation of the quadratic dependence of the fluorescence signal on the incident photon flux over the whole wavelength range, by comparing results obtained from two independent experiments, as well as by meticulous evaluation of critical experimental parameters, including the excitation spatial and temporal pulse shape, laser power and sample geometry. Application of the reference standards in nonlinear transmittance measurements is discussed. PMID:27137334
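
    The quadratic-dependence check mentioned above can be illustrated with a few lines of analysis code; the sketch below uses synthetic placeholder data and simply fits the log-log slope of signal versus incident photon flux, which should be close to 2 for a purely two-photon process.

      # Synthetic check of the quadratic dependence: fit the log-log slope of the
      # fluorescence signal F versus incident photon flux P; slope ~2 indicates 2PA.
      import numpy as np

      rng = np.random.default_rng(0)
      P = np.linspace(1.0, 10.0, 12)                          # relative photon flux
      F = 0.8 * P**2 * (1.0 + rng.normal(0.0, 0.01, P.size))  # simulated signal
      slope, _ = np.polyfit(np.log(P), np.log(F), 1)
      print(f"log-log slope = {slope:.3f} (expected ~2 for pure two-photon excitation)")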

  19. Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol

    PubMed Central

    2015-01-01

    Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and the Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by several fold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726

  20. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
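
    A minimal sketch of the general idea, under assumed data and features (not the paper's implementation): a regressor is trained on historical residuals of a physics-based propagator and then used to correct new physics-based predictions.

      # Sketch of the residual-learning idea with assumed features: a regressor is
      # trained on historical errors of a physics-based propagator and its output
      # is subtracted from new physics-based predictions.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(500, 6))        # hypothetical features of each prediction
      residual = 0.3 * X_train[:, 0] + 0.1 * X_train[:, 1] ** 2 + rng.normal(0, 0.05, 500)
      correction_model = GradientBoostingRegressor().fit(X_train, residual)

      X_new = rng.normal(size=(10, 6))
      physics_prediction = rng.normal(size=10)   # placeholder propagator output
      improved = physics_prediction - correction_model.predict(X_new)
      print("corrected predictions:", np.round(improved, 3))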

  1. Speed-accuracy trade-off in a trajectory-constrained self-feeding task: a quantitative index of unsuppressed motor noise in children with dystonia

    PubMed Central

    Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D.

    2015-01-01

    Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset towards longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. PMID:25895910

  2. Accuracy and speed of orthographic processing in persons with developmental dyslexia.

    PubMed

    King, Wayne M; Lombardino, Linda L; Ahmed, Sarah

    2005-08-01

    A group of 39 persons (20 male and 19 female, 11.0 to 32.5 yr.) with developmental dyslexia and 42 controls (21 male and 21 female, 11.2 to 32.3 years) were compared on computerized tests of sight word reading, nonword decoding, and spelling recognition. The subjects with developmental dyslexia performed significantly slower and less accurately than controls on all tasks. Further, the effect size of the group differences was larger for the older group. Within-group analyses showed a significant difference between age groups on accuracy. Only the control group showed a significant difference between age groups on response time. Mean accuracy and response times for the reading-disabled subjects resembled shifted versions of the control group means. These results agree with previous reports that phonological deficits persist for reading-disabled adults and suggest a test of whether the discrepancy between reading-disabled and typically achieving readers may actually increase across age groups.

  3. Social Power Increases Interoceptive Accuracy

    PubMed Central

    Moeini-Jazani, Mehrad; Knoeferle, Klemens; de Molière, Laura; Gatti, Elia; Warlop, Luk

    2017-01-01

    Building on recent psychological research showing that power increases self-focused attention, we propose that having power increases accuracy in perception of bodily signals, a phenomenon known as interoceptive accuracy. Consistent with our proposition, participants in a high-power experimental condition outperformed those in the control and low-power conditions in the Schandry heartbeat-detection task. We demonstrate that the effect of power on interoceptive accuracy is not explained by participants’ physiological arousal, affective state, or general intention for accuracy. Rather, consistent with our reasoning that experiencing power shifts attentional resources inward, we show that the effect of power on interoceptive accuracy is dependent on individuals’ chronic tendency to focus on their internal sensations. Moreover, we demonstrate that individuals’ chronic sense of power also predicts interoceptive accuracy similar to, and independent of, how their situationally induced feeling of power does. We therefore provide further support on the relation between power and enhanced perception of bodily signals. Our findings offer a novel perspective–a psychophysiological account–on how power might affect judgments and behavior. We highlight and discuss some of these intriguing possibilities for future research. PMID:28824501

  4. Spinal intra-operative three-dimensional navigation with infra-red tool tracking: correlation between clinical and absolute engineering accuracy

    NASA Astrophysics Data System (ADS)

    Guha, Daipayan; Jakubovic, Raphael; Gupta, Shaurya; Yang, Victor X. D.

    2017-02-01

    Computer-assisted navigation (CAN) may guide spinal surgeries, reliably reducing screw breach rates. Definitions of screw breach, if reported, vary widely across studies. Absolute quantitative error is theoretically a more precise and generalizable metric of navigation accuracy, but has been computed variably and reported in fewer than 25% of clinical studies of CAN-guided pedicle screw accuracy. We reviewed a prospectively-collected series of 209 pedicle screws placed with CAN guidance to characterize the correlation between clinical pedicle screw accuracy, based on postoperative imaging, and absolute quantitative navigation accuracy. We found that acceptable screw accuracy was achieved for significantly fewer screws based on 2mm grade vs. Heary grade, particularly in the lumbar spine. Inter-rater agreement was good for the Heary classification and moderate for the 2mm grade, significantly greater among radiologists than surgeon raters. Mean absolute translational/angular accuracies were 1.75mm/3.13° and 1.20mm/3.64° in the axial and sagittal planes, respectively. There was no correlation between clinical and absolute navigation accuracy, in part because surgeons appear to compensate for perceived translational navigation error by adjusting screw medialization angle. Future studies of navigation accuracy should therefore report absolute translational and angular errors. Clinical screw grades based on post-operative imaging, if reported, may be more reliable if performed in multiple by radiologist raters.

  5. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The
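
    As background for the data interpretation task described above, the sketch below shows one standard parametric estimate of the 95th percentile of a lognormal exposure distribution from a small sample (illustrative only, with hypothetical sample values; this is not necessarily the specific rule of thumb taught in the study).

      # One standard parametric estimate of the 95th percentile of a lognormal
      # exposure distribution from a small sample; sample values are hypothetical.
      import numpy as np

      samples_mg_m3 = np.array([0.12, 0.35, 0.08, 0.22, 0.51])
      log_x = np.log(samples_mg_m3)
      x95 = np.exp(log_x.mean() + 1.645 * log_x.std(ddof=1))
      print(f"estimated 95th percentile: {x95:.3f} mg/m^3")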

  6. Reprint of "CON4EI: Bovine Corneal Opacity and Permeability (BCOP) test for hazard identification and labelling of eye irritating chemicals".

    PubMed

    Verstraelen, Sandra; Maglennon, Gareth; Hollanders, Karen; Boonen, Francis; Adriaens, Els; Alépée, Nathalie; Drzewiecka, Agnieszka; Gruszka, Katarzyna; Kandarova, Helena; Willoughby, Jamin A; Guest, Robert; Schofield, Jane; Van Rompay, An R

    2018-06-01

    Assessment of ocular irritation potential is an international regulatory requirement in the safety evaluation of industrial and consumer products. No single in vitro ocular irritation assay is capable of fully categorizing chemicals on a stand-alone basis. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI consortium assessed the reliability of eight in vitro test methods and computational models and established a tiered-testing strategy. One of the selected assays was Bovine Corneal Opacity and Permeability (BCOP). In this project, the same corneas were used for measurement of opacity using the OP-KIT and the Laser Light-Based Opacitometer (LLBO), and for histopathological analysis. The results show that the accuracy of the BCOP OP-KIT in identifying Cat 1 chemicals was 73.8% while the accuracy was 86.3% for No Cat chemicals. BCOP OP-KIT false negative results were often related to an in vivo classification driven by conjunctival effects only. For the BCOP LLBO, the accuracy in identifying Cat 1 chemicals was 74.4% versus 88.8% for No Cat chemicals. The BCOP LLBO seems very promising for the identification of No Cat liquids but less so for the identification of solids. Histopathology as an additional endpoint to the BCOP test method does not reduce the false negative rate substantially for in vivo Cat 1 chemicals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    PubMed

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE

  8. Accuracy of maxillary positioning after standard and inverted orthognathic sequencing.

    PubMed

    Ritto, Fabio G; Ritto, Thiago G; Ribeiro, Danilo Passeado; Medeiros, Paulo José; de Moraes, Márcio

    2014-05-01

    This study aimed to compare the accuracy of maxillary positioning after bimaxillary orthognathic surgery, using 2 sequences. A total of 80 cephalograms (40 preoperative and 40 postoperative) from 40 patients were analyzed. Group 1 included radiographs of patients submitted to conventional sequence, whereas group 2 patients were submitted to inverted sequence. The final position of the maxillary central incisor was obtained after vertical and horizontal measurements of the tracings, and it was compared with what had been planned. The null hypothesis, which stated that there would be no difference between the groups, was tested. After applying the Welch t test for comparison of mean differences between maxillary desired and achieved position, considering a statistical significance of 5% and a 2-tailed test, the null hypothesis was not rejected (P > .05). Thus, there was no difference in the accuracy of maxillary positioning between groups. Conventional and inverted sequencing proved to be reliable in positioning the maxilla after LeFort I osteotomy in bimaxillary orthognathic surgeries. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Peristalticity-driven banded chemical garden

    NASA Astrophysics Data System (ADS)

    Pópity-Tóth, É.; Schuszter, G.; Horváth, D.; Tóth, Á.

    2018-05-01

    Complex structures in nature are often formed by self-assembly. In order to mimic the formation, to enhance the production, or to modify the structures, easy-to-use methods are sought to couple engineering and self-assembly. Chemical-garden-like precipitation reactions are frequently used to study such couplings because of the intrinsic chemical and hydrodynamic interplays. In this work, we present a simple method of applying periodic pressure fluctuations given by a peristaltic pump which can be used to achieve regularly banded precipitate membranes in the copper-phosphate system.

  10. Enhancing chemistry problem-solving achievement using problem categorization

    NASA Astrophysics Data System (ADS)

    Bunce, Diane M.; Gabel, Dorothy L.; Samuel, John V.

    The enhancement of chemistry students' skill in problem solving through problem categorization is the focus of this study. Twenty-four students in a freshman chemistry course for health professionals are taught how to solve problems using the explicit method of problem solving (EMPS) (Bunce & Heikkinen, 1986). The EMPS is an organized approach to problem analysis which includes encoding the information given in a problem (Given, Asked For), relating this to what is already in long-term memory (Recall), and planning a solution (Overall Plan) before a mathematical solution is attempted. In addition to the EMPS training, treatment students receive three 40-minute sessions following achievement tests in which they are taught how to categorize problems. Control students use this time to review the EMPS solutions of test questions. Although problem categorization is involved in one section of the EMPS (Recall), treatment students who received specific training in problem categorization demonstrate significantly higher achievement on combination problems (those problems requiring the use of more than one chemical topic for their solution) at (p = 0.01) than their counterparts. Significantly higher achievement for treatment students is also measured on an unannounced test (p = 0.02). Analysis of interview transcripts of both treatment and control students illustrates a Rolodex approach to problem solving employed by all students in this study. The Rolodex approach involves organizing equations used to solve problems on mental index cards and flipping through them, matching units given when a new problem is to be solved. A second phenomenon observed during student interviews is the absence of a link in the conceptual understanding of the chemical concepts involved in a problem and the problem-solving skills employed to correctly solve problems. This study shows that explicit training in categorization skills and the EMPS can lead to higher achievement in complex problem

  11. Analysis of deformable image registration accuracy using computational modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that
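
    The displacement-error metric used in such comparisons can be computed in a few lines; the sketch below (illustrative, with synthetic displacement fields) takes the mean Euclidean distance between a registration's displacement vector field and a benchmark field, optionally restricted to a region mask.

      # Illustrative accuracy metric: mean Euclidean error between a registration's
      # displacement vector field and a benchmark field, optionally within a mask.
      import numpy as np

      def mean_displacement_error(dvf_test, dvf_benchmark, mask=None):
          """dvf_*: (X, Y, Z, 3) arrays in mm; mask: optional boolean (X, Y, Z)."""
          err = np.linalg.norm(dvf_test - dvf_benchmark, axis=-1)
          return err[mask].mean() if mask is not None else err.mean()

      rng = np.random.default_rng(0)
      benchmark = rng.normal(size=(20, 20, 20, 3))            # stand-in FEM solution
      registration = benchmark + rng.normal(0.0, 0.8, size=benchmark.shape)
      print(f"mean error: {mean_displacement_error(registration, benchmark):.2f} mm")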

  12. Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency

    NASA Astrophysics Data System (ADS)

    Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu

    2018-03-01

    Various computational approaches from rule-based to model-based methods exist to place Sub-Resolution Assist Features (SRAF) in order to increase process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by such related grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural network generated SRAF guidance map is then used to place SRAF on full-chip. This is different from our existing full-chip MB-SRAF approach which utilizes a SRAF guidance map (SGM) of mask sensitivity to improve the contrast of

  13. Accuracy of Perceptual and Acoustic Methods for the Detection of Inspiratory Loci in Spontaneous Speech

    PubMed Central

    Wang, Yu-Tsai; Nip, Ignatius S. B.; Green, Jordan R.; Kent, Ray D.; Kent, Jane Finley; Ullman, Cara

    2012-01-01

    The current study investigates the accuracy of perceptually and acoustically determined inspiratory loci in spontaneous speech for the purpose of identifying breath groups. Sixteen participants were asked to talk about simple topics in daily life at a comfortable speaking rate and loudness while connected to a pneumotach and audio microphone. The locations of inspiratory loci were determined based on the aerodynamic signal, which served as a reference for loci identified perceptually and acoustically. Signal detection theory was used to evaluate the accuracy of the methods. The results showed that the greatest accuracy in pause detection was achieved (1) perceptually based on the agreement between at least 2 of the 3 judges; (2) acoustically using a pause duration threshold of 300 ms. In general, the perceptually-based method was more accurate than was the acoustically-based method. Inconsistencies among perceptually-determined, acoustically-determined, and aerodynamically-determined inspiratory loci for spontaneous speech should be weighed in selecting a method of breath-group determination. PMID:22362007
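
    A hedged sketch of an acoustically based detector of the kind described, in which silent stretches longer than a pause-duration threshold (here 300 ms) are flagged as candidate inspiratory loci; the silence level, sampling rate, and toy signal are assumptions.

      # Sketch of a threshold-based pause detector: silent stretches of the amplitude
      # envelope longer than min_pause_s are reported as candidate inspiratory loci.
      import numpy as np

      def detect_pauses(envelope, fs, silence_level=0.02, min_pause_s=0.300):
          silent = envelope < silence_level
          pauses, start = [], None
          for i, s in enumerate(silent):
              if s and start is None:
                  start = i
              elif not s and start is not None:
                  if (i - start) / fs >= min_pause_s:
                      pauses.append((start / fs, i / fs))     # onset/offset in seconds
                  start = None
          if start is not None and (len(silent) - start) / fs >= min_pause_s:
              pauses.append((start / fs, len(silent) / fs))
          return pauses

      fs = 100                                                # envelope sampling rate (Hz)
      env = np.concatenate([np.full(200, 0.2), np.full(40, 0.0), np.full(200, 0.2)])
      print(detect_pauses(env, fs))                           # -> [(2.0, 2.4)]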

  14. Moderate-to-Vigorous Physical Activity, Indices of Cognitive Control, and Academic Achievement in Preadolescents.

    PubMed

    Pindus, Dominika M; Drollette, Eric S; Scudder, Mark R; Khan, Naiman A; Raine, Lauren B; Sherar, Lauren B; Esliger, Dale W; Kramer, Arthur F; Hillman, Charles H

    2016-06-01

    To assess whether preadolescents' objectively measured moderate-to-vigorous physical activity (MVPA) is associated with cognitive control and academic achievement, independent of aerobic fitness. A sample of 74 children (Mean age = 8.64 years, SD = .58, 46% girls) were included in the analyses. Daily MVPA (min/d) was measured over 7 days using ActiGraph wGT3X+ accelerometer. Aerobic fitness was measured using a maximal graded exercise test and expressed as maximal oxygen uptake (mL*kg(-1)*min(-1)). Inhibitory control was measured with a modified Eriksen flanker task (reaction time and accuracy), and working memory with an Operation Span Task (accuracy scores). Academic achievement (in reading, mathematics, and spelling) was expressed as standardized scores on the Kaufman Test of Educational Achievement. The relationships were assessed using hierarchical regression models adjusting for aerobic fitness and other covariates. No significant associations were found between MVPA and inhibition, working memory, or academic achievement. Aerobic fitness was positively associated with inhibitory control (P = .02) and spelling (P = .04) but not with other cognitive or academic variables (all P > .05). Aerobic fitness, rather than daily MVPA, is positively associated with childhood ability to manage perceptual interference and spelling. Further research into the associations between objectively measured MVPA and cognitive and academic outcomes in children while controlling for important covariates is needed. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  16. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2017-04-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  17. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU.

    PubMed

    Tian, Zhen; Jiang, Steve B; Jia, Xun

    2017-04-21

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  18. Unifying Speed-Accuracy Trade-Off and Cost-Benefit Trade-Off in Human Reaching Movements.

    PubMed

    Peternel, Luka; Sigaud, Olivier; Babič, Jan

    2017-01-01

    Two basic trade-offs interact while our brain decides how to move our body. First, with the cost-benefit trade-off, the brain trades between the importance of moving faster toward a target that is more rewarding and the increased muscular cost resulting from a faster movement. Second, with the speed-accuracy trade-off, the brain trades between how accurate the movement needs to be and the time it takes to achieve such accuracy. So far, these two trade-offs have been well studied in isolation, despite their obvious interdependence. To overcome this limitation, we propose a new model that is able to simultaneously account for both trade-offs. The model assumes that the central nervous system maximizes the expected utility resulting from the potential reward and the cost over the repetition of many movements, taking into account the probability to miss the target. The resulting model is able to account for both the speed-accuracy and the cost-benefit trade-offs. To validate the proposed hypothesis, we confront the properties of the computational model to data from an experimental study where subjects have to reach for targets by performing arm movements in a horizontal plane. The results qualitatively show that the proposed model successfully accounts for both cost-benefit and speed-accuracy trade-offs.
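
    A toy numerical illustration of how the two trade-offs can be combined in a single expected-utility calculation (this is a simplified stand-in, not the authors' exact model): endpoint variability grows with average speed, reward is discounted with movement time, and effort cost decreases for slower movements.

      # Toy expected-utility calculation: endpoint SD grows with average speed
      # (signal-dependent noise), reward is discounted with time, effort falls with
      # slower movements. All constants are assumed values, not fitted parameters.
      import numpy as np
      from scipy.special import erf

      D, w = 0.30, 0.01                      # movement distance and target half-width (m)
      reward, k, gamma, alpha = 100.0, 0.02, 1.0, 0.5

      T = np.linspace(0.2, 2.0, 500)         # candidate movement times (s)
      sigma = k * D / T                      # endpoint SD rises with average speed
      p_hit = erf(w / (np.sqrt(2.0) * sigma))
      utility = p_hit * reward * np.exp(-gamma * T) - alpha / T
      print(f"optimal movement time: {T[np.argmax(utility)]:.2f} s")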

  19. Moisture Damage Modeling in Lime and Chemically Modified Asphalt at Nanolevel Using Ensemble Computational Intelligence

    PubMed Central

    2018-01-01

    This paper measures the adhesion/cohesion force among asphalt molecules at the nanoscale using atomic force microscopy (AFM) and models the moisture damage by applying state-of-the-art Computational Intelligence (CI) techniques (e.g., artificial neural network (ANN), support vector regression (SVR), and an Adaptive Neuro Fuzzy Inference System (ANFIS)). Various combinations of lime and chemicals as well as dry and wet environments are used to produce different asphalt samples. The parameters that were varied to generate different asphalt samples and measure the corresponding adhesion/cohesion forces are the percentage of antistripping agents (e.g., Lime and Unichem), AFM tip K values, and AFM tip types. The CI methods are trained to model the adhesion/cohesion forces given the variation in values of the above parameters. To achieve enhanced performance, statistical methods such as the average, weighted average, and regression of the outputs generated by the CI techniques are used. The experimental results show that, of the three individual CI methods, ANN can model moisture damage to lime- and chemically modified asphalt better than the other two CI techniques for both wet and dry conditions. Moreover, the ensemble of CI along with statistical measurement provides better accuracy than any of the individual CI techniques. PMID:29849551
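
    A minimal sketch of the ensemble step, with assumed data and stand-in models (a multilayer perceptron and support vector regression; ANFIS is omitted): individual test-set predictions are combined by a simple and a weighted average and compared with the individual models. A regression-based combination, as mentioned in the abstract, would need its own validation split to avoid fitting on the test data.

      # Sketch of the ensemble step on synthetic data: two stand-in regressors are
      # trained, and their test-set predictions are combined by simple and weighted
      # averaging. Weights and data are assumptions.
      import numpy as np
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 4))          # e.g. % antistripping agent, tip K value, ...
      y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + rng.normal(0.0, 0.2, 300)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      models = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
                SVR(kernel="rbf", C=10.0)]
      preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in models])
      combos = {"ANN": preds[:, 0], "SVR": preds[:, 1],
                "average": preds.mean(axis=1), "weighted": preds @ np.array([0.6, 0.4])}
      for name, p in combos.items():
          print(name, round(mean_absolute_error(y_te, p), 3))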

  20. High-accuracy contouring using projection moiré

    NASA Astrophysics Data System (ADS)

    Sciammarella, Cesar A.; Lamberti, Luciano; Sciammarella, Federico M.

    2005-09-01

    Shadow and projection moiré are the oldest forms of moiré to be used in actual technical applications. In spite of this fact and the extensive number of papers that have been published on this topic, the use of shadow moiré as an accurate tool that can compete with alternative devices poses many problems that go to the very essence of the mathematical models used to obtain contour information from fringe pattern data. In this paper, some recent developments of the projection moiré method are presented. Comparisons between the results obtained with the projection method and the results obtained by mechanical devices that operate with contact probes are presented. These results show that the use of projection moiré makes it possible to achieve the same accuracy that current mechanical touch probe devices can provide.

  1. Using meta-analysis to inform the design of subsequent studies of diagnostic test accuracy.

    PubMed

    Hinchliffe, Sally R; Crowther, Michael J; Phillips, Robert S; Sutton, Alex J

    2013-06-01

    An individual diagnostic accuracy study rarely provides enough information to make conclusive recommendations about the accuracy of a diagnostic test, particularly when the study is small. Meta-analysis methods provide a way of combining information from multiple studies, reducing uncertainty in the result and hopefully providing substantial evidence to underpin reliable clinical decision-making. Very few investigators consider any sample size calculations when designing a new diagnostic accuracy study. However, it is important to consider the number of subjects in a new study in order to achieve a precise measure of accuracy. Sutton et al. have suggested previously that when designing a new therapeutic trial, it could be more beneficial to consider the power of the updated meta-analysis including the new trial rather than of the new trial itself. The methodology involves simulating new studies for a range of sample sizes and estimating the power of the updated meta-analysis with each new study added. Plotting the power values against the range of sample sizes allows the clinician to make an informed decision about the sample size of a new trial. This paper extends this approach from the trial setting and applies it to diagnostic accuracy studies. Several meta-analytic models are considered including bivariate random effects meta-analysis that models the correlation between sensitivity and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
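
    A deliberately simplified sketch of the simulation idea (univariate fixed-effect pooling of logit-sensitivities rather than the full bivariate random-effects model; the prior-study counts, true sensitivity, and target precision are assumptions): for each candidate sample size, a new study is simulated, the pooled estimate is updated, and the fraction of simulations in which the updated confidence interval is narrower than a target width is recorded.

      # Simplified simulation: how often does adding a new study of size n_new make
      # the pooled sensitivity CI narrower than a target width? (Fixed-effect pooling
      # of logit-sensitivities; prior-study counts and targets are assumptions.)
      import numpy as np

      rng = np.random.default_rng(0)
      prior_tp = np.array([45, 30, 60])          # true positives in existing studies
      prior_n = np.array([50, 36, 70])           # diseased subjects in existing studies
      true_sens, target_width, n_sim = 0.90, 0.10, 2000

      def pooled_ci_width(tp, n):
          p = (tp + 0.5) / (n + 1.0)                               # continuity correction
          logit = np.log(p / (1.0 - p))
          var = 1.0 / (tp + 0.5) + 1.0 / (n - tp + 0.5)
          w = 1.0 / var
          pooled, se = (w * logit).sum() / w.sum(), np.sqrt(1.0 / w.sum())
          expit = lambda x: 1.0 / (1.0 + np.exp(-x))
          return expit(pooled + 1.96 * se) - expit(pooled - 1.96 * se)

      for n_new in (20, 50, 100, 200):
          hits = sum(pooled_ci_width(np.append(prior_tp, rng.binomial(n_new, true_sens)),
                                     np.append(prior_n, n_new)) < target_width
                     for _ in range(n_sim))
          print(f"n_new={n_new:4d}: P(CI width < {target_width}) = {hits / n_sim:.2f}")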

  2. Factors Governing Surface Form Accuracy In Diamond Machined Components

    NASA Astrophysics Data System (ADS)

    Myler, J. K.; Page, D. A.

    1988-10-01

    Manufacturing methods for diamond machined optical surfaces, for application at infrared wavelengths, require that a new set of criteria be recognised for the specification of surface form. Appropriate surface form parameters are discussed with particular reference to an XY cartesian geometry CNC machine. Methods for reducing surface form errors in diamond machining are discussed for certain areas such as tool wear, tool centring, and the fixturing of the workpiece. Examples of achievable surface form accuracy are presented. Traditionally, optical surfaces have been produced by use of random polishing techniques using polishing compounds and lapping tools. For lens manufacture, the simplest surface which could be created corresponded to a sphere. The sphere is a natural outcome of a random grinding and polishing process. The measurement of surface form accuracy would most commonly be performed using a contact test gauge plate, polished to a sphere of known radius of curvature. QA would simply be achieved using a diffuse monochromatic source and looking for residual deviations between the polished surface and the test plate. The specifications governing the manufacture of surfaces using these techniques would call for the accuracy to which the generated surface should match the test plate, as defined by spherical deviations from the required curvature and a non-spherical astigmatic error. Consequently, optical design software has tolerancing routines which specifically allow the designer to assess the influence of spherical error and astigmatic error on the optical performance. The creation of general aspheric surfaces is not so straightforward using conventional polishing techniques, since the surface profile is non-spherical and is a good approximation to a power series. For infrared applications (λ = 8-12 μm), numerically controlled single point diamond turning is an alternative manufacturing technology capable of creating aspheric profiles as well as

  3. Overlay accuracy on a flexible web with a roll printing process based on a roll-to-roll system.

    PubMed

    Chang, Jaehyuk; Lee, Sunggun; Lee, Ki Beom; Lee, Seungjun; Cho, Young Tae; Seo, Jungwoo; Lee, Sukwon; Jo, Gugrae; Lee, Ki-yong; Kong, Hyang-Shik; Kwon, Sin

    2015-05-01

    For high-quality flexible devices from printing processes based on Roll-to-Roll (R2R) systems, overlay alignment during the patterning of each functional layer poses a major challenge. The reason is because flexible substrates have a relatively low stiffness compared with rigid substrates, and they are easily deformed during web handling in the R2R system. To achieve a high overlay accuracy for a flexible substrate, it is important not only to develop web handling modules (such as web guiding, tension control, winding, and unwinding) and a precise printing tool but also to control the synchronization of each unit in the total system. A R2R web handling system and reverse offset printing process were developed in this work, and an overlay between the 1st and 2nd layers of ±5μm on a 500 mm-wide film was achieved at a σ level of 2.4 and 2.8 (x and y directions, respectively) in a continuous R2R printing process. This paper presents the components and mechanisms used in reverse offset printing based on a R2R system and the printing results including positioning accuracy and overlay alignment accuracy.

  4. Passive fit and accuracy of three dental implant impression techniques.

    PubMed

    Al Quran, Firas A; Rashdan, Bashar A; Zomar, AbdelRahman A Abu; Weiner, Saul

    2012-02-01

    To reassess the accuracy of three impression techniques relative to the passive fit of the prosthesis. An edentulous maxillary cast was fabricated in epoxy resin with four dental implants embedded and secured with heat-cured acrylic resin. Three techniques were tested: closed tray, open tray nonsplinted, and open tray splinted. One light-cured custom acrylic tray was fabricated for each impression technique, and transfer copings were attached to the implants. Fifteen impressions for each technique were prepared with medium-bodied consistency polyether. Subsequently, the impressions were poured in type IV die stone. The distances between the implants were measured using a digital micrometer. The statistical analysis of the data was performed with ANOVA and a one-sample t test at a 95% confidence interval. The lowest mean difference in dimensional accuracy was found within the direct (open tray) splinted technique. Also, the one-sample t test showed that the direct splinted technique has the least statistically significant difference from the direct nonsplinted and indirect (closed tray) techniques. All discrepancies were less than 100 μm. Within the limitations of this study, the best accuracy of the definitive prosthesis was achieved when the impression copings were splinted with autopolymerized acrylic resin, sectioned, and rejoined. However, the errors associated with all of these techniques were less than 100 μm, and based on the current definitions of passive fit, they all would be clinically acceptable.

  5. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  6. The inorganic side of chemical biology.

    PubMed

    Lippard, Stephen J

    2006-10-01

    Bioinorganic chemistry remains a vibrant discipline at the interface of chemistry and the biological sciences. Metal ions function in numerous metalloenzymes, are incorporated into pharmaceuticals and imaging agents, and inspire the synthesis of catalysts used to achieve many chemical transformations.

  7. ADMET Evaluation in Drug Discovery. 18. Reliable Prediction of Chemical-Induced Urinary Tract Toxicity by Boosting Machine Learning Approaches.

    PubMed

    Lei, Tailong; Sun, Huiyong; Kang, Yu; Zhu, Feng; Liu, Hui; Zhou, Wenfang; Wang, Zhe; Li, Dan; Li, Youyong; Hou, Tingjun

    2017-11-06

    Xenobiotic chemicals and their metabolites are mainly excreted out of our bodies by the urinary tract through the urine. Chemical-induced urinary tract toxicity is one of the main causes of failure during drug development, and it is a common adverse event for medications, natural supplements, and environmental chemicals. Despite its importance, there are only a few in silico models for assessing urinary tract toxicity for a large number of compounds with diverse chemical structures. Here, we developed a series of qualitative and quantitative structure-activity relationship (QSAR) models for predicting urinary tract toxicity. In our study, the recursive feature elimination method incorporated with random forests (RFE-RF) was used for dimension reduction, and then eight machine learning approaches were used for QSAR modeling, i.e., relevance vector machine (RVM), support vector machine (SVM), regularized random forest (RRF), C5.0 trees, eXtreme gradient boosting (XGBoost), AdaBoost.M1, SVM boosting (SVMBoost), and RVM boosting (RVMBoost). For building classification models, the synthetic minority oversampling technique was used to handle the imbalanced data set problem. Among all the machine learning approaches, SVMBoost based on the RBF kernel achieves both the best quantitative (qext2 = 0.845) and qualitative predictions for the test set (MCC of 0.787, AUC of 0.893, sensitivity of 89.6%, specificity of 94.1%, and global accuracy of 90.8%). The application domains were then analyzed, and all of the tested chemicals fall within the application domain coverage. We also examined the structure features of the chemicals with large prediction errors. In brief, both the regression and classification models developed by the SVMBoost approach have reliable prediction capability for assessing chemical-induced urinary tract toxicity.
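
    A hedged sketch of the modeling workflow described, simplified relative to the paper (a single RBF-kernel SVM stands in for boosted SVMs) and built on synthetic data; it assumes scikit-learn plus the imbalanced-learn package for SMOTE.

      # Simplified workflow on synthetic data: RFE with a random forest for descriptor
      # reduction, SMOTE for class imbalance, and an RBF-kernel SVM classifier.
      # Requires scikit-learn and imbalanced-learn; a single SVM stands in for SVMBoost.
      import numpy as np
      from imblearn.over_sampling import SMOTE
      from imblearn.pipeline import Pipeline
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(400, 200))                   # hypothetical molecular descriptors
      y = (rng.random(400) < 0.2).astype(int)           # imbalanced toxicity labels

      pipe = Pipeline([
          ("rfe", RFE(RandomForestClassifier(n_estimators=200, random_state=0),
                      n_features_to_select=30, step=20)),
          ("smote", SMOTE(random_state=0)),
          ("svm", SVC(kernel="rbf", random_state=0)),
      ])
      print("CV AUC:", cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean().round(3))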

  8. Increasing Efficiency and Effectiveness in Predicting Second-Grade Achievement Using a Kindergarten Screening Battery.

    ERIC Educational Resources Information Center

    Gordon, Roberta R.

    1988-01-01

    Investigation into the most effective use of a kindergarten screening battery to predict second-grade reading and mathematics achievement found that a combination of 10 readiness subtests resulted in the same degree of accuracy as that obtained using the entire battery. However, neither version was accurate enough to be useful. (Author/CB)

  9. Dependence of Dynamic Modeling Accuracy on Sensor Measurements, Mass Properties, and Aircraft Geometry

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    The NASA Generic Transport Model (GTM) nonlinear simulation was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of identified parameters in mathematical models describing the flight dynamics and determined from flight data. Measurements from a typical flight condition and system identification maneuver were systematically and progressively deteriorated by introducing noise, resolution errors, and bias errors. The data were then used to estimate nondimensional stability and control derivatives within a Monte Carlo simulation. Based on these results, recommendations are provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using additional flight conditions and parameter estimation methods, as well as a nonlinear flight simulation of the General Dynamics F-16 aircraft, were compared with these recommendations

  10. Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol

    DOE PAGES

    Kale, Seyit; Sode, Olaseni; Weare, Jonathan; ...

    2014-11-07

    Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and the Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by severalfold. In conclusion, the approach also shows promise for free energy calculations when thermal noise can be controlled.

  11. Feasibility and accuracy of nasal alar pulse oximetry.

    PubMed

    Morey, T E; Rice, M J; Vasilopoulos, T; Dennis, D M; Melker, R J

    2014-06-01

    The nasal ala is an attractive site for pulse oximetry because of perfusion by branches of the external and internal carotid arteries. We evaluated the accuracy of a novel pulse oximetry sensor custom designed for the nasal ala. After IRB approval, healthy non-smoking subjects [n=12; aged 28 (23-41) yr; 6M/6F] breathed hypoxic mixtures of fresh gas via a facemask to achieve oxyhaemoglobin saturations of 70-100% measured by traditional co-oximetry from radial artery samples. Concurrent alar and finger pulse oximetry values were measured using probes designed for these sites. Data were analysed using the Bland-Altman method for multiple observations per subject. Bias, precision, and accuracy root mean square error (ARMS) over a range of 70-100% were significantly better for the alar probe compared with a standard finger probe. The mean bias for the alar and finger probes was 0.73% and 1.90% (P<0.001), respectively, with corresponding precision values of 1.65 and 1.83 (P=0.015) and ARMS values of 1.78% and 2.72% (P=0.047). The coefficients of determination were 0.96 and 0.96 for the alar and finger probes, respectively. The within/between-subject variations for the alar and finger probes were 1.14/1.57% and 1.87/1.47%, respectively. The limits of agreement were 3.96/-2.50% and 5.48/-1.68% for the alar and finger probes, respectively. Nasal alar pulse oximetry is feasible and demonstrates accurate pulse oximetry values over a range of 70-100%. The alar probe demonstrated greater accuracy compared with a conventional finger pulse oximeter.
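
    As a toy illustration of the agreement statistics reported above (bias, precision, ARMS and limits of agreement), the short sketch below computes them for a handful of made-up paired readings; it is a simplified single-level calculation, whereas the study uses the Bland-Altman method for multiple observations per subject.

        # Illustrative sketch only: bias, precision, ARMS and limits of agreement.
        import numpy as np

        spo2 = np.array([92.0, 88.5, 79.0, 95.5, 84.0, 73.5])  # pulse oximeter readings (%)
        sao2 = np.array([91.0, 87.0, 78.5, 94.0, 83.5, 72.0])  # arterial co-oximetry (%)

        diff = spo2 - sao2
        bias = diff.mean()                      # mean difference
        precision = diff.std(ddof=1)            # standard deviation of the differences
        arms = np.sqrt(np.mean(diff ** 2))      # accuracy root mean square error
        loa = (bias - 1.96 * precision, bias + 1.96 * precision)  # limits of agreement

        print(f"bias={bias:.2f}%  precision={precision:.2f}  ARMS={arms:.2f}%  LoA={loa}")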

  12. Accuracy of genomic breeding values for meat tenderness in Polled Nellore cattle.

    PubMed

    Magnabosco, C U; Lopes, F B; Fragoso, R C; Eifert, E C; Valente, B D; Rosa, G J M; Sainz, R D

    2016-07-01

    .22 (Bayes Cπ) to 0.25 (Bayes B). When preselecting SNP based on GWAS results, the highest correlation (0.27) between WBSF and the genomic breeding value was achieved using the Bayesian LASSO model with 15,030 (3%) markers. Although this study used relatively few animals, the design of the segregating population ensured wide genetic variability for meat tenderness, which was important to achieve acceptable accuracy of genomic prediction. Although all models showed similar levels of prediction accuracy, some small advantages were observed with the Bayes B approach when higher numbers of markers were preselected based on their p-values resulting from a GWAS analysis.

  13. Smart Device-Supported BDS/GNSS Real-Time Kinematic Positioning for Sub-Meter-Level Accuracy in Urban Location-Based Services.

    PubMed

    Wang, Liang; Li, Zishen; Zhao, Jiaojiao; Zhou, Kai; Wang, Zhiyu; Yuan, Hong

    2016-12-21

    Using mobile smart devices to provide urban location-based services (LBS) with sub-meter-level accuracy (around 0.5 m) is a major application field for future global navigation satellite system (GNSS) development. Real-time kinematic (RTK) positioning, which is a widely used GNSS-based positioning approach, can improve the accuracy from about 10-20 m (achieved by the standard positioning services) to about 3-5 cm based on the geodetic receivers. In using the smart devices to achieve positioning with sub-meter-level accuracy, a feasible solution of combining the low-cost GNSS module and the smart device is proposed in this work and a user-side GNSS RTK positioning software was developed from scratch based on the Android platform. Its real-time positioning performance was validated by BeiDou Navigation Satellite System/Global Positioning System (BDS/GPS) combined RTK positioning under the conditions of a static and kinematic (the velocity of the rover was 50-80 km/h) mode in a real urban environment with a SAMSUNG Galaxy A7 smartphone. The results show that the fixed-rates of ambiguity resolution (the proportion of epochs of ambiguities fixed) for BDS/GPS combined RTK in the static and kinematic tests were about 97% and 90%, respectively, and the average positioning accuracies (RMS) were better than 0.15 m (horizontal) and 0.25 m (vertical) for the static test, and 0.30 m (horizontal) and 0.45 m (vertical) for the kinematic test.

  14. Study of pulsations of chemically peculiar A stars

    NASA Astrophysics Data System (ADS)

    Sachkov, M. E.

    2014-01-01

    Rapidly oscillating chemically peculiar A stars (roAp) pulsate in high-overtone, low degree p-modes and form a sub-group of chemically peculiar magnetic A stars (Ap). Until recently, the classical asteroseismic research, i.e., frequency analysis, of these stars was based on photometric observations both ground-based and space-based. Significant progress has been achieved by obtaining uninterrupted, ultra-high precision data from the MOST, COROT, and Kepler satellites. Over the last ten years, a real breakthrough was achieved in the study of roAp stars due to the time-resolved, high spectral resolution spectroscopic observations. Unusual pulsational characteristics of these stars, caused by the interaction between propagating pulsation waves and strong stratification of chemical elements, provide an opportunity to study the upper roAp star atmosphere in more detail than is possible for any star but the Sun, using spectroscopic data. In this paper the results of recent pulsation studies of these stars are reviewed.

  15. Tailoring the surface chemical bond states of the NbN films by doping Ag: Achieving hard hydrophobic surface

    NASA Astrophysics Data System (ADS)

    Ren, Ping; Zhang, Kan; Du, Suxuan; Meng, Qingnan; He, Xin; Wang, Shuo; Wen, Mao; Zheng, Weitao

    2017-06-01

    Robust hydrophobic surfaces based on ceramics capable of withstanding harsh conditions such as abrasion, erosion and high temperature are required in a broad range of applications. Metal cations with coordinative saturation or low electronegativity are commonly chosen to achieve intrinsically hydrophobic ceramics by reducing Lewis acidity, and thus the ceramic systems are limited. In this work, we present a different picture: a robust hydrophobic surface with high hardness (≥20 GPa) can be fabricated by doping Ag atoms into the intrinsically hydrophilic ceramic film NbN by reactive co-sputtering. The transition of wettability from hydrophilic to hydrophobic of Nb-Ag-N films induced by Ag doping results from the appearance of Ag2O groups on the film surfaces through self-oxidation, because Ag cations (Ag+) in Ag2O have a filled-shell (4d(10)5s(0)) electronic structure with coordinative saturation and thus no tendency to interact with water. The results show that the surface Ag2O beneficial for hydrophobicity comes from the solute Ag atoms rather than precipitated metallic Ag; more Ag atoms incorporated into the Nb sublattice further improve the hydrophobicity, whereas the precipitation of Ag nanoclusters worsens it. The present work opens a window for fabricating robust hydrophobic surfaces by tailoring surface chemical bond states through doping Ag into transition metal nitrides.

  16. Chemical entity recognition in patents by combining dictionary-based and statistical approaches.

    PubMed

    Akhondi, Saber A; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F H; Hettne, Kristina M; van Mulligen, Erik M; Kors, Jan A

    2016-01-01

    We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents. © The Author(s) 2016. Published by Oxford University Press.

  17. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.

    PubMed

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-10-13

    Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, is developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE), working through a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, and in contrast to the available methods, we employ a novel active learning criterion tailored to the particularity of chemical processes, which combines a Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than a passive manner. That is, models rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provides significant improvement in accuracy and false positive rate with less labeled chemical sensor data through further active learning, compared with existing methods.
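
    The snippet below is a minimal, hypothetical sketch of the Best vs. Second Best query step mentioned above: rank unlabeled samples by the gap between the two largest predicted class probabilities and send the most ambiguous ones for labeling. A plain logistic regression stands in for the paper's SDAE-based deep network, and the data are synthetic.

        # Illustrative sketch only: a BvSB (margin-based) active learning query.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X_lab, y_lab = make_classification(n_samples=200, n_features=20, n_classes=3,
                                           n_informative=6, random_state=0)
        X_pool, _ = make_classification(n_samples=1000, n_features=20, n_classes=3,
                                        n_informative=6, random_state=1)

        clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        proba = clf.predict_proba(X_pool)

        top2 = np.sort(proba, axis=1)[:, -2:]       # second-best and best class probabilities
        margin = top2[:, 1] - top2[:, 0]            # small margin = ambiguous sample
        query_idx = np.argsort(margin)[:10]         # 10 most informative samples to label next
        print("samples to send for labeling:", query_idx)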

  18. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Brunner, Thomas A.; Gentile, Nicholas A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain replicated and domain decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
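
    The toy example below illustrates the underlying issue and the integer-tally idea in miniature: a plain double-precision sum changes when the summation order changes, whereas rounding each value onto a fixed-point grid and summing exact integers gives an order-independent result. The scaling factor is an arbitrary illustrative choice, not the precision used in the cited work.

        # Illustrative sketch only: order-dependent floating-point sums vs. an integer tally.
        import random

        random.seed(1)
        values = [random.uniform(-1.0, 1.0) * 10 ** random.randint(-8, 8) for _ in range(100000)]
        shuffled = values[:]
        random.shuffle(shuffled)

        # Double-precision sums usually differ slightly when the order changes.
        print(sum(values) - sum(shuffled))

        # Fixed-point tally: round each value to a common grid and sum exact integers.
        SCALE = 2 ** 40
        def fixed_point_sum(xs):
            return sum(int(round(x * SCALE)) for x in xs) / SCALE

        print(fixed_point_sum(values) - fixed_point_sum(shuffled))   # exactly 0.0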

  19. Increased genomic prediction accuracy in wheat breeding using a large Australian panel.

    PubMed

    Norman, Adam; Taylor, Julian; Tanaka, Emi; Telfer, Paul; Edwards, James; Martinant, Jean-Pierre; Kuchel, Haydn

    2017-12-01

    Genomic prediction accuracy within a large panel was found to be substantially higher than that previously observed in smaller populations, and also higher than QTL-based prediction. In recent years, genomic selection for wheat breeding has been widely studied, but this has typically been restricted to population sizes under 1000 individuals. To assess its efficacy in germplasm representative of commercial breeding programmes, we used a panel of 10,375 Australian wheat breeding lines to investigate the accuracy of genomic prediction for grain yield, physical grain quality and other physiological traits. To achieve this, the complete panel was phenotyped in a dedicated field trial and genotyped using a custom Axiom (Affymetrix) SNP array. A high-quality consensus map was also constructed, allowing the linkage disequilibrium present in the germplasm to be investigated. Using the complete SNP array, genomic prediction accuracies were found to be substantially higher than those previously observed in smaller populations, and also higher than those of prediction approaches using a finite number of selected quantitative trait loci. Multi-trait genetic correlations were also assessed at an additive and residual genetic level, identifying a negative genetic correlation between grain yield and protein as well as a positive genetic correlation between grain size and test weight.

  20. Improvement of Dimensional Accuracy of 3-D Printed Parts using an Additive/Subtractive Based Hybrid Prototyping Approach

    NASA Astrophysics Data System (ADS)

    Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.

    2017-11-01

    At present, two important processes, namely CNC machining and rapid prototyping (RP), are being used to create prototypes and functional products. Combining both additive and subtractive processes in a single platform would be advantageous. However, two important aspects need to be taken into consideration for this process hybridization. The first is the integration of two different control systems for the two processes, and the second is maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and a CNC grinding operation as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, which can be improved by a considerable margin, about 66% to 80%, if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in the dimensional accuracy, and the best accuracy is achieved with the minimum layer thickness (0.1 mm).

  1. Speed-Accuracy Trade-Off in a Trajectory-Constrained Self-Feeding Task: A Quantitative Index of Unsuppressed Motor Noise in Children With Dystonia.

    PubMed

    Lunardini, Francesca; Bertucco, Matteo; Casellato, Claudia; Bhanpuri, Nasir; Pedrocchi, Alessandra; Sanger, Terence D

    2015-10-01

    Motor speed and accuracy are both affected in childhood dystonia. Thus, deriving a speed-accuracy function is an important metric for assessing motor impairments in dystonia. Previous work in dystonia studied the speed-accuracy trade-off during point-to-point tasks. To achieve a more relevant measurement of functional abilities in dystonia, the present study investigates upper-limb kinematics and electromyographic activity of 8 children with dystonia and 8 healthy children during a trajectory-constrained child-relevant task that emulates self-feeding with a spoon and requires continuous monitoring of accuracy. The speed-accuracy trade-off is examined by changing the spoon size to create different accuracy demands. Results demonstrate that the trajectory-constrained speed-accuracy relation is present in both groups, but it is altered in dystonia in terms of increased slope and offset toward longer movement times. Findings are consistent with the hypothesis of increased signal-dependent noise in dystonia, which may partially explain the slow and variable movements observed in dystonia. © The Author(s) 2015.

  2. The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis.

    PubMed

    Wixted, John T; Wells, Gary L

    2017-05-01

    The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).

  3. Protein engineering approaches to chemical biotechnology.

    PubMed

    Chen, Zhen; Zeng, An-Ping

    2016-12-01

    Protein engineering for the improvement of the properties of biocatalysts and for the generation of novel metabolic pathways plays an increasingly important role in chemical biotechnology, which aims at the production of chemicals from biomass. Although widely used in single-enzyme catalysis processes, protein engineering has only in recent years been increasingly explored for more complex in vitro and in vivo biocatalytic processes. This review focuses on major contributions of protein engineering to chemical biotechnology in the fields of multi-enzymatic cascade catalysis and metabolic engineering. In particular, we discuss and highlight recent strategies for combining pathway design and protein engineering for the production of novel products. Copyright © 2016. Published by Elsevier Ltd.

  4. Micro Thermal and Chemical Systems for In Situ Resource Utilization on Mars

    NASA Technical Reports Server (NTRS)

    Wegeng, Robert S.; Sanders, Gerald

    2000-01-01

    Robotic sample return missions and postulated human missions to Mars can be greatly aided through the development and utilization of compact chemical processing systems that process atmospheric gases and other indigenous resources to produce hydrocarbon propellants/fuels, oxygen, and other needed chemicals. When used to reduce earth launch mass, substantial cost savings can result. Process Intensification and Process Miniaturization can simultaneously be achieved through the application of microfabricated chemical process systems, based on the rapid heat and mass transport in engineered microchannels. Researchers at NASA's Johnson Space Center (JSC) and the Department of Energy's Pacific Northwest National Laboratory (PNNL) are collaboratively developing micro thermal and chemical systems for NASA's Mission to Mars program. Preliminary results show that many standard chemical process components (e.g., heat exchangers, chemical reactors and chemical separations units) can be reduced in hardware volume without a corresponding reduction in chemical production rates. Low pressure drops are also achievable when appropriate scaling rules are applied. This paper will discuss current progress in the development of engineered microchemical systems for space and terrestrial applications, including fabrication methods, expected operating characteristics, and specific experimental results.

  5. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    PubMed Central

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
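
    One plausible reading of the marker-data transformation highlighted above is sketched below: post-multiplying the centered marker matrix by the marker correlation matrix (a crude stand-in for linkage disequilibrium information) before fitting a ridge-regression genomic prediction model. The data are simulated and the transformation is illustrative, not the exact procedure of the study.

        # Illustrative sketch only: correlation-transformed markers in ridge-based genomic prediction.
        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_lines, n_markers = 120, 500
        M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # 0/1/2 marker scores
        beta = rng.normal(0.0, 0.1, n_markers)
        y = M @ beta + rng.normal(0.0, 1.0, n_lines)                      # simulated trait values

        Mc = M - M.mean(axis=0)                    # centered marker scores
        R = np.corrcoef(Mc, rowvar=False)          # marker-by-marker correlation matrix
        Mt = Mc @ R                                # correlation-transformed predictors

        for name, X in (("raw markers", Mc), ("correlation-transformed", Mt)):
            r2 = cross_val_score(Ridge(alpha=100.0), X, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean cross-validated R^2 = {r2:.3f}")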

  6. Laser ranging with the MéO telescope to improve orbital accuracy of space debris

    NASA Astrophysics Data System (ADS)

    Hennegrave, L.; Pyanet, M.; Haag, H.; Blanchet, G.; Esmiller, B.; Vial, S.; Samain, E.; Paris, J.; Albanese, D.

    2013-05-01

    Improving the orbital accuracy of space debris is one of the major prerequisites for performing reliable collision prediction in low earth orbit. The objective is to avoid false alarms and useless maneuvers for operational satellites. This paper shows how laser ranging on debris can improve the accuracy of orbit determination. In March 2012 a joint OCA-Astrium team obtained the first laser echoes from space debris using the MéO (Métrologie Optique) telescope of the Observatoire de la Côte d'Azur (OCA), upgraded with a nanosecond pulsed laser. The experiment was conducted in full compliance with the procedures dictated by the French Civil Aviation Authorities. To perform laser ranging measurements on space debris, the laser link budget needed to be improved. The related technical developments were supported by the implementation of a 2 J pulsed laser purchased by ASTRIUM and an adapted photo detection. To achieve acquisition of the target from low accuracy orbital data such as Two-Line Elements, a 2.3-degree field of view telescope was coupled to the original MéO telescope's 3-arcmin narrow field of view. The wide field of view telescope aimed at pointing, adjusting and acquiring images of the space debris for astrometry measurements. The achieved set-up allowed laser ranging and angular measurements to be performed in parallel on several rocket stages from past launches. After a brief description of the set-up, development issues and campaigns, the paper discusses the added value of laser ranging measurements when combined with angular measurements for accurate orbit determination. Comparisons between different sets of experimental results, as well as simulation results, are given.

  7. Dimensional accuracy of aluminium extrusions in mechanical calibration

    NASA Astrophysics Data System (ADS)

    Raknes, Christian Arne; Welo, Torgeir; Paulsen, Frode

    2018-05-01

    Reducing dimensional variations in the extrusion process without increasing cost is challenging due to the nature of the process itself. An alternative approach, also from a cost perspective, is to use extruded profiles with standard tolerances and utilize downstream processes, and thus calibrate the part within tolerance limits that are not achievable directly from the extrusion process. In this paper, two mechanical calibration strategies for the extruded product are investigated, utilizing the forming lines of the manufacturer. The first calibration strategy is based on global, longitudinal stretching in combination with local bending, while the second strategy utilizes the principle of transversal stretching and local bending of the cross-section. An extruded U-profile is used to make a comparison between the two methods using numerical analyses. To provide response surfaces, the FEA program ABAQUS is used in combination with Design of Experiments (DOE). DOE is conducted with a two-level fractional factorial design to collect the appropriate data. The aim is to find the main factors affecting the dimensional accuracy of the final part obtained by the two calibration methods. The results show that both calibration strategies effectively reduce cross-sectional variations from standard extrusion tolerances. It is concluded that mechanical calibration is a viable, low-cost alternative for aluminium parts that demand high dimensional accuracy, e.g. due to fit-up or welding requirements.

  8. On the Accuracy Potential in Underwater/Multimedia Photogrammetry

    PubMed Central

    Maas, Hans-Gerd

    2015-01-01

    Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell’s Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications, where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects, which are relevant for an assessment of the accuracy potential in underwater/multimedia photogrammetry, are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor two could be determined under relatively favorable conditions. PMID:26213942

  9. Climate Change Observation Accuracy: Requirements and Economic Value

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce; Cooke, Roger; Golub, Alexander; Baize, Rosemary; Mlynczak, Martin; Lukashin, Constantin; Thome, Kurt; Shea, Yolanda; Kopp, Greg; Pilewskie, Peter

    2016-01-01

    This presentation will summarize a new quantitative approach to determining the required accuracy for climate change observations. Using this metric, most current global satellite observations struggle to meet this accuracy level. CLARREO (Climate Absolute Radiance and Refractivity Observatory) is a new satellite mission designed to resolve this challenge by achieving advances of a factor of 10 for reflected solar spectra and a factor of 3 to 5 for thermal infrared spectra. The CLARREO spectrometers can serve as SI traceable benchmarks for the Global Satellite Intercalibration System (GSICS) and greatly improve the utility of a wide range of LEO and GEO infrared and reflected solar satellite sensors for climate change observations (e.g. CERES, MODIS, VIIRS, CrIS, IASI, Landsat, etc.). A CLARREO Pathfinder mission for flight on the International Space Station is included in the U.S. President's fiscal year 2016 budget, with launch in 2019 or 2020. Providing more accurate decadal change trends can in turn lead to more rapid narrowing of key climate science uncertainties such as cloud feedback and climate sensitivity. A new study has been carried out to quantify the economic benefits of such an advance and concludes that the economic value is about 9 trillion U.S. dollars. The new value includes the cost of carbon emissions reductions.

  10. On the Accuracy Potential in Underwater/Multimedia Photogrammetry.

    PubMed

    Maas, Hans-Gerd

    2015-07-24

    Underwater applications of photogrammetric measurement techniques usually need to deal with multimedia photogrammetry aspects, which are characterized by the necessity of handling optical rays that are refracted at interfaces between optical media with different refractive indices according to Snell's Law. This so-called multimedia geometry has to be incorporated into geometric models in order to achieve correct measurement results. The paper shows a flexible yet strict geometric model for the handling of refraction effects on the optical path, which can be implemented as a module into photogrammetric standard tools such as spatial resection, spatial intersection, bundle adjustment or epipolar line computation. The module is especially well suited for applications, where an object in water is observed by cameras in air through one or more planar glass interfaces, as it allows for some simplifications here. In the second part of the paper, several aspects, which are relevant for an assessment of the accuracy potential in underwater/multimedia photogrammetry, are discussed. These aspects include network geometry and interface planarity issues as well as effects caused by refractive index variations and dispersion and diffusion under water. All these factors contribute to a rather significant degradation of the geometric accuracy potential in underwater/multimedia photogrammetry. In practical experiments, a degradation of the quality of results by a factor two could be determined under relatively favorable conditions.

  11. Chemical Vapor Detection using Single-walled Carbon Nanotubes

    DTIC Science & Technology

    2006-05-01

    1/f noise, and achieving chemical specificity. Recently, researchers have developed approaches to ... nanoscale materials, exhibit a large component of 1/f noise. Such 1/f noise is a particular concern for chemical detection, because the sensors operate at ... low frequencies. We discuss how SWNT networks can be designed to reduce the level of 1/f noise to acceptable levels. Lastly, we discuss the issue

  12. Influence of stimulated Brillouin scattering on positioning accuracy of long-range dual Mach-Zehnder interferometric vibration sensors

    NASA Astrophysics Data System (ADS)

    He, Xiangge; Xie, Shangran; Cao, Shan; Liu, Fei; Zheng, Xiaoping; Zhang, Min; Yan, Han; Chen, Guocai

    2016-11-01

    The properties of noise induced by stimulated Brillouin scattering (SBS) in long-range interferometers and their influences on the positioning accuracy of dual Mach-Zehnder interferometric (DMZI) vibration sensing systems are studied. The SBS noise is found to be white and incoherent between the two arms of the interferometer in a 1-MHz bandwidth range. Experiments on 25-km long fibers show that the root mean square error (RMSE) of the positioning accuracy is consistent with the additive noise model for the time delay estimation theory. A low-pass filter can be properly designed to suppress the SBS noise and further achieve a maximum RMSE reduction of 6.7 dB.

  13. RELEASE OF CHEMICALS FROM CONTAMINATED SOILS. (R822721C529)

    EPA Science Inventory

    At sites that contain contaminated soils, there can be questions about the magnitude of risk posed by the chemicals in the soils and about the cleanup levels that should be achieved. Knowledge about the rate of release of chemicals is important to answers to such questions. Th...

  14. Analysis of the Chemical Representations in Secondary Lebanese Chemistry Textbooks

    ERIC Educational Resources Information Center

    Shehab, Saadeddine Salim; BouJaoude, Saouma

    2017-01-01

    This study focused on the requirements that chemical representations should meet in textbooks in order to enhance conceptual understanding. Specifically, the purpose of this study was to evaluate the chemical representations that are present in 7 secondary Lebanese chemistry textbooks. To achieve the latter purpose, an instrument adapted from…

  15. Enhancing the Undergraduate Computing Experience in Chemical Engineering CACHE Corporation

    ERIC Educational Resources Information Center

    Edgar, Thomas F.

    2006-01-01

    This white paper focuses on the integration and enhancement of the computing experience for undergraduates throughout the chemical engineering curriculum. The computing experience for undergraduates in chemical engineering should have continuity and be coordinated from course to course, because a single software solution is difficult to achieve in…

  16. Improving LUC estimation accuracy with multiple classification system for studying impact of urbanization on watershed flood

    NASA Astrophysics Data System (ADS)

    Dou, P.

    2017-12-01

    Guangzhou has experienced a period of rapid urbanization, described as "small change in three years and big change in five years", since the reform of China, resulting in significant land use/cover change (LUC). To overcome the accuracy limitations of any single classifier for remote sensing image classification, a multiple classifier system (MCS) is proposed to improve the quality of remote sensing image classification. The new method combines the advantages of different learning algorithms and achieves higher accuracy (88.12%) than any single classifier. With the proposed MCS, land use/cover (LUC) on Landsat images from 1987 to 2015 was obtained, and the LUC maps were used for three watersheds (Shijing river, Chebei stream, and Shahe stream) to estimate the impact of urbanization on watershed flooding. The results show that with the high-accuracy LUC, the uncertainty in flood simulations is reduced effectively (for the Shijing river, Chebei stream, and Shahe stream, the uncertainty was reduced by 15.5%, 17.3% and 19.8%, respectively).
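
    A minimal sketch of a multiple classifier system in the spirit described above is given below: several different learners combined by majority vote and compared with each member on its own. The classifiers and synthetic data are placeholders, not the Landsat land use/cover workflow.

        # Illustrative sketch only: majority-vote combination of heterogeneous classifiers.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier, VotingClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=1500, n_features=12, n_classes=4,
                                   n_informative=8, random_state=0)

        members = [
            ("random_forest", RandomForestClassifier(n_estimators=200, random_state=0)),
            ("rbf_svm", SVC(kernel="rbf", gamma="scale", random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=7)),
        ]
        mcs = VotingClassifier(estimators=members, voting="hard")

        for name, clf in members + [("MCS (majority vote)", mcs)]:
            acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
            print(f"{name}: mean accuracy = {acc:.3f}")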

  17. Genomic selection accuracies within and between environments and small breeding groups in white spruce.

    PubMed

    Beaulieu, Jean; Doerksen, Trevor K; MacKay, John; Rainville, André; Bousquet, Jean

    2014-12-02

    Genomic selection (GS) may improve selection response over conventional pedigree-based selection if markers capture more detailed information than pedigrees in recently domesticated tree species and/or make it more cost effective. Genomic prediction accuracies using 1748 trees and 6932 SNPs representative of as many distinct gene loci were determined for growth and wood traits in white spruce, within and between environments and breeding groups (BG), each with an effective size of Ne ≈ 20. Marker subsets were also tested. Model fits and/or cross-validation (CV) prediction accuracies for ridge regression (RR) and the least absolute shrinkage and selection operator models approached those of pedigree-based models. With strong relatedness between CV sets, prediction accuracies for RR within environment and BG were high for wood (r = 0.71-0.79) and moderately high for growth (r = 0.52-0.69) traits, in line with trends in heritabilities. For both classes of traits, these accuracies achieved between 83% and 92% of those obtained with phenotypes and pedigree information. Prediction into untested environments remained moderately high for wood (r ≥ 0.61) but dropped significantly for growth (r ≥ 0.24) traits, emphasizing the need to phenotype in all test environments and model genotype-by-environment interactions for growth traits. Removing relatedness between CV sets sharply decreased prediction accuracies for all traits and subpopulations, falling near zero between BGs with no known shared ancestry. For marker subsets, similar patterns were observed but with lower prediction accuracies. Given the need for high relatedness between CV sets to obtain good prediction accuracies, we recommend to build GS models for prediction within the same breeding population only. Breeding groups could be merged to build genomic prediction models as long as the total effective population size does not exceed 50 individuals in order to obtain high prediction accuracy such as that

  18. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  19. TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net.

    PubMed

    Yoo, Jin Eun

    2018-01-01

    A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods, and have focused on selected few indicators instead of utilizing hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the selected machine learning technique in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data by selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on the existing theories from a totally different perspective, which in turn propagates creation of a new theory or complement of existing ones. This study also examined the current scale development convention from a machine learning perspective.
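
    To make the selection mechanism concrete, the sketch below runs an elastic net with its regularization strength chosen by 10-fold cross-validation on mean squared error and reads off the surviving (nonzero) coefficients as the selected variables; the simulated matrix merely stands in for the 162 TIMSS student and teacher variables.

        # Illustrative sketch only: elastic-net variable selection with 10-fold CV.
        import numpy as np
        from sklearn.linear_model import ElasticNetCV

        rng = np.random.default_rng(0)
        n_students, n_vars = 1000, 162
        X = rng.normal(size=(n_students, n_vars))
        true_coef = np.zeros(n_vars)
        true_coef[:17] = rng.normal(0.0, 1.0, 17)             # only a few variables matter
        y = X @ true_coef + rng.normal(0.0, 2.0, n_students)  # simulated achievement score

        model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=10, random_state=0).fit(X, y)
        selected = np.flatnonzero(model.coef_)
        print(f"alpha={model.alpha_:.4f}  l1_ratio={model.l1_ratio_}  "
              f"{selected.size} of {n_vars} variables kept")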

  20. TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net

    PubMed Central

    Yoo, Jin Eun

    2018-01-01

    A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods, and have focused on selected few indicators instead of utilizing hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the selected machine learning technique in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06, 70.23, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to educational large-scale data by selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Newly found variables via machine learning can shed light on the existing theories from a totally different perspective, which in turn propagates creation of a new theory or complement of existing ones. This study also examined the current scale development convention from a machine learning perspective. PMID:29599736

  1. Chemical-free cotton defoliation and desiccation

    USDA-ARS?s Scientific Manuscript database

    Preliminary results are presented for new techniques to achieve chemical-free means of cotton defoliation and desiccation. The report covers test results for several different methods, as tested on greenhouse plants, outdoor-grown potted plants, and field-grown cotton plants that were grown under commer...

  2. The accuracy of transvaginal sonography to detect endometriosis cyst

    NASA Astrophysics Data System (ADS)

    Diantika, M.; Gunardi, E. R.

    2017-08-01

    Endometriosis is common in women of reproductive age. Late diagnosis is still the main concern. Currently, noninvasive diagnostic testing, such as transvaginal sonography, is recommended. The aim of the current study was to evaluate the accuracy of transvaginal sonography in diagnosing endometrial cysts in patients in Cipto Mangunkusumo Hospital, Jakarta, Indonesia. This diagnostic study was carried out at Cipto Mangunkusumo Hospital between January 2014 and June 2015. Outpatients suspected of having an endometrial cyst based on patient history and a clinical examination were recruited. The patients were then evaluated using transvaginal sonography by an experienced sonologist, according to the research protocol. The gold standard test was the histological finding in the removed surgical mass. Ninety-eight patients were analyzed. An endometrial cyst was confirmed by histology in 85 patients (87%). The accuracy, sensitivity, specificity, positive predictive value and negative predictive value of transvaginal sonography were established to be 85% (a range of 71-99%), 93%, 77%, 96%, and 63%, respectively. A significantly higher area under the curve was identified using transvaginal sonography compared to that achieved with a clinical examination alone (85% versus 79%). Transvaginal sonography was useful in diagnosing endometrial cysts in outpatients and is recommended in daily clinical practice.

  3. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  4. Overall View of Chemical and Biochemical Weapons

    PubMed Central

    Pitschmann, Vladimír

    2014-01-01

    This article describes a brief history of chemical warfare, which culminated in the signing of the Chemical Weapons Convention. It describes the current level of chemical weapons and the risk of using them. Furthermore, some traditional technology for the development of chemical weapons, such as increasing toxicity, methods of overcoming chemical protection, research on natural toxins or the introduction of binary technology, has been described. In accordance with many parameters, chemical weapons based on traditional technologies have achieved the limit of their development. There is, however, a big potential of their further development based on the most recent knowledge of modern scientific and technical disciplines, particularly at the boundary of chemistry and biology. The risk is even higher due to the fact that already, today, there is a general acceptance of the development of non-lethal chemical weapons at a technologically higher level. In the future, the chemical arsenal will be based on the accumulation of important information from the fields of chemical, biological and toxin weapons. Data banks obtained in this way will be hardly accessible and the risk of their materialization will persist. PMID:24902078

  5. Overall view of chemical and biochemical weapons.

    PubMed

    Pitschmann, Vladimír

    2014-06-04

    This article describes a brief history of chemical warfare, which culminated in the signing of the Chemical Weapons Convention. It describes the current level of chemical weapons and the risk of using them. Furthermore, some traditional technology for the development of chemical weapons, such as increasing toxicity, methods of overcoming chemical protection, research on natural toxins or the introduction of binary technology, has been described. In accordance with many parameters, chemical weapons based on traditional technologies have achieved the limit of their development. There is, however, a big potential of their further development based on the most recent knowledge of modern scientific and technical disciplines, particularly at the boundary of chemistry and biology. The risk is even higher due to the fact that already, today, there is a general acceptance of the development of non-lethal chemical weapons at a technologically higher level. In the future, the chemical arsenal will be based on the accumulation of important information from the fields of chemical, biological and toxin weapons. Data banks obtained in this way will be hardly accessible and the risk of their materialization will persist.

  6. Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China

    NASA Astrophysics Data System (ADS)

    Yang, Bo

    2016-06-01

    Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-linear-array stereo scenes of ZY3 over China (a total of 26,406 images), we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which a single image is regarded as the adjustment unit to be organized. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created by the initial RPC model of the images as weighted observations and add them into the adjustment model to refine the adjustment. We use 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in planimetry and elevation are 3.6 m and 4.2 m, respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy of neighboring DOMs is within a pixel, so a seamless mosaic is possible. This method achieves the goal of mapping without ground control at an accuracy better than 5 m for the whole of China from ZY3 satellite images.

  7. Probe-level linear model fitting and mixture modeling results in high accuracy detection of differential gene expression.

    PubMed

    Lemieux, Sébastien

    2006-08-25

    The identification of differentially expressed genes (DEGs) from Affymetrix GeneChips arrays is currently done by first computing expression levels from the low-level probe intensities, then deriving significance by comparing these expression levels between conditions. The proposed PL-LM (Probe-Level Linear Model) method implements a linear model applied on the probe-level data to directly estimate the treatment effect. A finite mixture of Gaussian components is then used to identify DEGs using the coefficients estimated by the linear model. This approach can readily be applied to experimental design with or without replication. On a wholly defined dataset, the PL-LM method was able to identify 75% of the differentially expressed genes within 10% of false positives. This accuracy was achieved both using the three replicates per conditions available in the dataset and using only one replicate per condition. The method achieves, on this dataset, a higher accuracy than the best set of tools identified by the authors of the dataset, and does so using only one replicate per condition.
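
    The sketch below is a much-simplified, hypothetical rendering of the two-stage idea: fit a small linear model to probe-level intensities to estimate a per-gene treatment effect, then fit a two-component Gaussian mixture to those estimates and flag the genes assigned to the component with the larger mean. The simulated intensities merely stand in for Affymetrix probe data and the model omits the details of PL-LM.

        # Illustrative sketch only: probe-level linear model followed by a Gaussian mixture.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        n_genes, n_probes = 500, 11
        condition = np.array([0, 0, 0, 1, 1, 1])                 # 3 control and 3 treated arrays
        true_effect = np.where(rng.random(n_genes) < 0.1, rng.normal(2.0, 0.5, n_genes), 0.0)

        effects = np.empty(n_genes)
        probe_dummies = np.tile(np.eye(n_probes), (condition.size, 1))
        X = np.column_stack([probe_dummies, np.repeat(condition, n_probes)])
        for g in range(n_genes):
            affinity = rng.normal(0.0, 1.0, n_probes)            # probe-specific baseline
            y = (affinity[None, :] + condition[:, None] * true_effect[g]
                 + rng.normal(0.0, 0.3, (condition.size, n_probes))).ravel()
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            effects[g] = coef[-1]                                 # estimated treatment effect

        gmm = GaussianMixture(n_components=2, random_state=0).fit(effects.reshape(-1, 1))
        de_component = int(np.argmax(gmm.means_.ravel()))
        flagged = gmm.predict(effects.reshape(-1, 1)) == de_component
        print("genes flagged as differentially expressed:", int(flagged.sum()))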

  8. Approximate Algorithms for Computing Spatial Distance Histograms with Accuracy Guarantees

    PubMed Central

    Grupcev, Vladimir; Yuan, Yongke; Tu, Yi-Cheng; Huang, Jin; Chen, Shaoping; Pandit, Sagar; Weng, Michael

    2014-01-01

    Particle simulation has become an important research tool in many scientific and engineering fields. Data generated by such simulations impose great challenges to database storage and query processing. One of the queries against particle simulation data, the spatial distance histogram (SDH) query, is the building block of many high-level analytics, and requires quadratic time to compute using a straightforward algorithm. Previous work has developed efficient algorithms that compute exact SDHs. While beating the naive solution, such algorithms are still not practical in processing SDH queries against large-scale simulation data. In this paper, we take a different path to tackle this problem by focusing on approximate algorithms with provable error bounds. We first present a solution derived from the aforementioned exact SDH algorithm, and this solution has running time that is unrelated to the system size N. We also develop a mathematical model to analyze the mechanism that leads to errors in the basic approximate algorithm. Our model provides insights on how the algorithm can be improved to achieve higher accuracy and efficiency. Such insights give rise to a new approximate algorithm with improved time/accuracy tradeoff. Experimental results confirm our analysis. PMID:24693210

  9. A synthetic visual plane algorithm for visibility computation in consideration of accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang

    2017-12-01

    Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We used discretization of horizon to gain a good performance in efficiency. After discretization, the accuracy and efficiency of SVP depends on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than reference plane and depth map with respect to efficiency, it is superior in accuracy to these other two algorithms.
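
    For readers unfamiliar with horizon-based visibility, the sketch below shows the idea on a single terrain profile: walk outward from the viewpoint, keep the steepest elevation angle seen so far as a running horizon, and mark a cell visible only if its angle reaches that horizon. This is only the one-dimensional line-of-sight ingredient, not the synthetic visual plane algorithm itself, and the terrain values are invented.

        # Illustrative sketch only: running-horizon visibility along one terrain profile.
        import math

        def visible_cells(profile, viewer_height=1.8, cell_size=30.0):
            """profile: terrain elevations along a ray that starts at the viewpoint."""
            eye = profile[0] + viewer_height
            horizon = -math.inf                  # steepest elevation angle seen so far
            flags = [True]                       # the viewpoint cell sees itself
            for i, z in enumerate(profile[1:], start=1):
                angle = math.atan2(z - eye, i * cell_size)
                flags.append(angle >= horizon)   # visible only if it clears the horizon
                horizon = max(horizon, angle)
            return flags

        terrain = [100, 102, 108, 104, 103, 115, 110, 120, 118, 130]
        print(visible_cells(terrain))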

  10. Geoid undulation accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to high accuracy and high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360 based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  11. Magnetic constraints on early lunar evolution revisited: Limits on accuracy imposed by methods of paleointensity measurements

    NASA Technical Reports Server (NTRS)

    Banerjee, S. K.

    1984-01-01

    It is impossible to carry out conventional paleointensity experiments on lunar samples, which require repeated heating and cooling to 770 C, without causing chemical, physical, or microstructural changes. Non-thermal methods of paleointensity determination have therefore been sought: the two anhysteretic remanent magnetization (ARM) methods and the saturation isothermal remanent magnetization (IRMS) method. Experimental errors inherent in these alternative approaches have been investigated to estimate the accuracy limits on the calculated paleointensities. Results are indicated in this report.

  12. The Accuracy and Reliability of Crowdsource Annotations of Digital Retinal Images

    PubMed Central

    Mitry, Danny; Zutis, Kris; Dhillon, Baljean; Peto, Tunde; Hayat, Shabina; Khaw, Kay-Tee; Morgan, James E.; Moncur, Wendy; Trucco, Emanuele; Foster, Paul J.

    2016-01-01

    Purpose Crowdsourcing is based on outsourcing computationally intensive tasks to numerous individuals in the online community who have no formal training. Our aim was to develop a novel online tool designed to facilitate large-scale annotation of digital retinal images, and to assess the accuracy of crowdsource grading using this tool, comparing it to expert classification. Methods We used 100 retinal fundus photograph images with predetermined disease criteria selected by two experts from a large cohort study. The Amazon Mechanical Turk Web platform was used to drive traffic to our site so anonymous workers could perform a classification and annotation task of the fundus photographs in our dataset after a short training exercise. Three groups were assessed: masters only, nonmasters only, and nonmasters with compulsory training. We calculated the sensitivity, specificity, and area under the curve (AUC) of receiver operating characteristic (ROC) plots for all classifications compared to expert grading, and used the Dice coefficient and consensus threshold to assess annotation accuracy. Results In total, we received 5389 annotations for 84 images (excluding 16 training images) in 2 weeks. A specificity and sensitivity of 71% (95% confidence interval [CI], 69%–74%) and 87% (95% CI, 86%–88%) were achieved for all classifications. The AUC in this study for all classifications combined was 0.93 (95% CI, 0.91–0.96). For image annotation, a maximal Dice coefficient (∼0.6) was achieved with a consensus threshold of 0.25. Conclusions This study supports the hypothesis that annotation of abnormalities in retinal images by ophthalmologically naive individuals is comparable to expert annotation. The highest AUC and agreement with expert annotation was achieved in the nonmasters with compulsory training group. Translational Relevance The use of crowdsourcing as a technique for retinal image analysis may be comparable to expert graders and has the potential to deliver

  13. Matters of accuracy and conventionality: prior accuracy guides children's evaluations of others' actions.

    PubMed

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-03-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clément, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and 4-year-olds were asked to endorse and imitate one of two actors performing an unfamiliar action, one actor who was unconventional but successful and one who was conventional but unsuccessful. These data demonstrated that children preferred endorsing and imitating the unconventional but successful actor. Results suggest that when the accuracy and conventionality of a source are put into conflict, children may give priority to accuracy over conventionality when estimating the source's reliability and, ultimately, when deciding who to trust.

  14. Motor Inhibition Affects the Speed But Not Accuracy of Aimed Limb Movements in an Insect

    PubMed Central

    Calas-List, Delphine; Clare, Anthony J.; Komissarova, Alexandra; Nielsen, Thomas A.

    2014-01-01

    When reaching toward a target, human subjects use slower movements to achieve higher accuracy, and this can be accompanied by increased limb impedance (stiffness, viscosity) that stabilizes movements against motor noise and external perturbation. In arthropods, the activity of common inhibitory motor neurons influences limb impedance, so we hypothesized that this might provide a mechanism for speed and accuracy control of aimed movements in insects. We recorded simultaneously from excitatory leg motor neurons and from an identified common inhibitory motor neuron (CI1) in locusts that performed natural aimed scratching movements. We related limb movement kinematics to recorded motor activity and demonstrate that imposed alterations in the activity of CI1 influenced these kinematics. We manipulated the activity of CI1 by injecting depolarizing or hyperpolarizing current or killing the cell using laser photoablation. Naturally higher levels of inhibitory activity accompanied faster movements. Experimentally biasing the firing rate downward, or stopping firing completely, led to slower movements mediated by changes at several joints of the limb. Despite this, we found no effect on overall movement accuracy. We conclude that inhibitory modulation of joint stiffness has effects across most of the working range of the insect limb, with a pronounced effect on the overall velocity of natural movements independent of their accuracy. Passive joint forces that are greatest at extreme joint angles may enhance accuracy and are not affected by motor inhibition. PMID:24872556

  15. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones, where these exist or are relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurements of individual users change, but the ranking of users' segmentation skills may also require reordering.
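
    The abstract does not reproduce the anatomy-aware formulas themselves. As a hedged illustration of the idea, the sketch below gives the standard Dice coefficient and Jaccard index for binary masks together with one plausible zone-weighted variant, in which pixels are weighted by the relevance of the anatomical zone they fall in; the weighting scheme is an assumption, not the authors' definition.

      import numpy as np

      def dice(seg, truth):
          inter = np.logical_and(seg, truth).sum()
          return 2.0 * inter / (seg.sum() + truth.sum())

      def jaccard(seg, truth):
          inter = np.logical_and(seg, truth).sum()
          union = np.logical_or(seg, truth).sum()
          return inter / union

      def zone_weighted_dice(seg, truth, zone_weights):
          # zone_weights: non-negative per-pixel weights encoding the anatomical
          # relevance of each zone (illustrative extension only).
          inter = (zone_weights * np.logical_and(seg, truth)).sum()
          return 2.0 * inter / ((zone_weights * seg).sum() + (zone_weights * truth).sum())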

  16. Chemical pleurodesis for spontaneous pneumothorax.

    PubMed

    How, Cheng-Hung; Hsu, Hsao-Hsun; Chen, Jin-Shing

    2013-12-01

    Pneumothorax is defined as the presence of air in the pleural cavity. Spontaneous pneumothorax, occurring without an antecedent traumatic or iatrogenic cause, is sub-divided into primary and secondary. The severity of pneumothorax can vary from asymptomatic to hemodynamically compromised. Optimal management of this benign disease has been a matter of debate. In addition to evacuating air from the pleural space by simple aspiration or chest tube drainage, the management of spontaneous pneumothorax also focuses on stopping air leakage and preventing recurrences by surgical intervention or chemical pleurodesis. Chemical pleurodesis is a procedure to achieve symphysis between the two layers of pleura by means of sclerosing agents. In the current practice guidelines, chemical pleurodesis is reserved for patients unable or unwilling to receive surgery. Recent research has found that chemical pleurodesis is also safe and effective in preventing pneumothorax recurrence in patients with a first episode of spontaneous pneumothorax or after thoracoscopic surgery, and in treating persistent air leakage after thoracoscopic surgery. In this article we explore the role of chemical pleurodesis for spontaneous pneumothorax, including stopping air leakage and preventing recurrence. The indications, choice of sclerosants, safety, effects, and possible side effects or complications of chemical pleurodesis are also reviewed. Copyright © 2013. Published by Elsevier B.V.

  17. Interstitial assessment of aggressive prostate cancer by physio-chemical photoacoustics: an ex vivo study with intact human prostates.

    PubMed

    Huang, Shengsong; Qin, Yu; Chen, Yingna; Pan, Jing; Xu, Chengdang; Wu, Denglong; Chao, Wan-Yu; Wei, John T; Tomlins, Scott A; Wang, Xueding; Brian Fowlkes, J; Carson, Paul L; Cheng, Qian; Xu, Guan

    2018-06-23

    Transrectal ultrasound (TRUS) guided biopsy is the standard procedure for evaluating the presence and aggressiveness of prostate cancer. TRUS biopsy involves tissue removal and suffers from a low core yield as well as a high false-negative rate. A less invasive and more accurate diagnostic procedure for prostate cancer is therefore highly desired. By combining optical sensitivity with ultrasonic resolution to resolve the spatial distribution of the major molecular components in tissue, photoacoustic (PA) technology could be an alternative approach for the diagnosis of prostate cancer. The purpose of this study is to examine the feasibility of identifying aggressive prostate cancer using interstitial PA measurements. 17 patients with pre-biopsy magnetic resonance imaging (MRI), TRUS biopsies and planned prostatectomies were enrolled in this study. The interstitial PA measurements were made using our recently developed needle PA probe, which was inserted into the ex vivo prostates in the fashion of a biopsy needle. A total of 70 interstitial PA measurements were acquired. The PA measurements were quantified by a previously established PA physio-chemical analysis (PAPCA) method. Histology confirmed the nonaggressive and aggressive cancerous conditions at the insertion locations. The diagnostic accuracy was also compared to that provided by the pre-biopsy MRI. The quantitative study shows significant differences between the individual parameters of the nonaggressive and the aggressive cancerous regions (p<0.005). Multivariate analysis of the quantitative features achieved a diagnostic accuracy of 78.6% for differentiating nonaggressive and aggressive prostate cancer tissues. CONCLUSIONS: The proposed procedure has shown promise in the diagnosis of aggressive prostate cancer. This article is protected by copyright. All rights reserved.

  18. New chemical-DSMC method in numerical simulation of axisymmetric rarefied reactive flow

    NASA Astrophysics Data System (ADS)

    Zakeri, Ramin; Kamali Moghadam, Ramin; Mani, Mahmoud

    2017-04-01

    The modified quantum kinetic (MQK) chemical reaction model introduced by Zakeri et al. is developed for applicable cases of axisymmetric reactive rarefied gas flows using the direct simulation Monte Carlo (DSMC) method. In addition to its modifications of the quantum kinetic (QK) method, the MQK chemical model employs the general soft sphere collision model and the Stockmayer potential function to properly select collision pairs in the DSMC algorithm and to capture both the attractive and repulsive intermolecular forces in rarefied gas flows. To assess the presented model in the simulation of more complex and applicable reacting flows, air dissociation is first studied in a single cell for equilibrium and non-equilibrium conditions. The MQK results agree well with the analytical and experimental data and accurately predict the characteristics of the rarefied flowfield with chemical reaction. To investigate the accuracy of the MQK chemical model in the simulation of axisymmetric flow, air dissociation is also assessed in a hypersonic flow around two geometries, a sphere as a benchmark case and a blunt body (STS-2) as an applicable test case. The computed results, including the translational, rotational and vibrational temperatures, the species concentrations along the stagnation line, and the heat flux and pressure coefficient on the surface, are compared with those of other chemical methods such as the QK and total collision energy (TCE) models and with available analytical and experimental data. Generally, the MQK chemical model properly simulates the chemical reactions and predicts flowfield characteristics more accurately than the typical QK model. Although in some cases the results of the MQK approach match those of the TCE method, the main point is that the MQK does not need any experimental data or the unrealistic assumption of a specular boundary condition as used in the TCE method. Another advantage of the MQK model is the

  19. Three-dimensional single-molecule localization with nanometer accuracy using Metal-Induced Energy Transfer (MIET) imaging

    NASA Astrophysics Data System (ADS)

    Karedla, Narain; Chizhik, Anna M.; Stein, Simon C.; Ruhlandt, Daja; Gregor, Ingo; Chizhik, Alexey I.; Enderlein, Jörg

    2018-05-01

    Our paper presents the first theoretical and experimental study using single-molecule Metal-Induced Energy Transfer (smMIET) for localizing single fluorescent molecules in three dimensions. Metal-Induced Energy Transfer describes the resonant energy transfer from the excited state of a fluorescent emitter to surface plasmons in a metal nanostructure. This energy transfer is strongly distance-dependent and can be used to localize an emitter along one dimension. We have used Metal-Induced Energy Transfer in the past for localizing fluorescent emitters with nanometer accuracy along the optical axis of a microscope. The combination of smMIET with single-molecule localization based super-resolution microscopy, which provides nanometer lateral localization accuracy, offers the prospect of achieving isotropic nanometer localization accuracy in all three spatial dimensions. We give a thorough theoretical explanation and analysis of smMIET, describe its experimental requirements, including its combination with lateral single-molecule localization techniques, and present first proof-of-principle experiments using dye molecules immobilized on top of a silica spacer and dye molecules embedded in thin polymer films.

  20. Three-dimensional accuracy of different impression techniques for dental implants

    PubMed Central

    Nakhaei, Mohammadreza; Madani, Azam S; Moraditalab, Azizollah; Haghi, Hamidreza Rajati

    2015-01-01

    Background: Accurate impression making is an essential prerequisite for achieving a passive fit between the implant and the superstructure. The aim of this in vitro study was to compare the three-dimensional accuracy of open-tray and three closed-tray impression techniques. Materials and Methods: Three acrylic resin mandibular master models with four parallel implants were used: Biohorizons (BIO), Straumann tissue-level (STL), and Straumann bone-level (SBL). Forty-two putty/wash polyvinyl siloxane impressions of the models were made using open-tray and closed-tray techniques. Closed-tray impressions were made using snap-on (STL model), transfer coping (TC) (BIO model) and TC plus plastic cap (TC-Cap) (SBL model). The impressions were poured with type IV stone, and the positional accuracy of the implant analog heads in each dimension (x, y and z axes), together with the linear displacement (ΔR), was evaluated using a coordinate measuring machine. Data were analyzed using ANOVA and post-hoc Tukey tests (α = 0.05). Results: The ΔR values of the snap-on technique were significantly lower than those of the TC and TC-Cap techniques (P < 0.001). No significant differences were found between the closed and open impression techniques for STL in the Δx, Δy, Δz and ΔR values (P = 0.444, P = 0.181, P = 0.835 and P = 0.911, respectively). Conclusion: Within the limitations of this study, the snap-on implant-level impression technique resulted in greater three-dimensional accuracy than TC and TC-Cap, but was similar to the open-tray technique. PMID:26604956
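
    The abstract does not define ΔR explicitly; assuming it denotes the overall linear displacement of an analog head obtained from the per-axis deviations (our reading, not a definition taken from the paper), the usual relation is

      \Delta R = \sqrt{\Delta x^{2} + \Delta y^{2} + \Delta z^{2}}

    i.e., the Euclidean norm of the three axial deviations.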

  1. Validating continuous digital light processing (cDLP) additive manufacturing accuracy and tissue engineering utility of a dye-initiator package.

    PubMed

    Wallace, Jonathan; Wang, Martha O; Thompson, Paul; Busso, Mallory; Belle, Vaijayantee; Mammoser, Nicole; Kim, Kyobum; Fisher, John P; Siblani, Ali; Xu, Yueshuo; Welter, Jean F; Lennon, Donald P; Sun, Jiayang; Caplan, Arnold I; Dean, David

    2014-03-01

    This study tested the accuracy of tissue engineering scaffold rendering via the continuous digital light processing (cDLP) light-based additive manufacturing technology. High accuracy (i.e., <50 µm) allows features relevant to three scale spaces (cell-scaffold, scaffold-tissue, and tissue-organ interactions) to perform as designed. The biodegradable polymer poly(propylene fumarate) was used to render highly accurate scaffolds through the use of a dye-initiator package, TiO2 and bis(2,4,6-trimethylbenzoyl)phenylphosphine oxide. This dye-initiator package facilitates high accuracy in the Z dimension. Linear, round, and right-angle features were measured to gauge accuracy. Most features deviated from the design by between 5.4% and 15%. However, one feature, an 800 µm diameter circular pore, exhibited a 35.7% average reduction of patency. Light scattered in the x, y directions by the dye may have reduced this feature's accuracy. Our new fine-grained understanding of accuracy could be used to make further improvements by including corrections in the scaffold design software. Successful cell attachment occurred with both canine and human mesenchymal stem cells (MSCs). Highly accurate cDLP scaffold rendering is critical to the design of scaffolds that both guide bone regeneration and fully resorb. Scaffold resorption must occur for regenerated bone to be remodeled and, thereby, achieve optimal strength.

  2. Real-Time Monitoring of Critical Care Analytes in the Bloodstream with Chemical Sensors: Progress and Challenges.

    PubMed

    Frost, Megan C; Meyerhoff, Mark E

    2015-01-01

    We review approaches and challenges in developing chemical sensor-based methods to accurately and continuously monitor levels of key analytes in blood related directly to the status of critically ill hospitalized patients. Electrochemical and optical sensor-based technologies have been pursued to measure important critical care species in blood [i.e., oxygen, carbon dioxide, pH, electrolytes (K(+), Na(+), Cl(-), etc.), glucose, and lactate] in real-time or near real-time. The two main configurations examined to date for achieving this goal have been intravascular catheter sensors and patient attached ex vivo sensors with intermittent blood sampling via an attached indwelling catheter. We discuss the status of these configurations and the main issues affecting the accuracy of the measurements, including cell adhesion and thrombus formation on the surface of the sensors, sensor drift, sensor selectivity, etc. Recent approaches to mitigate these nagging performance issues that have prevented these technologies from clinical use are also discussed.

  3. Technical Note: Quantitative accuracy evaluation for spectral images from a detector-based spectral CT scanner using an iodine phantom.

    PubMed

    Duan, Xinhui; Arbique, Gary; Guild, Jeffrey; Xi, Yin; Anderson, Jon

    2018-05-01

    The purpose of this study was to evaluate the quantitative accuracy of spectral images from a detector-based spectral CT scanner using a phantom with iodine-loaded inserts. A 40-cm long-body phantom with seven iodine inserts (2-20 mg/ml of iodine) was used in the study. The inserts could be placed at 5.5 or 10.5 cm from the phantom axis. The phantom was scanned five times for each insert configuration using a tube voltage of 120 kVp. A set of iodine, virtual noncontrast, effective atomic number, and virtual monoenergetic spectral CT images were generated and measurements were made for all the iodine rods. Measured values were compared with reference values calculated from the chemical composition information provided by the phantom manufacturer. Radiation dose from the spectral CT was compared to that of a conventional CT using a CTDI (32 cm) phantom. Good agreement between measurements and reference values was achieved for all types of spectral images. The differences ranged from -0.46 to 0.1 mg/ml for iodine concentration, -9.95 to 6.41 HU for virtual noncontrast images, 0.12 to 0.35 for effective Z images, and -17.7 to 55.7 HU for virtual monoenergetic images. For a similar CTDIvol, image noise from the conventional CT was 10% lower than that from the spectral CT. The detector-based spectral CT can achieve accurate spectral measurements of iodine concentration, virtual non-contrast images, effective atomic numbers, and virtual monoenergetic images. © 2018 American Association of Physicists in Medicine.

  4. Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network

    PubMed Central

    Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng

    2016-01-01

    Big sensor data provide significant potential for chemical fault diagnosis, which underpins the baseline security, stability and reliability of chemical processes. A deep neural network (DNN) with novel active learning for chemical fault diagnosis is presented in this study. The method uses a large amount of chemical sensor data and combines deep learning with an active-learning criterion to address the difficulty of consecutive fault diagnosis. A DNN with a deep architecture, instead of a shallow one, can be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE), working through a layer-by-layer successive learning process. The features are fed to a top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active-learning criterion tailored to chemical processes, which combines the Best vs. Second Best criterion (BvSB) and a Lowest False Positive criterion (LFP), for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow the model to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data, by further active learning, compared with existing methods. PMID:27754386
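
    The Best vs. Second Best (BvSB) half of the criterion is a standard margin rule: samples whose top two class probabilities are closest are the most informative ones to label. A minimal sketch is below; the LFP term and the SDAE network itself are omitted, and the function names are illustrative assumptions.

      import numpy as np

      def bvsb_margin(class_probs):
          # class_probs: (n_samples, n_classes) softmax outputs from the DNN
          p = np.sort(class_probs, axis=1)
          return p[:, -1] - p[:, -2]     # small margin = ambiguous, informative

      def select_for_labeling(class_probs, k):
          # Rank unlabeled sensor samples by BvSB margin and return the k most
          # ambiguous ones to be labeled for fine-tuning the diagnosis model.
          return np.argsort(bvsb_margin(class_probs))[:k]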

  5. Functionalized apertures for the detection of chemical and biological materials

    DOEpatents

    Letant, Sonia E.; van Buuren, Anthony W.; Terminello, Louis J.; Thelen, Michael P.; Hope-Weeks, Louisa J.; Hart, Bradley R.

    2010-12-14

    Disclosed are nanometer to micron scale functionalized apertures constructed on a substrate made of glass, carbon, semiconductors or polymeric materials that allow for the real time detection of biological materials or chemical moieties. Many apertures can exist on one substrate allowing for the simultaneous detection of numerous chemical and biological molecules. One embodiment features a macrocyclic ring attached to cross-linkers, wherein the macrocyclic ring has a biological or chemical probe extending through the aperture. Another embodiment achieves functionalization by attaching chemical or biological anchors directly to the walls of the apertures via cross-linkers.

  6. Achieving biopolymer synergy in systems chemistry.

    PubMed

    Bai, Yushi; Chotera, Agata; Taran, Olga; Liang, Chen; Ashkenasy, Gonen; Lynn, David G

    2018-05-31

    Synthetic and materials chemistry initiatives have enabled the translation of the macromolecular functions of biology into synthetic frameworks. These explorations into alternative chemistries of life attempt to capture the versatile functionality and adaptability of biopolymers in new orthogonal scaffolds. Information storage and transfer, however, so beautifully represented in the central dogma of biology, require multiple components functioning synergistically. Over a single decade, the emerging field of systems chemistry has begun to catalyze the construction of mutualistic biopolymer networks, and this review begins with the foundational small-molecule-based dynamic chemical networks and peptide amyloid-based dynamic physical networks on which this effort builds. The approach both contextualizes the versatile approaches that have been developed to enrich chemical information in synthetic networks and highlights the properties of amyloids as potential alternative genetic elements. The successful integration of both chemical and physical networks through β-sheet assisted replication processes further informs the synergistic potential of these networks. Inspired by the cooperative synergies of nucleic acids and proteins in biology, synthetic nucleic-acid-peptide chimeras are now being explored to extend their informational content. With our growing range of synthetic capabilities, structural analyses, and simulation technologies, this foundation is radically extending the structural space that might cross the Darwinian threshold for the origins of life as well as creating an array of alternative systems capable of achieving the progressive growth of novel informational materials.

  7. Smart Device-Supported BDS/GNSS Real-Time Kinematic Positioning for Sub-Meter-Level Accuracy in Urban Location-Based Services

    PubMed Central

    Wang, Liang; Li, Zishen; Zhao, Jiaojiao; Zhou, Kai; Wang, Zhiyu; Yuan, Hong

    2016-01-01

    Using mobile smart devices to provide urban location-based services (LBS) with sub-meter-level accuracy (around 0.5 m) is a major application field for future global navigation satellite system (GNSS) development. Real-time kinematic (RTK) positioning, a widely used GNSS-based positioning approach, can improve the accuracy from about 10–20 m (achieved by the standard positioning services) to about 3–5 cm with geodetic receivers. To achieve positioning with sub-meter-level accuracy on smart devices, a feasible solution combining a low-cost GNSS module with the smart device is proposed in this work, and a user-side GNSS RTK positioning software was developed from scratch on the Android platform. Its real-time positioning performance was validated by BeiDou Navigation Satellite System/Global Positioning System (BDS/GPS) combined RTK positioning under static and kinematic (rover velocity of 50–80 km/h) conditions in a real urban environment with a SAMSUNG Galaxy A7 smartphone. The results show that the fixed-rates of ambiguity resolution (the proportion of epochs with fixed ambiguities) for BDS/GPS combined RTK in the static and kinematic tests were about 97% and 90%, respectively, and the average positioning accuracies (RMS) were better than 0.15 m (horizontal) and 0.25 m (vertical) for the static test, and 0.30 m (horizontal) and 0.45 m (vertical) for the kinematic test. PMID:28009835
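
    The two performance figures quoted here are straightforward to compute from a positioning log; a hedged sketch is below, assuming per-epoch errors expressed in a local East/North/Up frame (that decomposition, and the function names, are our assumptions).

      import numpy as np

      def fixed_rate(ambiguity_fixed_flags):
          # Proportion of epochs in which the carrier-phase ambiguities were fixed.
          return float(np.mean(ambiguity_fixed_flags))

      def rms_accuracy(errors_enu):
          # errors_enu: (n_epochs, 3) East/North/Up position errors in meters.
          e, n, u = errors_enu[:, 0], errors_enu[:, 1], errors_enu[:, 2]
          horizontal = np.sqrt(np.mean(e ** 2 + n ** 2))
          vertical = np.sqrt(np.mean(u ** 2))
          return horizontal, vertical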

  8. Comparison of dermal and inhalation routes of entry for organic chemicals

    NASA Technical Reports Server (NTRS)

    Jepson, Gary W.; Mcdougal, James N.; Clewell, Harvey J., III

    1992-01-01

    The quantitative comparison of the chemical concentration inside the body as the result of a dermal exposure versus an inhalation exposure is useful for assessing human health risks and deciding on an appropriate protective posture. In order to describe the relationship between dermal and inhalation routes of exposure, a variety of organic chemicals were evaluated. The types of chemicals chosen for the study were halogenated hydrocarbons, aromatic compounds, non-polar hydrocarbons and inhalation anesthetics. Both dermal and inhalation exposures were conducted in rats and the chemicals were in the form of vapors. Prior to the dermal exposure, rat fur was closely clipped and during the exposure rats were provided fresh breathing air through latex masks. Blood samples were taken during 4-hour exposures and analyzed for the chemical of interest. A physiologically based pharmacokinetic model was used to predict permeability constants (cm/hr) consistent with the observed blood concentrations of the chemical. The ratio of dermal exposure to inhalation exposure required to achieve the same internal dose of chemical was calculated for each test chemical. The calculated ratio in humans ranged from 18 for styrene to 1180 for isoflurane. This methodology can be used to estimate the dermal exposure required to reach the internal dose achieved by a specific inhalation exposure. Such extrapolation is important since allowable exposure standards are often set for inhalation exposures, but occupational exposures may be dermal.

  9. Modeling the binding affinity of structurally diverse industrial chemicals to carbon using the artificial intelligence approaches.

    PubMed

    Gupta, Shikha; Basant, Nikita; Rai, Premanjali; Singh, Kunwar P

    2015-11-01

    The binding affinity of a chemical to carbon is an important characteristic, as it finds vast industrial applications. Experimental determination of the adsorption capacity of diverse chemicals onto carbon is both time and resource intensive, and the development of computational approaches has been widely advocated. In this study, ten different artificial intelligence (AI)-based qualitative and quantitative structure-property relationship (QSPR) models (MLPN, RBFN, PNN/GRNN, CCN, SVM, GEP, GMDH, SDT, DTF, DTB) were established for the prediction of the adsorption capacity of structurally diverse chemicals to activated carbon following the OECD guidelines. The structural diversity of the chemicals and the nonlinear dependence in the data were evaluated using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation procedures employing a wide series of statistical checks. On the complete dataset, the qualitative models rendered classification accuracies between 97.04 and 99.93%, while the quantitative models yielded correlation (R(2)) values of 0.877-0.977 between the measured and the predicted endpoint values. The quantitative prediction accuracies for the higher molecular weight (MW) compounds (class 4) were relatively better than those for the low MW compounds. In both the qualitative and quantitative models, polarizability was the most influential descriptor. Structural alerts responsible for the extreme adsorption behavior of the compounds were identified. A higher number of carbon atoms and the presence of heavier halogens in a molecule conferred higher binding affinity. The proposed QSPR models performed well and outperformed previous reports. The relatively better performance of the ensemble learning models (DTF, DTB) may be attributed to the strengths of the bagging and boosting algorithms, which enhance the predictive accuracies. The

  10. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy-assessment procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of that and the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  11. Test expectancy affects metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Wiley, Jennifer; Griffin, Thomas D

    2011-06-01

    Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and practice tests. The purpose of the present study was to examine whether the accuracy of metacognitive monitoring was affected by the nature of the test expected. Students (N = 59) were randomly assigned to one of two test expectancy groups (memory vs. inference). After reading texts and judging their learning, they completed both memory and inference tests. Test performance and monitoring accuracy were superior when students received the kind of test they had been led to expect rather than the unexpected test. Tests influence students' perceptions of what constitutes learning. Our findings suggest that this could affect how students prepare for tests and how they monitor their own learning. ©2010 The British Psychological Society.

  12. Robust sub-millihertz-level offset locking for transferring optical frequency accuracy and for atomic two-photon spectroscopy.

    PubMed

    Cheng, Wang-Yau; Chen, Ting-Ju; Lin, Chia-Wei; Chen, Bo-Wei; Yang, Ya-Po; Hsu, Hung Yi

    2017-02-06

    Robust sub-millihertz-level offset locking was achieved with a simple scheme, by which we were able to transfer the laser frequency stability and accuracy from either a cesium-stabilized diode laser or a comb laser to other diode lasers that previously suffered from serious frequency jitter. The offset lock developed in this paper played an important role in atomic two-photon spectroscopy, with which record resolution and a new determination of the hyperfine constants of the cesium atom were achieved. A quantum-interference experiment was performed to show the improvement of light coherence when an extended design was implemented.

  13. Mining chemical information from open patents

    PubMed Central

    2011-01-01

    Linked Open Data presents an opportunity to vastly improve the quality of science in all fields by increasing the availability and usability of the data upon which it is based. In the chemical field, there is a huge amount of information available in the published literature, the vast majority of which is not available in machine-understandable formats. PatentEye, a prototype system for the extraction and semantification of chemical reactions from the patent literature, has been implemented and is discussed. A total of 4444 reactions were extracted from 667 patent documents comprising 10 weeks' worth of publications from the European Patent Office (EPO), with a precision of 78% and recall of 64% with regard to determining the identity and amount of reactants employed, and an accuracy of 92% with regard to product identification. NMR spectra reported as product characterisation data are additionally captured. PMID:21999425
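
    The reported precision and recall follow the usual definitions over extracted reactions (precision = correct extractions / all extractions; recall = correct extractions / all reactions actually present). The tiny sketch below shows the arithmetic with made-up counts chosen only to reproduce roughly 78% and 64%; they are not PatentEye's actual confusion matrix.

      def precision_recall(true_positives, false_positives, false_negatives):
          precision = true_positives / (true_positives + false_positives)
          recall = true_positives / (true_positives + false_negatives)
          return precision, recall

      # Illustrative counts only: 780 correct, 220 spurious, 439 missed
      # -> precision ~0.78, recall ~0.64
      print(precision_recall(780, 220, 439))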

  14. High accuracy GNSS based navigation in GEO

    NASA Astrophysics Data System (ADS)

    Capuano, Vincenzo; Shehaj, Endrit; Blunt, Paul; Botteron, Cyril; Farine, Pierre-André

    2017-07-01

    Although significant improvements in the efficiency and performance of communication satellites have been achieved in the past decades, it is expected that the demand for new platforms in Geostationary Orbit (GEO) and for On-Orbit Servicing (OOS) of existing ones will continue to rise. Indeed, the GEO orbit is used for many applications including direct broadcast as well as communications. At the same time, Global Navigation Satellite Systems (GNSS), originally designed for land, maritime and air applications, have been successfully used for navigation in Low Earth Orbit (LEO), and their further utilization for navigation of geosynchronous satellites becomes a viable alternative offering many advantages over present ground-based methods. Following our previous studies of GNSS signal characteristics in Medium Earth Orbit (MEO), GEO and beyond, in this research we specifically investigate the processing of different GNSS signals, with the goal of determining the best navigation performance they can provide in a GEO mission. Firstly, a detailed selection among different GNSS signals and different combinations of them is discussed, taking into consideration the L1 and L5 frequency bands, and the GPS and Galileo constellations. Then, the implementation of an orbital filter is summarized, which adaptively fuses the GNSS observations with an accurate orbital forces model. Finally, simulation tests of the navigation performance achievable by processing the selected combination of GNSS signals are carried out. The results obtained show an achievable positioning accuracy of less than one meter. In addition, hardware-in-the-loop tests are presented using a COTS receiver connected to our GNSS Spirent simulator, in order to collect real-time hardware-in-the-loop observations and process them with the proposed navigation module.

  15. Does a Sensory Processing Deficit Explain Counting Accuracy on Rapid Visual Sequencing Tasks in Adults with and without Dyslexia?

    ERIC Educational Resources Information Center

    Conlon, Elizabeth G.; Wright, Craig M.; Norris, Karla; Chekaluk, Eugene

    2011-01-01

    The experiments conducted aimed to investigate whether reduced accuracy when counting stimuli presented in rapid temporal sequence in adults with dyslexia could be explained by a sensory processing deficit, a general slowing in processing speed or difficulties shifting attention between stimuli. To achieve these aims, the influence of the…

  16. Theoretical research program to study chemical reactions in AOTV bow shock tubes

    NASA Technical Reports Server (NTRS)

    Taylor, Peter R.

    1993-01-01

    The main focus was the development, implementation, and calibration of methods for performing molecular electronic structure calculations to high accuracy. These various methods were then applied to a number of chemical reactions and species of interest to NASA, notably in the area of combustion chemistry. Among the development work undertaken was a collaborative effort to develop a program to efficiently predict molecular structures and vibrational frequencies using energy derivatives. Another major development effort involved the design of new atomic basis sets for use in chemical studies: these sets were considerably more accurate than those previously in use. Much effort was also devoted to calibrating methods for computing accurate molecular wave functions, including the first reliable calibrations for realistic molecules using full CI results. A wide variety of application calculations were undertaken. One area of interest was the spectroscopy and thermochemistry of small molecules, including establishing small molecule binding energies to an accuracy rivaling, or even on occasion surpassing, the experiment. Such binding energies are essential input to modeling chemical reaction processes, such as combustion. Studies of large molecules and processes important in both hydrogen and hydrocarbon combustion chemistry were also carried out. Finally, some effort was devoted to the structure and spectroscopy of small metal clusters, with applications to materials science problems.

  17. Discrimination Enhancement with Transient Feature Analysis of a Graphene Chemical Sensor.

    PubMed

    Nallon, Eric C; Schnee, Vincent P; Bright, Collin J; Polcha, Michael P; Li, Qiliang

    2016-01-19

    A graphene chemical sensor is subjected to a set of structurally and chemically similar hydrocarbon compounds consisting of toluene, o-xylene, p-xylene, and mesitylene. The fractional change in resistance of the sensor upon exposure to these compounds exhibits a similar response magnitude among compounds, whereas large variation is observed within repetitions for each compound, causing a response overlap. Therefore, traditional features depending on the maximum response change will cause confusion during further discrimination and classification analysis. More robust features that are less sensitive to concentration, sampling, and drift variability would provide higher quality information. In this work, we have explored the advantage of using transient-based exponential fitting coefficients to enhance the discrimination of similar compounds. The advantage of such feature analysis for discriminating each compound is evaluated using principal component analysis (PCA). In addition, machine learning-based classification algorithms were used to compare the prediction accuracies when using fitting coefficients as features. The additional features greatly enhanced the discrimination between compounds when performing PCA and also improved the prediction accuracy by 34% when using linear discriminant analysis.
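
    The abstract does not give the fitting model, but the idea of transient-based features can be sketched as follows: fit a simple exponential to each exposure transient and use the fitted coefficients, rather than the raw maximum response, as the feature vector passed to PCA. The single-exponential form, initial guesses, and names below are assumptions for illustration.

      import numpy as np
      from scipy.optimize import curve_fit
      from sklearn.decomposition import PCA

      def exp_response(t, a, tau, c):
          # Simple first-order model of the fractional resistance change.
          return a * (1.0 - np.exp(-t / tau)) + c

      def transient_features(t, dR_over_R):
          # Fit one exposure transient; return (amplitude, time constant, offset).
          p0 = (float(dR_over_R.max()), (t[-1] - t[0]) / 3.0, float(dR_over_R[0]))
          params, _ = curve_fit(exp_response, t, dR_over_R, p0=p0, maxfev=10000)
          return params

      # features = np.vstack([transient_features(t, r) for r in responses])
      # scores = PCA(n_components=2).fit_transform(features)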

  18. Accuracy improvement of quantitative analysis by spatial confinement in laser-induced breakdown spectroscopy.

    PubMed

    Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F

    2013-07-29

    To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After the optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85/Fe I 438.35 nm was increased from 0.946 (without the cavity) to 0.981 (with the cavity); and similar results for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis with low concentration elements in steel samples was improved, because the plasma became uniform with spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis of LIBS.

  19. Feature instructions improve face-matching accuracy

    PubMed Central

    Bindemann, Markus

    2018-01-01

    Identity comparisons of photographs of unfamiliar faces are prone to error but important for applied settings, such as person identification at passport control. Finding techniques to improve face-matching accuracy is therefore an important contemporary research topic. This study investigated whether matching accuracy can be improved by instruction to attend to specific facial features. Experiment 1 showed that instruction to attend to the eyebrows enhanced matching accuracy for optimized same-day same-race face pairs but not for other-race faces. By contrast, accuracy was unaffected by instruction to attend to the eyes, and declined with instruction to attend to ears. Experiment 2 replicated the eyebrow-instruction improvement with a different set of same-race faces, comprising both optimized same-day and more challenging different-day face pairs. These findings suggest that instruction to attend to specific features can enhance face-matching accuracy, but feature selection is crucial and generalization across face sets may be limited. PMID:29543822

  20. Relationship between resolution and accuracy of four intraoral scanners in complete-arch impressions

    PubMed Central

    Pascual-Moscardó, Agustín; Camps, Isabel

    2018-01-01

    Background The scanner does not measure the dental surface continuously. Instead, it generates a point cloud, and these points are then joined to form the scanned object. This approximation depends on the number of points generated (the resolution), which can lead to low accuracy (trueness and precision) when fewer points are obtained. The purpose of this study is to determine the resolution of four intraoral digital imaging systems and to examine the relationship between the accuracy and the resolution of intraoral scanners in impressions of a complete dental arch. Material and Methods A master cast of the complete maxillary arch was prepared with different dental preparations. Using four digital impression systems, the cast was scanned inside a black methacrylate box, obtaining a total of 40 digital impressions from each scanner. The resolution was obtained by dividing the number of points of each digital impression by the total surface area of the cast. Accuracy was evaluated using three-dimensional measurement software, using the "best alignment" method of the casts with a highly faithful reference model obtained from an industrial scanner. Pearson correlation was used for statistical analysis of the data. Results Of the intraoral scanners, Omnicam is the system with the best resolution, with 79.82 points per mm2, followed by True Definition with 54.68 points per mm2, Trios with 41.21 points per mm2, and iTero with 34.20 points per mm2. However, the study found no relationship between the resolution and accuracy of the studied digital impression systems (P > 0.05), except for Omnicam and its precision. Conclusions The resolution of the digital impression systems has no relationship with the accuracy they achieve in the impression of a complete dental arch. The study found that the Omnicam scanner is the system that obtains the best resolution, and that as its resolution increases, its precision increases. Key words: Trueness, precision, accuracy, resolution
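
    As defined in this study, resolution is simply the point count of a digital impression divided by the scanned surface area, and its relationship with accuracy was tested with a Pearson correlation. A trivial sketch of both computations is below; the array and function names are illustrative.

      import numpy as np

      def scanner_resolution(n_mesh_points, surface_area_mm2):
          # Points per square millimetre of scanned surface.
          return n_mesh_points / surface_area_mm2

      # Pearson correlation between per-impression resolution and an accuracy metric
      # (resolutions and trueness_errors are 1-D arrays of equal length):
      # r = np.corrcoef(resolutions, trueness_errors)[0, 1]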

  1. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  2. Delphi based consensus study into planning for chemical incidents.

    PubMed

    Crawford, I W F; Mackway-Jones, K; Russell, D R; Carley, S D

    2004-01-01

    To achieve consensus in all phases of chemical incident planning and response. A three round Delphi study was conducted using a panel of 39 experts from specialties involved in the management of chemical incidents. Areas that did not reach consensus in the Delphi study were presented as synopsis statements for discussion in four syndicate groups at a conference hosted by the Department of Health Emergency Planning Co-ordination Unit. A total of 183 of 322 statements had reached consensus upon completion of the Delphi study. This represented 56.8% of the total number of statements. Of these, 148 reached consensus at >94% and 35 reached consensus at >89%. The results of the process are presented as a series of synopsis consensus statements that cover all phases of chemical incident planning and response. The use of a Delphi study and subsequent syndicate group discussions achieved consensus in aspects of all phases of chemical incident planning and response that can be translated into practical guidance for use at regional prehospital and hospital level. Additionally, areas of non-consensus have been identified where further work is required.

  3. Delphi based consensus study into planning for chemical incidents

    PubMed Central

    Crawford, I; Mackway-Jones, K; Russell, D; Carley, S

    2004-01-01

    Objective: To achieve consensus in all phases of chemical incident planning and response. Design: A three round Delphi study was conducted using a panel of 39 experts from specialties involved in the management of chemical incidents. Areas that did not reach consensus in the Delphi study were presented as synopsis statements for discussion in four syndicate groups at a conference hosted by the Department of Health Emergency Planning Co-ordination Unit. Results: A total of 183 of 322 statements had reached consensus upon completion of the Delphi study. This represented 56.8% of the total number of statements. Of these, 148 reached consensus at >94% and 35 reached consensus at >89%. The results of the process are presented as a series of synopsis consensus statements that cover all phases of chemical incident planning and response. Conclusions: The use of a Delphi study and subsequent syndicate group discussions achieved consensus in aspects of all phases of chemical incident planning and response that can be translated into practical guidance for use at regional prehospital and hospital level. Additionally, areas of non-consensus have been identified where further work is required. PMID:14734369

  4. Exploring Chemical Space with the Alchemical Derivatives.

    PubMed

    Balawender, Robert; Welearegay, Meressa A; Lesiuk, Michał; De Proft, Frank; Geerlings, Paul

    2013-12-10

    In this paper, we verify the usefulness of the alchemical derivatives in the prediction of chemical properties. We concentrate on the stability of the transmutation products, where the term "transmutation" means the change of the nuclear charge at an atomic site at constant number of electrons. As illustrative transmutations showing the potential of the method in exploring chemical space, we present some examples of increasing complexity starting with the deprotonation, continuing with the transmutation of the nitrogen molecule, and ending with the substitution of isoelectronic B-N units for C-C units and N units for C-H units in carbocyclic systems. The basis set influence on the qualitative and quantitative accuracies of the alchemical predictions was investigated. The alchemical deprotonation energy (from the second order Taylor expansion) correlates well with the vertical deprotonation energy and can be used as a preliminary indicator for the experimental deprotonation energy. The results of calculations for the BN derivatives of benzene and pyrene show that this method has great potential for efficient and accurate scanning of chemical space.
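
    The "second order Taylor expansion" referred to for the alchemical estimates has the standard form of an expansion of the electronic energy in the nuclear charges at fixed electron number N; in our notation (not taken from the paper),

      E(\mathbf{Z} + \Delta\mathbf{Z}) \approx E(\mathbf{Z})
        + \sum_{A} \left(\frac{\partial E}{\partial Z_{A}}\right)_{N} \Delta Z_{A}
        + \frac{1}{2} \sum_{A,B} \left(\frac{\partial^{2} E}{\partial Z_{A}\,\partial Z_{B}}\right)_{N} \Delta Z_{A}\,\Delta Z_{B}

    where the alchemical derivatives are evaluated for the reference molecule and the ΔZ_A encode the transmutation (for example, ΔZ_H = -1 for a deprotonation carried out at fixed electron number).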

  5. The mathematical model accuracy estimation of the oil storage tank foundation soil moistening

    NASA Astrophysics Data System (ADS)

    Gildebrandt, M. I.; Ivanov, R. N.; Gruzin, AV; Antropova, L. B.; Kononov, S. A.

    2018-04-01

    Improving the technologies used to prepare oil storage tank foundations is a relevant objective; achieving it would reduce the material costs and the time spent on foundation preparation while providing the required operational reliability. Laboratory research revealed how a sandy soil layer is wetted by a given amount of water. The data obtained made it possible to develop a mathematical model of the moistening of a sandy soil layer. The accuracy estimation performed for this mathematical model of oil storage tank foundation soil moistening showed acceptable convergence between the experimental and theoretical results.

  6. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  7. Accuracy of genomic prediction in switchgrass ( Panicum virgatum L.) improved by accounting for linkage disequilibrium

    DOE PAGES

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...

    2016-02-11

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  8. Accuracy of genomic prediction in switchgrass ( Panicum virgatum L.) improved by accounting for linkage disequilibrium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
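
    The abstracts above do not spell out the prediction procedures themselves. As a hedged illustration of the simplest kind of genomic prediction baseline such studies compare against, the sketch below ridge-regresses family phenotypes on centered marker scores and predicts new families from their markers. This is a generic GS baseline under our own assumptions, not the authors' procedure, and it does not include the marker-correlation transformation they highlight.

      import numpy as np
      from sklearn.linear_model import Ridge

      def fit_genomic_ridge(markers, phenotypes, alpha=1.0):
          # markers: (n_families, n_markers) numeric marker scores
          # phenotypes: (n_families,) trait values (e.g., dry matter yield)
          marker_means = markers.mean(axis=0)
          model = Ridge(alpha=alpha).fit(markers - marker_means, phenotypes)
          return model, marker_means

      def predict_genomic_ridge(model, marker_means, new_markers):
          # Predict trait values for new families from their marker scores.
          return model.predict(new_markers - marker_means)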

  9. Chemical OSSEs in Global Modeling and Assimilation Office (GMAO)

    NASA Technical Reports Server (NTRS)

    Pawson, Steven

    2008-01-01

    This presentation will summarize ongoing 'chemical observing system simulation experiment (OSSE)' work in the Global Modeling and Assimilation Office (GMAO). Weather OSSEs are being studied in detail, with a 'nature run' based on the European Centre for Medium-Range Weather Forecasts (ECMWF) model that can be sampled by a synthesized suite of satellites that reproduces present-day observations. Chemical OSSEs are based largely on the carbon-cycle project and aim to study (1) how well we can reproduce the observed carbon distribution with the Atmospheric Infrared Sounder (AIRS) and Orbiting Carbon Observatory (OCO) sensors and (2) with what accuracy we can deduce surface sources and sinks of carbon species in an assimilation system.

  10. Pulsation in Chemically Peculiar Stars

    NASA Astrophysics Data System (ADS)

    Sachkov, M.

    2015-04-01

    Chemically peculiar stars offer the opportunity to study the interaction of strong magnetic fields, rotation, and pulsation. The rapidly oscillating chemically peculiar A stars (roAp) are a subgroup of the chemically peculiar magnetic A stars. They are high-overtone, low-degree p-mode pulsators. Until recently, the classical asteroseismic analysis, i.e., frequency analysis, of these stars was based on ground and space photometric observations. Significant progress was achieved through the access to the uninterrupted, ultra-high-precision data from the MOST, COROT, and Kepler satellites. Over the last ten years, the studies of roAp stars have been altered drastically from the observational point of view through the usage of time-resolved, high-resolution spectra. Their unusual pulsation characteristics, caused by the interplay between short vertical lengths of pulsation waves and strong stratification of chemical elements, allow us to examine the upper roAp atmosphere in more detail than is possible for any star except the Sun. In this paper a review of the results of recent studies of the pulsations of roAp stars is presented.

  11. Improved classification accuracy in 1- and 2-dimensional NMR metabolomics data using the variance stabilising generalised logarithm transformation

    PubMed Central

    Parsons, Helen M; Ludwig, Christian; Günther, Ulrich L; Viant, Mark R

    2007-01-01

    Background Classifying nuclear magnetic resonance (NMR) spectra is a crucial step in many metabolomics experiments. Since several multivariate classification techniques depend upon the variance of the data, it is important to first minimise any contribution from unwanted technical variance arising from sample preparation and analytical measurements, and thereby maximise any contribution from wanted biological variance between different classes. The generalised logarithm (glog) transform was developed to stabilise the variance in DNA microarray datasets, but has rarely been applied to metabolomics data. In particular, it has not been rigorously evaluated against other scaling techniques used in metabolomics, nor tested on all forms of NMR spectra including 1-dimensional (1D) 1H, projections of 2D 1H, 1H J-resolved (pJRES), and intact 2D J-resolved (JRES). Results Here, the effects of the glog transform are compared against two commonly used variance stabilising techniques, autoscaling and Pareto scaling, as well as unscaled data. The four methods are evaluated in terms of the effects on the variance of NMR metabolomics data and on the classification accuracy following multivariate analysis, the latter achieved using principal component analysis followed by linear discriminant analysis. For two of three datasets analysed, classification accuracies were highest following glog transformation: 100% accuracy for discriminating 1D NMR spectra of hypoxic and normoxic invertebrate muscle, and 100% accuracy for discriminating 2D JRES spectra of fish livers sampled from two rivers. For the third dataset, pJRES spectra of urine from two breeds of dog, the glog transform and autoscaling achieved equal highest accuracies. Additionally we extended the glog algorithm to effectively suppress noise, which proved critical for the analysis of 2D JRES spectra. Conclusion We have demonstrated that the glog and extended glog transforms stabilise the technical variance in NMR metabolomics
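
    The generalised logarithm transform discussed here is commonly written as glog(x) = log(x + sqrt(x² + λ)), where λ is a transformation parameter estimated from the data (often from technical replicates). The sketch below is only an illustration of that formula on fabricated spectra, assuming a pre-chosen λ; it is not the authors' extended, noise-suppressing variant.

```python
import numpy as np

def glog(x, lam):
    """Generalised logarithm: ~log(2x) for large x and approximately linear near zero,
    which stabilises the variance of intensities with mixed additive/multiplicative noise."""
    return np.log(x + np.sqrt(x ** 2 + lam))

# Toy spectra: 20 samples x 1000 intensity bins with multiplicative and additive noise.
rng = np.random.default_rng(1)
signal = rng.gamma(shape=2.0, scale=50.0, size=(20, 1000))
spectra = signal * rng.normal(1.0, 0.05, size=signal.shape) + rng.normal(0.0, 5.0, size=signal.shape)

lam = 25.0   # hypothetical transformation parameter; in practice optimised from replicate spectra
spectra_glog = glog(spectra, lam)
print(spectra_glog.shape, spectra_glog.var(axis=0).mean())
```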

  12. Achieving behavioral control with millisecond resolution in a high-level programming environment.

    PubMed

    Asaad, Wael F; Eskandar, Emad N

    2008-08-30

    The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.

  13. Achieving behavioral control with millisecond resolution in a high-level programming environment

    PubMed Central

    Asaad, Wael F.; Eskandar, Emad N.

    2008-01-01

    The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
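
    The central question here is whether a high-level environment on a non-real-time operating system can hold event timing to roughly 1 ms. A crude way to get a feel for the timing jitter of one's own environment (this is not the authors' software, and the delay-overshoot measure is only a proxy for the behavioural timing they discuss) is to repeatedly request a fixed delay and record the overshoot:

```python
import time
import statistics

def measure_timing_jitter(target_ms=10.0, trials=200):
    """Request a fixed delay repeatedly and report how far the actual delay
    deviates from the request -- a crude proxy for behavioural timing accuracy."""
    errors_ms = []
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(target_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - t0) * 1000.0
        errors_ms.append(elapsed_ms - target_ms)
    return statistics.mean(errors_ms), max(errors_ms)

mean_err, worst_err = measure_timing_jitter()
print(f"mean overshoot: {mean_err:.3f} ms, worst overshoot: {worst_err:.3f} ms")
```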

  14. Analysis of thermo-chemical nonequilibrium models for carbon dioxide flows

    NASA Technical Reports Server (NTRS)

    Rock, Stacey G.; Candler, Graham V.; Hornung, Hans G.

    1992-01-01

    The aerothermodynamics of thermochemical nonequilibrium carbon dioxide flows is studied. The chemical kinetics models of McKenzie and Park are implemented in separate three-dimensional computational fluid dynamics codes. The codes incorporate a five-species gas model characterized by a translational-rotational and a vibrational temperature. Solutions are obtained for flow over finite length elliptical and circular cylinders. The computed flowfields are then employed to calculate Mach-Zehnder interferograms for comparison with experimental data. The accuracy of the chemical kinetics models is determined through this comparison. Also, the methodology of the three-dimensional thermochemical nonequilibrium code is verified by the reproduction of the experiments.

  15. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  16. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually considered as a fluctuation in near-infrared spectral measurement. Chemometric methods were extensively studied to correct the effect of temperature variations. However, temperature can be considered as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of temperature distribution in calibration set. Multi-temperature calibration set selection (MTCS) method was proposed to improve the prediction accuracy by considering the temperature distribution of calibration samples. Furthermore, double-temperature calibration set selection (DTCS) method was proposed based on MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on random sampling method and proposed methods. The results from experimental studies showed that the prediction performance was improved by using proposed methods. Therefore, MTCS method and DTCS method will be the alternative methods to improve prediction accuracy in near-infrared spectral measurement.

  17. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance, pointing accuracy of laser quads, and the assemblage accuracy of the capsule is investigated by using the 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than the amount in the elliptical hohlraum. The results indicate that the spherical hohlraum is more insensitive to the random variations than the cylindrical hohlraum and the elliptical hohlraum. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and the assemblage accuracy of the capsule.

  18. Accuracy of automated classification of major depressive disorder as a function of symptom severity.

    PubMed

    Ramasubbu, Rajamannar; Brown, Matthew R G; Cortese, Filmeno; Gaxiola, Ismael; Goodyear, Bradley; Greenshaw, Andrew J; Dursun, Serdar M; Greiner, Russell

    2016-01-01

    Growing evidence documents the potential of machine learning for developing brain based diagnostic methods for major depressive disorder (MDD). As symptom severity may influence brain activity, we investigated whether the severity of MDD affected the accuracies of machine learned MDD-vs-Control diagnostic classifiers. Forty-five medication-free patients with DSM-IV defined MDD and 19 healthy controls participated in the study. Based on depression severity as determined by the Hamilton Rating Scale for Depression (HRSD), MDD patients were sorted into three groups: mild to moderate depression (HRSD 14-19), severe depression (HRSD 20-23), and very severe depression (HRSD ≥ 24). We collected functional magnetic resonance imaging (fMRI) data during both resting-state and an emotional-face matching task. Patients in each of the three severity groups were compared against controls in separate analyses, using either the resting-state or task-based fMRI data. We use each of these six datasets with linear support vector machine (SVM) binary classifiers for identifying individuals as patients or controls. The resting-state fMRI data showed statistically significant classification accuracy only for the very severe depression group (accuracy 66%, p = 0.012 corrected), while mild to moderate (accuracy 58%, p = 1.0 corrected) and severe depression (accuracy 52%, p = 1.0 corrected) were only at chance. With task-based fMRI data, the automated classifier performed at chance in all three severity groups. Binary linear SVM classifiers achieved significant classification of very severe depression with resting-state fMRI, but the contribution of brain measurements may have limited potential in differentiating patients with less severe depression from healthy controls.
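
    As a rough illustration of the pipeline described (binary linear SVM classification of patients versus controls, with accuracy tested against chance), the sketch below uses scikit-learn on synthetic feature vectors standing in for resting-state fMRI measures; the cross-validation scheme, permutation test, and all data are simplified, illustrative assumptions rather than the authors' analysis.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, permutation_test_score

# Toy stand-in for resting-state fMRI features: 45 patients and 19 controls, 300 features each.
rng = np.random.default_rng(2)
X = rng.normal(size=(64, 300))
y = np.array([1] * 45 + [0] * 19)          # 1 = MDD patient, 0 = healthy control

clf = LinearSVC(C=1.0, dual=False, max_iter=10000)
acc = cross_val_score(clf, X, y, cv=5).mean()
# Permutation test: is the observed accuracy better than chance under shuffled labels?
score, _, p_value = permutation_test_score(clf, X, y, cv=5, n_permutations=200, random_state=0)
print(f"cross-validated accuracy: {acc:.2f}, permutation p-value: {p_value:.3f}")
```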

  19. Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)

    NASA Astrophysics Data System (ADS)

    Thomas, Roshan K.; Sandhu, Ravi; Bertino, Elisa; Arpinar, Budak; Xu, Shouhuai

    We argue for the design of “Privacy-preserving Accountable Accuracy Management Systems (PAAMS)”. The designs of such systems recognize from the onset that accuracy, accountability, and privacy management are intertwined. As such, these systems have to dynamically manage the tradeoffs between these (often conflicting) objectives. For example, accuracy in such systems can be improved by providing better accountability links between structured and unstructured information. Further, accuracy may be enhanced if access to private information is allowed in controllable and accountable ways. Our proposed approach involves three key elements. First, a model to link unstructured information such as that found in email, image and document repositories with structured information such as that in traditional databases. Second, a model for accuracy management and entity disambiguation by proactively preventing, detecting and tracing errors in information bases. Third, a model to provide privacy-governed operation as accountability and accuracy are managed.

  20. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  1. Lab-on-a-Disc Platform for Automated Chemical Cell Lysis.

    PubMed

    Seo, Moo-Jung; Yoo, Jae-Chern

    2018-02-26

    Chemical cell lysis is an interesting topic in research on Lab-on-a-Disc (LOD) platforms on account of its perfect compatibility with the centrifugal spin column format. However, standard procedures followed in chemical cell lysis require sophisticated non-contact temperature control as well as the use of pressure resistant valves. These requirements pose a significant challenge, making the automation of chemical cell lysis on an LOD extremely difficult to achieve. In this study, an LOD capable of performing fully automated chemical cell lysis is proposed, where a combination of chemical and thermal methods has been used. It comprises a sample inlet, phase change material sheet (PCMS)-based temperature sensor, heating chamber, and pressure resistant valves. The PCMS melts and solidifies at a certain temperature and thus is capable of indicating whether the heating chamber has reached a specific temperature. Compared to conventional cell lysis systems, the proposed system offers advantages of reduced manual labor and a compact structure that can be readily integrated onto an LOD. Experiments using Salmonella typhimurium strains were conducted to confirm the performance of the proposed cell lysis system. The experimental results demonstrate that the proposed system has great potential in realizing chemical cell lysis on an LOD whilst achieving higher throughput in terms of purity and yield of DNA, thereby providing a good alternative to conventional cell lysis systems.

  2. Predicting chemical bioavailability using microarray gene expression data and regression modeling: A tale of three explosive compounds.

    PubMed

    Gong, Ping; Nan, Xiaofei; Barker, Natalie D; Boyd, Robert E; Chen, Yixin; Wilkins, Dawn E; Johnson, David R; Suedel, Burton C; Perkins, Edward J

    2016-03-08

    Chemical bioavailability is an important dose metric in environmental risk assessment. Although many approaches have been used to evaluate bioavailability, not a single approach is free from limitations. Previously, we developed a new genomics-based approach that integrated microarray technology and regression modeling for predicting bioavailability (tissue residue) of explosives compounds in exposed earthworms. In the present study, we further compared 18 different regression models and performed variable selection simultaneously with parameter estimation. This refined approach was applied to both previously collected and newly acquired earthworm microarray gene expression datasets for three explosive compounds. Our results demonstrate that a prediction accuracy of R(2) = 0.71-0.82 was achievable at a relatively low model complexity with as few as 3-10 predictor genes per model. These results are much more encouraging than our previous ones. This study has demonstrated that our approach is promising for bioavailability measurement, which warrants further studies of mixed contamination scenarios in field settings.
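
    The abstract describes regression models that perform variable selection simultaneously with parameter estimation, ending up with as few as 3-10 predictor genes per model. One standard embodiment of that idea is an L1-penalised (lasso) regression; the sketch below is a generic illustration on synthetic expression data, not a reconstruction of the 18 models compared in the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Toy earthworm microarray data: 80 exposed animals x 2000 genes.
rng = np.random.default_rng(3)
expression = rng.normal(size=(80, 2000))
informative = [10, 50, 300]                               # genes actually linked to tissue residue (toy)
tissue_residue = expression[:, informative] @ [1.5, -2.0, 0.8] + rng.normal(scale=0.5, size=80)

# The L1 penalty performs variable selection and coefficient estimation in one step.
model = LassoCV(cv=5).fit(expression, tissue_residue)
selected_genes = np.flatnonzero(model.coef_)
print(f"{selected_genes.size} predictor genes selected, R^2 = {model.score(expression, tissue_residue):.2f}")
```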

  3. Simulation approach for the evaluation of tracking accuracy in radiotherapy: a preliminary study.

    PubMed

    Tanaka, Rie; Ichikawa, Katsuhiro; Mori, Shinichiro; Sanada, Sigeru

    2013-01-01

    Real-time tumor tracking in external radiotherapy can be achieved by diagnostic (kV) X-ray imaging with a dynamic flat-panel detector (FPD). It is important to keep the patient dose as low as possible while maintaining tracking accuracy. A simulation approach would be helpful to optimize the imaging conditions. This study was performed to develop a computer simulation platform based on a noise property of the imaging system for the evaluation of tracking accuracy at any noise level. Flat-field images were obtained using a direct-type dynamic FPD, and noise power spectrum (NPS) analysis was performed. The relationship between incident quantum number and pixel value was addressed, and a conversion function was created. The pixel values were converted into a map of quantum number using the conversion function, and the map was then input into the random number generator to simulate image noise. Simulation images were provided at different noise levels by changing the incident quantum numbers. Subsequently, an implanted marker was tracked automatically and the maximum tracking errors were calculated at different noise levels. The results indicated that the maximum tracking error increased with decreasing incident quantum number in flat-field images with an implanted marker. In addition, the range of errors increased with decreasing incident quantum number. The present method could be used to determine the relationship between image noise and tracking accuracy. The results indicated that the simulation approach would aid in determining exposure dose conditions according to the necessary tracking accuracy.
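
    The simulation platform converts pixel values to incident quantum numbers and then draws random counts to synthesise images at different noise levels. A heavily simplified, hypothetical version of that step, assuming a linear detector response and Poisson counting statistics (the paper's actual conversion function is derived from NPS measurements), might look like the following.

```python
import numpy as np

def simulate_noisy_image(flat_image, gain, quanta_scale, seed=4):
    """Convert pixel values to expected quantum counts, draw Poisson-distributed counts,
    and map back to pixel values to emulate a chosen exposure (noise) level."""
    expected_quanta = flat_image / gain * quanta_scale      # assumed linear pixel-to-quanta conversion
    noisy_quanta = np.random.default_rng(seed).poisson(expected_quanta)
    return noisy_quanta * gain / quanta_scale

flat = np.full((256, 256), 1000.0)           # toy flat-field image with constant signal
for scale in (1.0, 0.25, 0.05):              # smaller scale = fewer quanta = noisier image
    img = simulate_noisy_image(flat, gain=2.0, quanta_scale=scale)
    print(f"quanta_scale={scale}: relative noise = {img.std() / img.mean():.4f}")
```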

  4. Chemical tailoring of teicoplanin with site-selective reactions.

    PubMed

    Pathak, Tejas P; Miller, Scott J

    2013-06-05

    Semisynthesis of natural product derivatives combines the power of fermentation with orthogonal chemical reactions. Yet, chemical modification of complex structures represents an unmet challenge, as poor selectivity often undermines efficiency. The complex antibiotic teicoplanin eradicates bacterial infections. However, as resistance emerges, the demand for improved analogues grows. We have discovered chemical reactions that achieve site-selective alteration of teicoplanin. Utilizing peptide-based additives that alter reaction selectivities, certain bromo-teicoplanins are accessible. These new compounds are also scaffolds for selective cross-coupling reactions, enabling further molecular diversification. These studies enable two-step access to glycopeptide analogues not available through either biosynthesis or rapid total chemical synthesis alone. The new compounds exhibit a spectrum of activities, revealing that selective chemical alteration of teicoplanin may lead to analogues with attenuated or enhanced antibacterial properties, in particular against vancomycin- and teicoplanin-resistant strains.

  5. Extended-Interval Gentamicin Dosing in Achieving Therapeutic Concentrations in Malaysian Neonates

    PubMed Central

    Tan, Sin Li; Wan, Angeline SL

    2015-01-01

    OBJECTIVE: To evaluate the usefulness of extended-interval gentamicin dosing practiced in neonatal intensive care unit (NICU) and special care nursery (SCN) of a Malaysian hospital. METHODS: Cross-sectional observational study with pharmacokinetic analysis of all patients aged ≤28 days who received gentamicin treatment in NICU/SCN. Subjects received dosing according to a regimen modified from an Australian-based pediatric guideline. During a study period of 3 months, subjects were evaluated for gestational age, body weight, serum creatinine concentration, gentamicin dose/interval, serum peak and trough concentrations, and pharmacokinetic parameters. Descriptive percentages were used to determine the overall dosing accuracy, while analysis of variance (ANOVA) was conducted to compare the accuracy rates among different gestational ages. Pharmacokinetic profiles among different gestational age and body weight groups were compared by using ANOVA. RESULTS: Of the 113 subjects included, 82.3% (n = 93) achieved therapeutic concentrations at the first drug-monitoring assessment. There was no significant difference found between the percentage of term neonates who achieved therapeutic concentrations and the premature group (87.1% vs. 74.4%), p = 0.085. A total of 112 subjects (99.1%) achieved desired therapeutic trough concentration of <2 mg/L. Mean gentamicin peak concentration was 8.52 mg/L (95% confidence interval [CI], 8.13–8.90 mg/L) and trough concentration was 0.54 mg/L (95% CI, 0.48–0.60 mg/L). Mean volume of distribution, half-life, and elimination rate were 0.65 L/kg (95% CI, 0.62–0.68 L/kg), 6.96 hours (95% CI, 6.52–7.40 hours), and 0.11 hour−1 (95% CI, 0.10–0.11 hour−1), respectively. CONCLUSION: The larger percentage of subjects attaining therapeutic range with extended-interval gentamicin dosing suggests that this regimen is appropriate and can be safely used among Malaysian neonates. PMID:25964729

  6. Graphene-based quantum Hall resistance standards grown by chemical vapor deposition on silicon carbide

    NASA Astrophysics Data System (ADS)

    Ribeiro-Palau, Rebeca; Lafont, Fabien; Kazazis, Dimitris; Michon, Adrien; Couturaud, Olivier; Consejo, Christophe; Jouault, Benoit; Poirier, Wilfrid; Schopfer, Felicien

    2015-03-01

    Replacing GaAs-based quantum Hall resistance standards (GaAs-QHRS) with a more convenient standard based on graphene (Gr-QHRS) is an ongoing goal in metrology. The new Gr-QHRS are expected to work in less demanding experimental conditions than GaAs ones. This would open the way to a broad dissemination of quantum standards, potentially towards industrial end-users, and it would support the implementation of a new International System of Units based on fixed fundamental constants. Here, we present accurate quantum Hall resistance measurements in large graphene Hall bars, grown by the hybrid scalable technique of propane/hydrogen chemical vapor deposition (CVD) on silicon carbide (SiC). This new Gr-QHRS shows a relative accuracy of 1 × 10⁻⁹ of the Hall resistance under the lowest magnetic field ever achieved in graphene. These experimental conditions surpass those of the most widely used GaAs-QHRS. These results confirm the promise of graphene for resistance metrology applications and emphasize the quality of the graphene produced by CVD on SiC for applications as demanding as resistance metrology.

  7. Automated and continual determination of radio telescope reference points with sub-mm accuracy: results from a campaign at the Onsala Space Observatory

    NASA Astrophysics Data System (ADS)

    Lösler, Michael; Haas, Rüdiger; Eschelbach, Cornelia

    2013-08-01

    The Global Geodetic Observing System (GGOS) requires sub-mm accuracy, automated and continual determinations of the so-called local tie vectors at co-location stations. Co-location stations host instrumentation for several space geodetic techniques and the local tie surveys involve the relative geometry of the reference points of these instruments. Thus, these reference points need to be determined in a common coordinate system, which is a particular challenge for rotating equipment like radio telescopes for geodetic Very Long Baseline Interferometry. In this work we describe a concept to achieve automated and continual determinations of radio telescope reference points with sub-mm accuracy. We developed a monitoring system, including Java-based sensor communication for automated surveys, network adjustment and further data analysis. This monitoring system was tested during a monitoring campaign performed at the Onsala Space Observatory in the summer of 2012. The results obtained in this campaign show that it is possible to perform automated determination of a radio telescope reference point during normal operations of the telescope. Accuracies on the sub-mm level can be achieved, and continual determinations can be realized by repeated determinations and recursive estimation methods.

  8. CERENA: ChEmical REaction Network Analyzer--A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics.

    PubMed

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed, however, tools to automatically generate the descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduced CERENA, a toolbox for the analysis of stochastic chemical kinetics using Approximations of the Chemical Master Equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/.

  9. CERENA: ChEmical REaction Network Analyzer—A Toolbox for the Simulation and Analysis of Stochastic Chemical Kinetics

    PubMed Central

    Kazeroonian, Atefeh; Fröhlich, Fabian; Raue, Andreas; Theis, Fabian J.; Hasenauer, Jan

    2016-01-01

    Gene expression, signal transduction and many other cellular processes are subject to stochastic fluctuations. The analysis of these stochastic chemical kinetics is important for understanding cell-to-cell variability and its functional implications, but it is also challenging. A multitude of exact and approximate descriptions of stochastic chemical kinetics have been developed, however, tools to automatically generate the descriptions and compare their accuracy and computational efficiency are missing. In this manuscript we introduced CERENA, a toolbox for the analysis of stochastic chemical kinetics using Approximations of the Chemical Master Equation solution statistics. CERENA implements stochastic simulation algorithms and the finite state projection for microscopic descriptions of processes, the system size expansion and moment equations for meso- and macroscopic descriptions, as well as the novel conditional moment equations for a hybrid description. This unique collection of descriptions in a single toolbox facilitates the selection of appropriate modeling approaches. Unlike other software packages, the implementation of CERENA is completely general and allows, e.g., for time-dependent propensities and non-mass action kinetics. By providing SBML import, symbolic model generation and simulation using MEX-files, CERENA is user-friendly and computationally efficient. The availability of forward and adjoint sensitivity analyses allows for further studies such as parameter estimation and uncertainty analysis. The MATLAB code implementing CERENA is freely available from http://cerenadevelopers.github.io/CERENA/. PMID:26807911
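
    CERENA itself is a MATLAB toolbox, so the snippet below is not its API; it is only a minimal Python sketch of one of the microscopic descriptions it implements, the stochastic simulation algorithm (Gillespie SSA), applied to a toy birth-death gene expression model.

```python
import numpy as np

def gillespie_birth_death(k_prod=10.0, k_deg=0.1, x0=0, t_end=100.0, seed=5):
    """Exact SSA trajectory for a birth-death model: 0 -> X (rate k_prod), X -> 0 (rate k_deg * X)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        propensities = np.array([k_prod, k_deg * x])
        total = propensities.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)                  # waiting time to the next reaction
        reaction = rng.choice(2, p=propensities / total)   # which reaction fires
        x += 1 if reaction == 0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print(f"final copy number: {states[-1]} (steady-state mean is k_prod/k_deg = 100)")
```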

  10. Spatial accuracy of a simplified disaggregation method for traffic emissions applied in seven mid-sized Chilean cities

    NASA Astrophysics Data System (ADS)

    Ossés de Eicker, Margarita; Zah, Rainer; Triviño, Rubén; Hurni, Hans

    The spatial accuracy of top-down traffic emission inventory maps obtained with a simplified disaggregation method based on street density was assessed in seven mid-sized Chilean cities. Each top-down emission inventory map was compared against a reference, namely a more accurate bottom-up emission inventory map from the same study area. The comparison was carried out using a combination of numerical indicators and visual interpretation. Statistically significant differences were found between the seven cities with regard to the spatial accuracy of their top-down emission inventory maps. In compact cities with a simple street network and a single center, a good accuracy of the spatial distribution of emissions was achieved, with correlation values > 0.8 with respect to the bottom-up emission inventory of reference. In contrast, the simplified disaggregation method is not suitable for complex cities consisting of interconnected nuclei, resulting in correlation values < 0.5. Although top-down disaggregation of traffic emissions generally exhibits low accuracy, the accuracy is significantly higher in compact cities and might be further improved by applying a correction factor for the city center. Therefore, the method can be used by local environmental authorities in cities with limited resources and little knowledge of the pollution situation to obtain an overview of the spatial distribution of the emissions generated by traffic activities.
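
    The simplified top-down method weights each grid cell by its street density and then compares the resulting map against a bottom-up reference using correlation. A minimal, generic sketch of that comparison, with fabricated grids standing in for real inventories, is shown below; it illustrates the disaggregation and the correlation indicator only, not the study's full set of numerical indicators.

```python
import numpy as np

# Toy grids standing in for a city: street density per cell and a bottom-up reference inventory.
rng = np.random.default_rng(6)
street_density = rng.gamma(shape=2.0, scale=1.0, size=(50, 50))
bottom_up = street_density * rng.gamma(shape=4.0, scale=0.25, size=(50, 50))   # imperfectly related reference

# Top-down disaggregation: spread the city-wide emission total proportionally to street density.
total_emissions = bottom_up.sum()
top_down = total_emissions * street_density / street_density.sum()

r = np.corrcoef(top_down.ravel(), bottom_up.ravel())[0, 1]
print(f"correlation between top-down and bottom-up maps: {r:.2f}")
```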

  11. Predicting the chemical stability of monatomic chains

    NASA Astrophysics Data System (ADS)

    Lin, Zheng-Zhe; Chen, Xi

    2013-02-01

    A simple model for evaluating the thermal atomic transfer rates in nanosystems (Lin Z.-Z. et al., EPL, 94 (2011) 40002) was developed to predict the chemical reaction rates of nanosystems with small gas molecules. The accuracy of the model was verified by MD simulations for molecular adsorption and desorption on a monatomic chain. By this prediction, a monatomic carbon chain should survive for 1.2 × 10² years in an ambient of 1 atm O2 at room temperature, and it is highly resistant to N2, H2O, NO2, CO and CO2, while a monatomic gold chain quickly ruptures in vacuum. It is worth noting that since the model can be easily applied via common ab initio calculations, it could be widely used in the prediction of the chemical stability of nanosystems.
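
    Rate models of this kind are typically of the Arrhenius form, k = ν·exp(−Eb/kBT), with the survival time taken as 1/k. The worked example below only illustrates that arithmetic with hypothetical attempt-frequency and barrier values; it does not reproduce the ab initio numbers behind the 1.2 × 10² year estimate quoted above.

```python
import math

K_B = 8.617e-5    # Boltzmann constant in eV/K

def lifetime_years(barrier_eV, attempt_freq_hz=1e13, temperature_K=300.0):
    """Arrhenius-type estimate: rate = nu * exp(-Eb / (kB * T)); lifetime = 1 / rate."""
    rate_per_s = attempt_freq_hz * math.exp(-barrier_eV / (K_B * temperature_K))
    return 1.0 / rate_per_s / (3600.0 * 24.0 * 365.0)

for barrier in (1.0, 1.5, 2.0):   # hypothetical effective barriers in eV
    print(f"Eb = {barrier:.1f} eV  ->  lifetime ~ {lifetime_years(barrier):.2e} years")
```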

  12. Natural selection in chemical evolution.

    PubMed

    Fernando, Chrisantha; Rowe, Jonathan

    2007-07-07

    We propose that chemical evolution can take place by natural selection if a geophysical process is capable of heterotrophic formation of liposomes that grow at some base rate, divide by external agitation, and are subject to stochastic chemical avalanches, in the absence of nucleotides or any monomers capable of modular heredity. We model this process using a simple hill-climbing algorithm, and an artificial chemistry that is unique in exhibiting conservation of mass and energy in an open thermodynamic system. Selection at the liposome level results in the stabilization of rarely occurring molecular autocatalysts that either catalyse or are consumed in reactions that confer liposome level fitness; typically they contribute in parallel to an increasingly conserved intermediary metabolism. Loss of competing autocatalysts can sometimes be adaptive. Steady-state energy flux by the individual increases due to the energetic demands of growth, but also of memory, i.e. maintaining variations in the chemical network. Self-organizing principles such as those proposed by Kauffman, Fontana, and Morowitz have been hypothesized as an ordering principle in chemical evolution, rather than chemical evolution by natural selection. We reject those notions as either logically flawed or at best insufficient in the absence of natural selection. Finally, a finite population model without elitism shows the practical evolutionary constraints for achieving chemical evolution by natural selection in the lab.

  13. Accuracy assessment of Precise Point Positioning with multi-constellation GNSS data under ionospheric scintillation effects

    NASA Astrophysics Data System (ADS)

    Marques, Haroldo Antonio; Marques, Heloísa Alves Silva; Aquino, Marcio; Veettil, Sreeja Vadakke; Monico, João Francisco Galera

    2018-02-01

    GPS and GLONASS are currently the Global Navigation Satellite Systems (GNSS) with full operational capacity. The integration of GPS, GLONASS and future GNSS constellations can provide better accuracy and more reliability in geodetic positioning, in particular for kinematic Precise Point Positioning (PPP), where the satellite geometry is considered a limiting factor to achieve centimeter accuracy. The satellite geometry can change suddenly in kinematic positioning in urban areas or under conditions of strong atmospheric effects such as for instance ionospheric scintillation that may degrade satellite signal quality, causing cycle slips and even loss of lock. Scintillation is caused by small scale irregularities in the ionosphere and is characterized by rapid changes in amplitude and phase of the signal, which are more severe in equatorial and high latitudes geomagnetic regions. In this work, geodetic positioning through the PPP method was evaluated with integrated GPS and GLONASS data collected in the equatorial region under varied scintillation conditions. The GNSS data were processed in kinematic PPP mode and the analyses show accuracy improvements of up to 60% under conditions of strong scintillation when using multi-constellation data instead of GPS data alone. The concepts and analyses related to the ionospheric scintillation effects, the mathematical model involved in PPP with GPS and GLONASS data integration as well as accuracy assessment with data collected under ionospheric scintillation effects are presented.

  14. Magnetorheological finishing of chemical-vapor deposited zinc sulfide via chemically and mechanically modified fluids.

    PubMed

    Salzman, Sivan; Romanofsky, Henry J; Giannechini, Lucca J; Jacobs, Stephen D; Lambropoulos, John C

    2016-02-20

    We describe the anisotropy in the material removal rate (MRR) of the polycrystalline, chemical-vapor deposited zinc sulfide (ZnS). We define the polycrystalline anisotropy via microhardness and chemical erosion tests for four crystallographic orientations of ZnS: (100), (110), (111), and (311). Anisotropy in the MRR was studied under magnetorheological finishing (MRF) conditions. Three chemically and mechanically modified magnetorheological (MR) fluids at pH values of 4, 5, and 6 were used to test the MRR variations among the four single-crystal planes. When polishing the single-crystal planes and the polycrystalline with pH 5 and pH 6 MR fluids, variations were found in the MRR among the four single-crystal planes and surface artifacts were observed on the polycrystalline material. When polishing the single-crystal planes and the polycrystalline with the modified MR fluid at pH 4, however, minimal variation was observed in the MRR among the four orientations and a reduction in surface artifacts was achieved on the polycrystalline material.

  15. Chemical preparation of graphene-based nanomaterials and their applications in chemical and biological sensors.

    PubMed

    Jiang, Hongji

    2011-09-05

    Graphene is a flat monolayer of carbon atoms packed tightly into a 2D honeycomb lattice that shows many intriguing properties meeting the key requirements for the implementation of high-performance sensors, and all kinds of proof-of-concept sensors have been devised. To realize the potential sensor applications, the key is to synthesize graphene in a controlled way to achieve enhanced solution-processing capabilities, and at the same time to maintain or even improve the intrinsic properties of graphene. Several production techniques for graphene-based nanomaterials have been developed, ranging from the mechanical cleavage and chemical exfoliation of high-quality graphene to direct growth onto different substrates and the chemical routes using graphite oxide as a precursor to the newly developed bottom-up approach at the molecular level. The current review critically explores the recent progress on the chemical preparation of graphene-based nanomaterials and their applications in sensors. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Voice Identification: Levels-of-Processing and the Relationship between Prior Description Accuracy and Recognition Accuracy.

    ERIC Educational Resources Information Center

    Walter, Todd J.

    A study examined whether a person's ability to accurately identify a voice is influenced by factors similar to those proposed by the Supreme Court for eyewitness identification accuracy. In particular, the Supreme Court has suggested that a person's prior description accuracy of a suspect, degree of attention to a suspect, and confidence in…

  17. Evaluating the accuracy of orthophotos and 3D models from UAV photogrammetry

    NASA Astrophysics Data System (ADS)

    Julge, Kalev; Ellmann, Artu

    2015-04-01

    Rapid development of unmanned aerial vehicles (UAV) in recent years has made their use for various applications more feasible. This contribution evaluates the accuracy and quality of different UAV remote sensing products (i.e. orthorectified image, point cloud and 3D model). Two different autonomous fixed wing UAV systems were used to collect the aerial photographs. One is a mass-produced commercial UAV system, the other is a similar state-of-the-art UAV system. Three different study areas with varying sizes and characteristics (including urban areas, forests, fields, etc.) were surveyed. The UAV point clouds, 3D models and orthophotos were generated with three different commercial and free-ware software. The performance of each of these was evaluated. The effect of flying height on the accuracy of the results was explored, as well as the optimum number and placement of ground control points. Also the achieved results, when the only georeferencing data originates from the UAV system's on-board GNSS and inertial measurement unit, are investigated. Problems regarding the alignment of certain types of aerial photos (e.g. captured over forested areas) are discussed. The quality and accuracy of UAV photogrammetry products are evaluated by comparing them with control measurements made with GNSS-measurements on the ground, as well as high-resolution airborne laser scanning data and other available orthophotos (e.g. those acquired for large scale national mapping). Vertical comparisons are made on surfaces that have remained unchanged in all campaigns, e.g. paved roads. Planar comparisons are performed by control surveys of objects that are clearly identifiable on orthophotos. The statistics of these differences are used to evaluate the accuracy of UAV remote sensing. Some recommendations are given on how to conduct UAV mapping campaigns cost-effectively and with minimal time-consumption while still ensuring the quality and accuracy of the UAV data products. Also the

  18. The Effects of Direct Written Corrective Feedback on Improvement of Grammatical Accuracy of High-Proficient L2 Learners

    ERIC Educational Resources Information Center

    Farrokhi, Farahman; Sattarpour, Simin

    2012-01-01

    The present article reports the findings of a study that explored (1) whether direct written corrective feedback (CF) can help high-proficient L2 learners, who have already achieved a rather high level of accuracy in English, improve in the accurate use of two functions of English articles (the use of "a" for first mention and…

  19. A new ultra-high-accuracy angle generator: current status and future direction

    NASA Astrophysics Data System (ADS)

    Guertin, Christian F.; Geckeler, Ralf D.

    2017-09-01

    The lack of an extremely high-accuracy angular positioning device available in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that upon scaling to a full prototype and including additional calibration techniques we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.

  20. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

    The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical system used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization a single camera approach has been designed. Single camera techniques for 6DOF measurements show a special sensitivity against weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered as well as different scenarios indicating the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
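
    Monte-Carlo assessment of measurement deviations, as used here, amounts to repeatedly perturbing the observations with their assumed noise, re-running the estimation, and summarising the spread of the results. The generic sketch below applies this to a trivial 2D target-centroid estimate; it is only an illustration of the principle, not the authors' 6DOF single-camera simulation program, and the locator geometry and noise level are hypothetical.

```python
import numpy as np

def monte_carlo_centroid_error(target_points, pixel_noise_sigma, n_runs=5000, seed=7):
    """Perturb image coordinates with Gaussian noise, re-estimate the centroid each run,
    and return the empirical standard deviation of the estimate (a measurement deviation)."""
    rng = np.random.default_rng(seed)
    estimates = np.empty((n_runs, target_points.shape[1]))
    for i in range(n_runs):
        noisy = target_points + rng.normal(0.0, pixel_noise_sigma, size=target_points.shape)
        estimates[i] = noisy.mean(axis=0)
    return estimates.std(axis=0)

locator_targets = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])   # toy locator geometry
print("centroid standard deviation (x, y):", monte_carlo_centroid_error(locator_targets, pixel_noise_sigma=0.05))
```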

  1. Millimeter-Wave Chemical Sensor Using Substrate-Integrated-Waveguide Cavity

    PubMed Central

    Memon, Muhammad Usman; Lim, Sungjoon

    2016-01-01

    This research proposes a substrate-integrated waveguide (SIW) cavity sensor to detect several chemicals in the millimeter-wave frequency range. The frequency response of the presented SIW sensor shifts when a very small quantity of a chemical is injected into the fluidic channel, which changes the effective permittivity. The fluidic channel in this structure is either empty or filled with a chemical; when it is empty, the structure resonates at 17.08 GHz. Each injected chemical produces a different resonant frequency. After injection, most of the chemical is held in the center of the SIW structure, where the magnitude of the electric field distribution is greatest. Thus, the objective of sensing chemicals in this research is achieved by perturbing the electric fields of the SIW structure. PMID:27809240

  2. Accuracy of Novel Computed Tomography-Guided Frameless Stereotactic Drilling and Catheter System in Human Cadavers.

    PubMed

    Sankey, Eric W; Butler, Eric; Sampson, John H

    2017-10-01

    To evaluate accuracy of a computed tomography (CT)-guided frameless stereotactic drilling and catheter system. A prospective, single-arm study was performed using human cadaver heads to evaluate placement accuracy of a novel, flexible intracranial catheter and stabilizing bone anchor system and drill kit. There were 20 catheter placements included in the analysis. The primary endpoint was accuracy of catheter tip location on intraoperative CT. Secondary endpoints included target registration error and entry and target point error before and after drilling. Measurements are reported as mean ± SD (median, range). Target registration error was 0.46 mm ± 0.26 (0.50 mm, -1.00 to 1.00 mm). Two (10%) target point trajectories were negatively impacted by drilling. Intracranial catheter depth was 59.8 mm ± 9.4 (60.5 mm, 38.0-80.0 mm). Drilling angle was 22° ± 9 (21°, 7°-45°). Deviation between planned and actual entry point on CT was 1.04 mm ± 0.38 (1.00 mm, 0.40-2.00 mm). Deviation between planned and actual target point on CT was 1.60 mm ± 0.98 (1.40 mm, 0.40-4.00 mm). No correlation was observed between intracranial catheter depth and target point deviation (accuracy) (Pearson coefficient 0.018) or between technician experience and accuracy (Pearson coefficient 0.020). There was no significant difference in accuracy with trajectories performed for different cadaver heads (P = 0.362). Highly accurate catheter placement is achievable using this novel flexible catheter and bone anchor system placed via frameless stereotaxy, with an average deviation between planned and actual target point of 1.60 mm ± 0.98 (1.40 mm, 0.40-4.00 mm). Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Optical depth localization of nitrogen-vacancy centers in diamond with nanometer accuracy.

    PubMed

    Häußler, Andreas J; Heller, Pascal; McGuinness, Liam P; Naydenov, Boris; Jelezko, Fedor

    2014-12-01

    Precise positioning of nitrogen-vacancy (NV) centers is crucial for their application in sensing and quantum information. Here we present a new purely optical technique enabling determination of the NV position with nanometer resolution. We use a confocal microscope to determine the position of individual emitters along the optical axis. Using two separate detection channels, it is possible to simultaneously measure reflected light from the diamond surface and fluorescent light from the NV center and statistically evaluate both signals. An accuracy of 2.6 nm for shallow NV centers was achieved and is consistent with other techniques for depth determination.

  4. Potential accuracy of translation estimation between radar and optical images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2015-10-01

    This paper investigates the potential accuracy achievable for optical-to-radar image registration by an area-based approach. The analysis is carried out mainly based on the Cramér-Rao Lower Bound (CRLB) on translation estimation accuracy previously proposed by the authors and called CRLBfBm. This bound is now modified to take into account radar image speckle noise properties: spatial correlation and signal-dependency. The newly derived theoretical bound is fed with noise and texture parameters estimated for the co-registered pair of optical Landsat 8 and radar SIR-C images. It is found that the difficulty of optical-to-radar image registration stems more from speckle noise than from the dissimilarity of the considered kinds of images. At finer scales (and higher speckle noise levels), the probability of finding control fragments (CF) suitable for registration is low (1% or less), but the overall number of such fragments is high thanks to the image size. Conversely, at the coarse scale, where the speckle noise level is reduced, the probability of finding CFs suitable for registration can be as high as 40%, but the overall number of such CFs is lower. Thus, the study confirms and supports the area-based multiresolution approach for optical-to-radar registration, where coarse scales are used for a fast registration "lock" and finer scales for reaching higher registration accuracy. The CRLBfBm is found inaccurate for the main scale due to the strong speckle noise influence. For other scales, the validity of the CRLBfBm bound is confirmed by calculating the statistical efficiency of an area-based registration method based on the normalized correlation coefficient (NCC) measure, which reaches values of about 25%.

  5. Can Nanofluidic Chemical Release Enable Fast, High Resolution Neurotransmitter-Based Neurostimulation?

    PubMed

    Jones, Peter D; Stelzle, Martin

    2016-01-01

    Artificial chemical stimulation could provide improvements over electrical neurostimulation. Physiological neurotransmission between neurons relies on the nanoscale release and propagation of specific chemical signals to spatially-localized receptors. Current knowledge of nanoscale fluid dynamics and nanofluidic technology allows us to envision artificial mechanisms to achieve fast, high resolution neurotransmitter release. Substantial technological development is required to reach this goal. Nanofluidic technology-rather than microfluidic-will be necessary; this should come as no surprise given the nanofluidic nature of neurotransmission. This perspective reviews the state of the art of high resolution electrical neuroprostheses and their anticipated limitations. Chemical release rates from nanopores are compared to rates achieved at synapses and with iontophoresis. A review of microfluidic technology justifies the analysis that microfluidic control of chemical release would be insufficient. Novel nanofluidic mechanisms are discussed, and we propose that hydrophobic gating may allow control of chemical release suitable for mimicking neurotransmission. The limited understanding of hydrophobic gating in artificial nanopores and the challenges of fabrication and large-scale integration of nanofluidic components are emphasized. Development of suitable nanofluidic technology will require dedicated, long-term efforts over many years.

  6. A Rat α-Fetoprotein Binding Activity Prediction Model to Facilitate Assessment of the Endocrine Disruption Potential of Environmental Chemicals.

    PubMed

    Hong, Huixiao; Shen, Jie; Ng, Hui Wen; Sakkiah, Sugunadevi; Ye, Hao; Ge, Weigong; Gong, Ping; Xiao, Wenming; Tong, Weida

    2016-03-25

    Endocrine disruptors such as polychlorinated biphenyls (PCBs), diethylstilbestrol (DES) and dichlorodiphenyltrichloroethane (DDT) are agents that interfere with the endocrine system and cause adverse health effects. Huge public health concern about endocrine disruptors has arisen. One of the mechanisms of endocrine disruption is through binding of endocrine disruptors with the hormone receptors in the target cells. Entrance of endocrine disruptors into target cells is a precondition of endocrine disruption. The binding capability of a chemical with proteins in the blood affects its entrance into the target cells and, thus, is very informative for the assessment of potential endocrine disruption of chemicals. α-fetoprotein is one of the major serum proteins that binds to a variety of chemicals such as estrogens. To better facilitate assessment of endocrine disruption of environmental chemicals, we developed a model for α-fetoprotein binding activity prediction using the novel pattern recognition method (Decision Forest) and the molecular descriptors calculated from two-dimensional structures by Mold² software. The predictive capability of the model has been evaluated through internal validation using 125 training chemicals (average balanced accuracy of 69%) and external validation using 22 chemicals (balanced accuracy of 71%). Prediction confidence analysis revealed that the model performed much better at high prediction confidence. Our results indicate that the model is useful (when predictions are in high confidence) in endocrine disruption risk assessment of environmental chemicals, though improvement by increasing the number of training chemicals is needed.
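
    Balanced accuracy, the metric quoted for this model, is the mean of sensitivity and specificity, which makes it robust when binders and non-binders are unevenly represented in the evaluation set. A small sketch of that computation (not the Decision Forest model itself, and with fabricated labels for the 22 external chemicals) follows.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on binders) and specificity (recall on non-binders)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sensitivity = np.mean(y_pred[y_true == 1] == 1)
    specificity = np.mean(y_pred[y_true == 0] == 0)
    return 0.5 * (sensitivity + specificity)

# Toy external validation set of 22 chemicals: 1 = binds alpha-fetoprotein, 0 = does not.
y_true = [1] * 8 + [0] * 14
y_pred = [1, 1, 1, 1, 1, 0, 0, 1] + [0] * 11 + [1, 1, 1]
print(f"balanced accuracy = {balanced_accuracy(y_true, y_pred):.2f}")
```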

  7. Optimizing cyanobacteria growth conditions in a sealed environment to enable chemical inhibition tests with volatile chemicals.

    PubMed

    Johnson, Tylor J; Zahler, Jacob D; Baldwin, Emily L; Zhou, Ruanbao; Gibbons, William R

    2016-07-01

    Cyanobacteria are currently being engineered to photosynthetically produce next-generation biofuels and high-value chemicals. Many of these chemicals are highly toxic to cyanobacteria, thus strains with increased tolerance need to be developed. The volatility of these chemicals may necessitate that experiments be conducted in a sealed environment to maintain chemical concentrations. Therefore, carbon sources such as NaHCO3 must be used for supporting cyanobacterial growth instead of CO2 sparging. The primary goal of this study was to determine the optimal initial concentration of NaHCO3 for use in growth trials, as well as whether daily supplementation of NaHCO3 would allow for increased growth. The secondary goal was to determine the most accurate method to assess growth of Anabaena sp. PCC 7120 in a sealed environment with low biomass titers and small sample volumes. An initial concentration of 0.5 g/L NaHCO3 was found to be optimal for cyanobacteria growth, and fed-batch additions of NaHCO3 marginally improved growth. A separate study determined that a sealed test tube environment is necessary to maintain stable titers of volatile chemicals in solution. This study also showed that a SYTO® 9 fluorescence-based assay for cell viability was superior for monitoring filamentous cyanobacterial growth compared to absorbance, chlorophyll a (chl a) content, and biomass content due to its accuracy, small sampling size (100 μL), and high throughput capabilities. Therefore, in future chemical inhibition trials, it is recommended that 0.5 g/L NaHCO3 is used as the carbon source, and that culture viability is monitored via the SYTO® 9 fluorescence-based assay that requires a minimal sample size. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. An Effective Model to Increase Student Attitude and Achievement: Narrative Including Analogies

    ERIC Educational Resources Information Center

    Akkuzu, Nalan; Akcay, Husamettin

    2011-01-01

    This study describes the analogical models and narratives used to introduce and teach Grade 9 chemical covalent compounds which are relatively abstract and difficult for students. We explained each model's development during the lessons and analyzed understanding students derived from these learning materials. In this context, achievement,…

  9. Does achievement motivation mediate the semantic achievement priming effect?

    PubMed

    Engeser, Stefan; Baumann, Nicola

    2014-10-01

    The aim of our research was to understand the processes of the prime-to-behavior effects with semantic achievement primes. We extended existing models with a perspective from achievement motivation theory and additionally used achievement primes embedded in the running text of excerpts of school textbooks to simulate a more natural priming condition. Specifically, we proposed that achievement primes affect implicit achievement motivation and conducted pilot experiments and 3 main experiments to explore this proposition. We found no reliable positive effect of achievement primes on implicit achievement motivation. In light of these findings, we tested whether explicit (instead of implicit) achievement motivation is affected by achievement primes and found this to be the case. In the final experiment, we found support for the assumption that higher explicit achievement motivation implies that achievement priming affects the outcome expectations. The implications of the results are discussed, and we conclude that primes affect achievement behavior by heightening explicit achievement motivation and outcome expectancies.

  10. Accuracies of genomically estimated breeding values from pure-breed and across-breed predictions in Australian beef cattle.

    PubMed

    Boerner, Vinzent; Johnston, David J; Tier, Bruce

    2014-10-24

    The major obstacles for the implementation of genomic selection in Australian beef cattle are the variety of breeds and in general, small numbers of genotyped and phenotyped individuals per breed. The Australian Beef Cooperative Research Center (Beef CRC) investigated these issues by deriving genomic prediction equations (PE) from a training set of animals that covers a range of breeds and crosses including Angus, Murray Grey, Shorthorn, Hereford, Brahman, Belmont Red, Santa Gertrudis and Tropical Composite. This paper presents accuracies of genomically estimated breeding values (GEBV) that were calculated from these PE in the commercial pure-breed beef cattle seed stock sector. PE derived by the Beef CRC from multi-breed and pure-breed training populations were applied to genotyped Angus, Limousin and Brahman sires and young animals, but with no pure-breed Limousin in the training population. The accuracy of the resulting GEBV was assessed by their genetic correlation to their phenotypic target trait in a bi-variate REML approach that models GEBV as trait observations. Accuracies of most GEBV for Angus and Brahman were between 0.1 and 0.4, with accuracies for abattoir carcass traits generally greater than for live animal body composition traits and reproduction traits. Estimated accuracies greater than 0.5 were only observed for Brahman abattoir carcass traits and for Angus carcass rib fat. Averaged across traits within breeds, accuracies of GEBV were highest when PE from the pooled across-breed training population were used. However, for the Angus and Brahman breeds the difference in accuracy from using pure-breed PE was small. For the Limousin breed no reasonable results could be achieved for any trait. Although accuracies were generally low compared to published accuracies estimated within breeds, they are in line with those derived in other multi-breed populations. Thus PE developed by the Beef CRC can contribute to the implementation of genomic selection in

  11. Noncompliance in people living with HIV: accuracy of defining characteristics of the nursing diagnosis.

    PubMed

    Silva, Richardson Augusto Rosendo da; Costa, Mayara Mirna do Nascimento; Souza, Vinicius Lino de; Silva, Bárbara Coeli Oliveira da; Costa, Cristiane da Silva; Andrade, Itaísa Fernandes Cardoso de

    2017-10-30

    To evaluate the accuracy of the defining characteristics of the NANDA International nursing diagnosis, noncompliance, in people with HIV. A study of diagnostic accuracy, performed in two stages. In the first stage, 113 people with HIV from a hospital of infectious diseases in the Northeast of Brazil were assessed to identify clinical indicators of noncompliance. In the second stage, the defining characteristics were evaluated by six specialist nurses, analyzing the presence or absence of the diagnosis. For the accuracy of the clinical indicators, the specificity, sensitivity, predictive values and likelihood ratios were measured. The noncompliance diagnosis was present in 69% (n=78) of people with HIV. The most sensitive indicator was missing of appointments (OR: 28.93, 95% CI: 1.112-2.126, p = 0.002). On the other hand, nonadherence behavior (OR: 15.00, 95% CI: 1.829-3.981, p = 0.001) and failure to meet outcomes (OR: 13.41; 95% CI: 1.272-2.508; p = 0.003) achieved higher specificity. The most accurate defining characteristics were nonadherence behavior, missing of appointments, and failure to meet outcomes. Thus, in the presence of these, the nurse can identify the studied diagnosis with greater confidence.
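
    The accuracy measures reported for each defining characteristic can be reproduced from a 2x2 table. The sketch below uses hypothetical counts (not the study's data) and the standard definitions of sensitivity, specificity, predictive values and likelihood ratios.

        # Sketch only: standard diagnostic accuracy measures from a 2x2 table.
        def diagnostic_measures(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            ppv  = tp / (tp + fp)          # positive predictive value
            npv  = tn / (tn + fn)          # negative predictive value
            lr_pos = sens / (1 - spec)     # positive likelihood ratio
            lr_neg = (1 - sens) / spec     # negative likelihood ratio
            return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                        lr_positive=lr_pos, lr_negative=lr_neg)

        # Hypothetical counts for one defining characteristic:
        print(diagnostic_measures(tp=60, fp=10, fn=18, tn=25))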

  12. 3D flexible alignment using 2D maximum common substructure: dependence of prediction accuracy on target-reference chemical similarity.

    PubMed

    Kawabata, Takeshi; Nakamura, Haruki

    2014-07-28

    A protein-bound conformation of a target molecule can be predicted by aligning the target molecule on a reference molecule obtained from the 3D structure of a compound-protein complex. This strategy is called "similarity-based docking". For this purpose, we developed the flexible alignment program fkcombu, which aligns the target molecule based on atomic correspondences with the reference molecule. The correspondences are obtained from the maximum common substructure (MCS) of the 2D chemical structures, using our program kcombu. The prediction performance was evaluated using many target-reference pairs of superimposed ligand 3D structures on the same protein in the PDB, with different ranges of chemical similarity. The details of the atomic correspondence largely affected prediction success. We found that the topologically constrained disconnected MCS (TD-MCS) with simple element-based atomic classification provides the best prediction. Including a potential-energy term that penalizes crashes (steric clashes) with the receptor protein improved the performance. We also found that the RMSD between the predicted and correct target conformations correlates significantly with the chemical similarity between the target and reference molecules. Generally speaking, if the reference and target compounds have more than 70% chemical similarity, the average RMSD of the 3D conformations is <2.0 Å. We compared the performance with a rigid-body molecular alignment program based on volume-overlap scores (ShaEP). Our MCS-based flexible alignment program performed better than the rigid-body alignment program, especially when the target and reference molecules were sufficiently similar.
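
    For readers unfamiliar with the evaluation metric, the sketch below computes the RMSD between a predicted and a reference ligand conformation given a one-to-one atom correspondence (such as an MCS match). Coordinates are assumed to already be in the protein frame, and all numbers are invented for illustration.

        # Sketch only: RMSD over matched atoms of two conformations (Å).
        import numpy as np

        def rmsd(coords_pred, coords_ref):
            coords_pred = np.asarray(coords_pred, dtype=float)
            coords_ref = np.asarray(coords_ref, dtype=float)
            diff = coords_pred - coords_ref              # per-atom displacement
            return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

        pred = [[0.0, 0.0, 0.0], [1.5, 0.1, 0.0], [2.9, 0.0, 0.3]]
        ref  = [[0.1, 0.0, 0.0], [1.4, 0.0, 0.1], [3.0, 0.2, 0.0]]
        print(round(rmsd(pred, ref), 2))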

  13. Consider the source: Children link the accuracy of text-based sources to the accuracy of the author.

    PubMed

    Vanderbilt, Kimberly E; Ochoa, Karlena D; Heilbrun, Jayd

    2018-05-06

    The present research investigated whether young children link the accuracy of text-based information to the accuracy of its author. Across three experiments, three- and four-year-olds (N = 231) received information about object labels from accurate and inaccurate sources who provided information both in text and verbally. Of primary interest was whether young children would selectively rely on information provided by more accurate sources, regardless of the form in which the information was communicated. Experiment 1 tested children's trust in text-based information (e.g., books) written by an author with a history of either accurate or inaccurate verbal testimony and found that children showed greater trust in books written by accurate authors. Experiment 2 replicated the findings of Experiment 1 and extended them by showing that children's selective trust in more accurate text-based sources was not dependent on experience trusting or distrusting the author's verbal testimony. Experiment 3 investigated this understanding in reverse by testing children's trust in verbal testimony communicated by an individual who had authored either accurate or inaccurate text-based information. Experiment 3 revealed that children showed greater trust in individuals who had authored accurate rather than inaccurate books. Experiment 3 also demonstrated that children used the accuracy of text-based sources to make inferences about the mental states of the authors. Taken together, these results suggest children do indeed link the reliability of text-based sources to the reliability of the author. Statement of Contribution: Existing knowledge: Children use sources' prior accuracy to predict future accuracy in face-to-face verbal interactions. Children who are just learning to read show increased trust in text-based (vs. verbal) information. It is unknown whether children consider authors' prior accuracy when judging the accuracy of text-based information. New knowledge added by this

  14. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as the absolute mean plus 2 sigma. Error comes both from outside the magnetometers (e.g., spacecraft fields and misalignments) and from inside (e.g., zero offset and scale factor errors). Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
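
    A minimal sketch of the stated accuracy definition (absolute mean error plus k standard deviations, with k = 3 in quiet conditions and k = 2 during storms), applied to made-up Monte Carlo error samples; it is not the mission's analysis code.

        # Sketch only: accuracy metric |mean error| + k*sigma from error samples.
        import numpy as np

        def accuracy_metric(errors_nT, k):
            errors_nT = np.asarray(errors_nT, dtype=float)
            return abs(errors_nT.mean()) + k * errors_nT.std(ddof=1)

        rng = np.random.default_rng(0)
        simulated_errors = rng.normal(loc=0.2, scale=0.4, size=10000)  # nT, invented
        print(accuracy_metric(simulated_errors, k=3))  # compare against the 1.7 nT requirement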

  15. Development of quantitative screen for 1550 chemicals with GC-MS.

    PubMed

    Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A

    2018-05-01

    With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses, with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract: A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
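
    The sketch below illustrates the kind of multiple linear regression described above, fitting a synthetic log response to molecular weight, logP, polar surface area and fractional ion abundance. The data and coefficients are invented for illustration and are not the published model.

        # Sketch only: ordinary least-squares fit of response vs. four descriptors.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 196
        X = np.column_stack([
            rng.uniform(100, 500, n),    # molecular weight
            rng.uniform(-1, 8, n),       # logP
            rng.uniform(0, 150, n),      # polar surface area
            rng.uniform(0.05, 1.0, n),   # fractional ion abundance
        ])
        true_beta = np.array([0.002, 0.15, -0.004, 1.2])          # invented
        log_response = X @ true_beta + 1.0 + rng.normal(0, 0.2, n)

        A = np.column_stack([np.ones(n), X])                      # add intercept
        beta, *_ = np.linalg.lstsq(A, log_response, rcond=None)
        pred = A @ beta
        ss_res = ((log_response - pred) ** 2).sum()
        ss_tot = ((log_response - log_response.mean()) ** 2).sum()
        print("R^2 =", round(1 - ss_res / ss_tot, 2))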

  16. Cadastral Database Positional Accuracy Improvement

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Ramli, S. N. M.; Omar, K. M.; Din, N.

    2017-10-01

    Positional Accuracy Improvement (PAI) is the process of refining the geometry of features in a geospatial dataset to improve their actual positions. The actual position relates both to the absolute position in a specific coordinate system and to the relation to neighbouring features. With the growth of spatial technologies, especially Geographical Information Systems (GIS) and Global Navigation Satellite Systems (GNSS), PAI campaigns are inevitable, especially for legacy cadastral databases. Integrating a legacy dataset with a higher-accuracy dataset such as GNSS observations is a potential solution for improving the legacy dataset. However, merely merging both datasets will distort the relative geometry; the improved dataset must be further treated to minimize inherent errors and to fit the new, more accurate dataset. The main focus of this study is to describe an angular-based Least Squares Adjustment (LSA) method for the PAI of legacy datasets. The existing high-accuracy dataset, known as the National Digital Cadastral Database (NDCDB), is used as a benchmark to validate the results. It was found that the proposed technique is well suited to positional accuracy improvement of legacy spatial datasets.
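
    As a simplified illustration of least-squares adjustment for PAI (not the authors' angular-based LSA), the sketch below fits a 2D conformal (Helmert) transformation that maps legacy coordinates onto higher-accuracy control coordinates. All coordinates are invented.

        # Sketch only: 2D Helmert fit  x' = a*x - b*y + tx,  y' = b*x + a*y + ty.
        import numpy as np

        def fit_helmert_2d(src, dst):
            src = np.asarray(src, float)
            dst = np.asarray(dst, float)
            rows, rhs = [], []
            for (x, y), (X, Y) in zip(src, dst):
                rows.append([x, -y, 1.0, 0.0]); rhs.append(X)
                rows.append([y,  x, 0.0, 1.0]); rhs.append(Y)
            params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
            return params  # a, b, tx, ty

        legacy  = [[100.0, 200.0], [150.0, 260.0], [220.0, 180.0]]   # legacy points
        control = [[101.2, 201.5], [151.1, 261.4], [221.3, 181.6]]   # e.g. GNSS/NDCDB
        print(fit_helmert_2d(legacy, control))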

  17. A study on low-cost, high-accuracy, and real-time stereo vision algorithms for UAV power line inspection

    NASA Astrophysics Data System (ADS)

    Wang, Hongyu; Zhang, Baomin; Zhao, Xun; Li, Cong; Lu, Cunyue

    2018-04-01

    Conventional stereo vision algorithms suffer from high levels of hardware resource utilization due to algorithm complexity, or poor levels of accuracy caused by inadequacies in the matching algorithm. To address these issues, we have proposed a stereo range-finding technique that produces an excellent balance between cost, matching accuracy and real-time performance for power line inspection using UAVs. This was achieved through the introduction of a special image preprocessing algorithm and a weighted local stereo matching algorithm, as well as the design of a corresponding hardware architecture. Stereo vision systems based on this technique have a lower level of resource usage and also a higher level of matching accuracy following hardware acceleration. To validate the effectiveness of our technique, a stereo vision system based on our improved algorithms was implemented using the Spartan 6 FPGA. In comparative experiments, it was shown that the system using the improved algorithms outperformed the system based on the unimproved algorithms, in terms of resource utilization and matching accuracy. In particular, Block RAM usage was reduced by 19%, and the improved system was also able to output range-finding data in real time.

  18. Validity of Teacher-Based Vision Screening and Factors Associated with the Accuracy of Vision Screening in Vietnamese Children.

    PubMed

    Paudel, Prakash; Kovai, Vilas; Naduvilath, Thomas; Phuong, Ha Thanh; Ho, Suit May; Giap, Nguyen Viet

    2016-01-01

    To assess validity of teacher-based vision screening and elicit factors associated with accuracy of vision screening in Vietnam. After brief training, teachers independently measured visual acuity (VA) in 555 children aged 12-15 years in Ba Ria - Vung Tau Province. Teacher VA measurements were compared to those of refractionists. Sensitivity, specificity, positive predictive value and negative predictive value were calculated for uncorrected VA (UVA) and presenting VA (PVA) 20/40 or worse in either eye. Chi-square, Fisher's exact test and multivariate logistic regression were used to assess factors associated with accuracy of vision screening. Level of significance was set at 5%. Trained teachers in Vietnam demonstrated 86.7% sensitivity, 95.7% specificity, 86.7% positive predictive value and 95.7% negative predictive value in identifying children with visual impairment using the UVA measurement. PVA measurement revealed low accuracy for teachers, which was significantly associated with child's age, sex, spectacle wear and myopic status, but UVA measurement showed no such associations. Better accuracy was achieved in measurement of VA and identification of children with visual impairment using UVA measurement compared to PVA. UVA measurement is recommended for teacher-based vision screening programs.

  19. Protein structure refinement using a quantum mechanics-based chemical shielding predictor.

    PubMed Central

    2017-01-01

    The accurate prediction of protein chemical shifts using a quantum mechanics (QM)-based method has been the subject of intense research for more than 20 years, but so far empirical methods for chemical shift prediction have proven more accurate. In this paper we show that a QM-based predictor of protein backbone and CB chemical shifts (ProCS15, PeerJ, 2016, 3, e1344) is of comparable accuracy to empirical chemical shift predictors after chemical shift-based structural refinement that removes small structural errors. We present a method by which quantum chemistry based predictions of isotropic chemical shielding values (ProCS15) can be used to refine protein structures using Markov Chain Monte Carlo (MCMC) simulations, relating the chemical shielding values to the experimental chemical shifts probabilistically. Two kinds of MCMC structural refinement simulations were performed using force-field geometry-optimized X-ray structures as starting points: simulated annealing of the starting structure, and constant-temperature MCMC simulation followed by simulated annealing of a representative ensemble structure. Annealing of the CHARMM structure changes the CA-RMSD by an average of 0.4 Å but lowers the chemical shift RMSD by 1.0 and 0.7 ppm for CA and N. Conformational averaging has a relatively small effect (0.1–0.2 ppm) on the overall agreement with carbon chemical shifts but lowers the error for nitrogen chemical shifts by 0.4 ppm. If an amino acid specific offset is included, the ProCS15 predicted chemical shifts have RMSD values relative to experiment that are comparable to popular empirical chemical shift predictors. The annealed representative ensemble structures differ in CA-RMSD relative to the initial structures by an average of 2.0 Å, with >2.0 Å difference for six proteins. In four of the cases, the largest structural differences arise in structurally flexible regions of the protein as determined by NMR, and in the remaining two cases, the large
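
    A toy sketch of the probabilistic refinement idea only (not ProCS15 or the authors' simulation code): a structure is scored by the misfit between predicted and experimental chemical shifts, and perturbed states are accepted with a Metropolis criterion. The "structure" and "predictor" below are one-dimensional stand-ins.

        # Sketch only: Metropolis Monte Carlo driven by a chemical-shift misfit.
        import numpy as np

        rng = np.random.default_rng(2)

        def misfit(predicted, experimental, sigma=1.0):
            d = np.asarray(predicted) - np.asarray(experimental)
            return float((d ** 2).sum() / (2 * sigma ** 2))   # ~ negative log-likelihood

        def metropolis_step(state, energy_old, predict, experimental,
                            step=0.05, temperature=1.0):
            trial = state + rng.normal(0.0, step, size=state.shape)
            energy_new = misfit(predict(trial), experimental)
            if rng.random() < np.exp(-(energy_new - energy_old) / temperature):
                return trial, energy_new      # accept
            return state, energy_old          # reject

        # Hypothetical 1-D stand-ins for a structure and its shift predictor:
        experimental = np.array([118.2, 120.5, 122.8])
        predict = lambda s: 118.0 + 2.0 * s
        state = np.array([0.0, 1.0, 2.0])
        energy = misfit(predict(state), experimental)
        for _ in range(1000):
            state, energy = metropolis_step(state, energy, predict, experimental)
        print(state, energy)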

  20. Accuracy vs. Fluency: Which Comes First in ESL Instruction?

    ERIC Educational Resources Information Center

    Ebsworth, Miriam Eisenstein

    1998-01-01

    Discusses the debate over fluency versus accuracy in teaching English-as-a-Second-Language (ESL). Defines fluency and accuracy; examines alternative approaches (meaning first, accuracy first, and accuracy and fluency from the beginning); evaluates the alternatives; and highlights implications for teaching ESL. A sidebar presents an accuracy and…

  1. Weight Multispectral Reconstruction Strategy for Enhanced Reconstruction Accuracy and Stability With Cerenkov Luminescence Tomography.

    PubMed

    Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian

    2017-06-01

    Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT remain unsatisfactory. In this paper, a modified weighted multispectral CLT (wmCLT) reconstruction strategy was developed that splits the Cerenkov radiation spectrum into several sub-spectral bands and weights the sub-spectral results to obtain the final result. To evaluate the wmCLT reconstruction strategy in terms of accuracy, stability and practicability, several numerical simulation and in vivo experiments were conducted, and the results were compared with the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise. The comparison of results from different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical studies.
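
    The weighting idea, reduced to its simplest form (the actual wmCLT inverse problem is far more involved): sub-spectral reconstructions are combined with normalized weights, for example proportional to the measured energy in each band. The arrays below are invented.

        # Sketch only: weighted combination of per-band reconstructions.
        import numpy as np

        def weighted_combination(band_reconstructions, band_weights):
            recon = np.asarray(band_reconstructions, float)   # shape: (bands, voxels)
            w = np.asarray(band_weights, float)
            w = w / w.sum()                                   # normalize weights
            return (w[:, None] * recon).sum(axis=0)

        bands = [[0.0, 0.8, 0.1], [0.1, 0.6, 0.2], [0.0, 0.5, 0.1]]   # invented source maps
        weights = [0.5, 0.3, 0.2]                                     # e.g. band energies
        print(weighted_combination(bands, weights))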

  2. Accuracy Of LTPP Traffic Loading Estimates

    DOT National Transportation Integrated Search

    1998-07-01

    The accuracy and reliability of traffic load estimates are key to determining a pavement's life expectancy. To better understand the variability of traffic loading rates and its effect on the accuracy of the Long Term Pavement Performance (LTPP) prog...

  3. A corpus for plant-chemical relationships in the biomedical domain.

    PubMed

    Choi, Wonjun; Kim, Baeksoo; Cho, Hyejin; Lee, Doheon; Lee, Hyunju

    2016-09-20

    Plants are natural products that humans consume in various ways, including as food and medicine. They have a long empirical history of treating diseases with relatively few side effects. Based on these strengths, many studies have been performed to verify the effectiveness of plants in treating diseases. It is crucial to understand the chemicals contained in plants because these chemicals can regulate the activities of proteins that are key factors in causing diseases. With the accumulation of a large volume of biomedical literature in databases such as PubMed, it is possible to automatically extract relationships between plants and chemicals at large scale by applying text mining approaches. A cornerstone of achieving this task is a corpus of relationships between plants and chemicals. In this study, we first constructed a corpus for plant and chemical entities and for the relationships between them. The corpus contains 267 plant entities, 475 chemical entities, and 1,007 plant-chemical relationships (550 and 457 positive and negative relationships, respectively), which are drawn from 377 sentences in 245 PubMed abstracts. Inter-annotator agreement scores for the corpus among three annotators were measured. The simple percent agreement scores for entities and trigger words for the relationships were 99.6 and 94.8%, respectively, and the overall kappa score for the classification of positive and negative relationships was 79.8%. We also developed a rule-based model to automatically extract such plant-chemical relationships. When we evaluated the rule-based model using the corpus and randomly selected biomedical articles, overall F-scores of 68.0 and 61.8% were achieved, respectively. We expect that the corpus for plant-chemical relationships will be a useful resource for enhancing plant research. The corpus is available at http://combio.gist.ac.kr/plantchemicalcorpus.
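
    The agreement statistics quoted above can be illustrated with a short sketch (hypothetical labels, and pairwise rather than three-way agreement): simple percent agreement and Cohen's kappa for two annotators classifying relationships as positive or negative.

        # Sketch only: percent agreement and Cohen's kappa for two annotators.
        from collections import Counter

        def percent_agreement(a, b):
            return sum(x == y for x, y in zip(a, b)) / len(a)

        def cohens_kappa(a, b):
            po = percent_agreement(a, b)                      # observed agreement
            ca, cb, n = Counter(a), Counter(b), len(a)
            pe = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)  # chance agreement
            return (po - pe) / (1 - pe)

        ann1 = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
        ann2 = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos"]
        print(percent_agreement(ann1, ann2), round(cohens_kappa(ann1, ann2), 2))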

  4. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Identifying the sources of uncertainty affecting relevant reaction pathways is usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected mostly by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates involving the vinyl radical, C2H3, can drastically reduce the uncertainty in predicting the targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. This study therefore demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
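
    As a hedged toy illustration of Morris-style screening (not the study's implementation, which also used RS-HDMR), the sketch below estimates the mean absolute elementary effect of each parameter of a placeholder model via one-at-a-time perturbations.

        # Sketch only: mean absolute elementary effects (Morris mu*) for a toy model.
        import numpy as np

        rng = np.random.default_rng(4)

        def toy_model(x):
            # Placeholder stand-in for an expensive flame simulation.
            return 3.0 * x[0] + x[1] ** 2 + 0.1 * x[2] + 2.0 * x[0] * x[1]

        def morris_mu_star(model, n_params, n_samples=50, delta=0.1):
            effects = np.zeros((n_samples, n_params))
            for t in range(n_samples):
                x = rng.uniform(0.0, 1.0 - delta, size=n_params)
                y0 = model(x)
                for i in range(n_params):
                    x_pert = x.copy()
                    x_pert[i] += delta
                    effects[t, i] = (model(x_pert) - y0) / delta   # elementary effect
            return np.abs(effects).mean(axis=0)   # large values flag influential parameters

        print(morris_mu_star(toy_model, n_params=3))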

  5. Diagnostic accuracy of FEV1/forced vital capacity ratio z scores in asthmatic patients.

    PubMed

    Lambert, Allison; Drummond, M Bradley; Wei, Christine; Irvin, Charles; Kaminsky, David; McCormack, Meredith; Wise, Robert

    2015-09-01

    The FEV1/forced vital capacity (FVC) ratio is used as a criterion for airflow obstruction; however, the test characteristics of spirometry in the diagnosis of asthma are not well established. The accuracy of a test depends on the pretest probability of disease. We wanted to estimate the FEV1/FVC ratio z score threshold with optimal accuracy for the diagnosis of asthma at different pretest probabilities. Asthmatic patients enrolled in 4 trials from the Asthma Clinical Research Centers were included in this analysis. Measured and predicted FEV1/FVC ratios were obtained, with calculation of z scores for each participant. Across a range of asthma prevalences and z score thresholds, the overall diagnostic accuracy was calculated. One thousand six hundred eight participants were included (mean age, 39 years; 71% female; 61% white). The mean FEV1 percent predicted value was 83% (SD, 15%). In a symptomatic population with a 50% pretest probability of asthma, optimal accuracy (68%) is achieved with a z score threshold of -1.0 (16th percentile), corresponding to a 6 percentage point reduction from the predicted ratio. However, in a screening population with a 5% pretest probability of asthma, the optimal z score is -2.0 (second percentile), corresponding to a 12 percentage point reduction from the predicted ratio. These findings were not altered by markers of disease control. Reduction of the FEV1/FVC ratio can support the diagnosis of asthma; however, the ratio is neither sensitive nor specific enough for diagnostic accuracy on its own. When interpreting spirometric results, the pretest probability is an important consideration in the diagnosis of asthma based on airflow limitation. Copyright © 2015 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
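
    The dependence of the optimal threshold on pretest probability follows directly from the definition of overall accuracy as a prevalence-weighted sum of sensitivity and specificity. The operating points below are invented to illustrate the effect and are not taken from the study.

        # Sketch only: overall accuracy = prevalence*sens + (1-prevalence)*spec.
        def overall_accuracy(sensitivity, specificity, pretest_probability):
            return (pretest_probability * sensitivity
                    + (1 - pretest_probability) * specificity)

        # Hypothetical operating points for two FEV1/FVC z-score thresholds:
        thresholds = {-1.0: (0.80, 0.60), -2.0: (0.40, 0.95)}   # z: (sens, spec)

        for prevalence in (0.50, 0.05):
            best = max(thresholds,
                       key=lambda z: overall_accuracy(*thresholds[z], prevalence))
            print(prevalence, best,
                  round(overall_accuracy(*thresholds[best], prevalence), 2))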

  6. Hyperspectral and differential CARS microscopy for quantitative chemical imaging in human adipocytes

    PubMed Central

    Di Napoli, Claudia; Pope, Iestyn; Masia, Francesco; Watson, Peter; Langbein, Wolfgang; Borri, Paola

    2014-01-01

    In this work, we demonstrate the applicability of coherent anti-Stokes Raman scattering (CARS) micro-spectroscopy for quantitative chemical imaging of saturated and unsaturated lipids in human stem-cell derived adipocytes. We compare dual-frequency/differential CARS (D-CARS), which enables rapid imaging and simple data analysis, with broadband hyperspectral CARS microscopy analyzed using an unsupervised phase-retrieval and factorization method recently developed by us for quantitative chemical image analysis. Measurements were taken in the vibrational fingerprint region (1200–2000 cm(-1)) and in the CH stretch region (2600–3300 cm(-1)) using a home-built CARS set-up which enables hyperspectral imaging with 10 cm(-1) resolution via spectral focussing from a single broadband 5 fs Ti:Sa laser source. Through a ratiometric analysis, both D-CARS and phase-retrieved hyperspectral CARS determine the concentration of unsaturated lipids with comparable accuracy in the fingerprint region, while in the CH stretch region D-CARS provides only a qualitative contrast owing to its non-linear behavior. When analyzing hyperspectral CARS images using the blind factorization into susceptibilities and concentrations of chemical components recently demonstrated by us, we are able to determine vol:vol concentrations of different lipid components and spatially resolve inhomogeneities in lipid composition with superior accuracy compared to state-of-the-art ratiometric methods. PMID:24877002

  7. Quantum chemical approach to estimating the thermodynamics of metabolic reactions.

    PubMed

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-11-12

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism.

  8. Quantum Chemical Approach to Estimating the Thermodynamics of Metabolic Reactions

    PubMed Central

    Jinich, Adrian; Rappoport, Dmitrij; Dunn, Ian; Sanchez-Lengeling, Benjamin; Olivares-Amaya, Roberto; Noor, Elad; Even, Arren Bar; Aspuru-Guzik, Alán

    2014-01-01

    Thermodynamics plays an increasingly important role in modeling and engineering metabolism. We present the first nonempirical computational method for estimating standard Gibbs reaction energies of metabolic reactions based on quantum chemistry, which can help fill in the gaps in the existing thermodynamic data. When applied to a test set of reactions from core metabolism, the quantum chemical approach is comparable in accuracy to group contribution methods for isomerization and group transfer reactions and for reactions not including multiply charged anions. The errors in standard Gibbs reaction energy estimates are correlated with the charges of the participating molecules. The quantum chemical approach is amenable to systematic improvements and holds potential for providing thermodynamic data for all of metabolism. PMID:25387603

  9. Evaluation of Automatic Vehicle Location accuracy

    DOT National Transportation Integrated Search

    1999-01-01

    This study assesses the accuracy of the Automatic Vehicle Location (AVL) data provided for the buses of the Ann Arbor Transportation Authority with Global Positioning System (GPS) technology. In a sample of eighty-nine bus trips two kinds of accuracy...

  10. Communication Accuracy in Magazine Science Reporting.

    ERIC Educational Resources Information Center

    Borman, Susan Cray

    1978-01-01

    Evaluators with scientific expertise who analyzed the accuracy of popularized science news in mass circulation magazines found that the over-all accuracy of the magazine articles was good, and that the major problem was the omission of relevant information. (GW)

  11. Achievement Goals and Achievement Emotions: A Meta-Analysis

    ERIC Educational Resources Information Center

    Huang, Chiungjung

    2011-01-01

    This meta-analysis synthesized 93 independent samples (N = 30,003) in 77 studies that reported in 78 articles examining correlations between achievement goals and achievement emotions. Achievement goals were meaningfully associated with different achievement emotions. The correlations of mastery and mastery approach goals with positive achievement…

  12. Enabling Technologies for High-accuracy Multiangle Spectropolarimetric Imaging from Space

    NASA Technical Reports Server (NTRS)

    Diner, David J.; Macenka, Steven A.; Seshndri, Suresh; Bruce, Carl E; Jau, Bruno; Chipman, Russell A.; Cairns, Brian; Christoph, Keller; Foo, Leslie D.

    2004-01-01

    Satellite remote sensing plays a major role in measuring the optical and radiative properties, environmental impact, and spatial and temporal distribution of tropospheric aerosols. In this paper, we envision a new generation of spaceborne imager that integrates the unique strengths of multispectral, multiangle, and polarimetric approaches, thereby achieving better accuracies in aerosol optical depth and particle properties than can be achieved using any one method by itself. Design goals include spectral coverage from the near-UV to the shortwave infrared; global coverage within a few days; intensity and polarimetric imaging simultaneously at multiple view angles; kilometer to sub-kilometer spatial resolution; and measurement of the degree of linear polarization for a subset of the spectral complement with an uncertainty of 0.5% or less. The latter requirement is technically the most challenging. In particular, an approach for dealing with inter-detector gain variations is essential to avoid false polarization signals. We propose using rapid modulation of the input polarization state to overcome this problem, using a high-speed variable retarder in the camera design. Technologies for rapid retardance modulation include mechanically rotating retarders, liquid crystals, and photoelastic modulators (PEMs). We conclude that the latter are the most suitable.

  13. Investigating the spatial accuracy of CBCT-guided cranial radiosurgery: A phantom end-to-end test study.

    PubMed

    Calvo-Ortega, Juan-Francisco; Hermida-López, Marcelino; Moragues-Femenía, Sandra; Pozo-Massó, Miquel; Casals-Farran, Joan

    2017-03-01

    To evaluate the spatial accuracy of frameless cone-beam computed tomography (CBCT)-guided cranial radiosurgery (SRS) using an end-to-end (E2E) phantom test methodology. Five clinical SRS plans were mapped to an acrylic phantom containing a radiochromic film. The resulting phantom-based plans (E2E plans) were delivered four times. The phantom was set up on the treatment table with intentional misalignments, and CBCT imaging was used to align it prior to E2E plan delivery. Comparisons (global gamma analysis) of the planned and delivered dose to the film were performed using a commercial triple-channel film dosimetry software. The distance-to-agreement needed to achieve a 95% gamma passing rate (DTA95) for a fixed 3% dose difference provided an estimate of the spatial accuracy of CBCT-guided SRS. Systematic (Σ) and random (σ) error components, as well as 95% confidence levels, were derived for the DTA95 metric. The overall systematic spatial accuracy averaged over all tests was 1.4 mm (SD: 0.2 mm), with a corresponding 95% confidence level of 1.8 mm. The systematic (Σ) and random (σ) spatial components of the accuracy derived from the E2E tests were 0.2 mm and 0.8 mm, respectively. The E2E methodology used in this study allowed an estimation of the spatial accuracy of our CBCT-guided SRS procedure. Subsequently, a PTV margin of 2.0 mm is currently used in our department. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
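
    One common way to split repeated end-to-end results into systematic and random components is sketched below with invented DTA values; the formulae are illustrative and are not necessarily those used in the study.

        # Sketch only: systematic/random decomposition of repeated E2E results (mm).
        import numpy as np

        # dta[i][j] = DTA95 of plan i, delivery j (invented values)
        dta = np.array([[1.3, 1.5, 1.4, 1.2],
                        [1.6, 1.4, 1.5, 1.7],
                        [1.2, 1.3, 1.1, 1.4],
                        [1.5, 1.6, 1.4, 1.5],
                        [1.3, 1.2, 1.4, 1.3]])

        overall_mean = dta.mean()
        systematic = dta.mean(axis=1).std(ddof=1)                      # spread of per-plan means
        random_comp = np.sqrt((dta.std(axis=1, ddof=1) ** 2).mean())   # typical within-plan spread
        cl95 = overall_mean + 1.96 * dta.std(ddof=1)                   # crude 95% level

        print(round(overall_mean, 2), round(systematic, 2),
              round(random_comp, 2), round(cl95, 2))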

  14. Improving coding accuracy in an academic practice.

    PubMed

    Nguyen, Dana; O'Mara, Heather; Powell, Robert

    2017-01-01

    Practice management has become an increasingly important component of graduate medical education. This applies to every practice environment: private, academic, and military. One of the most critical aspects of practice management is documentation and coding for physician services, as they directly affect the financial success of any practice. Our quality improvement project aimed to implement a new and innovative method for teaching billing and coding in a longitudinal fashion in a family medicine residency. We hypothesized that implementation of a new teaching strategy would increase coding accuracy rates among residents and faculty. Design: single group, pretest-posttest. Setting: a military family medicine residency clinic. Study population: 7 faculty physicians and 18 resident physicians participated as learners in the project. Educational intervention: monthly structured coding learning sessions in the academic curriculum that involved learner-presented cases, small-group case review, and large-group discussion. Outcome measures: overall coding accuracy (compliance) percentage and coding accuracy per year group for the subjects who were able to participate longitudinally. Statistical tests used: average coding accuracy for the population; paired t test to assess improvement between the 2 intervention periods, both in aggregate and by year group. Overall coding accuracy rates remained stable over time regardless of the modality of the educational intervention. A paired t test was conducted to compare coding accuracy rates at baseline (mean (M) = 26.4%, SD = 10%) with accuracy rates after all educational interventions were complete (M = 26.8%, SD = 12%); t(24) = -0.127, P = .90. Didactic teaching and small-group discussion sessions did not improve overall coding accuracy in a residency practice. Future interventions could focus on educating providers at the individual level.
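
    The paired analysis can be illustrated with a few lines of code (synthetic accuracy rates, not the study's data): each provider's coding accuracy before the intervention is compared with the same provider's accuracy afterwards.

        # Sketch only: paired t test on pre/post coding accuracy rates.
        import numpy as np
        from scipy import stats

        before = np.array([0.22, 0.31, 0.18, 0.27, 0.25, 0.30, 0.21, 0.28])  # invented
        after  = np.array([0.24, 0.29, 0.20, 0.26, 0.27, 0.31, 0.22, 0.27])  # invented

        t_stat, p_value = stats.ttest_rel(after, before)
        print(round(t_stat, 3), round(p_value, 3))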

  15. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  16. The effects of aging on the speed-accuracy compromise: Boundary optimality in the diffusion model.

    PubMed

    Starns, Jeffrey J; Ratcliff, Roger

    2010-06-01

    We evaluated age-related differences in the optimality of decision boundary settings in a diffusion model analysis. In the model, the width of the decision boundary represents the amount of evidence that must accumulate in favor of a response alternative before a decision is made. Wide boundaries lead to slow but accurate responding, and narrow boundaries lead to fast but inaccurate responding. There is a single value of boundary separation that produces the most correct answers in a given period of time, and we refer to this value as the reward rate optimal boundary (RROB). We consistently found across a variety of decision tasks that older adults used boundaries that were much wider than the RROB value. Young adults used boundaries that were closer to the RROB value, although age differences in optimality were smaller with instructions emphasizing speed than with instructions emphasizing accuracy. Young adults adjusted their boundary settings to more closely approach the RROB value when they were provided with accuracy feedback and extensive practice. Older participants showed no evidence of making boundary adjustments in response to feedback or task practice, and they consistently used boundary separation values that produced accuracy levels that were near asymptote. Our results suggest that young adults attempt to balance speed and accuracy to achieve the most correct answers per unit time, whereas older adults attempt to minimize errors even if they must respond quite slowly to do so. (c) 2010 APA, all rights reserved
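
    A toy simulation of the boundary/reward-rate trade-off (not the fitted diffusion model from the study): wider boundaries raise accuracy but slow responding, so correct answers per unit time peak at an intermediate boundary separation. All parameter values below are arbitrary.

        # Sketch only: reward rate vs. boundary separation in a simple diffusion model.
        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_trial(drift=1.0, boundary=1.0, dt=0.001, noise=1.0, non_decision=0.3):
            x, t = 0.0, 0.0
            half = boundary / 2.0                     # start midway between boundaries
            while abs(x) < half:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return (x > 0), t + non_decision          # (correct?, response time in s)

        def reward_rate(boundary, n_trials=500, inter_trial=1.0):
            results = [simulate_trial(boundary=boundary) for _ in range(n_trials)]
            correct = sum(c for c, _ in results)
            total_time = sum(rt for _, rt in results) + n_trials * inter_trial
            return correct / total_time               # correct answers per second

        for a in (0.5, 1.0, 2.0, 3.0):
            print(a, round(reward_rate(a), 3))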

  17. Finding and estimating chemical property data for environmental assessment.

    PubMed

    Boethling, Robert S; Howard, Philip H; Meylan, William M

    2004-10-01

    The ability to predict the behavior of a chemical substance in a biological or environmental system largely depends on knowledge of the physicochemical properties and reactivity of that substance. We focus here on properties, with the objective of providing practical guidance for finding measured values and using estimation methods when necessary. Because currently available computer software often makes it more convenient to estimate than to retrieve measured values, we try to discourage irrational exuberance for these tools by including comprehensive lists of Internet and hard-copy data resources. Guidance for assessors is presented in the form of a process to obtain data that includes establishment of chemical identity, identification of data sources, assessment of accuracy and reliability, substructure searching for analogs when experimental data are unavailable, and estimation from chemical structure. Regarding property estimation, we cover estimation from close structural analogs in addition to broadly applicable methods requiring only the chemical structure. For the latter, we list and briefly discuss the most widely used methods. Concluding thoughts are offered concerning appropriate directions for future work on estimation methods, again with an emphasis on practical applications.

  18. The effects of chronic achievement motivation and achievement primes on the activation of achievement and fun goals.

    PubMed

    Hart, William; Albarracín, Dolores

    2009-12-01

    This research examined the hypothesis that situational achievement cues can elicit achievement or fun goals depending on chronic differences in achievement motivation. In 4 studies, chronic differences in achievement motivation were measured, and achievement-denoting words were used to influence behavior. The effects of these variables were assessed on self-report inventories, task performance, task resumption following an interruption, and the pursuit of means relevant to achieving or having fun. Findings indicated that achievement priming (vs. control priming) activated a goal to achieve and inhibited a goal to have fun in individuals with chronically high-achievement motivation but activated a goal to have fun and inhibited a goal to achieve in individuals with chronically low-achievement motivation.

  19. The Effects of Chronic Achievement Motivation and Achievement Primes on the Activation of Achievement and Fun Goals

    PubMed Central

    Hart, William; Albarracín, Dolores

    2013-01-01

    This research examined the hypothesis that situational achievement cues can elicit achievement or fun goals depending on chronic differences in achievement motivation. In 4 studies, chronic differences in achievement motivation were measured, and achievement-denoting words were used to influence behavior. The effects of these variables were assessed on self-report inventories, task performance, task resumption following an interruption, and the pursuit of means relevant to achieving or having fun. Findings indicated that achievement priming (vs. control priming) activated a goal to achieve and inhibited a goal to have fun in individuals with chronically high-achievement motivation but activated a goal to have fun and inhibited a goal to achieve in individuals with chronically low-achievement motivation. PMID:19968423

  20. Students’ Achievement Goals, Learning-Related Emotions and Academic Achievement

    PubMed Central

    Lüftenegger, Marko; Klug, Julia; Harrer, Katharina; Langer, Marie; Spiel, Christiane; Schober, Barbara

    2016-01-01

    In the present research, the recently proposed 3 × 2 model of achievement goals is tested and associations with achievement emotions and their joint influence on academic achievement are investigated. The study was conducted with 388 students using the 3 × 2 Achievement Goal Questionnaire including the six proposed goal constructs (task-approach, task-avoidance, self-approach, self-avoidance, other-approach, other-avoidance) and the enjoyment and boredom scales from the Achievement Emotion Questionnaire. Exam grades were used as an indicator of academic achievement. Findings from CFAs provided strong support for the proposed structure of the 3 × 2 achievement goal model. Self-based goals, other-based goals and task-approach goals predicted enjoyment. Task-approach goals negatively predicted boredom. Task-approach and other-approach predicted achievement. The indirect effects of achievement goals through emotion variables on achievement were assessed using bias-corrected bootstrapping. No mediation effects were found. Implications for educational practice are discussed. PMID:27199836