Science.gov

Sample records for achieve improved accuracy

  1. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  2. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  3. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  4. Improving Educational Achievement.

    ERIC Educational Resources Information Center

    New York University Education Quarterly, 1979

    1979-01-01

    This is a slightly abridged version of the report of the National Academy of Education panel, convened at the request of HEW Secretary Joseph Califano and Assistant Secretary for Education Mary F. Berry, to study recent declines in student achievement and methods of educational improvement. (SJL)

  5. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
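
    As an illustration of the quoted figures (not part of the record), a minimal Python sketch of the small-angle conversion from angular line-of-sight error to transverse distance; the range value is an assumed, approximate observer-to-satellite distance near geosynchronous altitude:

        import math

        ARCSEC_PER_RAD = 206264.8
        GEO_RANGE_M = 4.2e7  # assumed observer-to-satellite range near geosynchronous altitude

        def cross_track_error_m(angle_arcsec, range_m=GEO_RANGE_M):
            # Small-angle approximation: transverse error = angle (rad) * range
            return (angle_arcsec / ARCSEC_PER_RAD) * range_m

        print(round(cross_track_error_m(0.5)))   # ~100 m, the reported TDRS-4 rms error
        print(round(cross_track_error_m(0.05)))  # ~10 m for 0.05 arc second accuracy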

  6. [Accuracy of apposition achieved by mandibular osteosyntheses. Stereophotogrammetric study].

    PubMed

    Randzio, J; Ficker, E; Wintges, T; Laser, S

    1989-01-01

    The accuracy of apposition achieved by wire and plate osteosyntheses is measured with the aid of close range stereophotogrammetry in the mandibles of dead bodies. Both osteosynthesis methods are characterized by an increase in the intercondylar distance which, on the average, is about 3.3 mm greater after plate osteosynthesis and about 1.9 mm after wiring. Moreover, osteosyntheses of the base of the mandible may involve a tendency of the condyle to become caudally dislocated.

  7. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These include, among others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verifications are presented and demonstrated by practical examples.
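
    For scale, a short Python illustration of how the quoted relative figures translate into object space; the 2 m object diameter is a hypothetical example, not from the record:

        diameter_m = 2.0                  # hypothetical largest object diameter
        print(diameter_m / 100_000)       # 2e-05 m: relative precision of 1:100000
        print(diameter_m / 50_000)        # 4e-05 m: length-traceable accuracy of 1:50000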

  8. Why do delayed summaries improve metacomprehension accuracy?

    PubMed

    Anderson, Mary C M; Thiede, Keith W

    2008-05-01

    We showed that metacomprehension accuracy improved when participants (N=87 college students) wrote summaries of texts prior to judging their comprehension; however, accuracy only improved when summaries were written after a delay, not when written immediately after reading. We evaluated two hypotheses proposed to account for this delayed-summarization effect (the accessibility hypothesis and the situation model hypothesis). The data suggest that participants based metacomprehension judgments more on the gist of texts when they generated summaries after a delay, whereas they based judgments more on details when they generated summaries immediately after reading. Focusing on information relevant to the situation model of a text (the gist of a text) produced higher levels of metacomprehension accuracy, which is consistent with the situation model hypothesis.

  9. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
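
    A minimal Python sketch of the error metric described above (illustrative only; the function and variable names are assumptions, not from the study):

        import numpy as np

        def rmse(predicted, measured):
            # Root mean square error between predicted and measured respiratory traces
            predicted = np.asarray(predicted, dtype=float)
            measured = np.asarray(measured, dtype=float)
            return np.sqrt(np.mean((predicted - measured) ** 2))

        def relative_reduction(rmse_unguided, rmse_av_guided):
            # Fractional RMSE reduction of AV-biofeedback-guided over unguided breathing
            return (rmse_unguided - rmse_av_guided) / rmse_unguided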

  10. Improving classification accuracy and causal knowledge for better credit decisions.

    PubMed

    Wu, Wei-Wen

    2011-08-01

    Numerous studies have contributed to efforts to boost the accuracy of the credit scoring model. Especially interesting are recent studies which have successfully developed the hybrid approach, which advances classification accuracy by combining different machine learning techniques. However, to achieve better credit decisions, it is not enough merely to increase the accuracy of the credit scoring model. It is necessary to conduct meaningful supplementary analyses in order to obtain knowledge of causal relations, particularly in terms of significant conceptual patterns or structures involving attributes used in the credit scoring model. This paper proposes a solution of integrating data preprocessing strategies and the Bayesian network classifier with the tree augmented Naïve Bayes search algorithm, in order to improve classification accuracy and to obtain improved knowledge of causal patterns, thus enhancing the validity of credit decisions.
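
    As a rough illustration of the classification-accuracy part of such a pipeline (not the paper's implementation), the sketch below uses scikit-learn's GaussianNB as a stand-in for the tree augmented naive Bayes classifier, which scikit-learn does not provide; X and y denote a hypothetical credit dataset:

        from sklearn.model_selection import train_test_split
        from sklearn.naive_bayes import GaussianNB
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.metrics import accuracy_score

        def credit_scoring_accuracy(X, y):
            # Preprocess, fit a (stand-in) Bayesian classifier, and report holdout accuracy
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
            model = make_pipeline(StandardScaler(), GaussianNB()).fit(X_tr, y_tr)
            return accuracy_score(y_te, model.predict(X_te))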

  11. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  12. Can Judges Improve Academic Achievement?

    ERIC Educational Resources Information Center

    Greene, Jay P.; Trivitt, Julie R.

    2008-01-01

    Over the last 3 decades student achievement has remained essentially unchanged in the United States, but not for a lack of spending. Over the same period a myriad of education reforms have been suggested and per-pupil spending has more than doubled. Since the 1990s the education reform attempts have frequently included judicial decisions to revise…

  13. A Comparative Investigation of Several Methods of Aiding College Freshmen to Achieve Grammatical Accuracy in Written Composition.

    ERIC Educational Resources Information Center

    Essary, William Howard

    Two problems were investigated in this study: (1) Which (if any) method of teaching freshmen composition is most effective in helping college students achieve grammatical accuracy? (2) Is improvement in grammatical accuracy paralleled or contrasted with improvement in content? Relatively weak students (low C high-school average and a mean SAT…

  14. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature size shrinkage in semiconductor devices progresses, process fluctuations, especially focus, strongly affect device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors are due to wafers, exposure tools, reticles, QCs, and so on. Few studies have been performed to minimize the measurement errors of the auto focus (AF) sensors of exposure tools, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor (1) has already been proposed; its basic principle is that the intensity of the diffraction light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can be easily measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interruptions caused by other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, in which the shift of the PSG resist mark is measured with a critical dimension scanning electron microscope (CD-SEM) to determine the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  15. Improvements on the accuracy of beam bugs

    SciTech Connect

    Chen, Y J; Fessenden, T

    1998-09-02

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.
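
    As a generic illustration of the pickoff idea (not the calibration actually used on these monitors), a Python sketch estimating a transverse beam position from wall-current signals sampled at several azimuthal angles; the result is in arbitrary units up to a geometric calibration factor:

        import numpy as np

        def beam_position(signals, angles_rad):
            # Normalized first moment of the pickoff signals (difference-over-sum generalized
            # to N pickoffs); more pickoffs average down per-resistor variation.
            s = np.asarray(signals, dtype=float)
            a = np.asarray(angles_rad, dtype=float)
            return np.array([np.sum(s * np.cos(a)), np.sum(s * np.sin(a))]) / s.sum()

        angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)  # eight pickoffs instead of four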

  16. Improvements on the accuracy of beam bugs

    SciTech Connect

    Chen, Y.J.; Fessenden, T.

    1998-08-17

    At LLNL, resistive wall monitors are used to measure the current and position of intense electron beams in electron induction linacs and beam transport lines. These monitors, known locally as "beam bugs", have been used throughout linear induction accelerators as essential diagnostics of beam current and location. Recently, the development of a fast beam kicker has required improvement in the accuracy of measuring the position of beams. The beam bugs used on ETA-II show a droop in signal due to a fast redistribution time constant of the signals. By picking off signals at more than the usual four positions around the monitor, beam position measurement error can be greatly reduced. A second significant source of error is the mechanical variation of the resistor around the bug. This paper presents the analysis and experimental test of the beam bugs used for beam current and position measurements in and after the fast kicker. It concludes with an outline of present and future changes that can be made to improve the accuracy of these beam bugs.

  17. Improving the accuracy of death certification

    PubMed Central

    Myers, K A; Farquhar, D R

    1998-01-01

    BACKGROUND: Population-based mortality statistics are derived from the information recorded on death certificates. This information is used for many important purposes, such as the development of public health programs and the allocation of health care resources. Although most physicians are confronted with the task of completing death certificates, many do not receive adequate training in this skill. Resulting inaccuracies in information undermine the quality of the data derived from death certificates. METHODS: An educational intervention was designed and implemented to improve internal medicine residents' accuracy in death certificate completion. A total of 229 death certificates (146 completed before and 83 completed after the intervention) were audited for major and minor errors, and the rates of errors before and after the intervention were compared. RESULTS: Major errors were identified on 32.9% of the death certificates completed before the intervention, a rate comparable to previously reported rates for internal medicine services in teaching hospitals. Following the intervention the major error rate decreased to 15.7% (p = 0.01). The reduction in the major error rate was accounted for by significant reductions in the rate of listing of mechanism of death without a legitimate underlying cause of death (15.8% v. 4.8%) (p = 0.01) and the rate of improper sequencing of death certificate information (15.8% v. 6.0%) (p = 0.03). INTERPRETATION: Errors are common in the completion of death certificates in the inpatient teaching hospital setting. The accuracy of death certification can be improved with the implementation of a simple educational intervention. PMID:9614825
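
    A small Python check of the before/after comparison, using counts reconstructed from the reported percentages (so the figures are approximate, not taken from the study's raw data):

        from scipy.stats import chi2_contingency

        before = [48, 146 - 48]   # certificates with / without a major error (32.9% of 146)
        after = [13, 83 - 13]     # 15.7% of 83
        chi2, p, dof, expected = chi2_contingency([before, after])
        print(p)                  # roughly consistent with the reported p = 0.01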

  18. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. For understanding the carrier noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
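
    A minimal Python sketch of the trilateration step described above (anchor coordinates and ranges are hypothetical, and the extended Kalman filtering stage is omitted):

        import numpy as np
        from scipy.optimize import least_squares

        def trilaterate(anchors, ranges):
            # Least-squares position estimate from transmitter coordinates and measured ranges
            anchors = np.asarray(anchors, dtype=float)
            ranges = np.asarray(ranges, dtype=float)
            residuals = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
            return least_squares(residuals, anchors.mean(axis=0)).x

        # Three hypothetical IMES transmitters (m) and ranges inferred from signal strength
        print(trilaterate([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]], [7.07, 7.07, 7.07]))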

  19. Accuracy Improvement for Predicting Parkinson's Disease Progression.

    PubMed

    Nilashi, Mehrbakhsh; Ibrahim, Othman; Ahani, Ali

    2016-09-30

    Parkinson's disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools built on datasets of medical information are highly desirable for helping people discover their risk of the disease at an early stage, when symptoms can still be alleviated. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are respectively employed to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson's datasets show that the proposed method remarkably improves the accuracy of prediction of PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice with early detection of Parkinson's disease.
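
    A condensed Python sketch of the noise-removal, clustering, and prediction stages (SVR branch only; ANFIS has no scikit-learn equivalent, and all names and parameter values below are assumptions):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.mixture import GaussianMixture
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        def fit_cluster_models(X, y, n_components=5, n_clusters=3):
            # PCA to reduce multi-collinearity, EM (Gaussian mixture) to cluster,
            # then one SVR predictor of disease progression per cluster
            X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
            pca = PCA(n_components=n_components).fit(X)
            Xr = pca.transform(X)
            gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xr)
            labels = gmm.predict(Xr)
            models = {c: make_pipeline(StandardScaler(), SVR()).fit(Xr[labels == c], y[labels == c])
                      for c in range(n_clusters)}
            return pca, gmm, models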

  20. Do you really understand? Achieving accuracy in interracial relationships.

    PubMed

    Holoien, Deborah Son; Bergsieker, Hilary B; Shelton, J Nicole; Alegre, Jan Marie

    2015-01-01

    Accurately perceiving whether interaction partners feel understood is important for developing intimate relationships and maintaining smooth interpersonal exchanges. During interracial interactions, when are Whites and racial minorities likely to accurately perceive how understood cross-race partners feel? We propose that participant race, desire to affiliate, and racial salience moderate accuracy in interracial interactions. Examination of cross-race roommates (Study 1) and interracial interactions with strangers (Study 2) revealed that when race is salient, Whites higher in desire to affiliate with racial minorities failed to accurately perceive the extent to which racial minority partners felt understood. Thus, although the desire to affiliate may appear beneficial, it may interfere with Whites' ability to accurately perceive how understood racial minorities feel. By contrast, racial minorities higher in desire to affiliate with Whites accurately perceived how understood White partners felt. Furthermore, participants' overestimation of how well they understood partners correlated negatively with partners' reports of relationship quality. Collectively, these findings indicate that racial salience and desire to affiliate moderate accurate perceptions of cross-race partners-even in the context of sustained interracial relationships-yielding divergent outcomes for Whites and racial minorities. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  1. Simultaneously improving the sensitivity and absolute accuracy of CPT magnetometer.

    PubMed

    Liang, Shang-Qing; Yang, Guo-Qing; Xu, Yun-Fei; Lin, Qiang; Liu, Zhi-Heng; Chen, Zheng-Xiang

    2014-03-24

    A new method to simultaneously improve the sensitivity and absolute accuracy of a coherent population trapping (CPT) magnetometer, based on the differential detection method, is presented. Two modulated optical beams with orthogonal circular polarizations are applied; in one of them, two magnetic resonances are excited simultaneously by modulating a 3.4 GHz microwave with the Larmor frequency. When a microwave frequency shift is introduced, the difference in the power transmitted through the cell in each beam shows a low-noise resonance. A sensitivity of 2 pT/Hz @ 10 Hz is achieved. Meanwhile, an absolute accuracy of ±0.5 nT is realized for magnetic fields ranging from 20000 nT to 100000 nT.

  2. Improving the accuracy of walking piezo motors

    NASA Astrophysics Data System (ADS)

    den Heijer, M.; Fokkema, V.; Saedi, A.; Schakel, P.; Rost, M. J.

    2014-05-01

    Many application areas require ultraprecise, stiff, and compact actuator systems with a high positioning resolution in combination with a large range as well as a high holding and pushing force. One promising solution to meet these conflicting requirements is a walking piezo motor that works with two pairs of piezo elements such that the movement is taken over by one pair, once the other pair reaches its maximum travel distance. A resolution in the pm-range can be achieved, if operating the motor within the travel range of one piezo pair. However, applying the typical walking drive signals, we measure jumps in the displacement up to 2.4 μm, when the movement is given over from one piezo pair to the other. We analyze the reason for these large jumps and propose improved drive signals. The implementation of our new drive signals reduces the jumps to less than 42 nm and makes the motor ideally suitable to operate as a coarse approach motor in an ultra-high vacuum scanning tunneling microscope. The rigidity of the motor is reflected in its high pushing force of 6.4 N.

  3. Resist development modeling for OPC accuracy improvement

    NASA Astrophysics Data System (ADS)

    Fan, Yongfa; Zavyalova, Lena; Zhang, Yunqiang; Zhang, Charlie; Lucas, Kevin; Falch, Brad; Croffie, Ebo; Li, Jianliang; Melvin, Lawrence; Ward, Brian

    2009-03-01

    … in the same way that current model calibration is done. The method is validated with a rigorous lithography process simulation tool which is based on physical models to simulate and predict effects during the resist PEB and development process. Furthermore, an experimental lithographic process was modeled using this new methodology, showing significant improvement in modeling accuracy in comparison to a traditional model. Layout correction tests have shown that the new model form is equivalent to traditional model forms in terms of correction convergence and speed.

  4. Achieve inventory reduction and improve customer service?

    PubMed

    Moody, M C

    2000-05-01

    Is it really possible to achieve significant reductions in your manufacturing inventories while improving customer service? If you really want to achieve significant inventory reductions, focus on the root causes, then develop countermeasures and a work plan to execute them. Include measurements for recording your progress, and deploy your countermeasures until they are no longer required, or until new ones are needed.

  5. Generalized and Heuristic-Free Feature Construction for Improved Accuracy

    PubMed Central

    Fan, Wei; Zhong, Erheng; Peng, Jing; Verscheure, Olivier; Zhang, Kun; Ren, Jiangtao; Yan, Rong; Yang, Qiang

    2010-01-01

    State-of-the-art learning algorithms accept data in feature vector format as input. Examples belonging to different classes may not always be easy to separate in the original feature space. One may ask: can transformation of existing features into new space reveal significant discriminative information not obvious in the original space? Since there can be an infinite number of ways to extend features, it is impractical to first enumerate and then perform feature selection. Second, evaluation of discriminative power on the complete dataset is not always optimal. This is because features highly discriminative on a subset of examples may not necessarily be significant when evaluated on the entire dataset. Third, feature construction ought to be automated and general, such that it requires no domain knowledge and its accuracy improvement holds over a large number of classification algorithms. In this paper, we propose a framework to address these problems through the following steps: (1) divide-conquer to avoid exhaustive enumeration; (2) local feature construction and evaluation within subspaces of examples where local error is still high and constructed features thus far still do not predict well; (3) weighting rules based search that is domain knowledge free and has provable performance guarantee. Empirical studies indicate that significant improvement (as much as 9% in accuracy and 28% in AUC) is achieved using the newly constructed features over a variety of inductive learners evaluated against a number of balanced, skewed and high-dimensional datasets. Software and datasets are available from the authors. PMID:21544257

  6. Exemplar pediatric collaborative improvement networks: achieving results.

    PubMed

    Billett, Amy L; Colletti, Richard B; Mandel, Keith E; Miller, Marlene; Muething, Stephen E; Sharek, Paul J; Lannon, Carole M

    2013-06-01

    A number of pediatric collaborative improvement networks have demonstrated improved care and outcomes for children. Regionally, Cincinnati Children's Hospital Medical Center Physician Hospital Organization has sustained key asthma processes, substantially increased the percentage of their asthma population receiving "perfect care," and implemented an innovative pay-for-performance program with a large commercial payor based on asthma performance measures. The California Perinatal Quality Care Collaborative uses its outcomes database to improve care for infants in California NICUs. It has achieved reductions in central line-associated blood stream infections (CLABSI), increased breast-milk feeding rates at hospital discharge, and is now working to improve delivery room management. Solutions for Patient Safety (SPS) has achieved significant improvements in adverse drug events and surgical site infections across all 8 Ohio children's hospitals, with 7700 fewer children harmed and >$11.8 million in avoided costs. SPS is now expanding nationally, aiming to eliminate all events of serious harm at children's hospitals. National collaborative networks include ImproveCareNow, which aims to improve care and outcomes for children with inflammatory bowel disease. Reliable adherence to Model Care Guidelines has produced improved remission rates without using new medications and a significant increase in the proportion of Crohn disease patients not taking prednisone. Data-driven collaboratives of the Children's Hospital Association Quality Transformation Network initially focused on CLABSI in PICUs. By September 2011, they had prevented an estimated 2964 CLABSI, saving 355 lives and $103,722,423. Subsequent improvement efforts include CLABSI reductions in additional settings and populations.

  7. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling positioning error. We use the Yule Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
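
    A brief Python sketch of fitting a p-order model to a 1-D position trace with the Yule-Walker equations and predicting the next position (the synthetic trace is illustrative only, not the paper's datasets):

        import numpy as np
        from statsmodels.regression.linear_model import yule_walker

        def predict_next(positions, p=3):
            # Fit AR(p) coefficients via the Yule-Walker equations and predict the next value
            x = np.asarray(positions, dtype=float)
            mean = x.mean()
            rho, sigma = yule_walker(x - mean, order=p)
            recent = x[-1:-p - 1:-1] - mean          # most recent p values, newest first
            return mean + float(np.dot(rho, recent))

        track = np.cumsum(np.random.default_rng(1).normal(1.0, 0.1, 200))  # synthetic 1-D trace
        print(predict_next(track, p=3))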

  8. SEQuel: improving the accuracy of genome assemblies

    PubMed Central

    Ronen, Roy; Boucher, Christina; Chitsaz, Hamidreza; Pevzner, Pavel

    2012-01-01

    Motivation: Assemblies of next-generation sequencing (NGS) data, although accurate, still contain a substantial number of errors that need to be corrected after the assembly process. We develop SEQuel, a tool that corrects errors (i.e. insertions, deletions and substitution errors) in the assembled contigs. Fundamental to the algorithm behind SEQuel is the positional de Bruijn graph, a graph structure that models k-mers within reads while incorporating the approximate positions of reads into the model. Results: SEQuel reduced the number of small insertions and deletions in the assemblies of standard multi-cell Escherichia coli data by almost half, and corrected between 30% and 94% of the substitution errors. Further, we show SEQuel is imperative to improving single-cell assembly, which is inherently more challenging due to higher error rates and non-uniform coverage; over half of the small indels, and substitution errors in the single-cell assemblies were corrected. We apply SEQuel to the recently assembled Deltaproteobacterium SAR324 genome, which is the first bacterial genome with a comprehensive single-cell genome assembly, and make over 800 changes (insertions, deletions and substitutions) to refine this assembly. Availability: SEQuel can be used as a post-processing step in combination with any NGS assembler and is freely available at http://bix.ucsd.edu/SEQuel/. Contact: ppevzner@cs.ucsd.edu PMID:22689760
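
    A toy Python sketch of the positional de Bruijn graph idea, with k-mers annotated by approximate read positions; this illustrates the data structure only and is not SEQuel's implementation:

        from collections import defaultdict

        def positional_de_bruijn(reads, starts, k):
            # Nodes are (k-mer, approximate position) pairs; edges link successive k-mers
            graph = defaultdict(set)
            for read, start in zip(reads, starts):
                for i in range(len(read) - k):
                    u = (read[i:i + k], start + i)
                    v = (read[i + 1:i + k + 1], start + i + 1)
                    graph[u].add(v)
            return graph

        g = positional_de_bruijn(["ACGTACGT", "CGTACGTT"], starts=[0, 1], k=4)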

  9. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  10. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a…

  11. Improving metacomprehension accuracy in an undergraduate course context.

    PubMed

    Wiley, Jennifer; Griffin, Thomas D; Jaeger, Allison J; Jarosz, Andrew F; Cushen, Patrick J; Thiede, Keith W

    2016-12-01

    Students tend to have poor metacomprehension when learning from text, meaning they are not able to distinguish between what they have understood well and what they have not. Although there are a good number of studies that have explored comprehension monitoring accuracy in laboratory experiments, fewer studies have explored this in authentic course contexts. This study investigated the effect of an instructional condition that encouraged comprehension-test-expectancy and self-explanation during study on metacomprehension accuracy in the context of an undergraduate course in research methods. Results indicated that when students received this instructional condition, relative metacomprehension accuracy was better than in a comparison condition. In addition, differences were also seen in absolute metacomprehension accuracy measures, strategic study behaviors, and learning outcomes. The results of the current study demonstrate that a condition that has improved relative metacomprehension accuracy in laboratory contexts may have value in real classroom contexts as well. (PsycINFO Database Record

  12. Improving Student Achievement in Math and Science

    NASA Technical Reports Server (NTRS)

    Sullivan, Nancy G.; Hamsa, Irene Schulz; Heath, Panagiota; Perry, Robert; White, Stacy J.

    1998-01-01

    As the new millennium approaches, a long anticipated reckoning for the education system of the United States is forthcoming. Years of school reform initiatives have not yielded the anticipated results. A particularly perplexing problem involves the lack of significant improvement of student achievement in math and science. Three "Partnership" projects represent collaborative efforts between Xavier University (XU) of Louisiana, Southern University of New Orleans (SUNO), Mississippi Valley State University (MVSU), and the National Aeronautics and Space Administration (NASA), Stennis Space Center (SSC), to enhance student achievement in math and science. These "Partnerships" are focused on students and teachers in federally designated rural and urban empowerment zones and enterprise communities. The major goals of the "Partnerships" include: (1) The identification and dissemination of key indices of success that account for high performance in math and science; (2) The education of pre-service and in-service secondary teachers in knowledge, skills, and competencies that enhance the instruction of high school math and science; (3) The development of faculty to enhance the quality of math and science courses in institutions of higher education; and (4) The incorporation of technology-based instruction in institutions of higher education. These goals will be achieved by the accomplishment of the following objectives: (1) Delineate significant "best practices" that are responsible for enhancing student outcomes in math and science; (2) Recruit and retain pre-service teachers with undergraduate degrees in Biology, Math, Chemistry, or Physics in a graduate program, culminating with a Master of Arts in Curriculum and Instruction; (3) Provide faculty workshops and opportunities for travel to professional meetings for dissemination of NASA resources information; (4) Implement methodologies and assessment procedures utilizing performance-based applications of higher order

  13. Bounds on achievable accuracy in analog optical linear-algebra processors

    NASA Astrophysics Data System (ADS)

    Batsell, Stephen G.; Walkup, John F.; Krile, Thomas F.

    1990-07-01

    Upper and lower bounds on the number of bits of accuracy achievable are determined by applying a second-order statistical model to the linear algebra processor. The use of bounds was found necessary due to the strong signal-dependence of the noise at the output of the optical linear algebra processor (OLAP). One of the limiting factors in applying OLAPs to real world problems has been the poor achievable accuracy of these processors. Little previous research has been done on determining noise sources from a systems perspective, which would include noise generated in the multiplication and addition operations, spatial variations across arrays, and crosstalk. We have previously examined these noise sources and determined a general model for the output noise mean and variance. The model demonstrates a strong signal-dependency in the noise at the output of the processor, which has been confirmed by our experiments. We define accuracy similar to its definition for an analog signal input to an analog-to-digital (A/D) converter. The number of bits of accuracy achievable is related to the log (base 2) of the number of separable levels at the A/D converter output. The number of separable levels is found by dividing the dynamic range by m times the standard deviation of the signal σ. Here m determines the error rate in the A/D conversion. The dynamic range can be expressed as the
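
    The accuracy definition quoted above reduces to a one-line calculation; a Python restatement (m and the dynamic range are whatever the system analysis provides):

        import math

        def achievable_bits(dynamic_range, sigma, m):
            # Bits of accuracy = log2(number of separable levels),
            # with level spacing m * sigma at the converter output
            return math.log2(dynamic_range / (m * sigma))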

  14. Using judgement to improve accuracy in decision-making.

    PubMed

    Dowding, Dawn; Thompson, Carl

    Nursing judgements are complex, often involving the need to process a large number of information cues. Key issues include how accurate they are and how we can improve levels of accuracy. Traditional approaches to the study of nursing judgement, characterised by qualitative and descriptive research, have provided valuable insights into the nature of expert nursing practice and the complexity of practice. However, they have largely failed to provide the data needed to address judgement accuracy. Social judgement analysis approaches are one way of overcoming these limitations. This paper argues that as nurses take on more roles requiring accurate judgement, it is time to increase our knowledge of judgement and ways to improve it.

  15. Techniques for improving overlay accuracy by using device correlated metrology targets as reference

    NASA Astrophysics Data System (ADS)

    Tzai, Wei Jhe; Hsu, Simon C. C.; Chen, Howard; Chen, Charlie; Pai, Yuan Chi; Yu, Chun-Chi; Lin, Chia Ching; Itzkovich, Tal; Yap, Lipkong; Amit, Eran; Tien, David; Huang, Eros; Kuo, Kelly T. L.; Amir, Nuriel

    2014-10-01

    The performance of overlay metrology, in terms of total measurement uncertainty, design rule compatibility, device correlation, and measurement accuracy, has been challenged at the 2× nm node and below. The process impact on overlay metrology is becoming critical, and techniques to improve measurement accuracy become increasingly important. We present a methodology for improving the overlay accuracy. A proprietary quality metric, Qmerit, is used to identify overlay metrology measurement settings with the least process impacts and reliable accuracies. Using the quality metric, a calibration method, Archer self-calibration, is then used to remove the inaccuracies. Accuracy validation can be achieved by correlation to reference overlay data from another independent metrology source such as critical dimension-scanning electron microscopy data collected on a device correlated metrology hybrid target or by electrical testing. Additionally, reference metrology can also be used to verify which measurement conditions are the most accurate. We provide an example of such a case.

  16. Innovative techniques for improving overlay accuracy by using DCM (device correlated metrology) targets as reference

    NASA Astrophysics Data System (ADS)

    Tzai, Wei-Jhe; Hsu, Simon C. C.; Chen, Howard; Chen, Charlie; Pai, Yuan Chi; Yu, Chun-Chi; Lin, Chia Ching; Itzkovich, Tal; Yap, Lipkong; Amit, Eran; Tien, David; Huang, Eros; Kuo, Kelly T. L.; Amir, Nuriel

    2014-04-01

    Overlay metrology performance, in terms of Total Measurement Uncertainty (TMU), design rule compatibility, device correlation and measurement accuracy, is being challenged at the 2x nm node and below. The process impact on overlay metrology is becoming critical, and techniques to improve measurement accuracy become increasingly important. In this paper, we present an innovative methodology for improving overlay accuracy. A proprietary quality metric, Qmerit, is used to identify overlay metrology measurement settings with the least process impacts and reliable accuracies. Using the quality metric, an innovative calibration method, ASC (Archer Self Calibration), is then used to remove the inaccuracies. Accuracy validation can be achieved by correlation to reference overlay data from another independent metrology source, such as CDSEM data collected on a DCM (Device Correlated Metrology) hybrid target, or by electrical testing. Additionally, reference metrology can also be used to verify which measurement conditions are the most accurate. In this paper we provide an example of such a use case.

  17. Stochastic FDTD accuracy improvement through correlation coefficient estimation

    NASA Astrophysics Data System (ADS)

    Masumnia Bisheh, Khadijeh; Zakeri Gatabi, Bijan; Andargoli, Seyed Mehdi Hosseini

    2015-04-01

    This paper introduces a new scheme to improve the accuracy of the stochastic finite difference time domain (S-FDTD) method. S-FDTD, reported recently by Smith and Furse, calculates the variations in the electromagnetic fields caused by variability or uncertainty in the electrical properties of the materials in the model. The accuracy of the S-FDTD method is controlled by the approximations for correlation coefficients between the electrical properties of the materials in the model and the fields propagating in them. In this paper, new approximations for these correlation coefficients are obtained using Monte Carlo method with a small number of runs, terming them as Monte Carlo correlation coefficients (MC-CC). Numerical results for two bioelectromagnetic simulation examples demonstrate that MC-CC can improve the accuracy of the S-FDTD method and yield more accurate results than previous approximations.
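
    A small Python sketch of estimating such a correlation coefficient from a handful of Monte Carlo runs; the permittivity samples and the stand-in for the solver output are hypothetical, not data from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        eps = rng.normal(50.0, 5.0, 20)                  # 20 sampled permittivity values
        field = 1.0 / eps + rng.normal(0.0, 1e-3, 20)    # stand-in for the FDTD field output

        mc_cc = np.corrcoef(eps, field)[0, 1]            # Monte Carlo correlation coefficient
        print(mc_cc)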

  18. Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Fukaya, Tatsushi

    2013-01-01

    The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…

  19. Operators, service companies improve horizontal drilling accuracy offshore

    SciTech Connect

    Lyle, D.

    1996-04-01

    Continuing efforts to get more and better measurement and logging equipment closer to the bit improve accuracy in offshore drilling. Using current technology, both in measurement while drilling and logging while drilling, a target can consistently be hit within five vertical feet.

  20. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
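
    For reference, agreement statistics of the kind cited above can be computed with scikit-learn; the example ratings below are hypothetical exposure categories from two assessors, not the study's data:

        from sklearn.metrics import cohen_kappa_score

        rater_a = [0, 1, 2, 2, 3, 4, 1, 2]
        rater_b = [0, 1, 2, 3, 3, 4, 2, 2]
        print(cohen_kappa_score(rater_a, rater_b))                    # unweighted kappa
        print(cohen_kappa_score(rater_a, rater_b, weights="linear"))  # weighted kappa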

  1. The rereading effect: metacomprehension accuracy improves across reading trials.

    PubMed

    Rawson, K A; Dunlosky, J; Thiede, K W

    2000-09-01

    Guided by a hypothesis that integrates principles of monitoring from a cue-based framework of metacognitive judgments with assumptions about levels of text representation derived from theories of comprehension, we discovered that rereading improves metacomprehension accuracy. In Experiments 1 and 2, the participants read texts either once or twice, rated their comprehension for each text, and then were tested on the material. In both experiments, correlations between comprehension ratings and test scores were reliably greater for participants who reread texts than for participants who read texts only once. Furthermore, in contrast to the low levels of accuracy typically reported in the literature, rereading produced relatively high levels of accuracy, with the median gamma between ratings and test performance being +.60 across participants from both experiments. Our discussion focuses on two alternative hypotheses--that improved accuracy is an artifact of when judgments are collected or that it results from increased reliability of test performance--and on evidence that is inconsistent with these explanations for the rereading effect.
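
    A short Python sketch of the gamma statistic used here to relate comprehension ratings to test scores (the example ratings and scores are hypothetical):

        from itertools import combinations

        def goodman_kruskal_gamma(ratings, scores):
            # Gamma = (concordant - discordant) / (concordant + discordant) over item pairs
            concordant = discordant = 0
            for (r1, s1), (r2, s2) in combinations(zip(ratings, scores), 2):
                prod = (r1 - r2) * (s1 - s2)
                if prod > 0:
                    concordant += 1
                elif prod < 0:
                    discordant += 1
            return (concordant - discordant) / (concordant + discordant)

        print(goodman_kruskal_gamma([2, 4, 5, 6, 7], [40, 55, 50, 80, 90]))  # 0.8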

  2. Improving mental health outcomes: achieving equity through quality improvement

    PubMed Central

    Poots, Alan J.; Green, Stuart A.; Honeybourne, Emmi; Green, John; Woodcock, Thomas; Barnes, Ruth; Bell, Derek

    2014-01-01

    Objective To investigate equity of patient outcomes in a psychological therapy service, following increased access achieved by a quality improvement (QI) initiative. Design Retrospective service evaluation of health outcomes; data analysed by ANOVA, chi-squared and Statistical Process Control. Setting A psychological therapy service in Westminster, London, UK. Participants People living in the Borough of Westminster, London, attending the service (from either healthcare professional or self-referral) between February 2009 and May 2012. Intervention(s) Social marketing interventions were used to increase referrals, including the promotion of the service through local media and through existing social networks. Main Outcome Measure(s) (i) Severity of depression on entry using Patient Health Questionnaire-9 (PHQ9). (ii) Changes to severity of depression following treatment (ΔPHQ9). (iii) Changes in attainment of a meaningful improvement in condition assessed by a key performance indicator. Results Patients from areas of high deprivation entered the service with more severe depression (M = 15.47, SD = 6.75), compared with patients from areas of low (M = 13.20, SD = 6.75) and medium (M = 14.44, SD = 6.64) deprivation. Patients in low, medium and high deprivation areas attained similar changes in depression score (ΔPHQ9: M = −6.60, SD = 6.41). Similar proportions of patients achieved the key performance indicator across initiative phase and deprivation categories. Conclusions QI methods improved access to mental health services; this paper finds no evidence for differences in clinical outcomes in patients, regardless of level of deprivation, interpreted as no evidence of inequity in the service with respect to this outcome. PMID:24521701

  3. Improving Achievement through Problem-Based Learning

    ERIC Educational Resources Information Center

    Sungur, Semra; Tekkaya, Ceren; Geban, Omer

    2006-01-01

    In this study, the effect of problem-based learning on students' academic achievement and performance skills in a unit on the human excretory system was investigated. Sixty-one 10th grade students, from two full classes instructed by the same biology teacher, were involved in the study. Classes were randomly assigned as either the experimental or…

  4. Do Charter Schools Improve Student Achievement?

    ERIC Educational Resources Information Center

    Clark, Melissa A.; Gleason, Philip M.; Tuttle, Christina Clark; Silverberg, Marsha K.

    2015-01-01

    This article presents findings from a lottery-based study of the impacts of a broad set of 33 charter middle schools across 13 states on student achievement. To estimate charter school impacts, we compare test score outcomes of students admitted to these schools through the randomized admissions lotteries with outcomes of applicants who were not…

  5. Proven Strategies for Improving Learning & Achievement.

    ERIC Educational Resources Information Center

    Brown, Duane

    The purpose of this book is to give student support personnel tools that: (1) will be recognized by educators as directly related to enhancing academic performance; (2) can be used with confidence that they will have the desired impact on achievement; and (3) are culturally sensitive. Chapters contain detailed presentation of the technology as…

  6. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).

  7. Accuracy improvement in digital holographic microtomography by multiple numerical reconstructions

    NASA Astrophysics Data System (ADS)

    Ma, Xichao; Xiao, Wen; Pan, Feng

    2016-11-01

    In this paper, we describe a method to improve the accuracy in digital holographic microtomography (DHMT) for measurement of thick samples. Two key factors impairing the accuracy, the deficiency of depth of focus and the rotational error, are considered and addressed simultaneously. The hologram is propagated to a series of distances by multiple numerical reconstructions so as to extend the depth of focus. The correction of the rotational error, implemented by numerical refocusing and image realigning, is merged into the computational process. The method is validated by tomographic results of a four-core optical fiber and a large mode optical crystal fiber. A sample as thick as 258 μm is accurately reconstructed and the quantitative three-dimensional distribution of refractive index is demonstrated.
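
    The numerical refocusing described above is commonly implemented with angular-spectrum propagation. The Python sketch below (not the authors' code; the wavelength, pixel pitch, field and distances are illustrative assumptions) shows how a recorded complex hologram field can be propagated to several reconstruction distances to extend the depth of focus.

      import numpy as np

      # Propagate a complex 2-D field by `distance` with the angular-spectrum method.
      # All lengths are in metres; evanescent components are suppressed.
      def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
          ny, nx = field.shape
          fx = np.fft.fftfreq(nx, d=pixel_pitch)
          fy = np.fft.fftfreq(ny, d=pixel_pitch)
          fxx, fyy = np.meshgrid(fx, fy)
          arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
          kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
          transfer = np.exp(1j * kz * distance) * (arg > 0)
          return np.fft.ifft2(np.fft.fft2(field) * transfer)

      field = np.ones((256, 256), dtype=complex)      # placeholder hologram field
      for z in np.linspace(50e-6, 250e-6, 5):         # refocus through a thick sample
          refocused = angular_spectrum_propagate(field, 632.8e-9, 3.45e-6, z)
          print(z, float(np.abs(refocused).mean()))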

  8. Improving Learner Achievement through Evaluation by Objectives.

    ERIC Educational Resources Information Center

    Sullivan, Howard J.

    Evaluation techniques were designed to improve learner performance through use of pre-specified popular instructional objectives. Current curriculum planning and evaluation practices are examined. Two common evaluation malpractices are: (1) the tendency to treat the content of the program as the most important criterion for evaluation, (2) the…

  9. Improving Student Achievement through Behavior Intervention.

    ERIC Educational Resources Information Center

    Berry, Gina; And Others

    This report describes a program that was designed to identify and modify disruptive student behavior and improve academic performance. The targeted fifth grade class had been noted for inappropriate behavior and sporadic academic success, with problems documented by teacher observation surveys and self-reporting by students. Probable causes…

  10. Strategic School Funding for Improved Student Achievement

    ERIC Educational Resources Information Center

    Chambers, Jay G.; Brown, James R.; Levin, Jesse; Jubb, Steve; Harper, Dorothy; Tolleson, Ray; Manship, Karen

    2010-01-01

    This article features Strategic School Funding for Results (SSFR) project, a new joint initiative of the American Institutes for Research (AIR) and Pivot Learning Partners (PLP) aimed at improving school finance, human resources, and management systems in large urban school districts. The goal of the project is to develop and implement more…

  11. a Method to Achieve Large Volume, High Accuracy Photogrammetric Measurements Through the Use of AN Actively Deformable Sensor Mounting Platform

    NASA Astrophysics Data System (ADS)

    Sargeant, B.; Robson, S.; Szigeti, E.; Richardson, P.; El-Nounu, A.; Rafla, M.

    2016-06-01

    When using any optical measurement system, one important factor to consider is the placement of the sensors in relation to the workpiece being measured. Decisions on sensor placement involve compromises driven by the shape and size of the object of interest and the desired resolution and accuracy. One such compromise is the distance between the sensors and the measurement surface: a smaller distance gives higher spatial resolution and local accuracy, while a greater distance reduces the number of measurements needed to cover a large area, limiting the build-up of errors between measurements and increasing global accuracy. This paper proposes a photogrammetric approach in which a number of sensors on a continuously flexible mobile platform obtain local measurements while the position of the sensors is determined by a 6DoF tracking solution, and the results are combined into a single set of measurement data within a continuous global coordinate system. The ability of this approach to achieve both high-accuracy measurement and coverage of a large volume is then tested, and areas of weakness to be improved upon are identified.

  12. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher measurement accuracy for routine resistance measurement without increasing the complexity and cost of the system circuit, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The experimental results show that the novel method significantly improves the measurement accuracy of resistance without increasing the system cost or circuit complexity, which makes it valuable for application in electronic instruments.
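
    As a rough illustration of the oversampling idea (not the paper's implementation; the ADC model, excitation current and noise level are assumptions), the sketch below averages many noisy, quantized voltage samples so that the resistance estimate gains effective resolution.

      import numpy as np

      rng = np.random.default_rng(0)

      # Model an ideal ADC with a given full-scale range and bit depth.
      def quantize(v, full_scale=5.0, bits=12):
          lsb = full_scale / (2 ** bits)
          return np.round(v / lsb) * lsb

      # Estimate R = V / I from many noisy, quantized voltage samples; averaging N
      # samples reduces quantization plus random noise roughly as 1/sqrt(N).
      def measure_resistance(r_true, i_excitation=1e-3, oversample=256, noise_sd=2e-3):
          v_true = r_true * i_excitation
          samples = quantize(v_true + rng.normal(0.0, noise_sd, oversample))
          return samples.mean() / i_excitation

      print(measure_resistance(1000.0, oversample=1))    # single-sample estimate
      print(measure_resistance(1000.0, oversample=256))  # oversampled estimate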

  13. Can optimal marker weightings improve thoracohumeral kinematics accuracy?

    PubMed

    Begon, Mickaël; Dal Maso, Fabien; Arndt, Anton; Monnet, Tony

    2015-07-16

    Local and global optimization algorithms have been developed to estimate joint kinematics while reducing soft tissue artifact (STA). Such algorithms can include weightings to account for the different STA occurring at each marker. The objective was to quantify the benefit of optimal weighting and determine whether optimal marker weightings can improve humerus kinematics accuracy. A pin with five reflective markers was inserted into the humerus of four subjects. Seven markers were put on the skin of the arm. Subjects performed 38 different tasks including arm elevation, rotation, daily-living tasks, and sport activities. For each movement, mean and peak errors between skin- and pin-based orientation were reported. Then, optimal marker weightings were found to best match skin- and pin-based orientation. Without weighting, the error in arm orientation ranged from 1.9° to 17.9°. With weighting, 100% of the trials were improved and the average error was halved. The mid-arm marker weights were close to 0 for three subjects. Weights of one subject applied to the others for a given movement, and weights of one movement applied to the others for a given subject, did not systematically increase the accuracy of arm orientation. Without weighting, a redundant set of markers and a least-squares algorithm improved the accuracy of arm orientation estimates compared with literature data obtained using electromagnetic sensors. Weightings were subject- and movement-specific, which reinforces that STA are subject- and movement-specific. However, markers on the deltoid insertion and on the lateral and medial epicondyles may be preferred if a limited number of markers is used.

  14. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
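
    Point (1) above amounts to weighting each class likelihood by a prior probability taken from the collateral layer. A minimal Python sketch of that idea follows, assuming Gaussian class statistics; the class means, covariances, priors and pixel values are illustrative, not values from the study.

      import numpy as np
      from scipy.stats import multivariate_normal

      # Return, for each pixel, the class index that maximizes prior-weighted likelihood.
      def classify(pixels, means, covs, priors):
          scores = np.stack([
              np.log(p) + multivariate_normal(mean=m, cov=c).logpdf(pixels)
              for m, c, p in zip(means, covs, priors)
          ], axis=-1)
          return np.argmax(scores, axis=-1)

      # Two spectral bands, two classes; priors could vary by map stratum.
      means = [np.array([50.0, 60.0]), np.array([80.0, 40.0])]
      covs = [np.eye(2) * 25.0, np.eye(2) * 25.0]
      pixels = np.array([[52.0, 58.0], [78.0, 43.0], [65.0, 50.0]])
      print(classify(pixels, means, covs, priors=[0.7, 0.3]))
      print(classify(pixels, means, covs, priors=[0.2, 0.8]))  # collateral data shifts ambiguous pixels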

  15. Post-processing for improving hyperspectral anomaly detection accuracy

    NASA Astrophysics Data System (ADS)

    Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang

    2015-10-01

    Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. Firstly, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduced the false alarm rates of the RX-based detector.
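
    A minimal sketch of this two-stage scheme is shown below, assuming a global-background RX detector and binary dilation as the morphology operator; the threshold, dilation size and simulated data cube are placeholders rather than the authors' settings.

      import numpy as np
      from scipy.ndimage import binary_dilation

      # RX (Mahalanobis-distance) score map for a (rows, cols, bands) hyperspectral cube.
      def rx_scores(cube):
          rows, cols, bands = cube.shape
          x = cube.reshape(-1, bands).astype(float)
          mu = x.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))
          d = x - mu
          return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(rows, cols)

      # Threshold the RX map, then dilate so pixels around strong anomalies are kept too.
      def detect(cube, score_threshold=50.0, dilation_iters=1):
          seeds = rx_scores(cube) > score_threshold
          return binary_dilation(seeds, iterations=dilation_iters)

      cube = np.random.default_rng(1).normal(size=(64, 64, 20))
      cube[30:33, 30:33, :] += 5.0            # implanted anomaly
      print(detect(cube).sum(), "pixels flagged")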

  16. How Patients Can Improve the Accuracy of their Medical Records

    PubMed Central

    Dullabh, Prashila M.; Sondheimer, Norman K.; Katsh, Ethan; Evans, Michael A.

    2014-01-01

    Objectives: Assess (1) if patients can improve their medical records' accuracy if effectively engaged using a networked Personal Health Record; (2) workflow efficiency and reliability for receiving and processing patient feedback; and (3) patient feedback's impact on medical record accuracy. Background: Improving medical records' accuracy and the associated challenges have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records' accuracy and completeness. A new approach supporting online contributions to their medication lists by patients of Geisinger Health Systems, an online patient-engagement advocate, revealed this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on their medication lists' accuracy before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. Methods: The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists' medication reconciliation logs. Findings/Discussion: (1) Patients were eager to provide feedback on their medications and saw numerous advantages. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescriptions and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms

  17. Accuracy improvement in dissipated energy measurement by using phase information

    NASA Astrophysics Data System (ADS)

    Shiozawa, D.; Inagawa, T.; Washio, T.; Sakagami, T.

    2017-04-01

    In this paper, a technique for improving the accuracy of a dissipated energy measurement based on phase information—called the phase 2f lock-in infrared method—is proposed. In the conventional 2f lock-in infrared method, the dissipated energy is obtained as the double frequency component of the measured temperature change. In this work, a phase analysis of the double frequency component has been conducted. It is found that the double frequency component includes the influence of both the energy dissipation and the harmonic vibration of the fatigue testing machine, and that the phase difference between the thermoelastic temperature change and the double frequency component takes a specific value. The phase 2f lock-in method utilizes this specific phase of the dissipated energy and is effective for removing noise components such as the thermoelastic temperature change due to the harmonic vibration of the fatigue testing machine. This method improves the accuracy of the fatigue-limit estimate and of the detection of future crack initiation points based on the dissipated energy.

  18. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20 cm accuracy is now available in real-time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  19. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol(-1)) is achieved for all 92 organic Y-NO homolysis BDEs calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol(-1) to 0.15 and 0.18 kcal·mol(-1), respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended for minimizing the computational cost and expanding the applications to large molecular systems. Further extrapolation tests were performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol(-1). This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.
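
    As a loose illustration of the correction step (a kernel RBF regressor stands in for the RBFNN, and the descriptors, reference values and hyperparameters are simulated placeholders), the sketch below learns the systematic gap between cheap DFT BDE values and reference values and applies it as a correction.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge

      rng = np.random.default_rng(5)
      descriptors = rng.normal(size=(92, 6))                         # selected molecular descriptors
      bde_dft = 30.0 + descriptors[:, 0] + rng.normal(0.0, 1.0, 92)  # cheap DFT estimates
      bde_ref = bde_dft - (2.0 + 1.5 * descriptors[:, 1])            # stand-in "experimental" values

      # Learn the systematic DFT error as a smooth function of the descriptors.
      model = KernelRidge(kernel='rbf', alpha=0.1, gamma=0.5)
      model.fit(descriptors, bde_ref - bde_dft)

      bde_corrected = bde_dft + model.predict(descriptors)
      print("MAD before:", float(np.mean(np.abs(bde_dft - bde_ref))))
      print("MAD after :", float(np.mean(np.abs(bde_corrected - bde_ref))))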

  20. The use of statistical process control to improve the accuracy of turning

    NASA Astrophysics Data System (ADS)

    Pisarciuc, Cristian

    2016-11-01

    The present work deals with improving the turning process using means of statistical process control. The improvement relates to the fact that several methods are combined in order to achieve the quality defined by technical specifications. The experimental data were collected during identical and successive turning operations on an electrical motor shaft. The initial process presents some difficulties because many machined parts are nonconforming as a consequence of the reduced precision of turning. The article uses the data collected during the turning process, presented through histograms and control charts, to improve accuracy and thereby reduce scrap.
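
    A minimal sketch of the control-chart calculation behind such an SPC study is given below, assuming X-bar and R charts for subgroups of five turned parts; the diameter data are simulated, not the article's measurements.

      import numpy as np

      A2, D3, D4 = 0.577, 0.0, 2.114   # standard chart constants for subgroups of size 5

      # Compute (lower limit, centre line, upper limit) for the X-bar and R charts.
      def xbar_r_limits(subgroups):
          xbar = subgroups.mean(axis=1)
          r = subgroups.max(axis=1) - subgroups.min(axis=1)
          xbar_bar, r_bar = xbar.mean(), r.mean()
          return {
              'xbar': (xbar_bar - A2 * r_bar, xbar_bar, xbar_bar + A2 * r_bar),
              'range': (D3 * r_bar, r_bar, D4 * r_bar),
          }

      rng = np.random.default_rng(2)
      diameters = 25.00 + rng.normal(0.0, 0.01, size=(20, 5))   # 20 subgroups of 5 parts, in mm
      print(xbar_r_limits(diameters))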

  1. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. With this modernization, the new cadastral database is no longer based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, the reform has raised unexpected queries that remain essential to address. The main focus of this study is to review the issues generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. The results of this review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  2. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.

    PubMed

    Mastanduno, Michael A; Gambhir, Sanjiv S

    2016-10-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back-projection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.

  3. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  4. Selecting fillers on emotional appearance improves lineup identification accuracy.

    PubMed

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  5. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures

    PubMed Central

    Mastanduno, Michael A.; Gambhir, Sanjiv S.

    2016-01-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets and within 22% error in the largest quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back-projection from the standpoint of image quality and absolute and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies. PMID:27867695

  6. Improving the Accuracy of High-Order Nodal Transport Methods

    SciTech Connect

    Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

    1999-09-27

    This paper outlines some recent advances towards improving the accuracy of neutron transport calculations using the Arbitrarily High Order Transport-Nodal (AHOT-N) Method. These advances consist of several contributions: (a) a formula for the spatial weights that allows the polynomial order to be raised arbitrarily high without suffering adverse effects from round-off error; (b) a reconstruction technique for the angular flux, based upon a recursive formula, that reduces the pointwise error by one order; (c) an a posteriori error indicator that estimates the true error and its distribution throughout the domain, so that it can be used for adaptively refining the approximation. Present results are mainly for 1D; extension to 2D-3D is in progress.

  7. Improving the Accuracy of High-Order Nodal Transport Methods

    SciTech Connect

    Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

    1999-09-27

    This paper outlines some recent advances towards improving the accuracy of neutron transport calculations using the Arbitrarily High Order Transport-Nodal (AHOT-N) Method. These advances consist of several contributions: (a) a formula for the spatial weights that allows the polynomial order to be raised arbitrarily high without suffering pollution from round-off error; (b) a reconstruction technique for the angular flux, based upon a recursive formula, that reduces the pointwise error by one order; (c) an a posteriori error indicator that estimates the true error and its distribution throughout the domain, so that it can be used for adaptively refining the approximation. Present results are mainly for 1D; extension to 2D-3D is in progress.

  8. Improvement of SLR accuracy, a possible new step

    NASA Technical Reports Server (NTRS)

    Kasser, Michel

    1993-01-01

    Satellite laser ranging (SLR) technology has experienced a large number of technical improvements since the early 1970s, now reaching millimetric instrumental accuracy. At present, it appears pointless to improve instrumental performance further as long as the atmospheric propagation delay retains its current imprecision. Working in multiwavelength mode has been proposed for many years, but up to now the considerable technological difficulties of subpicosecond timing have seriously delayed such an approach. A new possibility is therefore proposed, using a device that is not yet optimized for SLR but has already given good results in the lower troposphere for wind measurement: the combination of a radar and a sodar. While waiting for the 2-lambda methodology, this approach could provide the atmospheric propagation delay at the millimeter level within a few years with only a small technological investment.

  9. Advantages of improved timing accuracy in PET cameras using LSO scintillator

    SciTech Connect

    Moses, William W.

    2002-12-02

    PET scanners based on LSO have the potential for significantly better coincidence timing resolution than the 6 ns fwhm typically achieved with BGO. This study analyzes the performance enhancements made possible by improved timing as a function of the coincidence time resolution. If 500 ps fwhm coincidence timing resolution can be achieved in a complete PET camera, the following four benefits can be realized for whole-body FDG imaging: 1) The random event rate can be reduced by using a narrower coincidence timing window, increasing the peak NECR by ~50 percent. 2) Using time-of-flight in the reconstruction algorithm will reduce the noise variance by a factor of 5. 3) Emission and transmission data can be acquired simultaneously, reducing the total scan time. 4) Axial blurring can be reduced by using time-of-flight to determine the correct axial plane that each event originated from. While time-of-flight was extensively studied in the 1980's, practical factors limited its effectiveness at that time and little attention has been paid to timing in PET since then. As these potential improvements are substantial and the advent of LSO PET cameras gives us the means to obtain them without other sacrifices, efforts to improve PET timing should resume after their long dormancy.

  10. Stratified computed tomography findings improve diagnostic accuracy for appendicitis

    PubMed Central

    Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June

    2014-01-01

    AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P < 0.001), periappendiceal fat stranding (34.1% vs 8.9%; P = 0.001), and the absence of intraluminal air (67.6% vs 48.9%; P = 0.024) compared to the non-appendicitis group. Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P < 0.001). CONCLUSION: Appendiceal diameter and wall thickening, fat stranding, and absence of intraluminal air can be used to increase diagnostic accuracy for appendicitis with equivocal CT findings. PMID:25320531

  11. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

    The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware-related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal to noise ratio (SNR), the noise being determined by the LO power in the shot noise limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration, reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a

  12. Accuracy Improvement on the Measurement of Human-Joint Angles.

    PubMed

    Meng, Dai; Shoepe, Todd; Vejarano, Gustavo

    2016-03-01

    A measurement technique that decreases the root mean square error (RMSE) of measurements of human-joint angles using a personal wireless sensor network is reported. Its operation is based on virtual rotations of wireless sensors worn by the user, and it focuses on the arm, whose position is measured in 5 degrees of freedom (DOF). The wireless sensors use inertial magnetic units that measure the alignment of the arm with the earth's gravity and magnetic fields. Due to the biomechanical properties of human tissue (e.g., the skin's elasticity), the sensors' orientation is shifted, and this shift affects the accuracy of measurements. In the proposed technique, the change of orientation is first modeled from linear regressions of data collected from 15 participants at different arm positions. Then, out of eight body indices measured with dual-energy X-ray absorptiometry, the percentage of body fat is found to have the greatest correlation with the rate of change in the sensors' orientation. This finding enables us to estimate the change in the sensors' orientation from the user's body fat percentage. Finally, an algorithm virtually rotates the sensors using quaternion theory with the objective of reducing the error. The proposed technique is validated with experiments on five different participants. In the DOF whose error decreased the most, the RMSE decreased from 2.20° to 0.87°, an improvement of 60%; in the DOF whose error decreased the least, the RMSE decreased from 1.64° to 1.37°, an improvement of 16%. On average, the RMSE improved by 44%.
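
    The "virtual rotation" step amounts to multiplying the measured sensor quaternion by the inverse of a predicted shift quaternion. The sketch below illustrates that operation; the regression slope, rotation axis and sample values are assumptions for illustration, not the paper's fitted model.

      import numpy as np

      # Hamilton product of two quaternions given as (w, x, y, z).
      def quat_multiply(q, r):
          w1, x1, y1, z1 = q
          w2, x2, y2, z2 = r
          return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                           w1*x2 + x1*w2 + y1*z2 - z1*y2,
                           w1*y2 - x1*z2 + y1*w2 + z1*x2,
                           w1*z2 + x1*y2 - y1*x2 + z1*w2])

      # Predict the orientation shift from body-fat percentage (linear model) as a quaternion.
      def correction_quat(body_fat_pct, axis=np.array([0.0, 0.0, 1.0]), slope_deg_per_pct=0.15):
          angle = np.radians(slope_deg_per_pct * body_fat_pct)
          axis = axis / np.linalg.norm(axis)
          return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

      # Undo the predicted shift by applying the conjugate (inverse of a unit quaternion).
      def corrected_orientation(measured_quat, body_fat_pct):
          c = correction_quat(body_fat_pct)
          c_inv = c * np.array([1.0, -1.0, -1.0, -1.0])
          return quat_multiply(c_inv, measured_quat)

      print(corrected_orientation(np.array([1.0, 0.0, 0.0, 0.0]), body_fat_pct=25.0))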

  13. An Effective Approach to Improving Low-Cost GPS Positioning Accuracy in Real-Time Navigation

    PubMed Central

    Islam, Md. Rashedul; Kim, Jong-Myon

    2014-01-01

    Positioning accuracy is a challenging issue for location-based applications using a low-cost global positioning system (GPS). This paper presents an effective approach to improving the positioning accuracy of a low-cost GPS receiver for real-time navigation. The proposed method precisely estimates position by combining vehicle movement direction, velocity averaging, and distance between waypoints using coordinate data (latitude, longitude, time, and velocity) from the GPS receiver. The previously estimated precise reference point, coordinate translation, and an invalid-data check also improve accuracy. In order to evaluate the performance of the proposed method, we conducted an experiment using a GARMIN GPS 19xHVS receiver attached to a car and used Google Maps to plot the processed data. The proposed method achieved an improvement of 4–10 meters in several experiments. In addition, we compared the proposed approach with two other state-of-the-art methods: recursive averaging and ARMA interpolation. The experimental results show that the proposed approach outperforms these methods in terms of positioning accuracy. PMID:25136679
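
    A rough sketch of this style of post-processing (not the paper's exact algorithm) is given below: fixes implying impossible speeds are discarded as invalid data, and the remaining positions are smoothed with a short moving average. The speed threshold and window length are assumptions.

      import math

      # Great-circle distance in metres between (lat, lon) points given in degrees.
      def haversine_m(p, q):
          r = 6371000.0
          lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
          a = (math.sin((lat2 - lat1) / 2) ** 2 +
               math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
          return 2 * r * math.asin(math.sqrt(a))

      # fixes: list of (lat, lon, t_seconds); returns cleaned, averaged positions.
      def smooth_track(fixes, max_speed_mps=40.0, window=3):
          cleaned = [fixes[0]]
          for fix in fixes[1:]:
              dt = fix[2] - cleaned[-1][2]
              if dt > 0 and haversine_m(cleaned[-1][:2], fix[:2]) / dt <= max_speed_mps:
                  cleaned.append(fix)          # drop fixes implying impossible speeds
          out = []
          for i in range(len(cleaned)):
              chunk = cleaned[max(0, i - window + 1):i + 1]
              out.append((sum(p[0] for p in chunk) / len(chunk),
                          sum(p[1] for p in chunk) / len(chunk)))
          return out

      print(smooth_track([(35.0, 128.0, 0), (35.0001, 128.0001, 1),
                          (35.5, 128.5, 2), (35.0002, 128.0002, 3)]))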

  14. Forecasting space weather: Can new econometric methods improve accuracy?

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.

  15. Accuracy Improvement for Predicting Parkinson’s Disease Progression

    PubMed Central

    Nilashi, Mehrbakhsh; Ibrahim, Othman; Ahani, Ali

    2016-01-01

    Parkinson's disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools built on medical datasets are therefore very desirable for helping people discover their risk of the disease at an early stage and for alleviating its symptoms. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are employed, respectively, to address the multi-collinearity problems in the experimental datasets and to cluster the data. We then apply the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) for prediction of PD progression. Experimental results on public Parkinson's datasets show that the proposed method remarkably improves the accuracy of prediction of PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice with early detection of Parkinson's disease. PMID:27686748
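
    The pipeline can be sketched with off-the-shelf components, as below; a Gaussian mixture plays the role of the EM clustering step, ANFIS is omitted for brevity, and the data, dimensions and hyperparameters are placeholders rather than the study's settings.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.mixture import GaussianMixture
      from sklearn.svm import SVR

      rng = np.random.default_rng(4)
      X = rng.normal(size=(200, 16))                    # stand-in voice/kinematic features
      y = X[:, 0] * 2.0 + rng.normal(0.0, 0.5, 200)     # stand-in progression score

      Z = PCA(n_components=5).fit_transform(X)          # address multi-collinearity
      labels = GaussianMixture(n_components=3, random_state=0).fit_predict(Z)   # EM clustering

      # Fit one SVR per cluster, then predict each sample with its cluster's model.
      models = {k: SVR(C=10.0, epsilon=0.1).fit(Z[labels == k], y[labels == k])
                for k in np.unique(labels)}
      preds = np.array([models[k].predict(Z[i:i + 1])[0] for i, k in enumerate(labels)])
      print("in-sample RMSE:", float(np.sqrt(np.mean((preds - y) ** 2))))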

  16. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    PubMed Central

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza-associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and its use increased from <10% during 2003–2008 to ≈70% during 2009–2013. Observed hospitalization rates per 100,000 persons varied by season: 7.3–50.5 for children <18 years of age, 3.0–30.3 for adults 18–64 years, and 13.6–181.8 for adults >65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children <18 years, ≈20% higher for adults 18–64 years, and ≈55% higher for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
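
    The adjustment itself is simple arithmetic: divide the observed rate by the effective sensitivity of the mix of tests actually used. The snippet below illustrates it with made-up test shares and sensitivities, not FluSurv-NET values.

      # test_mix maps test name -> (share of patients tested this way, test sensitivity).
      def adjusted_rate(observed_rate_per_100k, test_mix):
          effective_sensitivity = sum(share * sens for share, sens in test_mix.values())
          return observed_rate_per_100k / effective_sensitivity

      mix_early = {'rapid_antigen': (0.80, 0.62), 'rt_pcr': (0.20, 0.95)}
      mix_late = {'rapid_antigen': (0.30, 0.62), 'rt_pcr': (0.70, 0.95)}
      print(adjusted_rate(30.0, mix_early))   # larger correction when less-sensitive tests dominate
      print(adjusted_rate(30.0, mix_late))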

  17. Improving accuracy through density correction in guided wave tomography

    PubMed Central

    2016-01-01

    The accurate quantification of wall loss caused by corrosion is critical to the reliable life estimation of pipes and pressure vessels. Traditional thickness gauging by scanning a probe is slow and requires access to all points on the surface; this is impractical in many cases as corrosion often occurs where access is restricted, such as beneath supports where water collects. Guided wave tomography presents a solution to this; by transmitting guided waves through the region of interest and exploiting their dispersive nature, it is possible to build up a map of thickness. While the best results have been seen when using the fundamental modes A0 and S0 at low frequency, the complex scattering of the waves causes errors within the reconstruction. It is demonstrated that these lead to an underestimate in wall loss for A0 but an overestimate for S0. Further analysis showed that this error was related to density variation, which was proportional to thickness. It was demonstrated how this could be corrected for in the reconstructions, in many cases resulting in the near-elimination of the error across a range of defects, and greatly improving the accuracy of life estimates from guided wave tomography. PMID:27118904

  18. Science Achievement for All: Improving Science Performance and Closing Achievement Gaps

    NASA Astrophysics Data System (ADS)

    Jackson, Julie K.; Ash, Gwynne

    2012-11-01

    This article addresses the serious and growing need to improve science instruction and science achievement for all students. We will describe the results of a 3-year study that transformed science instruction and student achievement at two high-poverty ethnically diverse public elementary schools in Texas. The school-wide intervention included purposeful planning, inquiry science instruction, and contextually rich academic science vocabulary development. In combination, these instructional practices rapidly improved student-science learning outcomes and narrowed achievement gaps across diverse student populations.

  19. How Much Can Spatial Training Improve STEM Achievement?

    ERIC Educational Resources Information Center

    Stieff, Mike; Uttal, David

    2015-01-01

    Spatial training has been indicated as a possible solution for improving Science, Technology, Engineering, and Mathematics (STEM) achievement and degree attainment. Advocates for this approach have noted that the correlation between spatial ability and several measures of STEM achievement suggests that spatial training should focus on improving…

  20. Achieving sub-pixel geolocation accuracy in support of MODIS land science

    USGS Publications Warehouse

    Wolfe, R.E.; Nishihama, M.; Fleig, A.J.; Kuyper, J.A.; Roy, D.P.; Storey, J.C.; Patt, F.S.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands—29 with 1 km, five with 500 m, and two with 250 m nadir pixel dimensions. The Terra satellite has on-board exterior orientation (position and attitude) measurement systems designed to enable geolocation of MODIS data to approximately 150 m (1σ) at nadir. A global network of ground control points is being used to determine biases and trends in the sensor orientation. Biases have been removed by updating models of the spacecraft and instrument orientation in the MODIS geolocation software several times since launch and have improved the MODIS geolocation to approximately 50 m (1σ) at nadir. This paper overviews the geolocation approach, summarizes the first year of geolocation analysis, and overviews future work. The approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications.

  1. Accuracy of pitch matching significantly improved by live voice model.

    PubMed

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech.

  2. Accuracy improvement of quantitative analysis by spatial confinement in laser-induced breakdown spectroscopy.

    PubMed

    Guo, L B; Hao, Z Q; Shen, M; Xiong, W; He, X N; Xie, Z Q; Gao, M; Li, X Y; Zeng, X Y; Lu, Y F

    2013-07-29

    To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy, the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of the V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm(2) using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of the V I 440.85 nm/Fe I 438.35 nm line pair was increased from 0.946 (without the cavity) to 0.981 (with the cavity); similar results for Cr I 425.43 nm/Fe I 425.08 nm and Mn I 476.64 nm/Fe I 492.05 nm were also obtained. Therefore, it was demonstrated that the accuracy of quantitative analysis of low-concentration elements in steel samples was improved, because the plasma became more uniform under spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative analysis in LIBS.

  3. Improved localization accuracy in stochastic super-resolution fluorescence microscopy by K-factor image deshadowing.

    PubMed

    Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G; Menon, Rajesh; Gerton, Jordan M; Jorgensen, Erik M; Zalevsky, Zeev

    2013-12-16

    Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key merit in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies for single emitters that can reach an order of magnitude below the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in a longer time needed for the construction of the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when implemented on raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enables the localization of overlapping particles, allowing the use of increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in localization precision compared to single-fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data with the same resolution.

  4. Improving Literacy Achievement: An Effective Approach to Continuous Progress

    ERIC Educational Resources Information Center

    Haley, Carolyn E.

    2007-01-01

    Billions of dollars are spent searching for programs and strategic plans that will prove to be the panacea for improving literacy achievement. With all of the experimental and researched programs implemented in school districts, the overall results are still at a minimum and many improvement gains have been short term. This book focuses on…

  5. Does Children's Academic Achievement Improve when Single Mothers Marry?

    ERIC Educational Resources Information Center

    Wagmiller, Robert L., Jr.; Gershoff, Elizabeth; Veliz, Philip; Clements, Margaret

    2010-01-01

    Promoting marriage, especially among low-income single mothers with children, is increasingly viewed as a promising public policy strategy for improving developmental outcomes for disadvantaged children. Previous research suggests, however, that children's academic achievement either does not improve or declines when single mothers marry. In this…

  6. An Action Plan for Improving Mediocre or Stagnant Student Achievement

    ERIC Educational Resources Information Center

    Redmond, Kimberley B.

    2013-01-01

    Although all of the schools in the target school system adhere to a school improvement process, achievement scores remain mediocre or stagnant within the overseas school in Italy that serves children of United States armed service members. To address this problem, this study explored the target school's improvement process to discover how…

  7. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of magnetic field computations for an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of the various methods are also described when the speed of computation is an important consideration.
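
    The order-of-accuracy point can be illustrated with a one-dimensional test (this is not the field solver itself): fourth-order central differences of a smooth profile converge much faster than second-order ones on the same grid.

      import numpy as np

      # Second- and fourth-order central-difference first derivatives on a uniform grid.
      def d1_second_order(f, h):
          return (f[2:] - f[:-2]) / (2 * h)

      def d1_fourth_order(f, h):
          return (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)

      x = np.linspace(0.0, 1.0, 201)
      h = x[1] - x[0]
      f = np.exp(np.sin(2 * np.pi * x))                 # smooth stand-in for a field profile
      exact = 2 * np.pi * np.cos(2 * np.pi * x) * f

      print("2nd-order max error:", float(np.abs(d1_second_order(f, h) - exact[1:-1]).max()))
      print("4th-order max error:", float(np.abs(d1_fourth_order(f, h) - exact[2:-2]).max()))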

  8. Improving Accuracy of Sleep Self-Reports through Correspondence Training

    ERIC Educational Resources Information Center

    St. Peter, Claire C.; Montgomery-Downs, Hawley E.; Massullo, Joel P.

    2012-01-01

    Sleep insufficiency is a major public health concern, yet the accuracy of self-reported sleep measures is often poor. Self-report may be useful when direct measurement of nonverbal behavior is impossible, infeasible, or undesirable, as it may be with sleep measurement. We used feedback and positive reinforcement within a small-n multiple-baseline…

  9. Laser ranging with the MéO telescope to improve orbital accuracy of space debris

    NASA Astrophysics Data System (ADS)

    Hennegrave, L.; Pyanet, M.; Haag, H.; Blanchet, G.; Esmiller, B.; Vial, S.; Samain, E.; Paris, J.; Albanese, D.

    2013-05-01

    Improving the orbital accuracy of space debris is one of the major prerequisites for reliable collision prediction in low earth orbit. The objective is to avoid false alarms and useless maneuvers for operational satellites. This paper shows how laser ranging on debris can improve the accuracy of orbit determination. In March 2012 a joint OCA-Astrium team obtained the first laser echoes from space debris using the MéO (Métrologie Optique) telescope of the Observatoire de la Côte d'Azur (OCA), upgraded with a nanosecond pulsed laser. The experiment was conducted in full compliance with the procedures dictated by the French Civil Aviation Authorities. To perform laser ranging measurements on space debris, the laser link budget needed to be improved. The related technical developments were supported by the implementation of a 2 J pulsed laser purchased by ASTRIUM and an adapted photodetection system. To achieve acquisition of the target from low-accuracy orbital data such as Two-Line Elements, a 2.3-degree field-of-view telescope was coupled to the original MéO telescope's narrow 3-arcmin field of view. The wide field-of-view telescope was used for pointing, adjustment, and acquiring images of the space debris for astrometry measurements. The achieved set-up allowed laser ranging and angular measurements to be performed in parallel on several rocket stages from past launches. After a brief description of the set-up, development issues and campaigns, the paper discusses the added value of laser ranging measurements when combined with angular measurements for accurate orbit determination. A comparison between different sets of experimental results as well as simulation results is given.

  10. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    PubMed Central

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
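
    As a loose illustration of the marker-data transformation (not the authors' exact procedure; the genotypes, phenotypes and ridge penalty are simulated placeholders), the sketch below compares a ridge prediction fitted on raw centered markers with one fitted on markers multiplied by their correlation matrix.

      import numpy as np

      rng = np.random.default_rng(3)
      n_families, n_markers = 120, 500
      X = rng.choice([0.0, 1.0, 2.0], size=(n_families, n_markers))   # marker dosages
      beta = rng.normal(0.0, 0.2, n_markers)
      y = X @ beta + rng.normal(0.0, 2.0, n_families)                 # e.g. biomass yield

      Xc = X - X.mean(axis=0)
      R = np.corrcoef(Xc, rowvar=False)        # marker-by-marker correlation (LD) matrix
      Xt = Xc @ R                              # correlation-transformed predictors

      # Ridge (GBLUP-like) fit on training families, prediction for held-out families.
      def ridge_fit_predict(X_train, y_train, X_test, lam=10.0):
          p = X_train.shape[1]
          w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p), X_train.T @ y_train)
          return X_test @ w

      train, test = slice(0, 90), slice(90, None)
      for name, M in [('raw markers', Xc), ('correlation-transformed', Xt)]:
          pred = ridge_fit_predict(M[train], y[train], M[test])
          print(name, float(np.corrcoef(pred, y[test])[0, 1]))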

  11. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  12. Improved Discrimination of Influenza Forecast Accuracy Using Consecutive Predictions

    PubMed Central

    Shaman, Jeffrey; Kandula, Sasikiran

    2015-01-01

    Introduction: The ability to predict the growth and decline of infectious disease incidence has advanced considerably in recent years. In particular, accurate forecasts of influenza epidemiology have been developed using a number of approaches. Methods: Within our own group we produce weekly operational real-time forecasts of influenza at the municipal and state level in the U.S. These forecasts are generated using ensemble simulations depicting local influenza transmission dynamics, which have been optimized prior to forecast with observations of influenza incidence and data assimilation methods. The expected accuracy of a given forecast can be inferred in real-time through quantification of the agreement (e.g. the variance) among the ensemble of simulations. Results: Here we show that forecast expected accuracy can be further discriminated with the additional consideration of the streak or persistence of the forecast—the number of consecutive weeks the forecast has converged to the same outcome. Discussion: The findings indicate that the use of both the streak and ensemble agreement provides a more detailed and informative assessment of forecast expected accuracy. PMID:26512336
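
    As a rough illustration of the two screening quantities mentioned above, the sketch below computes ensemble agreement (here, the variance of the ensemble) and the streak length (the number of consecutive recent forecast weeks converging to the same predicted outcome) from hypothetical weekly ensembles of predicted peak weeks; all data and the choice of the rounded median as the converged outcome are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      # Hypothetical ensembles of predicted epidemic peak week, one per forecast week.
      weekly_ensembles = [rng.normal(loc=mu, scale=sd, size=300)
                          for mu, sd in [(22, 3.0), (21, 2.5), (21, 1.8), (21, 1.2), (21, 0.8)]]

      def modal_outcome(ensemble):
          # The outcome the ensemble has converged to (rounded median peak week).
          return int(np.round(np.median(ensemble)))

      def streak_length(outcomes):
          # Number of consecutive most recent forecasts pointing to the same outcome.
          streak = 1
          for prev in reversed(outcomes[:-1]):
              if prev != outcomes[-1]:
                  break
              streak += 1
          return streak

      outcomes = [modal_outcome(e) for e in weekly_ensembles]
      print("ensemble agreement (variance) of latest forecast:", round(np.var(weekly_ensembles[-1]), 2))
      print("streak of identical outcomes (weeks):", streak_length(outcomes))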

  13. Human Physiology: Improving Students' Achievements through Intelligent Studyware.

    ERIC Educational Resources Information Center

    Dori, Yehudit J.; Yochim, Jerome M.

    1994-01-01

    A studyware comprising a set of interconnected modules on human physiology has been developed and used to improve undergraduate students' achievements. Study results show the scores of students who used the optional computer laboratory sessions were enhanced over those who did not use the studyware. Presents examples from the modules. (LZ)

  14. DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies

    DTIC Science & Technology

    2012-11-01

    Realign Fort Eustis, VA, by relocating the installation management functions to Langley AFB, VA. Realign Fort Story, VA, by relocating the installation management functions to Commander... DOD Joint Bases: Management Improvements Needed to Achieve Greater Efficiencies. Report to Congressional Addressees.

  15. Systems Thinking: A Skill to Improve Student Achievement

    ERIC Educational Resources Information Center

    Thornton, Bill; Peltier, Gary; Perreault, George

    2004-01-01

    This article examines how schools can avoid barriers to systems thinking in relation to improving student achievement. It then illustrates common errors associated with non-systems thinking and recommends solutions. Educators who understand that schools are complex interdependent social systems can move their organizations forward. Unfortunately,…

  16. Using Students' Cultural Heritage to Improve Academic Achievement in Writing

    ERIC Educational Resources Information Center

    Mendez, Gilbert

    2006-01-01

    This article discusses an approach to teaching used at Calexico Unified School District, a California-Mexican border high school, by a group of teachers working to make teaching and learning more relevant to Chicano and Mexican students' lives and to improve their academic achievement in writing. An off-shoot of a training program for English…

  17. New Directions in Social Psychological Interventions to Improve Academic Achievement

    ERIC Educational Resources Information Center

    Wilson, Timothy D.; Buttrick, Nicholas R.

    2016-01-01

    Attempts to improve student achievement typically focus on changing the educational environment (e.g., better schools, better teachers) or on personal characteristics of students (e.g., intelligence, self-control). The 6 articles in this special issue showcase an additional approach, emanating from social psychology, which focuses on students'…

  18. Accuracy of Teachers' Judgments of Students' Academic Achievement: A Meta-Analysis

    ERIC Educational Resources Information Center

    Sudkamp, Anna; Kaiser, Johanna; Moller, Jens

    2012-01-01

    This meta-analysis summarizes empirical results on the correspondence between teachers' judgments of students' academic achievement and students' actual academic achievement. The article further investigates theoretically and methodologically relevant moderators of the correlation between the two measures. Overall, 75 studies reporting…

  19. Method for improving accuracy in full evaporation headspace analysis.

    PubMed

    Xie, Wei-Qi; Chai, Xin-Sheng

    2017-03-21

    We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using an ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction analysis technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis.

  20. Improved accuracy of measurements of complex permittivity and permeability using transmission lines

    NASA Astrophysics Data System (ADS)

    Shemelin, V.; Valles, N.

    2014-12-01

    Strong damping of Higher-Order-Modes (HOMs) excited by the beam in accelerating cavities is a necessary condition for achievement of high currents and low emittances in storage rings, electron-positron colliders, and high average power Energy Recovery Linacs (ERLs). Characterization of the electromagnetic properties of lossy ceramics and ferrites used in HOM loads is therefore an essential part of constructing these accelerators. Here we show how to improve these measurements beyond the state of the art. In the past, significant discrepancies have been typical between measured properties for different batches of the same material. Here we show that these can be explained not only by technological deviations in the material production but also by errors in the dimensions of the measured samples. We identify the main source of errors and show how to improve the accuracy of measuring the electromagnetic parameters of absorbing materials.

  1. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    SciTech Connect

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  2. Combining sparseness and smoothness improves classification accuracy and interpretability.

    PubMed

    de Brecht, Matthew; Yamagishi, Noriko

    2012-04-02

    Sparse logistic regression (SLR) has been shown to be a useful method for decoding high-dimensional fMRI and MEG data by automatically selecting relevant feature dimensions. However, when applied to signals with high spatio-temporal correlations, SLR often over-prunes the feature space, which can result in overfitting and weight vectors that are difficult to interpret. To overcome this problem, we investigate a modification of ℓ₁-normed sparse logistic regression, called smooth sparse logistic regression (SSLR), which has a spatio-temporal "smoothing" prior that encourages weights that are close in time and space to have similar values. This causes the classifier to select spatio-temporally continuous groups of features, whereas SLR classifiers often select a scattered collection of independent features. We applied the method to both simulation data and real MEG data. We found that SSLR consistently increases classification accuracy, and produces weight vectors that are more meaningful from a neuroscientific perspective.
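
    A minimal sketch of the kind of objective such a classifier minimizes: a logistic loss plus an L1 sparsity term (the SLR part) plus a smoothness term that penalizes differences between weights of neighboring features. The penalty weights, the synthetic data, and the crude numerical-gradient optimizer are assumptions for illustration only.

      import numpy as np

      def sslr_objective(w, X, y, lam1=0.05, lam2=1.0):
          # y in {0, 1}; adjacent columns of X are assumed to be neighbors in time/space.
          z = X @ w
          log_loss = np.mean(np.log1p(np.exp(-z * (2 * y - 1))))  # logistic loss
          sparsity = lam1 * np.sum(np.abs(w))                     # L1 prior (sparseness)
          smoothness = lam2 * np.sum(np.diff(w) ** 2)             # couples neighboring weights
          return log_loss + sparsity + smoothness

      rng = np.random.default_rng(2)
      X = rng.normal(size=(200, 40))
      w_true = np.zeros(40); w_true[15:25] = 1.0                  # contiguous block of relevant features
      y = (X @ w_true + rng.normal(0, 1, 200) > 0).astype(float)

      # Crude numerical-gradient descent, enough to see the smoothing prior at work.
      w, eps, lr = np.zeros(40), 1e-5, 0.1
      for _ in range(200):
          grad = np.array([(sslr_objective(w + eps * np.eye(40)[j], X, y)
                            - sslr_objective(w - eps * np.eye(40)[j], X, y)) / (2 * eps)
                           for j in range(40)])
          w -= lr * grad
      print("ten largest weights sit at features:", sorted(np.argsort(-np.abs(w))[:10]))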

  3. Improving the accuracy of acetabular cup implantation using a bulls-eye spirit level.

    PubMed

    Macdonald, Duncan; Gupta, Sanjay; Ohly, Nicholas E; Patil, Sanjeev; Meek, R; Mohammed, Aslam

    2011-01-01

    Acetabular introducers have a built-in inclination of 45 degrees to the handle shaft. With patients in the lateral position, surgeons aim to align the introducer shaft vertical to the floor to implant the acetabulum at 45 degrees. We aimed to determine if a bulls-eye spirit level attached to an introducer improved the accuracy of implantation. A small circular bulls-eye spirit level was attached to the handle of an acetabular introducer. A saw bone hemipelvis was fixed to a horizontal, flat surface. A cement substitute was placed in the acetabulum and subjects were asked to implant a polyethylene cup, aiming to obtain an angle of inclination of 45 degrees. Two attempts were made with the spirit level masked and two with it unmasked. The distance of the air bubble from the spirit level's center was recorded by a single assessor. The angle of inclination of the acetabular component was then calculated. Subjects included both orthopedic consultants and trainees. Twenty-five subjects completed the study. Accuracy of acetabular implantation when using the unmasked spirit level improved significantly in all grades of surgeon. With the spirit level masked, 12 out of 50 attempts were accurate at 45 degrees inclination; 11 out of 50 attempts were "open," with greater than 45 degrees of inclination, and 27 were "closed," with less than 45 degrees. With the spirit level visible, all subjects achieved an inclination angle of exactly 45 degrees. A simple device attached to the handle of an acetabular introducer can significantly improve the accuracy of implantation of a cemented cup into a saw bone pelvis in the lateral position.

  4. Accurate rotational constants for linear interstellar carbon chains: achieving experimental accuracy

    NASA Astrophysics Data System (ADS)

    Etim, Emmanuel E.; Arunan, Elangannan

    2017-01-01

    Linear carbon chain molecular species remain the dominant theme in interstellar chemistry. Their continued astronomical observation depends on the availability of accurate spectroscopic parameters. Accurate rotational constants are reported for hundreds of molecular species of astrophysical, spectroscopic and chemical interest from the different linear carbon chains: C_{{n}}H, C_{{n}}H-, C_{{n}}N, C_{{n}}N-, C_{{n}}O, C_{{n}}S, HC_{{n}}S, C_{{n}}Si, CH3(CC)_{{n}}H, HC_{{n}}N, DC_{2{n}+1}N, HC_{2{n}}NC, and CH3(C≡C)_{{n}}CN, using three to four moments of inertia calculated from the experimental rotational constants coupled with those obtained from the optimized geometries at the Hartree-Fock level. The calculated rotational constants are obtained from the corrected moments of inertia at the Hartree-Fock geometries. They are accurate to within a few kHz, irrespective of chain length and terminating group. This accuracy makes these rotational constants excellent tools for both astronomical and laboratory detection of these molecular species of astrophysical interest. From the numerous unidentified lines in different astronomical surveys, transitions corresponding to known and new linear carbon chains could be found using these rotational constants. The astrophysical, spectroscopic and chemical implications of these results are discussed.
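
    As a worked illustration of how a rotational constant follows from a moment of inertia, the sketch below evaluates B = h / (8 π² I) for a linear molecule given atomic masses and positions along the molecular axis. It is checked against CO at an assumed equilibrium bond length of about 1.128 Å, for which B is known to be close to 57.9 GHz; the same function applies to longer carbon chains once a geometry is available.

      import numpy as np

      H_PLANCK = 6.62607015e-34    # J s
      AMU = 1.66053907e-27         # kg
      ANGSTROM = 1e-10             # m

      def rotational_constant_ghz(masses_amu, z_angstrom):
          # B (GHz) of a linear molecule with atoms at axial positions z.
          m = np.asarray(masses_amu, dtype=float) * AMU
          z = np.asarray(z_angstrom, dtype=float) * ANGSTROM
          z_cm = np.sum(m * z) / np.sum(m)            # center of mass
          inertia = np.sum(m * (z - z_cm) ** 2)       # kg m^2
          return H_PLANCK / (8.0 * np.pi ** 2 * inertia) / 1e9

      # 12C at 0 and 16O at 1.128 angstrom: prints roughly 57.9 GHz.
      print(round(rotational_constant_ghz([12.000, 15.995], [0.0, 1.128]), 1), "GHz")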

  5. Optimized diagnostic model combination for improving diagnostic accuracy

    NASA Astrophysics Data System (ADS)

    Kunche, S.; Chen, C.; Pecht, M. G.

    Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial-and-error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps in overcoming the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, yet it has not been well studied which combination rules provide the best performance. Good combination rules achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability has been acknowledged to be a challenge when training a diverse set of classifiers, but it can be achieved through an optimal balance between bias and variance errors using the combination rule proposed in this paper. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations. In this paper, cross validation is employed during the performance evaluation of each classifier to obtain an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance. This function can be solved as a constrained nonlinear optimization problem. Sequential Quadratic Programming-based optimization, which has good convergence properties, has been employed for the optimization. We have demonstrated the applicability of the algorithm using support vector machines and neural networks as classifiers, but the methodology is broadly applicable to combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
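
    A minimal sketch of an optimized combination rule in this spirit: out-of-fold class probabilities from a support vector machine and a small neural network are fused with weights found by SLSQP (a sequential-quadratic-programming method) under a sum-to-one constraint. The synthetic data set and the smooth cross-entropy surrogate used as the objective are assumptions; the paper's own objective balances bias and variance errors.

      import numpy as np
      from scipy.optimize import minimize
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_predict
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=400, n_features=20, random_state=0)
      members = [SVC(probability=True, random_state=0),
                 MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)]

      # Unbiased (cross-validated, out-of-fold) probability estimates per ensemble member.
      P = np.column_stack([cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
                           for m in members])

      def fused_loss(w):
          # Smooth surrogate for ensemble error: cross-entropy of the weighted combination.
          p = np.clip(P @ w, 1e-6, 1 - 1e-6)
          return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

      w0 = np.full(len(members), 1.0 / len(members))
      res = minimize(fused_loss, w0, method="SLSQP",
                     bounds=[(0.0, 1.0)] * len(members),
                     constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
      fused_error = np.mean((P @ res.x > 0.5).astype(int) != y)
      print("optimized weights:", np.round(res.x, 2), " fused misclassification rate:", round(fused_error, 3))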

  6. Nested uncertainties and hybrid metrology to improve measurement accuracy

    NASA Astrophysics Data System (ADS)

    Silver, R. M.; Zhang, N. F.; Barnes, B. M.; Zhou, H.; Qin, J.; Dixson, R.

    2011-03-01

    In this paper we present a method for combining measurement techniques that reduces uncertainties and improves measurement throughput. The approach has immediate utility when performing model-based optical critical dimension (OCD) measurements. When modeling optical measurements, a library of curves is assembled through the simulation of a multi-dimensional parameter space. Parametric correlation and measurement noise lead to measurement uncertainty in the fitting process, resulting in fundamental limitations due to these parametric correlations. We provide a strategy to decouple parametric correlation and reduce measurement uncertainties. We also develop the rigorous underlying Bayesian statistical model needed to apply this methodology to OCD metrology. These statistical methods use a priori information rigorously to reduce measurement uncertainty, improve throughput and develop an improved foundation for comprehensive reference metrology.

  7. Microalloying Boron Carbide with Silicon to Achieve Dramatically Improved Ductility

    DTIC Science & Technology

    2014-11-18

    Microalloying Boron Carbide with Silicon to Achieve Dramatically Improved Ductility. Qi An and William A. Goddard, III. Materials and Process... Boron carbide (B4C) is a hard material whose value for extended engineering applications, such as body armor, is limited by its brittleness under... Plasmonics, Optical Materials, and Hard Matter. Superhard materials, such as diamond, cubic boron nitride, and boron carbide (B4C), exhibit many

  8. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    NASA Astrophysics Data System (ADS)

    Panday, Sorab; Langevin, Christian D.

    2012-06-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  9. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly beneficial in terms of portability and cost savings. PMID:27322284

  10. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly beneficial in terms of portability and cost savings.
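
    A minimal sketch of the position-domain projection idea described above: range-domain pseudorange corrections are projected through the geometry matrix built from line-of-sight unit vectors, yielding a position (plus receiver clock) correction that can be added directly to the standalone fix. The LOS vectors and correction values below are made-up numbers, and the simple least-squares projection is an assumed formulation, not necessarily the authors' exact algorithm.

      import numpy as np

      # Unit line-of-sight vectors from receiver to satellites, ECEF frame (illustrative).
      los = np.array([[ 0.42,  0.55,  0.72],
                      [-0.60,  0.30,  0.74],
                      [ 0.10, -0.70,  0.71],
                      [ 0.75, -0.20,  0.63],
                      [-0.35, -0.45,  0.82]])
      los = los / np.linalg.norm(los, axis=1, keepdims=True)

      prc = np.array([2.1, -1.4, 0.8, 1.9, -0.6])   # range-domain corrections [m]

      # Geometry matrix with rows [-e_x, -e_y, -e_z, 1]; the last column absorbs the
      # receiver clock term, as in standard GNSS least-squares positioning.
      G = np.hstack([-los, np.ones((los.shape[0], 1))])

      # Project the range corrections into the position domain by least squares.
      dx, *_ = np.linalg.lstsq(G, prc, rcond=None)
      print("position correction ECEF [m]:", np.round(dx[:3], 2), "clock term [m]:", round(float(dx[3]), 2))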

  11. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  12. Singing Video Games May Help Improve Pitch-Matching Accuracy

    ERIC Educational Resources Information Center

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  13. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.

  14. Computer assisted electromagnetic navigation improves accuracy in computed tomography guided interventions: A prospective randomized clinical trial

    PubMed Central

    2017-01-01

    Purpose To assess the accuracy and usability of an electromagnetic navigation system designed to assist Computed Tomography (CT) guided interventions. Materials and methods 120 patients requiring a percutaneous CT intervention (drainage, biopsy, tumor ablation, infiltration, sympathicolysis) were included in this prospective randomized trial. Nineteen radiologists participated. Conventional procedures (CT group) were compared with procedures assisted by a navigation system prototype using an electromagnetic localizer to track the position and orientation of a needle holder (NAV group). The navigation system displays the needle path in real-time on 2D reconstructed CT images extracted from the 3D CT volume. The regional ethics committee approved this study and all patients gave written informed consent. The main outcome was the distance between the planned trajectory and the achieved needle trajectory calculated from the initial needle placement. Results 120 patients were analyzable in intention-to-treat (NAV: 60; CT: 60). Accuracy improved when the navigation system was used: distance error (in millimeters: median[P25%; P75%]) with NAV = 4.1[2.7; 9.1], vs. with CT = 8.9[4.9; 15.1] (p<0.001). After the initial needle placement and first control CT, fewer subsequent CT acquisitions were necessary to reach the target using the navigation system: NAV = 2[2; 3]; CT = 3[2; 4] (p = 0.01). Conclusion The tested system was usable in a standard clinical setting and provided significant improvement in accuracy; furthermore, with the help of navigation, targets could be reached with fewer CT control acquisitions. PMID:28296957

  15. Using known map category marginal frequencies to improve estimates of thematic map accuracy

    NASA Technical Reports Server (NTRS)

    Card, D. H.

    1982-01-01

    By means of two simple sampling plans suggested in the accuracy-assessment literature, it is shown how one can use knowledge of map-category relative sizes to improve estimates of various probabilities. The fact that maximum likelihood estimates of cell probabilities for the simple random sampling and map category-stratified sampling were identical has permitted a unified treatment of the contingency-table analysis. A rigorous analysis of the effect of sampling independently within map categories is made possible by results for the stratified case. It is noted that such matters as optimal sample size selection for the achievement of a desired level of precision in various estimators are irrelevant, since the estimators derived are valid irrespective of how sample sizes are chosen.
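
    A minimal sketch of how known map-category proportions can weight a category-stratified sample: per-category agreement rates from a sample confusion matrix are combined with the known relative map areas to estimate overall accuracy and cell probabilities. The confusion matrix and areal proportions below are illustrative numbers, and the estimator shown is one simple form of such marginal weighting.

      import numpy as np

      # Sample confusion matrix (rows = map category, columns = reference category)
      # from a map-category-stratified accuracy assessment.
      n = np.array([[45,  3,  2],
                    [ 5, 40,  5],
                    [ 4,  6, 40]])
      pi = np.array([0.60, 0.25, 0.15])     # known relative map areas of the categories

      row_totals = n.sum(axis=1)
      per_category_agreement = np.diag(n) / row_totals
      overall_accuracy = np.sum(pi * per_category_agreement)   # weighted by known map marginals

      # Estimated cell probabilities p_ij = pi_i * n_ij / n_i+ under stratified sampling.
      p = pi[:, None] * n / row_totals[:, None]
      print("overall accuracy estimate:", round(overall_accuracy, 3))
      print("estimated reference-class proportions:", np.round(p.sum(axis=0), 3))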

  16. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    DOE PAGES

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; ...

    2016-02-11

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  17. Using improvement science methods to increase accuracy of surgical consents.

    PubMed

    Mercurio, Patti; Shaffer Ellis, Andrea; Schoettker, Pamela J; Stone, Raymond; Lenk, Mary Anne; Ryckman, Frederick C

    2014-07-01

    The surgical consent serves as a key link in preventing breakdowns in communication that could lead to wrong-patient, wrong-site, or wrong-procedure events. We conducted a quality improvement initiative at a large, urban pediatric academic medical center to reliably increase the percentage of informed consents for surgical and medical procedures with accurate safety data information at the first point of perioperative contact. Improvement activities focused on awareness, education, standardization, real-time feedback and failure identification, and transparency. A total of 54,082 consent forms from 13 surgical divisions were reviewed between May 18, 2011, and November 30, 2012. Between May 2011 and June 2012, the percentage of consents without safety errors increased from a median of 95.4% to 99.7%. Since July 2012, the median has decreased slightly but has remained stable at 99.4%. Our results suggest that effective safety checks allow discovery and prevention of errors.

  18. Improving science achievement at high-poverty urban middle schools

    NASA Astrophysics Data System (ADS)

    Ruby, Allen

    2006-11-01

    A large percentage of U.S. students attending high-poverty urban middle schools achieve low levels of science proficiency, posing significant challenges to their success in high school science and to national and local efforts to reform science education. Through its work in Philadelphia schools, the Center for Social Organization of Schools at Johns Hopkins University developed a teacher-support model to address variation in science curricula, lack of materials, and underprepared teachers that combined with initial low levels of proficiency block improvements in science achievement. The model includes a common science curriculum based on NSF-supported materials commercially available, ongoing teacher professional development built around day-to-day lessons, and regular in-class support of teachers by expert peer coaches. One cohort of students at three Philadelphia middle schools using the model was followed from the end of fourth grade through seventh grade. Their gains in science achievement and achievement levels were substantially greater than students at 3 matched control schools and the 23 district middle schools serving a similar student population. Under school-by-school comparisons, these results held for the two schools with adequate implementation. Using widely available materials and techniques, the model can be adopted and modified by school partners and districts.

  19. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

    Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, demonstrating that the method is not yet broadly accepted within the medical community. This is because the use of contrast agents is typically reserved for subjects whose initial evaluation produced very low quality recordings. Thus, high background noise and low contrast levels characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance, it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, an evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.

  20. FISM 2.0: Improved Spectral Range, Resolution, and Accuracy

    NASA Technical Reports Server (NTRS)

    Chamberlin, Phillip C.

    2012-01-01

    The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the Space Weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60-second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar cycle and solar rotational variations over months and decades, back to 1947. This model has been highly successful in providing driving inputs to study the effects of solar irradiance variations on the Earth's ionosphere and thermosphere, on lunar dust charging, and on the Martian ionosphere. The second version of FISM, FISM2, is currently being updated to be based on the more accurate SDO/EVE data, which will provide much more accurate estimations in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE, along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1-300 nm will also be available at 0.1 nm resolution, allowing FISM2 to provide similar 0.1 nm spectral bins. FISM2 will also have a TSI component that will estimate the total radiated energy during flares, based on the few flares observed in TSI to date. Presented here are initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome in order for FISM2 to accurately model solar variations on time scales of seconds to decades.

  1. Two-step FEM-based Liver-CT registration: improving internal and external accuracy

    NASA Astrophysics Data System (ADS)

    Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2014-03-01

    Knowing the exact location of the internal structures of an organ, especially the vasculature, is of great importance for clinicians. This information allows them to know which structures/vessels will be affected by a given therapy and therefore to better treat patients. However, the use of internal structures for registration is often disregarded, especially in physically based registration methods. In this paper we propose an algorithm that uses finite element methods to carry out a registration of liver volumes that is accurate not only at the boundaries of the organ but also in its interior. A graph matching algorithm is used to find correspondences between the vessel trees of the two livers to be registered. In addition, an adaptive volumetric mesh is generated that contains nodes at the locations where correspondences were found. The displacements derived from those correspondences are the input for the initial deformation of the model. The first deformation brings the internal structures to their final deformed positions and the surfaces close to them. Finally, thin plate splines are used to refine the solution at the boundaries of the organ, achieving an accuracy improvement of 71%. The algorithm has been evaluated on clinical CT images of the abdomen.

  2. Improvements in ECG accuracy for diagnosis of left ventricular hypertrophy in obesity

    PubMed Central

    Rider, Oliver J; Ntusi, Ntobeko; Bull, Sacha C; Nethononda, Richard; Ferreira, Vanessa; Holloway, Cameron J; Holdsworth, David; Mahmod, Masliza; Rayner, Jennifer J; Banerjee, Rajarshi; Myerson, Saul; Watkins, Hugh; Neubauer, Stefan

    2016-01-01

    Objectives The electrocardiogram (ECG) is the most commonly used tool to screen for left ventricular hypertrophy (LVH), and yet current diagnostic criteria are insensitive in the modern, increasingly overweight society. We propose a simple adjustment to improve diagnostic accuracy at different body weights and improve the sensitivity of this universally available technique. Methods Overall, 1295 participants were included; 821 with a wide range of body mass index (BMI 17.1–53.3 kg/m2) initially underwent cardiac magnetic resonance evaluation of anatomical left ventricular (LV) axis, LV mass and 12-lead surface ECG in order to generate an adjustment factor applied to the Sokolow–Lyon criteria. This factor was then validated in a second cohort (n=520, BMI 15.9–63.2 kg/m2). Results When matched for LV mass, the combination of leftward anatomical axis deviation and increased BMI resulted in a reduction of the Sokolow–Lyon index, by 4 mm in overweight and 8 mm in obesity. After adjusting for this in the initial cohort, the sensitivity of the Sokolow–Lyon index increased (overweight: 12.8% to 30.8%, obese: 3.1% to 27.2%), approaching that seen in normal weight (37.8%). Similar results were achieved in the validation cohort (sensitivity increased in overweight: 8.3% to 39.1%, obese: 9.4% to 25.0%), again approaching normal weight (39.0%). Importantly, specificity remained excellent (>93.1%). Conclusions Adjusting the Sokolow–Lyon index for BMI (overweight +4 mm, obesity +8 mm) improves the diagnostic accuracy for detecting LVH. As the ECG, worldwide, remains the most widely used screening tool for LVH, implementing these findings should translate into significant clinical benefit. PMID:27486142
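
    A minimal sketch of applying the adjustment proposed above: the conventional Sokolow-Lyon voltage sum (S in V1 plus the larger R in V5/V6, with LVH suggested at 35 mm or more) is increased by 4 mm in overweight and 8 mm in obese patients before the threshold is applied. The BMI cut-offs of 25 and 30 kg/m² are the usual definitions and are assumed here, as is the 35 mm criterion.

      def adjusted_sokolow_lyon_lvh(s_v1_mm, r_v5_mm, r_v6_mm, bmi, threshold_mm=35.0):
          # Return True if the BMI-adjusted Sokolow-Lyon index meets the voltage criterion.
          index = s_v1_mm + max(r_v5_mm, r_v6_mm)
          if bmi >= 30.0:
              index += 8.0          # obesity adjustment
          elif bmi >= 25.0:
              index += 4.0          # overweight adjustment
          return index >= threshold_mm

      # A voltage sum of 29 mm misses the unadjusted criterion but meets it once the
      # obesity adjustment is applied.
      print(adjusted_sokolow_lyon_lvh(s_v1_mm=14, r_v5_mm=15, r_v6_mm=12, bmi=33))   # True
      print(adjusted_sokolow_lyon_lvh(s_v1_mm=14, r_v5_mm=15, r_v6_mm=12, bmi=23))   # False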

  3. Applicability and accuracy improvement of transient elastography using the M and XL probes by experienced operators.

    PubMed

    Carrión, J A; Puigvehí, M; Coll, S; Garcia-Retortillo, M; Cañete, N; Fernández, R; Márquez, C; Giménez, M D; Garcia, M; Bory, F; Solà, R

    2015-03-01

    Transient elastography (TE) is the reference method for obtaining liver stiffness measurements (LSM), but no results are obtained in 3.1% of cases and unreliable results in 15.8%. We assessed the applicability and diagnostic accuracy of TE re-evaluation using the M and XL probes. From March 2011 to April 2012, 868 LSM were performed with the M probe by trained operators (50-500 studies) (LSM1). Measurements were categorized as inadequate (no values, or ratio <60% and/or IQR/LSM >30%) or adequate. Inadequate LSM1 were re-evaluated by experienced operators (>500 explorations) (LSM2), and inadequate LSM2 were re-evaluated using the XL probe (LSMXL). Inadequate LSM1 were obtained in 187 (21.5%) patients: IQR/LSM >30% in 97 (51%), ratio <60% in 24 (13%), and TE failed to obtain a measurement in 67 (36%). LSM2 achieved adequate registers in 123 (70%) of 175 registers previously considered inadequate. Independent variables (OR, 95%CI) related to inadequate LSM1 were body mass index (1.11, 1.04-1.18), abdominal circumference (1.03, 1.01-1.06) and age (1.03, 1.01-1.04), and those related to inadequate LSM2 were skin-capsule distance (1.21, 1.09-1.34) and abdominal circumference (1.05, 1.01-1.10). The diagnostic accuracy (AUROC) for identifying significant fibrosis improved from 0.89 (LSM1) to 0.91 (LSM2) (P = 0.046) in 334 patients with liver biopsy or clinically significant portal hypertension. A third evaluation (LSMXL) obtained adequate registers in 41 (93%) of 44 patients with inadequate LSM2. Operator experience increases the applicability and diagnostic accuracy of TE. The XL probe may be recommended for patients with inadequate values obtained by experienced operators using the M probe. http://clinicaltrials.gov (NCT01900808).
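
    A minimal sketch of the adequacy rule used above: an examination is treated as inadequate when no valid stiffness readings are obtained, the success ratio falls below 60%, or the IQR-to-median ratio exceeds 30%. The example readings are invented.

      import numpy as np

      def te_exam_adequate(valid_kpa, n_attempts):
          # valid_kpa: successful stiffness readings (kPa); n_attempts: total shots fired.
          if len(valid_kpa) == 0:
              return False
          success_ratio = len(valid_kpa) / n_attempts
          lsm = np.median(valid_kpa)
          iqr = np.percentile(valid_kpa, 75) - np.percentile(valid_kpa, 25)
          return success_ratio >= 0.60 and iqr / lsm <= 0.30

      print(te_exam_adequate([6.1, 6.8, 7.2, 6.5, 7.0, 6.9, 7.4, 6.6, 7.1, 6.7], 12))  # adequate
      print(te_exam_adequate([4.0, 9.5, 6.0, 12.3, 5.1], 14))                          # inadequate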

  4. Mechanized pivot shift test achieves greater accuracy than manual pivot shift test.

    PubMed

    Musahl, Volker; Voos, James; O'Loughlin, Padhraig F; Stueber, Volker; Kendoff, Daniel; Pearle, Andrew D

    2010-09-01

    The objective of this study was to design a navigated mechanized pivot shift test setup and evaluate its repeatability in the ACL-deficient knee. It was hypothesized that translations and rotations measured with the mechanized pivot shift would be more repeatable than those obtained with a manual pivot shift. Twelve fresh frozen cadaveric hip-to-toe whole lower extremities were used for this study. A manual pivot shift test was performed in the intact knee and in the ACL-deficient knee and was repeated three times. A navigation system simultaneously recorded tibial translation and rotation. The mechanized pivot shift test consists of a modified continuous passive motion (CPM) machine and a custom-made foot holder to allow for the application of internal rotation moments at the knee. Valgus moments were achieved by a 45-degree tilt of the CPM machine with respect to the supine position and a Velcro strap secured across the proximal tibia. The mechanized pivot shift was repeated three times. Repeated-measures ANOVA was used to compare manual and mechanized pivot shift testing. The intra-class correlation coefficient (ICC) was used to determine variability within each knee at each testing condition. In the ACL-deficient knee, translation with manual pivot shift testing (11.7 +/- 2.6 mm) was significantly higher than with mechanized pivot shift testing (7.4 +/- 2.5 mm; p < 0.05). Rotation with manual pivot shift testing (18.6 +/- 5.4 degrees) was also significantly higher than with mechanized pivot shift testing (11.0 +/- 2.3 degrees; p < 0.05). The ICC for translations was 0.76 for the manual pivot shift and 0.92 for the mechanized pivot shift test. The ICC for rotations was 0.89 for the manual pivot shift and 0.82 for the mechanized pivot shift test. This study introduced a modified CPM for mechanized pivot shift testing. Although recorded translations and rotations with the mechanized pivot shift test were lower than with manual

  5. The use of imprecise processing to improve accuracy in weather & climate prediction

    NASA Astrophysics Data System (ADS)

    Düben, Peter D.; McNamara, Hugh; Palmer, T. N.

    2014-08-01

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and

  6. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
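
    A minimal sketch of a bit-flip fault model of the kind used to emulate stochastic processors: with a given fault rate, one randomly chosen bit of each affected 64-bit float is flipped. In the experiments described above such faults are confined to the small-scale variables; the sketch only shows the mechanics on an array, with an exaggerated fault rate so a flip is visible.

      import numpy as np

      rng = np.random.default_rng(3)

      def emulate_bit_flips(values, fault_rate=1e-3):
          # Return a copy of `values` with random single-bit upsets at `fault_rate`.
          out = np.array(values, dtype=np.float64)
          bits = out.view(np.uint64)
          hit = rng.random(out.shape) < fault_rate
          flip_pos = rng.integers(0, 64, size=out.shape).astype(np.uint64)
          bits[hit] ^= np.uint64(1) << flip_pos[hit]
          return out

      x = np.linspace(0.0, 1.0, 10)
      print(emulate_bit_flips(x, fault_rate=0.3))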

  7. Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Fabik, Richard H.

    1993-01-01

    The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from plus or minus 0.90 deg R to plus or minus 0.20 deg R over a specified range.
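
    A minimal sketch of the software-side correction described above: calibration points taken with the sensor installed in its operating environment are fit with a low-order polynomial, which the data acquisition software then applies to each raw reading. The calibration pairs and the quadratic fit are illustrative assumptions; temperatures are in degrees Rankine.

      import numpy as np

      # (raw sensor reading, reference standard reading) pairs from in-situ calibration.
      raw_cal = np.array([140.2, 160.6, 180.9, 200.7, 220.4])
      ref_cal = np.array([139.7, 160.0, 180.1, 200.0, 220.0])

      # Fit a quadratic correction curve mapping raw readings to corrected values.
      coeffs = np.polyfit(raw_cal, ref_cal, deg=2)

      def corrected_temperature(raw_deg_r):
          # Applied inside the data acquisition software to every raw reading.
          return float(np.polyval(coeffs, raw_deg_r))

      print(round(corrected_temperature(190.5), 2))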

  8. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    NASA Astrophysics Data System (ADS)

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.

  9. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment

    PubMed Central

    Vu, An T.; Phillips, Jeffrey S.; Kay, Kendrick; Phillips, Matthew E.; Johnson, Matthew R.; Shinkareva, Svetlana V.; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2017-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  10. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  11. Improving Student Achievement: A Study of High-Poverty Schools with Higher Student Achievement Outcomes

    ERIC Educational Resources Information Center

    Butz, Stephen D.

    2012-01-01

    This research examined the education system at high-poverty schools that had significantly higher student achievement levels as compared to similar schools with lower student achievement levels. A multischool qualitative case study was conducted of the educational systems where there was a significant difference in the scores achieved on the…

  12. Improving Accuracy of Decoding Emotions from Facial Expressions by Cooperative Learning Techniques, Two Experimental Studies.

    ERIC Educational Resources Information Center

    Klinzing, Hans Gerhard

    A program was developed for the improvement of social competence in general among professionals with the improvement of the accuracy of decoding emotions from facial expressions as the specific focus. It was integrated as a laboratory experience into traditional lectures at two German universities where studies were conducted to assess the…

  13. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.

  14. Proficiency testing linked to the national reference system for the clinical laboratory: a proposal for achieving accuracy.

    PubMed

    Lasky, F D

    1992-07-01

    I propose using proficiency testing (PT) to achieve one of the important goals of CLIA: accurate and reliable clinical testing. Routine methods for the clinical laboratory are traceable to Definitive Methods (DM), Reference Methods (RM), or Methodological Principles (MP) through a modification of the National Reference System for the Clinical Laboratory. PT is the link used to monitor consistent field performance. Although PT has been effective as a relative measure of laboratory performance, the technical limitations of PT fluids and of routine methods currently in use make it unlikely that PT alone can be used as a reliable measure of laboratory accuracy. Instead, I recommend calibration of routine systems through correlation to DM, RM, or MP with the use of patients' specimens. The manufacturer is in the best position to assume this responsibility because it is also responsible for a consistent, reliable product. Analysis of different manufactured batches of reagent would be compared with predetermined goals for precision and accuracy, as illustrated with data from product testing of Kodak Ektachem clinical chemistry slides. Adoption of this proposal would give manufacturers of PT materials, manufacturers of analytical systems, PT providers, and government agencies time to understand and resolve the sources of error that limit the utility of PT for the job required by law.

  15. Improvement of the accuracy of phase observation by modification of phase-shifting electron holography.

    PubMed

    Suzuki, Takahiro; Aizawa, Shinji; Tanigaki, Toshiaki; Ota, Keishin; Matsuda, Tsuyoshi; Tonomura, Akira

    2012-07-01

    We found that the accuracy of the phase observation in phase-shifting electron holography is strongly restricted by time variations of mean intensity and contrast of the holograms. A modified method was developed for correcting these variations. Experimental results demonstrated that the modification enabled us to acquire a large number of holograms, and as a result, the accuracy of the phase observation has been improved by a factor of 5.

  16. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach offers an alternative method to improve geo-positioning accuracy of satellite images. The approach enables the integration of multi-source and multi-resolution satellite imagery for generating more precise and consistent 3D spatial information, which permits the comparative and synergistic use of multi-resolution satellite images from multiple sources.
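    The ridge-parameter idea mentioned above can be illustrated with a generic damped, weighted least-squares update of the kind used to stabilize bias-compensation parameters in such adjustments; the sketch below is a hypothetical toy example (the design matrix, weights, and ridge value are assumptions, not the paper's adjustment model).

```python
# A minimal, hypothetical sketch of one damped (ridge-regularized) weighted
# least-squares step, of the kind used to stabilize the estimation of the
# dominating RPC bias parameters in a combined adjustment.  J, r, the weights
# and the ridge parameter are placeholders, not values from the paper.
import numpy as np

def ridge_wls_step(J, r, weights, ridge):
    """One Gauss-Newton-style update  dx = (J^T W J + k I)^(-1) J^T W r."""
    W = np.diag(weights)
    N = J.T @ W @ J + ridge * np.eye(J.shape[1])   # ridge term aids convergence
    return np.linalg.solve(N, J.T @ W @ r)

# Toy example: 2 bias parameters observed through 5 tie-point equations.
rng = np.random.default_rng(1)
J = rng.standard_normal((5, 2))                    # design (partial-derivative) matrix
true_dx = np.array([0.8, -0.3])
r = J @ true_dx + 0.01 * rng.standard_normal(5)    # misclosure vector
dx = ridge_wls_step(J, r, weights=np.ones(5), ridge=1e-3)
print("estimated correction:", dx)
```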

  17. Multispectral image compression methods for improvement of both colorimetric and spectral accuracy

    NASA Astrophysics Data System (ADS)

    Liang, Wei; Zeng, Ping; Xiao, Zhaolin; Xie, Kun

    2016-07-01

    We propose that both colorimetric and spectral distortion in compressed multispectral images can be reduced by a composite model, named OLCP(W)-X (OptimalLeaders_Color clustering-PCA-W weighted-X coding). In the model, first the spectral-colorimetric clustering is designed for sparse equivalent representation by generating spatial basis. Principal component analysis (PCA) is subsequently used in the manipulation of spatial basis for spectral redundancy removal. Then error compensation mechanism is presented to produce predicted difference image, and finally combined with visual characteristic matrix W, and the created image is compressed by traditional multispectral image coding schemes. We introduce four model-based algorithms to explain their validity. The first two algorithms are OLCPWKWS (OLC-PCA-W-KLT-WT-SPIHT) and OLCPKWS, in which Karhunen-Loeve transform, wavelet transform, and set partitioning in hierarchical trees coding are applied for the created image compression. And the latter two methods are OLCPW-JPEG2000-MCT and OLCP-JPEG2000-MCT. Experimental results show that, compared with the corresponding traditional coding, the proposed OLCPW-X schemes can significantly improve the colorimetric accuracy of rebuilding images under various illumination conditions and generally achieve satisfactory peak signal-to-noise ratio under the same compression ratio. And OLCP-X methods could always ensure superior spectrum reconstruction. Furthermore, our model has excellent performance on user interaction.

  18. Neural network incorporating meal information improves accuracy of short-time prediction of glucose concentration.

    PubMed

    Zecchin, Chiara; Facchinetti, Andrea; Sparacino, Giovanni; De Nicolao, Giuseppe; Cobelli, Claudio

    2012-06-01

    Diabetes mellitus is one of the most common chronic diseases, and a clinically important task in its management is the prevention of hypo/hyperglycemic events. This can be achieved by exploiting continuous glucose monitoring (CGM) devices and suitable short-term prediction algorithms able to infer future glycemia in real time. In the literature, several methods for short-time glucose prediction have been proposed, most of which do not exploit information on meals, and use past CGM readings only. In this paper, we propose an algorithm for short-time glucose prediction using past CGM sensor readings and information on carbohydrate intake. The predictor combines a neural network (NN) model and a first-order polynomial extrapolation algorithm, used in parallel to describe, respectively, the nonlinear and the linear components of glucose dynamics. Information on the glucose rate of appearance after a meal is described by a previously published physiological model. The method is assessed on 20 simulated datasets and on 9 real Abbott FreeStyle Navigator datasets, and its performance is successfully compared with that of a recently proposed NN glucose predictor. Results suggest that exploiting meal information improves the accuracy of short-time glucose prediction.
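    The parallel structure described above (a linear extrapolator plus a neural network for the nonlinear part) can be sketched roughly as follows; the synthetic CGM trace, window sizes, and network settings are illustrative assumptions, and the meal/rate-of-appearance input used by the authors is omitted (it would simply enter as additional network inputs).

```python
# Rough sketch (not the authors' model): 30-minute-ahead prediction that adds a
# neural-network correction to a first-order polynomial extrapolation of recent
# CGM samples.  The CGM trace, window sizes and network settings are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
t = np.arange(0, 24 * 60, 5.0)                        # one day of 5-min CGM samples
cgm = 120 + 40 * np.sin(2 * np.pi * t / (6 * 60)) + rng.normal(0, 3, t.size)

window, horizon = 6, 6                                # 30 min of history, 30 min ahead

def linear_extrapolation(history):
    """First-order polynomial fit to the window, evaluated `horizon` steps ahead."""
    k = np.arange(len(history))
    slope, intercept = np.polyfit(k, history, 1)
    return intercept + slope * (len(history) - 1 + horizon)

X, lin_pred, target = [], [], []
for i in range(window, len(cgm) - horizon):
    hist = cgm[i - window:i]
    X.append(hist)                                    # NN inputs: past CGM window
    lin_pred.append(linear_extrapolation(hist))       # linear component
    target.append(cgm[i + horizon - 1])               # future glucose value
X, lin_pred, target = map(np.asarray, (X, lin_pred, target))

# The network models only the residual (nonlinear) component of the dynamics;
# carbohydrate/rate-of-appearance inputs would be appended to X in a real setup.
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
nn.fit(X, target - lin_pred)
combined = lin_pred + nn.predict(X)
print("RMSE combined vs. linear alone: %.2f vs. %.2f mg/dL"
      % (np.sqrt(np.mean((combined - target) ** 2)),
         np.sqrt(np.mean((lin_pred - target) ** 2))))
```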

  19. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  20. Accuracy, Effectiveness and Improvement of Vibration-Based Maintenance in Paper Mills: Case Studies

    NASA Astrophysics Data System (ADS)

    AL-NAJJAR, B.

    2000-01-01

    Many current vibration-based maintenance (VBM) policies for rolling element bearings do not make full use of the bearings' useful lives. Evidence and indications that bearings' mean effective lives can be prolonged through more accurate diagnosis and prognosis are found in cases of faulty bearing installation, faulty machinery design, harsh environmental conditions, and the practice of replacing a bearing as soon as its vibration level exceeds the normal threshold. Analysis of data from roller bearings at two paper mills suggests that longer bearing lives can be safely achieved by increasing the accuracy of the vibration data. This paper relates bearing failure modes to the observed vibration spectra and their development patterns over the bearings' lives. A systematic approach, which describes the objectives and performance of studies in two Swedish paper mills, is presented. Explanations of the mechanisms behind some frequent modes of early failure and ways to avoid them are suggested. It is shown theoretically, and partly confirmed by the analysis of (unfortunately incomplete) data from two paper mills over many years, that accurate prediction of remaining bearing life requires: (a) enough vibration measurements, (b) numerical records of operating conditions, (c) better discrimination between frequencies in the spectrum and (d) correlation of (b) and (c). This is because life prediction depends on precise knowledge of primary, harmonic and side-band frequency amplitudes and their development over time. Further, the available data, which are collected from relevant plant activities, can be utilized to perform cyclic improvements in diagnosis, prognosis, experience and economy.

  1. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    PubMed Central

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Predicting protein–protein interactions (PPIs) is a challenging task and essential to construct the protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvements include: (1) protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) to reduce the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) the powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on the yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57% and 90.57%, respectively. These experimental results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic
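    The BiGP feature step can be sketched as follows for a single protein: each PSSM yields a 400-dimensional bi-gram probability vector, which is then reduced with PCA before classification. The random PSSMs, dimensions, and the SVC stand-in (scikit-learn ships no RVM, and a real PPI pipeline would concatenate the features of both proteins in a pair) are assumptions for illustration only.

```python
# Hedged sketch of the feature side of such a pipeline: bi-gram probability
# (BiGP) features computed from a PSSM, reduced with PCA, then classified.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def bigram_features(pssm):
    """BiGP: B[a, b] = sum over positions i of P[i, a] * P[i+1, b] -> 400-dim vector."""
    return (pssm[:-1].T @ pssm[1:]).ravel()

rng = np.random.default_rng(3)
n_proteins, length, alphabet = 200, 60, 20
pssms = rng.random((n_proteins, length, alphabet))          # toy PSSMs
pssms /= pssms.sum(axis=2, keepdims=True)                   # rows as probabilities
X = np.array([bigram_features(p) for p in pssms])           # shape (200, 400)
y = rng.integers(0, 2, n_proteins)                          # interact / do not interact

X_reduced = PCA(n_components=50, random_state=0).fit_transform(X)  # noise reduction
clf = SVC(kernel="rbf").fit(X_reduced, y)                   # stand-in for the RVM
print("training accuracy on toy data:", clf.score(X_reduced, y))
```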

  2. Alaska Case Study: Scientists Venturing Into Field with Journalists Improves Accuracy

    NASA Astrophysics Data System (ADS)

    Ekwurzel, B.; Detjen, J.; Hayes, R.; Nurnberger, L.; Pavangadkar, A.; Poulson, D.

    2008-12-01

    Issues such as climate change, stem cell research, public health vaccination, etc., can be fraught with public misunderstanding, myths, as well as deliberate distortions of the fundamental science. Journalists are adept at creating print, radio, and video content that can be both compelling and informative to the public. Yet most scientists have little time or training to devote to developing media content for the public and spend little time with journalists who cover science stories. We conducted a case study to examine whether the time and funding invested in exposing journalists to scientists in the field over several days would improve accuracy of media stories about complex scientific topics. Twelve journalists were selected from the 70 who applied for a four-day environmental journalism fellowship in Alaska. The final group achieved the goal of a broad geographic spectrum of the media outlets (small regional to large national organizations), medium (print, radio, online), and experience (early career to senior producers). Reporters met with a diverse group of scientists. The lessons learned and successful techniques will be presented. Initial results demonstrate that stories were highly accurate and rich with audio or visual content for lay audiences. The journalists have also maintained contact with the scientists, asking for leads on emerging stories and seeking new experts that can assist in their reporting. Science-based institutions should devote more funding to foster direct journalist-scientist interactions in the lab and field. These positive goals can be achieved: (1) more accurate dissemination of science information to the public; (2) a broader portion of the scientific community will become a resource to journalists instead of the same eloquent few in the community; (3) scientists will appreciate the skill and pressures of those who survive the media downsizing and provide media savvy content; and (4) the public may incorporate science evidence

  3. Ensemble of classifiers to improve accuracy of the CLIP4 machine-learning algorithm

    NASA Astrophysics Data System (ADS)

    Kurgan, Lukasz; Cios, Krzysztof J.

    2002-03-01

    Machine learning, one of the data mining and knowledge discovery tools, addresses automated extraction of knowledge from data, expressed in the form of production rules. The paper describes a method for improving accuracy of rules generated by inductive machine learning algorithm by generating the ensemble of classifiers. It generates multiple classifiers using the CLIP4 algorithm and combines them using a voting scheme. The generation of a set of different classifiers is performed by injecting controlled randomness into the learning algorithm, but without modifying the training data set. Our method is based on the characteristic properties of the CLIP4 algorithm. The case study of the SPECT heart image analysis system is used as an example where improving accuracy is very important. Benchmarking results on other well-known machine learning datasets, and comparison with an algorithm that uses boosting technique to improve its accuracy are also presented. The proposed method always improves the accuracy of the results when compared with the accuracy of a single classifier generated by the CLIP4 algorithm, as opposed to using boosting. The obtained results are comparable with other state-of-the-art machine learning algorithms.
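    The general ensemble idea, building several classifiers by injecting controlled randomness into the learner and combining them by voting on an unchanged training set, can be sketched as follows; since CLIP4 is not available as an off-the-shelf library, a randomized decision tree is used as a stand-in learner, and the dataset is synthetic.

```python
# Sketch of the general idea only: several classifiers built by injecting
# controlled randomness (here, random seeds in a stand-in decision tree) and
# combined by majority voting, with the training data left unchanged.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = [DecisionTreeClassifier(splitter="random", random_state=seed).fit(X_tr, y_tr)
           for seed in range(7)]                      # 7 randomized classifiers

votes = np.array([m.predict(X_te) for m in members]) # shape (7, n_test)
majority = (votes.mean(axis=0) >= 0.5).astype(int)   # simple majority vote

single_acc = members[0].score(X_te, y_te)
ensemble_acc = (majority == y_te).mean()
print(f"single classifier: {single_acc:.3f}   voted ensemble: {ensemble_acc:.3f}")
```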

  4. School Improvement for All: Reflections on the Achievement Gap.

    ERIC Educational Resources Information Center

    Howard, Gary R.

    2002-01-01

    Asserts that three social dominance issues have caused and continue to perpetuate achievement disparities among poor and minority students: the assumption of rightness, the luxury of ignorance, and the legacy of privilege. Describes school district initiatives, leadership, and research findings related to overcoming this achievement gap. Contends…

  5. Evidence that Smaller Schools Do Not Improve Student Achievement

    ERIC Educational Resources Information Center

    Wainer, Howard; Zwerling, Harris L.

    2006-01-01

    If more small schools than "expected" are among the high achievers, then creating more small schools would raise achievement across the board, many proponents of small schools have argued. In this article, the authors challenge the faulty logic of such inferences. Many claims have been made about the advantages of smaller schools. One is…
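    The statistical point at issue can be made concrete with a toy simulation: if schools differ only in size, small schools appear disproportionately often among the top scorers simply because their mean scores are more variable. All numbers below are invented for illustration.

```python
# Toy simulation: equal underlying achievement, unequal school sizes.
import numpy as np

rng = np.random.default_rng(4)
sizes = rng.choice([50, 100, 400, 800], size=2000)         # school enrolments
means = np.array([rng.normal(500, 80, n).mean() for n in sizes])

top = means >= np.quantile(means, 0.95)                    # "high achievers"
print("share of small schools overall:   ", (sizes <= 100).mean())
print("share of small schools in top 5%: ", (sizes[top] <= 100).mean())
```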

  6. Improving the accuracy of brain tumor surgery via Raman-based technology.

    PubMed

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W; Sunney Xie, X; Orringer, Daniel A

    2016-03-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors.

  7. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  8. Design of a platinum resistance thermometer temperature measuring transducer and improved accuracy of linearizing the output voltage

    SciTech Connect

    Malygin, V.M.

    1995-06-01

    An improved method is presented for designing a temperature measuring transducer, the electrical circuit of which comprises an unbalanced bridge, in one arm of which is a platinum resistance thermometer, and containing a differential amplifier with feedback. Values are given for the coefficients, the minimum linearization error is determined, and an example is also given of the practical design of the transducer, using the given coefficients. A determination is made of the limiting achievable accuracy in linearizing the output voltage of the measuring transducer, as a function of the range of measured temperature.
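    The notion of a limiting linearization error can be illustrated numerically for a generic (not the paper's) unbalanced bridge with a Pt100 element: the bridge output is slightly nonlinear in temperature, and the residual after the best straight-line fit is the quantity the design seeks to minimize. All component values below are assumptions.

```python
# Simplified numerical illustration of bridge-output linearization error.
import numpy as np

R0, A, B = 100.0, 3.9083e-3, -5.775e-7      # IEC 60751 Pt100 coefficients
T = np.linspace(0.0, 200.0, 401)            # measured temperature range, deg C
Rt = R0 * (1 + A * T + B * T**2)            # Callendar-Van Dusen relation (T >= 0)

Vs, Rref = 2.5, 100.0                       # assumed supply voltage and fixed arms
Vout = Vs * (Rt / (Rt + Rref) - 0.5)        # unbalanced-bridge output voltage

slope, offset = np.polyfit(T, Vout, 1)      # best straight-line approximation
error = Vout - (slope * T + offset)
print(f"max linearization error: {1e3 * np.abs(error).max():.2f} mV "
      f"over {T[0]:.0f}-{T[-1]:.0f} deg C")
```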

  9. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction: The angular dependency of the response of TLD cards may introduce deviations from the true value in environmental dosimetry results, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed at right angles in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose over a time interval. Result: The accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729

  10. On the use of Numerical Weather Models for improving SAR geolocation accuracy

    NASA Astrophysics Data System (ADS)

    Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.

    2013-12-01

    Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performances of operational Numerical Weather Prediction (NWP) Models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well-suited for this experiment, since recent studies have confirmed the few centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.:VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired in Daunia in Southern Italy has been hence selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly directed towards the TerraSAR-X satellite platform. The geolocation of CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of

  11. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    ERIC Educational Resources Information Center

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  12. Preoperative Measurement of Tibial Resection in Total Knee Arthroplasty Improves Accuracy of Postoperative Limb Alignment Restoration

    PubMed Central

    Wu, Pei-Hui; Zhang, Zhi-Qi; Fang, Shu-Ying; Yang, Zi-Bo; Kang, Yan; Fu, Ming; Liao, Wei-Ming

    2016-01-01

    Background: Accuracy of implant placement in total knee arthroplasty (TKA) is crucial. Traditional extramedullary alignment instruments are fairly effective for achieving the desired mean tibial component coronal alignment. We modified the traditional tibial plateau resection technique and evaluated its effect on alignment restoration. Methods: Two hundred and eighty-two primary TKAs in our hospital between January 2013 and December 2014 were enrolled in this retrospective study. Group A consisted of 128 primary TKAs performed by one senior surgeon. Preoperative measurement of the tibial resection was conducted on radiographs, and the measured thicknesses of the lateral and medial plateau resection were used to place the tibial alignment guide. Group B consisted of 154 primary TKAs performed by the other senior surgeon, using a traditional tibial plateau resection technique. In all patients, an extramedullary guide was used for tibial resection, and preoperative and postoperative full-leg standing radiographs were used to assess the hip-knee-ankle angle (HKA), femoral component alignment angle (FA), and tibial component alignment angle (TA). A deviation ≥3° was considered unsatisfactory. Data were analyzed by unpaired Student's t-test. Results: The mean postoperative HKA and TA angles were significantly different between Groups A and B (178.2 ± 3.2° vs. 177.0 ± 3.0°, t = 2.54, P = 0.01; 89.3 ± 1.8° vs. 88.3 ± 2.0°, t = 3.75, P = 0.00, respectively). The mean postoperative FA was 88.9 ± 2.5° in Group A and 88.9 ± 2.6° in Group B, and no significant difference was detected (t = 0.10, P = 0.92). There were 90 (70.3%) limbs with restoration of the mechanical axis to within 3° of neutral alignment and 38 (29.7%) outliers (>3° deviation) in Group A, whereas there were 89 (57.8%) limbs with restoration of the mechanical axis to within 3° of neutral alignment and 65 (42.2%) outliers (>3° deviation) in Group B. The severity of the preoperative alignment

  13. The improvement of OPC accuracy and stability by the model parameters' analysis and optimization

    NASA Astrophysics Data System (ADS)

    Chung, No-Young; Choi, Woon-Hyuk; Lee, Sung-Ho; Kim, Sung-Il; Lee, Sun-Yong

    2007-10-01

    The OPC model is critical for sub-45 nm devices because the Critical Dimension Uniformity (CDU) budget required to meet device performance and process window latitude at the production level is very tight. The OPC model is generally composed of an optical model and a resist model. Each of them contains physical terms that can be calculated without any wafer data and empirical terms that must be fitted to real wafer data to build the optical and resist models. The empirical terms largely determine OPC accuracy, but they are likely to be overfitted to the wafer data, and such overfitting driven by a small cost function can deteriorate OPC stability. Several physical terms of the optical model have traditionally been set to ideal values, or not considered at all, because they did not have a critical impact on OPC accuracy; at the low-k1 process, however, these parameters need to be included in the OPC modeling. Currently, real optical parameters such as the laser bandwidth, source map, and pupil polarization (including phase and intensity differences) are beginning to be measured in place of ideal values, and these measured values are used for the OPC modeling. The measured values can improve model accuracy and stability. On the other hand, without careful handling these parameters can cause the OPC model to overcorrect process proximity errors. The laser bandwidth, source map, pupil polarization, and focus centering for the optical modeling are analyzed, and the sample data weighting scheme and resist model terms are investigated as well. Image blurring caused by the actual laser bandwidth of the exposure system is modeled, and the modeling result shows that extraction of the 2D patterns is necessary to obtain a reasonable result, owing to the measurement noise of 2D patterns in the SEM. The source map data from the exposure machine show large horizontal and vertical intensity differences, and this phenomenon must come from the measurement noise

  14. Cognitive Processing Profiles of School-Age Children Who Meet Low-Achievement, IQ-Discrepancy, or Dual Criteria for Underachievement in Oral Reading Accuracy

    ERIC Educational Resources Information Center

    Van Santen, Frank W.

    2012-01-01

    The purpose of this study was to compare the cognitive processing profiles of school-age children (ages 7 to 17) who met criteria for underachievement in oral reading accuracy based on three different methods: 1) use of a regression-based IQ-achievement discrepancy only (REGonly), 2) use of a low-achievement cutoff only (LAonly), and 3) use of a…

  15. Incorporating the effect of DEM resolution and accuracy for improved flood inundation mapping

    NASA Astrophysics Data System (ADS)

    Saksena, Siddharth; Merwade, Venkatesh

    2015-11-01

    Topography plays a major role in determining the accuracy of flood inundation areas. However, many areas in the United States and around the world do not have access to high quality topographic data in the form of Digital Elevation Models (DEM). For such areas, an improved understanding of the effects of DEM properties such as horizontal resolution and vertical accuracy on flood inundation maps may eventually lead to improved flood inundation modeling and mapping. This study attempts to relate the errors arising from DEM properties such as spatial resolution and vertical accuracy to flood inundation maps, and then use this relationship to create improved flood inundation maps from coarser resolution DEMs with low accuracy. The results from the five stream reaches used in this study show that water surface elevations (WSE) along the stream and the flood inundation area have a linear relationship with both DEM resolution and accuracy. This linear relationship is then used to extrapolate the water surface elevations from coarser resolution DEMs to get water surface elevations corresponding to a finer resolution DEM. Application of this approach shows that improved results can be obtained from flood modeling by using coarser and less accurate DEMs, including public domain datasets such as the National Elevation Dataset and Shuttle Radar Topography Mission (SRTM) DEMs. The improvement in the WSE and its application to obtain better flood inundation maps are dependent on the study reach characteristics such as land use, valley shape, reach length and width. Application of the approach presented in this study on more reaches may lead to development of guidelines for flood inundation mapping using coarser resolution and less accurate topographic datasets.
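    The extrapolation idea can be sketched in a few lines: fit a straight line to water surface elevations obtained from coarse DEMs and evaluate it at the finer resolution of interest. The resolutions and WSE values below are invented for illustration and are not the study's data.

```python
# Hedged sketch: linear WSE-vs-resolution fit extrapolated to a finer DEM.
import numpy as np

resolution_m = np.array([90.0, 60.0, 30.0])      # coarse DEMs actually modelled
wse_m = np.array([151.8, 151.1, 150.5])          # simulated WSE at one cross section

slope, intercept = np.polyfit(resolution_m, wse_m, 1)
wse_at_10m = slope * 10.0 + intercept            # extrapolated to a 10 m DEM
print(f"estimated WSE for a 10 m DEM: {wse_at_10m:.2f} m")
```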

  16. The Role of Principal Leadership in Improving Student Achievement. Newsletter

    ERIC Educational Resources Information Center

    Center for Comprehensive School Reform and Improvement, 2005

    2005-01-01

    School and district leadership has been the focus of intense scrutiny in recent years as researchers try to define not only the qualities of effective leadership but also the impact of leadership on the operation of schools, and even on student achievement. A recently published literature review entitled "How Leadership Influences Student Learning"…

  17. An Effective Way to Improve Mathematics Achievement in Urban Schools

    ERIC Educational Resources Information Center

    Kim, Taik

    2010-01-01

    The local Gaining Early Awareness and Readiness for Undergraduate Programs (GEARUP) partnership serves 11 K-8 schools with the lowest achievement scores and the highest poverty rates in a large Midwestern urban district. Recently, GEARUP launched a specially designed teaching program, Mathematics Enhancement Group (MEG), for underachievers in…

  18. Improving Academic Achievement in Reading and Writing in Primary Grades.

    ERIC Educational Resources Information Center

    Hubbard, Trina; Newell, Michelle

    This study describes a program designed to increase academic achievement in reading and writing among first and second grade students in a rural, middle-income area. Evidence for the existence of the problem includes reading comprehension tests, observation checklists for reading skills and reading behaviors, and writing samples. Analysis of…

  19. Helping Students Improve Academic Achievement and School Success Behavior

    ERIC Educational Resources Information Center

    Brigman, Greg; Campbell, Chari

    2003-01-01

    This article describes a study evaluating the impact of school-counselor-led interventions on student academic achievement and school success behavior. A group counseling and classroom guidance model called student success skills (SSS) was the primary intervention. The focus of the SSS model was on three sets of skills identified in several…

  20. Improving Secondary School Students' Achievement using Intrinsic Motivation

    ERIC Educational Resources Information Center

    Albrecht, Erik; Haapanen, Rebecca; Hall, Erin; Mantonya, Michelle

    2009-01-01

    This report describes a program for increasing students' intrinsic motivation in an effort to increase academic achievement. The targeted population consisted of secondary level students in a middle to upper-middle class suburban area. The students of the targeted secondary level classes appeared to be disengaged from learning due to a lack of…

  1. Capacity Building for a School Improvement Program, Achievement Directed Leadership.

    ERIC Educational Resources Information Center

    Graeber, Anna O.; And Others

    This report describes and evaluates efforts to enhance school districts' capacity to implement and institutionalize the monitoring and management system for an instructional leadership program called Achievement Directed Leadership (ADL). Chapter one introduces the report's methodology, limitations, and structure. Chapter two first states the…

  2. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    PubMed Central

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user

  3. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    PubMed

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm’s burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm’s accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
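    A schematic stand-in for this workflow (not the authors' exact multistage procedure) is a per-class, per-feature z-score screen followed by a comparison of 10-fold cross-validated accuracy with and without the removal; the dataset, threshold, and classifier below are assumptions.

```python
# Hedged sketch: z-score outlier screen, then 10-fold CV accuracy comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=8, flip_y=0.1, random_state=0)

def zscore_inliers(X, y, threshold=3.0):
    """Keep samples whose per-class, per-feature |z| stays below the threshold."""
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        idx = y == c
        z = np.abs((X[idx] - X[idx].mean(0)) / X[idx].std(0))
        keep[idx] = (z < threshold).all(axis=1)       # univariate screen per class
    return keep

clf = RandomForestClassifier(random_state=0)
acc_all = cross_val_score(clf, X, y, cv=10).mean()
mask = zscore_inliers(X, y)
acc_clean = cross_val_score(clf, X[mask], y[mask], cv=10).mean()
print(f"10-fold accuracy, raw: {acc_all:.3f}   after outlier removal: {acc_clean:.3f}")
```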

  4. After Detection: The Improved Accuracy of Lung Cancer Assessment Using Radiologic Computer-aided Diagnosis

    PubMed Central

    Amir, Guy J.; Lehmann, Harold P.

    2015-01-01

    Rationale and Objectives: The aim of this study was to evaluate the improved accuracy of radiologic assessment of lung cancer afforded by computer-aided diagnosis (CADx). Materials and Methods: Inclusion/exclusion criteria were formulated, and a systematic inquiry of research databases was conducted. Following title and abstract review, an in-depth review of 149 surviving articles was performed with accepted articles undergoing a Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-based quality review and data abstraction. Results: A total of 14 articles, representing 1868 scans, passed the review. Increases in the receiver operating characteristic (ROC) area under the curve of 0.8 or higher were seen in all nine studies that reported it, except for one that employed subspecialized radiologists. Conclusions: This systematic review demonstrated improved accuracy of lung cancer assessment using CADx over manual review, in eight high-quality observer-performance studies. The improved accuracy afforded by radiologic lung-CADx suggests the need to explore its use in screening and regular clinical workflow. PMID:26616209

  5. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.

  6. Improved speed and accuracy of calculations with a programmable calculator in pediatric emergency scenarios.

    PubMed

    Melzer-Lange, M; Wyatt, D; Walsh-Kelly, C; Smith, D; Hegenbarth, M A; Eisenberg, C S

    1991-03-01

    Both mathematical and selection errors may occur when ordering drug or fluid therapy in a busy emergency department. In an attempt to improve the speed and accuracy of such calculations, we programmed a hand-held calculator to assist in drug and intravenous fluid therapy dosages and rates for three emergency situations: diabetic ketoacidosis, asthma, and asystole. Performance by 58 subjects at various levels of training was compared when using either the programmable calculator or standard materials and methods. When standard methods were used, an average of 30.6 minutes was needed to complete the three scenarios, with an accuracy of 73%; by contrast, use of the programmable calculator resulted in a significant decline in the time needed to calculate doses (an average of only 8.5 minutes), with an improved accuracy of 98%. The use of a programmable calculator can result in a significant improvement in both speed and accuracy of drug and fluid selection and dosage and rate calculations, regardless of the level of the subject's medical training.

  7. Improving Student Achievement through the Enhancement of Study Skills.

    ERIC Educational Resources Information Center

    Smith, Marvin; Teske, Ralph; Gossmeyer, Matt

    This study described a program for improving students' study skills aimed at improving academic performance. The targeted population consisted of students in two public high schools and one parochial grade school in a medium-sized metropolitan area located in central Illinois. The lack of these skills by students at all levels had been…

  8. Partnering through Training and Practice to Achieve Performance Improvement

    ERIC Educational Resources Information Center

    Lyons, Paul R.

    2010-01-01

    This article presents a partnership effort among managers, trainers, and employees to spring to life performance improvement using the performance templates (P-T) approach. P-T represents a process model as well as a method of training leading to performance improvement. Not only does it add to our repertoire of training and performance management…

  9. Analysis and Improvement of Geo-Referencing Accuracy in Long Term Global AVHRR Data

    NASA Astrophysics Data System (ADS)

    Khlopenkov, K.; Minnis, P.

    2011-12-01

    Precise geolocation is one of the fundamental requirements for generating a high-quality Advanced Very High Resolution Radiometer (AVHRR) Satellite Climate Data Record (SCDR) at 1-km spatial resolution for climate applications. The Global Climate Observing System (GCOS) and Committee on Earth Observing Satellites (CEOS) identified the requirement for the accuracy of geolocation of satellite data for climate applications as 1/3 field-of-view (FOV). This requirement for the AVHRR series on the National Oceanic and Atmospheric Administration (NOAA) platforms cannot be met without implementing the ground control point (GCP) correction, especially for historical data, because of the limited accuracy of orbit models and uncertainty in the satellite attitude angles. This work presents a new analysis of the geo-referencing accuracy of global AVHRR data that uses automated image matching at pre-selected GCP locations. As a reference image, we have been using the clear-sky monthly composite imagery derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD09 dataset at 250-m resolution. The image matching technique is applicable to processing not only the daytime observations from optical solar bands, but also the nighttime imagery by using the long wave thermal channels. The method includes ortho-rectification to correct for surface elevation and achieves sub-pixel accuracy in both along-scan and along-track directions. The produced image displacement map is then used to derive a correction to the satellite clock error and the attitude angles. The statistics and pattern of these corrections have been analyzed for different NOAA polar-orbiting satellites by using the HRPT, LAC, and GAC data sets. The application of the developed processing system showed that the algorithm achieved better than 1/3 FOV geolocation accuracy for most of the AVHRR 1-km scenes. It has a high efficiency rate (over 97%) for global AVHRR data from NOAA-6 through NOAA-19.
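    The core operation, sub-pixel matching of an image chip against a reference chip at a GCP location, can be illustrated with scikit-image's phase correlation as a stand-in for the production matcher; the synthetic image and the applied fractional shift below are assumptions.

```python
# Illustrative stand-in: sub-pixel registration of a simulated "AVHRR chip"
# against a reference chip using phase correlation from scikit-image.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(5)
reference = rng.random((128, 128))                 # stands in for a MODIS-based chip
true_shift = (2.3, -1.7)                           # applied (line, sample) offset in pixels
moving = nd_shift(reference, true_shift, order=3, mode="nearest")

# The returned shift is the one to apply to `moving` to register it onto `reference`.
est_shift, error, _ = phase_cross_correlation(reference, moving, upsample_factor=100)
registered = nd_shift(moving, est_shift, order=3, mode="nearest")

print("applied offset:", true_shift, "  estimated registration shift:", np.round(est_shift, 2))
print("RMS mismatch before: %.3f   after: %.3f"
      % (np.sqrt(np.mean((moving - reference) ** 2)),
         np.sqrt(np.mean((registered - reference) ** 2))))
```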

  10. Developing an efficient technique for satellite image denoising and resolution enhancement for improving classification accuracy

    NASA Astrophysics Data System (ADS)

    Thangaswamy, Sree Sharmila; Kadarkarai, Ramar; Thangaswamy, Sree Renga Raja

    2013-01-01

    Satellite images are corrupted by noise during image acquisition and transmission. The removal of noise from the image by attenuating the high-frequency image components removes important details as well. In order to retain the useful information, improve the visual appearance, and accurately classify an image, an effective denoising technique is required. We discuss three important steps such as image denoising, resolution enhancement, and classification for improving accuracy in a noisy image. An effective denoising technique, hybrid directional lifting, is proposed to retain the important details of the images and improve visual appearance. The discrete wavelet transform based interpolation is developed for enhancing the resolution of the denoised image. The image is then classified using a support vector machine, which is superior to other neural network classifiers. The quantitative performance measures such as peak signal to noise ratio and classification accuracy show the significance of the proposed techniques.

  11. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory is applied to a thin plate. A simulation is conducted, and its results show that the localization accuracy of beamforming depends on multi-mode propagation, dispersion, wave velocity, and array dimension. To reduce the effect of these propagation characteristics on source localization, an AE signal pre-processing method is introduced that combines plate wave theory and the wavelet packet transform, and a revised localization velocity is presented to reduce the effect of array size. The localization accuracy of standard beamforming and of the improved method of the present paper are compared in a rubbing test carried out on a rotating machinery test table. The results indicate that the improved method can localize the rub fault effectively.

  12. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    SciTech Connect

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-02-11

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
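    One simple way to let linkage disequilibrium enter a ridge-regression prediction model is to post-multiply the centred marker matrix by the marker-by-marker correlation matrix before fitting; the sketch below shows only the mechanics on simulated markers and phenotypes, and will not reproduce the paper's accuracies.

```python
# Schematic sketch: ridge-regression genomic prediction with and without a
# marker-correlation (LD) transformation of the marker matrix.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_families, n_markers = 140, 1000
Z = rng.binomial(2, 0.3, size=(n_families, n_markers)).astype(float)
Z -= Z.mean(axis=0)                                   # centre marker codes
beta = rng.normal(0, 0.05, n_markers)                 # simulated marker effects
y = Z @ beta + rng.normal(0, 1.0, n_families)         # phenotype = signal + noise

R = np.corrcoef(Z, rowvar=False)                      # marker correlation (LD) matrix
Z_ld = Z @ R                                          # LD-aware transformation

for name, X in [("raw markers", Z), ("correlation-transformed", Z_ld)]:
    acc = cross_val_score(Ridge(alpha=100.0), X, y, cv=5, scoring="r2").mean()
    print(f"{name:25s} 5-fold cross-validated R^2: {acc:.3f}")
```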

  13. Research on how to improve the accuracy of the SLM metallic parts

    NASA Astrophysics Data System (ADS)

    Pacurar, Razvan; Balc, Nicolae; Prem, Florica

    2011-05-01

    Selective laser melting (SLM) is one of the most important technologies used when complex metallic parts need to be rapidly manufactured. There are some requirements related to the quality of the manufactured part or the accuracy of the process control, in order to turn SLM process into a production technique. This paper presents a case study undertaken at the Technical University of Cluj-Napoca (TUCN) in cooperation with an industrial company from Romania, focusing on the accuracy issues. Finite element analysis (FEA) and Design Expert software were jointly used in order to determine the optimum process parameters required to improve the accuracy of the SLM metallic parts. Experimental results are also presented in the paper.

  14. HEAT: High accuracy extrapolated ab initio thermochemistry. III. Additional improvements and overview.

    SciTech Connect

    Harding, M. E.; Vazquez, J.; Ruscic, B.; Wilson, A. K.; Gauss, J.; Stanton, J. F.; Chemical Sciences and Engineering Division; Universität Mainz; The Univ. of Texas; Univ. of North Texas

    2008-01-01

    Effects of increased basis-set size as well as a correlated treatment of the diagonal Born-Oppenheimer approximation are studied within the context of the high-accuracy extrapolated ab initio thermochemistry (HEAT) theoretical model chemistry. It is found that the addition of these ostensible improvements does little to increase the overall accuracy of HEAT for the determination of molecular atomization energies. Fortuitous cancellation of high-level effects is shown to give the overall HEAT strategy an accuracy that is, in fact, higher than most of its individual components. In addition, the issue of core-valence electron correlation separation is explored; it is found that approximate additive treatments of the two effects have limitations that are significant in the realm of <1 kJ mol⁻¹ theoretical thermochemistry.

  15. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  16. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  17. Improve the ZY-3 Height Accuracy Using Icesat/glas Laser Altimeter Data

    NASA Astrophysics Data System (ADS)

    Li, Guoyuan; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-06-01

    ZY-3, launched on 9 January 2012, is China's first civilian high-resolution stereo mapping satellite. Its aim is to obtain high-resolution stereo images and support national surveying and mapping at the 1:50,000 scale. Although ZY-3 has very high accuracy for direct geo-location without ground control points (GCPs), some GCPs are still indispensable for high-precision stereo mapping. The Geoscience Laser Altimeter System (GLAS) was carried on ICESat (Ice, Cloud and land Elevation Satellite), the first laser altimetry satellite for Earth observation. After its launch in 2003, GLAS played an important role in monitoring polar ice sheets and measuring land topography and vegetation canopy heights. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by suitable criteria. In this paper, ICESat/GLAS laser altimeter data are used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high-precision GLAS elevation data. Two strategies for improving the ZY-3 height accuracy are introduced. The first is conventional bundle adjustment based on the RFM and a bias-compensation model, in which the GLAS footprint data are used as height control. The second is to correct the DSM (Digital Surface Model) directly by simple block adjustment, where the DSM is derived from the ZY-3 stereo imagery after free-network adjustment and dense image matching. The experimental results demonstrate that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 m by adding GLAS elevation data. In addition, the accuracy and efficiency of the two strategies are compared for practical application.

  18. Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?

    NASA Astrophysics Data System (ADS)

    Van Dyk, J.; Battista, J.

    2014-03-01

    Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.

  19. Contour accuracy improvement of a flexure-based micro-motion stage for tracking repetitive trajectory

    NASA Astrophysics Data System (ADS)

    Jia, Shi; Jiang, Yao; Li, Tiemin; Du, Yunsong

    2017-01-01

    Flexure-based micro-motion mechanisms have been widely utilized in modern precision industry due to their inherent merits, while model uncertainty, uncertain nonlinearities, and cross-coupling effects noticeably degrade their contour accuracy, especially in high-speed applications. This paper aims at improving the contouring performance of a flexure-based micro-motion stage used for tracking repetitive trajectories. The dynamic characteristic of the micro-motion stage is first studied and modeled as a second-order system, which is identified through an open-loop sinusoidal sweeping test. Then the iterative learning control (ILC) scheme is utilized to improve the tracking performance of each individual axis of the stage. A nonlinear cross-coupled iterative learning control (CCILC) scheme is proposed to reduce the coupling effect among the axes and thus improve the contour accuracy of the stage. The nonlinear gain function incorporated into the CCILC controller effectively avoids amplifying non-recurring disturbances and noise across iterations, which further improves the stage's contour accuracy in high-speed motion. Comparative experiments between traditional PID, ILC, ILC & CCILC, and the proposed ILC & nonlinear CCILC are carried out on the micro-motion stage to track circular and square trajectories. The results demonstrate that the proposed control scheme substantially outperforms the other control schemes in improving the stage's contour accuracy in high-speed motion. The study in this paper provides a practically effective technique for the flexure-based micro-motion stage in high-speed contouring motion.
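
    As a minimal sketch of the control laws involved (generic P-type forms with illustrative symbols; the gains, filters, and the exact nonlinear gain used in the paper are not reproduced here), single-axis ILC updates the feedforward command from one repetition j to the next using the stored tracking error, and CCILC applies a similar update to an estimated contour error that is distributed back to the axes:

        u_{j+1}(t) = u_j(t) + \gamma\, e_j(t+1), \qquad
        \varepsilon_j(t) \approx -\sin\theta(t)\, e^{x}_j(t) + \cos\theta(t)\, e^{y}_j(t),

    where e^x_j and e^y_j are the axis tracking errors, θ(t) is the local tangent angle of the reference trajectory, and ε_j is the first-order contour-error estimate; replacing a fixed gain in the contour loop with a nonlinear, error-dependent gain is what keeps non-repeating disturbances from being amplified over iterations.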

  20. Significant improvements to the GBT surface accuracy via high-resolution radio holography

    NASA Astrophysics Data System (ADS)

    Hunter, Todd R.; Schwab, Fred R.; White, Steve D.; Ford, John M.; Ghigo, Frank D.; Maddalena, Ron J.; Mason, Brian S.; Nelson, J. D.; Ray, Jason; Simon, Bob

    2010-01-01

    The 100-m diameter Green Bank Telescope (GBT) was built with an active surface of 2209 actuators in order to achieve and maintain an accurate paraboloidal shape. The actuator home positions were set originally via photogrammetry performed 10 years ago, which resulted in a surface accuracy of about 400 microns rms. In order to improve this performance, in late Fall 2008 we installed a Ku-band holography system on the telescope, composed of two external-reference low-noise block converters attached to linearly-polarized feeds, followed by down-conversion stages, anti-aliasing filters, and a digital correlator. The primary receiver illuminates the subreflector from the standard Gregorian focus while the reference receiver is coupled to an upward-looking 30 cm diameter feed located at the tip of the vertical feed arm (above the subreflector). The system is tunable over the typical geostationary satellite downlink frequency band (11.7-12.2 GHz) and the correlated bandwidth is 10 kHz. We performed a spectral survey of a few dozen satellites visible from Green Bank, and identified a number of strong and stable continuous wave beacons near 11.700 GHz suitable for holographic mapping. The typical phase stability of the system is 2 degrees rms in 36 millisecond integrations, and is mostly limited by the atmosphere. We began the holography campaign in January 2009. Maps are made with on-the-fly raster scanning over a 2 degree region with 1400 points in the scan direction and 201 points in the perpendicular direction, requiring approximately 3 hours. Surface features as small as 0.5 m are visible, compared to the typical panel size of 2 m by 2.5 m. A number of large features coincident with specific actuators were identified, and traced to electrical problems either with the actuator motors, position sensors or cabling. These problems were repaired during the following months as the campaign continued through several iterations of holography mapping, surface adjustments, and

  1. Improving the accuracy and precision of cognitive testing in mild dementia.

    PubMed

    Wouters, Hans; Appels, Bregje; van der Flier, Wiesje M; van Campen, Jos; Klein, Martin; Zwinderman, Aeilko H; Schmand, Ben; van Gool, Willem A; Scheltens, Philip; Lindeboom, Robert

    2012-03-01

    The CAMCOG, ADAS-cog, and MMSE, designed to grade global cognitive ability in dementia, have inadequate precision and accuracy in distinguishing mild dementia from normal ageing. Adding neuropsychological tests to their scales might improve precision and accuracy in mild dementia. We therefore pooled neuropsychological test batteries from two memory clinics (ns = 135 and 186) with CAMCOG data from a population study and 2 memory clinics (n = 829) and ADAS-cog data from 3 randomized controlled trials (n = 713) to estimate a common dimension of global cognitive ability using Rasch analysis. Item difficulties and individuals' global cognitive ability levels were estimated. Difficulties of 57 items (of 64) could be validly estimated. Neuropsychological tests were more difficult than the CAMCOG, ADAS-cog, and MMSE items. Most neuropsychological tests had difficulties in the ability range of normal ageing to mild dementia. Higher than average ability levels were measured more precisely when neuropsychological tests were added to the MMSE than when they were measured with the MMSE alone. Diagnostic accuracy in mild dementia was consistently better after adding neuropsychological tests to the MMSE. We conclude that extending dementia-specific instruments with neuropsychological tests improves measurement precision and accuracy of cognitive impairment in mild dementia.
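
    For reference, the dichotomous Rasch model used in this kind of analysis expresses the probability that person p passes item i in terms of the person's ability θ_p and the item difficulty b_i (standard form, not specific to this study):

        P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},

    so items of all difficulties can be placed on one common logit scale, which is what allows neuropsychological tests and the CAMCOG, ADAS-cog, and MMSE items to be pooled into a single dimension.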

  2. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
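
    A minimal sketch of the basic ingredient (per-column Gaussian fitting of the stripe profile with a goodness-of-fit check; the threshold, the R²-style score standing in for the paper's Gaussian-fitting structural-similarity measure, and the omission of the multi-factor compensation step are all simplifications):

        # Minimal sketch (not the authors' implementation): per-column Gaussian
        # fit of a laser-stripe image to estimate the stripe center, with an
        # R^2-style goodness-of-fit score used to flag columns that would need
        # compensation.
        import numpy as np
        from scipy.optimize import curve_fit

        def gaussian(y, amp, mu, sigma, offset):
            return amp * np.exp(-0.5 * ((y - mu) / sigma) ** 2) + offset

        def stripe_centers(image, fit_threshold=0.9):
            """Return (centers, reliable) for a stripe running along the columns."""
            rows = np.arange(image.shape[0], dtype=float)
            centers, reliable = [], []
            for col in image.T:  # one intensity profile per image column
                profile = col.astype(float)
                p0 = [profile.max() - profile.min(), float(np.argmax(profile)),
                      2.0, float(profile.min())]
                try:
                    popt, _ = curve_fit(gaussian, rows, profile, p0=p0, maxfev=2000)
                    fit = gaussian(rows, *popt)
                    ss_res = np.sum((profile - fit) ** 2)
                    ss_tot = np.sum((profile - profile.mean()) ** 2) + 1e-12
                    score = 1.0 - ss_res / ss_tot        # goodness of fit
                    centers.append(popt[1])              # Gaussian mean = stripe center
                    reliable.append(score >= fit_threshold)
                except RuntimeError:                     # fit did not converge
                    centers.append(np.nan)
                    reliable.append(False)
            return np.array(centers), np.array(reliable)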

  3. Achieving improved cycle efficiency via pressure gain combustors

    SciTech Connect

    Gemmen, R.S.; Janus, M.C.; Richards, G.A.; Norton, T.S.; Rogers, W.A.

    1995-04-01

    As part of the Department of Energy's Advanced Gas Turbine Systems Program, an investigation is being performed to evaluate "pressure gain" combustion systems for gas turbine applications. This paper presents experimental pressure gain and pollutant emission data from such combustion systems. Numerical predictions for certain combustor geometries are also presented. It is reported that for suitable aerovalved pulse combustor geometries studied experimentally, an overall combustor pressure gain of nearly 1 percent can be achieved. It is also shown that for one combustion system operating under typical gas turbine conditions, NOx and CO emissions are about 30 ppmv and 8 ppmv, respectively.

  4. Organizational management practices for achieving software process improvement

    NASA Technical Reports Server (NTRS)

    Kandt, Ronald Kirk

    2004-01-01

    The crisis in developing software has been known for over thirty years. Problems that existed in developing software in the early days of computing still exist today. These problems include the delivery of low-quality products, actual development costs that exceed expected development costs, and actual development time that exceeds expected development time. Several solutions have been offered to overcome our inability to deliver high-quality software on time and within budget. One of these solutions involves software process improvement. However, such efforts often fail because of organizational management issues. This paper discusses business practices that organizations should follow to improve their chances of initiating and sustaining successful software process improvement efforts.

  5. Improving International Research with Clinical Specimens: 5 Achievable Objectives

    PubMed Central

    LaBaer, Joshua

    2012-01-01

    Our increased interest in translational research has created a large demand for blood, tissue and other clinical samples, which find use in a broad variety of research including genomics, proteomics, and metabolomics. Hundreds of millions of dollars have been invested internationally on the collection, storage and distribution of samples. Nevertheless, many researchers complain in frustration about their inability to obtain relevant and/or useful samples for their research. Lack of access to samples, poor condition of samples, and unavailability of appropriate control samples have slowed our progress in the study of diseases and biomarkers. In this editorial, I focus on five major challenges that thwart clinical sample use for translational research and propose near term objectives to address them. They include: (1) defining our biobanking needs; (2) increasing the use of and access to standard operating procedures; (3) mapping inter-observer differences for use in normalizing diagnoses; (4) identifying natural internal protein controls; and (5) redefining the clinical sample paradigm by building partnerships with the public. In each case, I believe that we have the tools at hand required to achieve the objective within 5 years. Potential paths to achieve these objectives are explored. However we solve these problems, the future of proteomics depends on access to high quality clinical samples, collected under standardized conditions, accurately annotated and shared under conditions that promote the research we need to do. PMID:22998582

  6. Improving pairwise sequence alignment accuracy using near-optimal protein sequence alignments

    PubMed Central

    2010-01-01

    Background While the pairwise alignments produced by sequence similarity searches are a powerful tool for identifying homologous proteins - proteins that share a common ancestor and a similar structure - pairwise sequence alignments often fail to accurately represent the structural alignments inferred from three-dimensional coordinates. Since sequence alignment algorithms produce optimal alignments, the best structural alignments must reflect suboptimal sequence alignment scores. Thus, we have examined a range of suboptimal sequence alignments and a range of scoring parameters to understand better which sequence alignments are likely to be more structurally accurate. Results We compared near-optimal protein sequence alignments produced by the Zuker algorithm and a set of probabilistic alignments produced by the probA program with structural alignments produced by four different structure alignment algorithms. There is significant overlap between the solution spaces of structural alignments and both the near-optimal sequence alignments produced by commonly used scoring parameters for sequences that share significant sequence similarity (E-values < 10^-5) and the ensemble of probA alignments. We constructed a logistic regression model incorporating three input variables derived from sets of near-optimal alignments: robustness, edge frequency, and maximum bits-per-position. A ROC analysis shows that this model more accurately classifies amino acid pairs (edges in the alignment path graph) according to the likelihood of appearance in structural alignments than the robustness score alone. We investigated various trimming protocols for removing incorrect edges from the optimal sequence alignment; the most effective protocol is to remove matches from the semi-global optimal alignment that are outside the boundaries of the local alignment, although trimming according to the model-generated probabilities achieves a similar level of improvement. The model can also be used to

  7. Improving supervised classification accuracy using non-rigid multimodal image registration: detecting prostate cancer

    NASA Astrophysics Data System (ADS)

    Chappelow, Jonathan; Viswanath, Satish; Monaco, James; Rosen, Mark; Tomaszewski, John; Feldman, Michael; Madabhushi, Anant

    2008-03-01

    Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance (MR) imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source for these labels is expert segmentations. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error prone, time consuming and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to the manual CaP annotation of expert radiologists. Five supervised CAD classifiers were trained using the labels for CaP extent on MRI obtained by the expert and 4 different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI) and the other, Combined Feature Ensemble Mutual Information (COFEMI), a method we previously developed that incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against 7 ground truth surrogates obtained by different combinations of the expert and registration segmentations. For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve compared to that

  8. Using Curriculum-Based Measurement to Improve Achievement

    ERIC Educational Resources Information Center

    Clarke, Suzanne

    2009-01-01

    Response to intervention (RTI) is on the radar screen of most principals these days--finding out what it is, how it can improve teaching and learning, and what needs to be done to implement it effectively. One critical component of RTI that will require particular attention from principals is student progress monitoring, which is required in every…

  9. Improving Student Academic Achievement through Enhanced Communication Skills.

    ERIC Educational Resources Information Center

    Rivan, Christine A.; Weber, Annette M.

    This report describes a program implemented to improve inadequate student communication skills, specifically in the areas of listening, speaking, social, and emotional development. The targeted population consisted of first and second grade students in a middle class community, located in central Illinois. Evidence for the existence of the problem…

  10. Data as a Lever for Improving Instruction and Student Achievement

    ERIC Educational Resources Information Center

    Simmons, Warren

    2012-01-01

    This commentary draws on the articles in this issue to underscore the importance of community engagement and districtwide capacity building as central to efforts to use data to inform accountability and choice, along with school and instructional improvement. The author cautions against treating data as an all-purpose tool absent adequate…

  11. Achieving Continuous Improvement: Theories that Support a System Change.

    ERIC Educational Resources Information Center

    Armel, Donald

    Focusing on improvement is different than focusing on quality, quantity, customer satisfaction, and productivity. This paper discusses Open System Theory, and suggests ways to change large systems. Changing a system (meaning the way all the parts are connected) requires a considerable amount of data gathering and analysis. Choosing the proper…

  12. Improving the Accuracy of Lunar Laser Ranging Tests of Gravitational Theory: Modeling and Future Directions

    NASA Technical Reports Server (NTRS)

    Williams, James G.; Turyshev, Slava; Dickey, Jean O.

    2003-01-01

    Accurate analysis of precision ranges to the Moon has provided several tests of gravitational theory: the equivalence principle, geodetic precession, the PPN parameters beta and gamma, and the constancy of the gravitational constant G. Other possible tests include the inverse square law at 20,000 km length scales and the PPN parameter alpha-1. The uncertainties of these tests have decreased as data accuracies have improved and the data time span has lengthened. We are exploring the modeling improvements necessary to proceed from cm to mm range accuracies. Looking to future exploration, what characteristics are desired for the next generation of ranging devices, what fundamental questions can be investigated, and what are the challenges for modeling and data analysis?

  13. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.; Jiang, N.; Dunn, A. K.; Yeh, H.-C.

    2015-10-01

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  14. Natural language processing with dynamic classification improves P300 speller accuracy and bit rate

    NASA Astrophysics Data System (ADS)

    Speier, William; Arnold, Corey; Lu, Jessica; Taira, Ricky K.; Pouratian, Nader

    2012-02-01

    The P300 speller is an example of a brain-computer interface that can restore functionality to victims of neuromuscular disorders. Although the most common application of this system has been communicating language, the properties and constraints of the linguistic domain have not to date been exploited when decoding brain signals that pertain to language. We hypothesized that combining the standard stepwise linear discriminant analysis with a Naive Bayes classifier and a trigram language model would increase the speed and accuracy of typing with the P300 speller. With integration of natural language processing, we observed significant improvements in accuracy and 40-60% increases in bit rate for all six subjects in a pilot study. This study suggests that integrating information about the linguistic domain can significantly improve signal classification.
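
    A minimal sketch of the kind of fusion involved (the data structures and the back-off scheme are illustrative; the paper's actual pipeline uses stepwise linear discriminant analysis feeding a naive Bayes classifier with a trigram language model):

        # Minimal sketch (assumed data structures, not the authors' code): fusing
        # P300 classifier evidence with a trigram language model via naive Bayes.
        # `classifier_loglik[c]` is the log-likelihood of the EEG evidence for
        # character c; `trigram_logprob` maps a two-character history to a dict
        # of log P(next char | history).
        import numpy as np

        def select_character(classifier_loglik, history, trigram_logprob, alphabet):
            hist = history[-2:]
            prior = trigram_logprob.get(hist, {})
            default = np.log(1.0 / len(alphabet))    # uniform back-off for unseen histories
            posterior = {
                c: classifier_loglik[c] + prior.get(c, default)
                for c in alphabet
            }
            # The highest-posterior character is selected; a dynamic-stopping
            # variant could also threshold on the posterior margin to trade
            # typing speed against accuracy.
            return max(posterior, key=posterior.get)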

  15. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    SciTech Connect

    Liu, C.; Liu, Y.-L.; Perillo, E. P.; Jiang, N.; Dunn, A. K. E-mail: tim.yeh@austin.utexas.edu; Yeh, H.-C. E-mail: tim.yeh@austin.utexas.edu

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  16. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in the classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we then developed a multi-stage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data from wavelength space, and test accuracy was improved from 63% to 76%. Establishing this simple method of conditioning for the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.
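
    A minimal sketch of the outlier-removal idea (a simple per-class, per-wavelength Z-score screen; the paper's multi-stage Z-test plus univariate analysis is more involved, and the threshold here is illustrative):

        # Minimal sketch (assumptions: X is an (n_samples, n_wavelengths) array of
        # MSI reflectances and y the tissue-class labels); samples whose Z-score
        # exceeds the threshold at any wavelength are dropped before training.
        import numpy as np

        def remove_outliers(X, y, z_thresh=3.0):
            X = np.asarray(X, dtype=float)
            y = np.asarray(y)
            keep = np.ones(len(y), dtype=bool)
            for cls in np.unique(y):
                idx = np.where(y == cls)[0]
                mu = X[idx].mean(axis=0)
                sd = X[idx].std(axis=0) + 1e-12
                z = np.abs((X[idx] - mu) / sd)          # per-wavelength Z-scores
                keep[idx] = (z < z_thresh).all(axis=1)  # reject if any band is extreme
            return X[keep], y[keep]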

  17. Improving the accuracy and efficiency of respiratory rate measurements in children using mobile devices.

    PubMed

    Karlen, Walter; Gan, Heng; Chiu, Michelle; Dunsmuir, Dustin; Zhou, Guohai; Dumont, Guy A; Ansermino, J Mark

    2014-01-01

    The recommended method for measuring respiratory rate (RR) is counting breaths for 60 s using a timer. This method is not efficient in a busy clinical setting. There is an urgent need for a robust, low-cost method that can help front-line health care workers to measure RR quickly and accurately. Our aim was to develop a more efficient RR assessment method. RR was estimated by measuring the median time interval between breaths obtained from tapping on the touch screen of a mobile device. The estimation was continuously validated by measuring consistency (% deviation from the median) of each interval. Data from 30 subjects estimating RR from 10 standard videos with a mobile phone application were collected. A sensitivity analysis and an optimization experiment were performed to verify that a RR could be obtained in less than 60 s; that the accuracy improves when more taps are included into the calculation; and that accuracy improves when inconsistent taps are excluded. The sensitivity analysis showed that excluding inconsistent tapping and increasing the number of tap intervals improved the RR estimation. Efficiency (time to complete measurement) was significantly improved compared to traditional methods that require counting for 60 s. There was a trade-off between accuracy and efficiency. The most balanced optimization result provided a mean efficiency of 9.9 s and a normalized root mean square error of 5.6%, corresponding to 2.2 breaths/min at a respiratory rate of 40 breaths/min. The obtained 6-fold increase in mean efficiency combined with a clinically acceptable error makes this approach a viable solution for further clinical testing. The sensitivity analysis illustrating the trade-off between accuracy and efficiency will be a useful tool to define a target product profile for any novel RR estimation device.
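
    A minimal sketch of the estimation logic (the consistency threshold and minimum tap count are illustrative, not the study's calibrated values):

        # Minimal sketch: estimate respiratory rate from screen-tap timestamps by
        # taking the median breath-to-breath interval and discarding taps whose
        # interval deviates too far from that median.
        import numpy as np

        def respiratory_rate(tap_times_s, max_deviation=0.5, min_intervals=4):
            intervals = np.diff(np.asarray(tap_times_s, dtype=float))
            if len(intervals) < min_intervals:
                return None                              # not enough taps yet
            median = np.median(intervals)
            consistent = np.abs(intervals - median) / median <= max_deviation
            if consistent.sum() < min_intervals:
                return None                              # tapping too irregular
            return 60.0 / np.median(intervals[consistent])  # breaths per minute

        # Example: taps roughly every 1.5 s -> about 40 breaths/min
        print(respiratory_rate([0.0, 1.5, 3.1, 4.5, 6.0, 7.4]))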

  18. Early diagnostic suggestions improve accuracy of GPs: a randomised controlled trial using computer-simulated patients

    PubMed Central

    Kostopoulou, Olga; Rosen, Andrea; Round, Thomas; Wright, Ellen; Douiri, Abdel; Delaney, Brendan

    2015-01-01

    Background Designers of computerised diagnostic support systems (CDSSs) expect physicians to notice when they need advice and enter into the CDSS all information that they have gathered about the patient. The poor use of CDSSs and the tendency not to follow advice once a leading diagnosis emerges would question this expectation. Aim To determine whether providing GPs with diagnoses to consider before they start testing hypotheses improves accuracy. Design and setting Mixed factorial design, where 297 GPs diagnosed nine patient cases, differing in difficulty, in one of three experimental conditions: control, early support, or late support. Method Data were collected over the internet. After reading some initial information about the patient and the reason for encounter, GPs requested further information for diagnosis and management. Those receiving early support were shown a list of possible diagnoses before gathering further information. In late support, GPs first gave a diagnosis and were then shown which other diagnoses they could still not discount. Results Early support significantly improved diagnostic accuracy over control (odds ratio [OR] 1.31; 95% confidence interval [95%CI] = 1.03 to 1.66, P = 0.027), while late support did not (OR 1.10; 95% CI = 0.88 to 1.37). An absolute improvement of 6% with early support was obtained. There was no significant interaction with case difficulty and no effect of GP experience on accuracy. No differences in information search were detected between experimental conditions. Conclusion Reminding GPs of diagnoses to consider before they start testing hypotheses can improve diagnostic accuracy irrespective of case difficulty, without lengthening information search. PMID:25548316

  19. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  20. Improving Ocean Color Data Products using a Purely Empirical Approach: Reducing the Requirement for Radiometric Calibration Accuracy

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2008-01-01

    Radiometric calibration is the foundation upon which ocean color remote sensing is built. Quality derived geophysical products, such as chlorophyll, are assumed to be critically dependent upon the quality of the radiometric calibration. Unfortunately, the goals of radiometric calibration are not typically met in global and large-scale regional analyses, and are especially deficient in coastal regions. The consequences of the uncertainty in calibration are very large in terms of global and regional ocean chlorophyll estimates. In fact, stability in global chlorophyll requires calibration uncertainty much greater than the goals, and outside of modern capabilities. Using a purely empirical approach, we show that stable and consistent global chlorophyll values can be achieved over very wide ranges of uncertainty. Furthermore, the approach yields statistically improved comparisons with in situ data, suggesting improved quality. The results suggest that accuracy requirements for radiometric calibration can be reduced if alternative empirical approaches are used.

  1. Peritoneal dialysis: how we can achieve improvement of PD penetration.

    PubMed

    Van Biesen, W

    2007-07-01

    Peritoneal dialysis (PD) is a well established renal replacement therapy (RRT). It appears to have some excellent properties as a first line RRT, as it preserves residual renal function, improves clearance of middle and larger solutes and preserves vascular access. To improve PD penetration, it is necessary to have a well established pre-dialysis programme, as information seems to be the clue in the choice and the success of PD. Furthermore, it is important that patients and nurses are well educated in the practice of PD. This reduces the need for hypertonic bags by better compliance with the salt restrictive diet, reduces exposure to dialysate per se by adapting the number and length of the dwells to the needs of the patient, and increases peritonitis-free survival, thus prolonging the survival of the peritoneal membrane. In addition, it is clear that the use of new low glucose degradation products and normal pH solutions will also improve the technical success of PD. The collaboration of industry with local health care providers could be a necessity in overcoming the costs induced by the import of dialysate solutions paid for in foreign currency.

  2. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    PubMed

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munition accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and the second devoted to drift correction (a canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamics investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction; deploying the drag brake in the early stage of the trajectory results in a large range correction, and the deployment time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  3. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munition accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and the second devoted to drift correction (a canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamics investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction; deploying the drag brake in the early stage of the trajectory results in a large range correction, and the deployment time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  4. Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset

    PubMed Central

    Lipps, David; Devineni, Sree

    2016-01-01

    MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether or not RNA transcripts contain miRNAs. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity. These optimization strategies may bring serious limitations in applications. Moreover, to meet continuously rising expectations of these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on a newly designed large dataset is improved by 7% to 93%. The meta-predictor is also shown to be less dependent on the datasets and to have a more refined balance between sensitivity and specificity. This study is important in two respects: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors. Second, a new miRNA predictor with significantly improved prediction accuracy

  5. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
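
    The change of variables underlying the comparison is exact and can be written down directly (the modelled transport equations themselves are not reproduced here):

        \tau \equiv \frac{k}{\varepsilon}, \qquad
        \frac{D\tau}{Dt} = \frac{1}{\varepsilon}\frac{Dk}{Dt}
        - \frac{k}{\varepsilon^{2}}\frac{D\varepsilon}{Dt},

    so the k - τ model transports the same physical information as the k - ɛ model; the numerical advantage comes from τ varying nearly linearly across the boundary layer and admitting a simpler wall boundary condition, both of which are easier to resolve on coarse grids.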

  6. New technology in dietary assessment: a review of digital methods in improving food record accuracy.

    PubMed

    Stumbo, Phyllis J

    2013-02-01

    Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.

  7. Dynamic sea surface topography, gravity and improved orbit accuracies from the direct evaluation of SEASAT altimeter data

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Lerch, F.; Koblinsky, C. J.; Klosko, S. M.; Robbins, J. W.; Williamson, R. G.; Patel, G. B.

    1989-01-01

    A method for the simultaneous solution of dynamic ocean topography, gravity and orbits using satellite altimeter data is described. A GEM-T1 based gravitational model called PGS-3337 that incorporates Seasat altimetry, surface gravimetry and satellite tracking data has been determined complete to degree and order 50. The altimeter data is utilized as a dynamic observation of the satellite's height above the sea surface, with a degree 10 model of dynamic topography being recovered simultaneously with the orbit parameters, gravity and tidal terms in this model. PGS-3337 has a geoid uncertainty of 60 cm root-mean-square (RMS) globally, with the uncertainty over the altimeter-tracked ocean being in the 25 cm range. Doppler-determined orbits for Seasat show large improvements, with sub-30 cm radial accuracies being achieved. When altimeter data is used in orbit determination, radial orbital accuracies of 20 cm are achieved. The RMS of fit to the altimeter data directly gives 30 cm fits for Seasat when using PGS-3337 and its geoid and dynamic topography model. This performance level is two to three times better than that achieved with earlier Goddard earth models (GEM) using the dynamic topography from long-term oceanographic averages. The recovered dynamic topography reveals the global long wavelength circulation of the oceans with a resolution of 1500 km. The power in the dynamic topography recovery is now found to be closer to that of oceanographic studies than for previous satellite solutions. This is attributed primarily to the improved modeling of the geoid which has occurred. Study of the altimeter residuals reveals regions where tidal models are poor and sea state effects are major limitations.
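
    In simplified form (symbols as commonly used, not necessarily those of the paper), the altimeter acts as a dynamic observation because the measured range ties the orbit, the geoid, and the dynamic topography together in a single equation:

        h_{\mathrm{alt}} = h_{\mathrm{orbit}} - \rho_{\mathrm{alt}}
        \approx N + \zeta + h_{\mathrm{tides}} + \epsilon,

    where h_orbit is the satellite height above the reference ellipsoid, ρ_alt the altimeter range, N the geoid undulation, ζ the dynamic ocean topography, and ε residual errors; adjusting the orbit parameters, gravity coefficients (which enter through N and the orbit dynamics), tidal terms, and a low-degree expansion of ζ against this observation is what permits the simultaneous solution described above.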

  8. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Leimkuhler, Benedict; Shang, Xiaocheng

    2016-11-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.

  9. Improving the accuracy of convexity splitting methods for gradient flow equations

    NASA Astrophysics Data System (ADS)

    Glasner, Karl; Orizaga, Saulo

    2016-06-01

    This paper introduces numerical time discretization methods which significantly improve the accuracy of the convexity-splitting approach of Eyre (1998) [7], while retaining the same numerical cost and stability properties. A first order method is constructed by iteration of a semi-implicit method based upon decomposing the energy into convex and concave parts. A second order method is also presented based on backwards differentiation formulas. Several extrapolation procedures for iteration initialization are proposed. We show that, under broad circumstances, these methods have an energy decreasing property, leading to good numerical stability. The new schemes are tested using two evolution equations commonly used in materials science: the Cahn-Hilliard equation and the phase field crystal equation. We find that our methods can increase accuracy by many orders of magnitude in comparison to the original convexity-splitting algorithm. In addition, the optimal methods require little or no iteration, making their computation cost similar to the original algorithm.
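
    As a minimal sketch (generic gradient-flow notation; the paper's iterated and extrapolated variants build on this base step), convexity splitting writes the energy as a difference of convex functionals, E = E_c - E_e, and treats the convex part implicitly and the concave part explicitly:

        \frac{u^{n+1} - u^{n}}{\Delta t}
        = -\left( \frac{\delta E_c}{\delta u}\big(u^{n+1}\big)
        - \frac{\delta E_e}{\delta u}\big(u^{n}\big) \right),

    which, under Eyre's assumptions, is unconditionally energy-stable but only first-order accurate; the methods above recover higher effective accuracy by iterating this step or by combining it with backwards differentiation formulas.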

  10. An Initial Study of Airport Arrival Capacity Benefits Due to Improved Scheduling Accuracy

    NASA Technical Reports Server (NTRS)

    Meyn, Larry; Erzberger, Heinz

    2005-01-01

    The long-term growth rate in air-traffic demand leads to future air-traffic densities that are unmanageable by today's air-traffic control system. In order to accommodate such growth, new technology and operational methods will be needed in the next generation air-traffic control system. One proposal for such a system is the Automated Airspace Concept (AAC). One of the precepts of AAC is to direct aircraft using trajectories that are sent via an air-ground data link. This greatly improves the accuracy in directing aircraft to specific waypoints at specific times. Studies of the Center-TRACON Automation System (CTAS) have shown that increased scheduling accuracy enables increased arrival capacity at CTAS equipped airports.

  11. Improving image accuracy of region-of-interest in cone-beam CT using prior image.

    PubMed

    Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong

    2014-03-06

    In diagnostic follow-ups of diseases, such as calcium scoring in the kidney or fat-content assessment in the liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low cost of radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose-reduction method in CT scans because of its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can substantially decrease the image accuracy of the ROI. In this study, we propose a method to increase the image accuracy of the ROI and to reduce imaging radiation dose by utilizing the outside-ROI data from prior scans in repeated CT applications. We performed both numerical and experimental studies to validate our proposed method. In the numerical study, we used an XCAT phantom with its liver and stomach changing their sizes from one scan to another. Image accuracy of the liver was improved, with the error decreasing from 44.4 HU to -0.1 HU with the proposed method, compared to an existing method of data extrapolation to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who went through daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI: the magnitude of error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image.

  12. Improving CID, HCD, and ETD FT MS/MS degradome-peptidome identifications using high accuracy mass information

    SciTech Connect

    Shen, Yufeng; Tolic, Nikola; Purvine, Samuel O.; Smith, Richard D.

    2011-11-07

    The peptidome (i.e., processed and degraded forms of proteins) of, e.g., blood can potentially provide insights into disease processes, as well as a source of candidate biomarkers that are unobtainable using conventional bottom-up proteomics approaches. MS dissociation methods, including CID, HCD, and ETD, can each contribute distinct identifications using conventional peptide identification methods (Shen et al., J. Proteome Res. 2011), but such samples still pose significant analysis and informatics challenges. In this work, we explored a simple approach for better utilization of the high accuracy fragment ion mass measurements provided, e.g., by FT MS/MS and demonstrate significant improvements relative to conventional descriptive and probabilistic scoring methods. For example, at the same FDR level we identified 20-40% more peptides than SEQUEST and Mascot scoring methods by using high accuracy fragment ion information (e.g., <10 mass errors) from CID, HCD, and ETD spectra. The species identified covered >90% of all those identified with the SEQUEST, Mascot, and MS-GF scoring methods. Additionally, we found that merging the different fragment spectra provided >60% more species using the UStags method than achieved previously, and enabled >1000 peptidome components to be identified from a single human blood plasma sample with a 0.6% peptide-level FDR, providing an improved basis for investigation of potentially disease-related peptidome components.
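
    A minimal sketch of the core idea (the tolerance value and the match-counting score are illustrative; the scoring actually used in the paper is more elaborate): retain only fragment-ion matches whose mass error falls within a tight ppm window before scoring a peptide-spectrum match.

        # Minimal sketch: count theoretical fragment m/z values that are matched
        # by an observed peak within a ppm tolerance (illustrative default).
        def ppm_error(observed_mz, theoretical_mz):
            return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

        def count_accurate_matches(observed_peaks, theoretical_mzs, tol_ppm=10.0):
            matches = 0
            for theo in theoretical_mzs:
                if any(abs(ppm_error(obs, theo)) <= tol_ppm for obs in observed_peaks):
                    matches += 1
            return matches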

  13. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have been used in remote sensing for a long time. However, there are some problems with these images. For example, the resolution of satellite images is insufficient, the cost to obtain traditional aerial photos is relatively high, and traditional flights also pose a human safety risk. These issues limit the application of such images. In recent years, the control technology of unmanned aerial vehicles (UAVs) has developed rapidly, and UAVs have become widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using a UAV have the advantages of higher resolution and lower cost. Because a UAV carries no crew, it is still possible to take aerial photos under unstable weather conditions. The images first have to be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, the images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor landslides or estimate landslide volumes. For the image matching, methods such as the Harris corner detector, SIFT, or SURF can be used to extract and match feature points. However, the matching accuracy of these methods is at roughly the pixel or sub-pixel level, whereas the accuracy of the digital image correlation (DIC) method during image matching can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match the extracted feature points, and the stitched images are then inspected to judge the improvement. This study uses aerial photos of a reservoir area, stitched with and without the help of DIC. The results show that misplacement in the stitched image is significantly reduced when DIC is used to match feature points. This shows that the use of DIC to match feature points can actually improve the accuracy of
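
    A minimal sketch of sub-pixel feature matching in the spirit of DIC (zero-normalized cross-correlation over a small search window with quadratic peak refinement; the window sizes and refinement scheme are illustrative, not the authors' implementation):

        # Minimal sketch: match a feature point between two grayscale images with
        # zero-normalized cross-correlation (ZNCC) and refine the integer peak by
        # a 1-D quadratic fit in x and y. The point must lie at least
        # (half + search) pixels away from the image borders.
        import numpy as np

        def zncc(a, b):
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            return float((a * b).sum() / denom)

        def match_subpixel(ref, tgt, x, y, half=10, search=20):
            tpl = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            best, bx, by = -2.0, x, y
            scores = {}
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    win = tgt[y + dy - half:y + dy + half + 1,
                              x + dx - half:x + dx + half + 1].astype(float)
                    if win.shape != tpl.shape:
                        continue
                    s = zncc(tpl, win)
                    scores[(dx, dy)] = s
                    if s > best:
                        best, bx, by = s, x + dx, y + dy
            def refine(cm, c0, cp):          # vertex of parabola through 3 samples
                d = cm - 2 * c0 + cp
                return 0.0 if d == 0 else 0.5 * (cm - cp) / d
            dx0, dy0 = bx - x, by - y
            sx = refine(scores.get((dx0 - 1, dy0), best), best, scores.get((dx0 + 1, dy0), best))
            sy = refine(scores.get((dx0, dy0 - 1), best), best, scores.get((dx0, dy0 + 1), best))
            return bx + sx, by + sy, best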

  14. Improving the accuracy of helium and neon measurements in ocean waters

    NASA Astrophysics Data System (ADS)

    Vogt, M.; Roether, W.; Vogel, S.; Sueltenfuss, J.

    2012-04-01

    The helium and neon solubility disequilibria across the ocean-atmosphere interface serve to study the physics of air-sea gas exchange, but the effect is small so that only high-accuracy data give useful results. Weak points are measurement calibration and uncertain solubility equilibrium values in seawater, especially so for the helium isotopes. Calibration: The classical calibration of mass spectrometric helium and neon measurements uses aliquots of atmospheric air, which is convenient but limited in accuracy and long-term stability. Our alternative is to use water samples equilibrated with undisturbed air, so that their mass can be converted into equivalent volumes of air using a solubility function. In this way, the samples allow a precise recalibration of the air aliquots. A bias relative to regular samples is excluded because the equilibrated water is subjected to exactly the same treatment. The equilibration unit has a water capacity of 4.5 liters. The water is circulated over exchange mats, yielding full air-water equilibrium within two hours, and temperature, pressure, and humidity are precisely controlled. In consequence, we achieve solubility equilibrium within ± 0.03%, so that high accuracy and long-term stability of the calibration are guaranteed. The solubility equilibrium values are more uncertain, but a biased value will only introduce a common shift to the data, i.e., it will not affect the internal consistency of the calibration. The new calibration mode will also enable efficient intercalibration between laboratories. Solubility determination and sampling procedures: We shall use the equilibration unit to obtain solubility functions of helium and neon in distilled water and seawater with a projected accuracy of ± 0.2%. One measure to achieve this is to compare the mass spectrometric signals of the water and the air phase directly. In this context, we developed a procedure to sample water into glass ampoules to be flame-sealed. They are filled

  15. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design
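
    For reference, the Ergun equation on which the rearranged form is based relates the pressure drop per unit length of a packed bed to the bulk void fraction ε, the particle diameter d_p, and the superficial gas velocity U (standard form; the authors' novel rearrangement is not reproduced here):

        \frac{\Delta P}{L} = \frac{150\,\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3} d_p^{2}}\,U
        + \frac{1.75\,\rho_g\,(1-\varepsilon)}{\varepsilon^{3} d_p}\,U^{2},

    with μ the gas viscosity and ρ_g the gas density; measuring ΔP/L against U therefore constrains ε and d_p, and the particle density follows from the measured bulk density together with the void fraction.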

  16. Individual variation in exploratory behaviour improves speed and accuracy of collective nest selection by Argentine ants

    PubMed Central

    Hui, Ashley; Pinter-Wollman, Noa

    2014-01-01

    Collective behaviours are influenced by the behavioural composition of the group. For example, a collective behaviour may emerge from the average behaviour of the group's constituents, or be driven by a few key individuals that catalyse the behaviour of others in the group. When ant colonies collectively relocate to a new nest site, there is an inherent trade-off between the speed and accuracy of their decision of where to move due to the time it takes to gather information. Thus, variation among workers in exploratory behaviour, which allows gathering information about potential new nest sites, may impact the ability of a colony to move quickly into a suitable new nest. The invasive Argentine ant, Linepithema humile, expands its range locally through the dispersal and establishment of propagules: groups of ants and queens. We examine whether the success of these groups in rapidly finding a suitable nest site is affected by their behavioural composition. We compared nest choice speed and accuracy among groups of all-exploratory, all-nonexploratory and half-exploratory–half-nonexploratory individuals. We show that exploratory individuals improve both the speed and accuracy of collective nest choice, and that exploratory individuals have additive, not synergistic, effects on nest site selection. By integrating an examination of behaviour into the study of invasive species we shed light on the mechanisms that impact the progression of invasion. PMID:25018558

  17. Accounting for filter bandwidth improves the quantitative accuracy of bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie L.; Cobbold, Mark; Dehghani, Hamid

    2015-09-01

    Bioluminescence imaging is a noninvasive technique whereby surface weighted images of luminescent probes within animals are used to characterize cell count and function. Traditionally, data are collected over the entire emission spectrum of the source using no filters and are used to evaluate cell count/function over the entire spectrum. Alternatively, multispectral data over several wavelengths can be incorporated to perform tomographic reconstruction of source location and intensity. However, bandpass filters used for multispectral data acquisition have a specific bandwidth, which is ignored in the reconstruction. In this work, ignoring the bandwidth is shown to introduce a dependence of the recovered source intensity on the bandwidth of the filters. A method of accounting for the bandwidth of filters used during multispectral data acquisition is presented and its efficacy in increasing the quantitative accuracy of bioluminescence tomography is demonstrated through simulation and experiment. It is demonstrated that while using filters with a large bandwidth can dramatically decrease the data acquisition time, if not accounted for, errors of up to 200% in quantitative accuracy are introduced in two-dimensional planar imaging, even after normalization. For tomographic imaging, the use of this method to account for filter bandwidth dramatically improves the quantitative accuracy.

  18. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTIS"s) having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  19. Improving accuracy and capabilities of X-ray fluorescence method using intensity ratios

    NASA Astrophysics Data System (ADS)

    Garmay, Andrey V.; Oskolok, Kirill V.

    2017-04-01

    An X-ray fluorescence analysis algorithm is proposed that is based on ratios of X-ray fluorescence line intensities. Such an analytical signal is more stable and leads to improved accuracy. Novel calibration equations are proposed which are suitable for analysis across a broad range of matrix compositions. To apply the algorithm to samples containing a significant amount of undetectable elements, the dependence of the Rayleigh-to-Compton intensity ratio on the total content of these elements is used. The technique's validity is shown by analysis of standard steel samples, a model metal oxide mixture, and iron ore samples.

  20. A simple method for improving the time-stepping accuracy in atmosphere and ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-12-01

    In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
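
    A minimal sketch of the idea (not code from the presentation): the leapfrog scheme with the Robert-Asselin (RA) filter, where the RAW modification splits the filter displacement between time levels n and n+1 through a parameter alpha; alpha = 1 recovers the classical RA filter, while alpha near 0.5 gives the improved amplitude accuracy described above. The test equation, parameter values, and variable names are illustrative.

        import numpy as np

        def leapfrog_filtered(omega=1.0, dt=0.2, nsteps=500, nu=0.2, alpha=1.0):
            f = lambda x: 1j * omega * x          # tendency of dx/dt = i*omega*x
            x_prev = 1.0 + 0j                     # filtered value at step n-1
            x_curr = np.exp(1j * omega * dt)      # exact value at step n (start-up)
            for _ in range(nsteps):
                x_next = x_prev + 2.0 * dt * f(x_curr)          # leapfrog step
                d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
                x_prev = x_curr + alpha * d                     # filter level n
                x_curr = x_next - (1.0 - alpha) * d             # RAW also nudges level n+1
            return x_curr

        # Amplitude after 500 steps: the RA filter (alpha=1) damps the oscillation,
        # whereas the RAW filter (alpha=0.53) keeps |x| much closer to 1.
        print(abs(leapfrog_filtered(alpha=1.0)), abs(leapfrog_filtered(alpha=0.53)))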

  1. Processing data, for improved accuracy, from device for measuring speed of sound in a gas

    DOEpatents

    Owen, Thomas E.

    2006-09-19

    A method, used in connection with a pulse-echo type sensor for determining the speed of sound in a gas, for improving the accuracy of speed of sound measurements. The sensor operates on the principle that speed of sound can be derived from the difference between the two-way travel time of signals reflected from two different target faces of the sensor. This time difference is derived by computing the cross correlation between the two reflections. The cross correlation function may be fitted to a parabola whose vertex represents the optimum time coordinate of the coherence peak, thereby providing an accurate measure of the two-way time difference.
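
    A brief sketch of the correlation-plus-parabola idea described above (the synthetic signals and sampling rate are invented; this is not the patented implementation): the integer-lag peak of the cross-correlation is refined by fitting a parabola through the peak and its two neighbours, and the vertex gives a sub-sample estimate of the time difference.

        import numpy as np

        def time_difference(sig_a, sig_b, fs):
            """Travel-time difference (s) between reflections a and b."""
            c = np.correlate(sig_b, sig_a, mode="full")     # cross-correlation
            k = int(np.argmax(c))                           # integer-lag peak
            # Parabola through (k-1, k, k+1); its vertex gives the fractional lag.
            den = c[k - 1] - 2.0 * c[k] + c[k + 1]
            frac = 0.5 * (c[k - 1] - c[k + 1]) / den if den != 0 else 0.0
            lag = (k + frac) - (len(sig_a) - 1)             # signed lag in samples
            return lag / fs

        # Example with a synthetic 5.3-sample delay at fs = 1 MHz:
        fs = 1.0e6
        t = np.arange(512) / fs
        pulse = np.exp(-((t - 60e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 2e5 * t)
        delayed = np.interp(t - 5.3 / fs, t, pulse, left=0.0, right=0.0)
        print(time_difference(pulse, delayed, fs) * fs)     # approximately 5.3 samples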

  2. Improving the accuracy and efficiency of identity-by-descent detection in population data.

    PubMed

    Browning, Brian L; Browning, Sharon R

    2013-06-01

    Segments of identity-by-descent (IBD) detected from high-density genetic data are useful for many applications, including long-range phase determination, phasing family data, imputation, IBD mapping, and heritability analysis in founder populations. We present Refined IBD, a new method for IBD segment detection. Refined IBD achieves both computational efficiency and highly accurate IBD segment reporting by searching for IBD in two steps. The first step (identification) uses the GERMLINE algorithm to find shared haplotypes exceeding a length threshold. The second step (refinement) evaluates candidate segments with a probabilistic approach to assess the evidence for IBD. Like GERMLINE, Refined IBD allows for IBD reporting on a haplotype level, which facilitates determination of multi-individual IBD and allows for haplotype-based downstream analyses. To investigate the properties of Refined IBD, we simulate SNP data from a model with recent superexponential population growth that is designed to match United Kingdom data. The simulation results show that Refined IBD achieves a better power/accuracy profile than fastIBD or GERMLINE. We find that a single run of Refined IBD achieves greater power than 10 runs of fastIBD. We also apply Refined IBD to SNP data for samples from the United Kingdom and from Northern Finland and describe the IBD sharing in these data sets. Refined IBD is powerful, highly accurate, and easy to use and is implemented in Beagle version 4.

  3. Improving the accuracy of admitted subacute clinical costing: an action research approach.

    PubMed

    Hakkennes, Sharon; Arblaster, Ross; Lim, Kim

    2016-08-29

    Objective: The aim of the present study was to determine whether action research could be used to improve the breadth and accuracy of clinical costing data in an admitted subacute setting. Methods: The setting was a 100-bed in-patient rehabilitation centre. Using a pre-post study design, all admitted subacute separations during the 2011-12 financial year were eligible for inclusion. An action research framework aimed at improving clinical costing methodology was developed and implemented. Results: In all, 1499 separations were included in the study. A medical record audit of a random selection of 80 separations demonstrated that the use of an action research framework was effective in improving the breadth and accuracy of the costing data. This was evidenced by a significant increase in the average number of activities costed, a reduction in the average number of activities incorrectly costed and a reduction in the average number of activities missing from the costing, per episode of care. Conclusions: Engaging clinicians and cost centre managers was effective in facilitating the development of robust clinical costing data in an admitted subacute setting. Further investigation into the value of this approach across other care types and healthcare services is warranted. What is known about this topic? Accurate clinical costing data is essential for informing price models used in activity-based funding. In Australia, there is currently a lack of robust admitted subacute cost data to inform the price model for this care type. What does this paper add? The action research framework presented in this study was effective in improving the breadth and accuracy of clinical costing data in an admitted subacute setting. What are the implications for practitioners? To improve clinical costing practices, health services should consider engaging key stakeholders, including clinicians and cost centre managers, in reviewing clinical costing methodology. Robust clinical costing data has the

  4. Training to Improve Precision and Accuracy in the Measurement of Fiber Morphology

    PubMed Central

    Jeon, Jun; Wade, Mary Beth; Luong, Derek; Palmer, Xavier-Lewis; Bharti, Kapil; Simon, Carl G.

    2016-01-01

    An estimated $7.1 billion a year is spent due to irreproducibility in pre-clinical data from errors in data analysis and reporting. Therefore, developing tools to improve measurement comparability is paramount. Recently, an open source tool, DiameterJ, has been deployed for the automated analysis of scanning electron micrographs of fibrous scaffolds designed for tissue engineering applications. DiameterJ performs hundreds to thousands of scaffold fiber diameter measurements from a single micrograph within a few seconds, along with a variety of other scaffold morphological features, which enables a more rigorous and thorough assessment of scaffold properties. Herein, an online, publicly available training module is introduced for educating DiameterJ users on how to effectively analyze scanning electron micrographs of fibers and the large volume of data that a DiameterJ analysis yields. The end goal of this training was to improve user data analysis and reporting to enhance reproducibility of analysis of nanofiber scaffolds. User performance was assessed before and after training to evaluate the effectiveness of the training modules. Users were asked to use DiameterJ to analyze reference micrographs of fibers that had known diameters. The results showed that training improved the accuracy and precision of measurements of fiber diameter in scanning electron micrographs. Training also improved the precision of measurements of pore area, porosity, intersection density, and characteristic fiber length between fiber intersections. These results demonstrate that the DiameterJ training module improves precision and accuracy in fiber morphology measurements, which will lead to enhanced data comparability. PMID:27907145

  5. SU-E-J-133: Autosegmentation of Linac CBCT: Improved Accuracy Via Penalized Likelihood Reconstruction

    SciTech Connect

    Chen, Y

    2015-06-15

    Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction, with auto-segmentation accuracy of the resulting CBCTs serving as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs (pCT) deformable-registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that

  6. Leveraging Improvements in Precipitation Measuring from GPM Mission to Achieve Prediction Improvements in Climate, Weather and Hydrometeorology

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2002-01-01

    the way for what ultimately is expected to become an internationally-organized operational global precipitation observing system. Notably, the broad societal applications of GPM are reflected in the United Nations' identification of this mission as a foremost candidate for its Peaceful Uses of Space Program. In this presentation, an overview of the GPM mission design will be presented, followed by an explanation of its scientific agenda as an outgrowth of making improvements in rain retrieval accuracy, microphysics dexterity, sampling frequency, and global coverage. All of these improvements offer new means to observe variability in precipitation and water cycle fluxes and to achieve improved predictability of weather, climate, and hydrometeorology. Specifically, the scientific agenda of GPM has been designed to leverage the measurement improvements to improve prognostic model performance, particularly quantitative precipitation forecasting and its linked phenomena at short, intermediate, and extended time scales. The talk will address how GPM measurements will enable better detection of accelerations and decelerations in regional and global water cycle processes and their relationship to climate variability, better impacts of precipitation data assimilation on numerical weather prediction and global climate reanalysis, and better performance from basin scale hydrometeorological models for short and long term flood-drought forecasting and seasonal fresh water resource assessment. Improved hydrometeorological forecasting will be possible by using continuous global precipitation observations to obtain better closure in water budgets and to generate more realistic forcing of the models themselves to achieve more accurate estimates of interception, infiltration, evaporation/transpiration fluxes, storage, and runoff.

  7. Improving the accuracy of a Shack-Hartmann wavefront sensor on extended scenes

    NASA Astrophysics Data System (ADS)

    Rais, M.; Morel, J.-M.; Thiebaut, C.; Delvit, J.-M.; Facciolo, G.

    2016-10-01

    In order to achieve higher resolutions, current earth-observation satellites use larger lightweight main mirrors which are usually deformed over time, impacting on image quality. In the context of active optics, we studied the problem of correcting this main mirror by performing wavefront estimation in a closed loop environment. To this end, a Shack-Hartmann wavefront sensor (SHWFS) used on extended scenes could measure the incoming wavefront. The performance of the SHWFS on extended scenes depends entirely on the accuracy of the shift estimation algorithm employed, which should be fast enough to be executed on-board. In this paper we specifically deal with the problem of fast accurate shift estimation in this context. We propose a new algorithm, based on the global optical flow method, that estimates the shifts in linear time. In our experiments, our method proved to be more accurate and stable, as well as less sensitive to noise than all current state-of-the-art methods.

  8. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.
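
    A small illustration of the sample-standard bracketing idea mentioned above (the function and all numeric values are hypothetical, not data from the study): the measured ratio of the unknown is corrected by the mean instrumental bias observed on the standard runs measured immediately before and after it.

        def bracket_correct(r_sample, r_std_before, r_std_after, r_std_certified):
            """Correct a measured 240Pu/239Pu ratio using bracketing standard runs."""
            bias = 0.5 * (r_std_before + r_std_after) / r_std_certified
            return r_sample / bias

        # Hypothetical numbers for illustration only:
        print(bracket_correct(0.2412, 0.2455, 0.2449, 0.2420))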

  9. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; ...

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  10. Improving piezoelectric cell printing accuracy and reliability through neutral buoyancy of suspensions.

    PubMed

    Chahal, Daljeet; Ahmadi, Ali; Cheung, Karen C

    2012-11-01

    The sedimentation and aggregation of cells within inkjet printing systems has been hypothesized to negatively impact printer performance. The purpose of this study was to investigate this influence through the use of neutral buoyancy. Ficoll PM400 was used to create neutrally buoyant MCF-7 breast cancer cell suspensions, which were ejected using a piezoelectric drop-on-demand inkjet printing system. It was found that using a neutrally buoyant suspension greatly increased the reproducibility of consistent cell counts, and eliminated nozzle clogging. Moreover, the use of Ficoll PM400 was shown to not affect cellular viability. This is the first demonstration of such scale and accuracy achieved using a piezoelectric inkjet printing system for cellular dispensing.

  11. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  12. Accuracy of Subcutaneous Continuous Glucose Monitoring in Critically Ill Adults: Improved Sensor Performance with Enhanced Calibrations

    PubMed Central

    Leelarathna, Lalantha; English, Shane W.; Thabit, Hood; Caldwell, Karen; Allen, Janet M.; Kumareswaran, Kavita; Wilinska, Malgorzata E.; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L.; Burnstein, Rowan

    2014-01-01

    Objective: Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle® Navigator® (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. Subjects and Methods: In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m2; mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6–19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1–6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. Results: In total, 1,060 CGM–ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. With enhanced calibration at a median (interquartile range) interval of 169 (122–213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in the Clarke error grid Zone A was higher (87.8% vs. 70.2%). Conclusions: Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non–critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements. PMID:24180327
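
    For reference, the accuracy metric quoted above can be computed as below; the paired values are invented for illustration, and the metric is the median absolute relative deviation of CGM readings against the ABG reference.

        import numpy as np

        def absolute_relative_deviation(cgm, reference):
            """Median absolute relative deviation (%) and its interquartile range."""
            cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
            ard = 100.0 * np.abs(cgm - reference) / reference
            return np.median(ard), np.percentile(ard, [25, 75])

        # Illustrative paired readings in mmol/L:
        cgm = [6.1, 7.9, 9.4, 11.2, 8.3]
        abg = [6.5, 7.7, 10.1, 10.8, 8.9]
        median_ard, iqr = absolute_relative_deviation(cgm, abg)
        print(median_ard, iqr)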

  13. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    NASA Astrophysics Data System (ADS)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  14. Effects of Improvements in Interval Timing on the Mathematics Achievement of Elementary School Students

    ERIC Educational Resources Information Center

    Taub, Gordon E.; McGrew, Kevin S.; Keith, Timothy Z.

    2015-01-01

    This article examines the effect of improvements in timing/rhythmicity on mathematics achievement. A total of 86 participants attending 1st through 4th grades completed pre- and posttest measures of mathematics achievement from the Woodcock-Johnson III Tests of Achievement. Students in the experimental group participated in a 4-week intervention…

  15. Improvement of three-dimensional microstructure contour accuracy using maskless lithography technique based on DMD

    NASA Astrophysics Data System (ADS)

    Huang, Shengzhou; Li, Mujun; Shen, Lianguan; Qiu, Jinfeng; Zhou, Youquan

    2016-10-01

    A novel method is proposed to improve the contour accuracy of three-dimensional (3D) microstructures in a real-time maskless lithography technique based on a digital micro-mirror device (DMD). First, from a theoretical and experimental study of the relation between exposure dose and exposure thickness, the spatial distribution of the photoresist exposure dose was derived, which allows the resulting 3D contour to be predicted. Second, an equal-arc slicing strategy was adopted, in which the arc lengths between adjacent slicing points are kept constant while the layer heights vary. An equal-arc-mean slicing strategy, which takes the average of adjacent layer heights, was also proposed to further improve contour quality and reduce contour error relative to plain equal-arc slicing. Finally, as a validation case study, aspheric micro-lens arrays were fabricated with the proposed method. The results show that the proposed method is effective in improving 3D microstructure contour accuracy and smoothness.
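
    A rough sketch of the equal-arc slicing step under simplifying assumptions (an axisymmetric profile z(r) sampled on a grid; all names and values are illustrative): slice points are chosen at equal arc-length spacing along the profile, so layer heights vary, and the equal-arc-mean variant then averages adjacent layer heights.

        import numpy as np

        def equal_arc_slices(r, z, n_layers):
            """Slice heights spaced at equal arc length along the profile z(r)."""
            s = np.concatenate([[0.0], np.cumsum(np.hypot(np.diff(r), np.diff(z)))])
            s_targets = np.linspace(0.0, s[-1], n_layers + 1)
            return np.interp(s_targets, s, z)

        def equal_arc_mean_heights(z_slices):
            """Equal-arc-mean variant: average the heights of adjacent layers."""
            h = np.diff(z_slices)
            return 0.5 * (h[:-1] + h[1:])

        # Spherical-cap profile as a stand-in for an aspheric lens (R and r illustrative).
        R = 2.0
        r = np.linspace(0.0, 1.5, 400)
        z = R - np.sqrt(R**2 - r**2)
        slices = equal_arc_slices(r, z, 8)
        print(np.round(slices, 4))
        print(np.round(equal_arc_mean_heights(slices), 4))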

  16. A statistical methodology to improve accuracy in differentiating schizophrenia patients from healthy controls.

    PubMed

    Peters, Rosalind M; Gjini, Klevest; Templin, Thomas N; Boutros, Nash N

    2014-05-30

    We present a methodology to statistically discriminate among univariate and multivariate indices to improve accuracy in differentiating schizophrenia patients from healthy controls. Electroencephalogram data from 71 subjects (37 controls/34 patients) were analyzed. Data included P300 event-related response amplitudes and latencies as well as amplitudes and sensory gating indices derived from the P50, N100, and P200 auditory-evoked responses resulting in 20 indices analyzed. Receiver operator characteristic (ROC) curve analyses identified significant univariate indices; these underwent principal component analysis (PCA). Logistic regression of PCA components created a multivariate composite used in the final ROC. Eleven univariate ROCs were significant with area under the curve (AUC) >0.50. PCA of these indices resulted in a three-factor solution accounting for 76.96% of the variance. The first factor was defined primarily by P200 and P300 amplitudes, the second by P50 ratio and difference scores, and the third by P300 latency. ROC analysis using the logistic regression composite resulted in an AUC of 0.793 (0.06), p<0.001 (CI=0.685-0.901). A composite score of 0.456 had a sensitivity of 0.829 (correctly identifying schizophrenia patients) and a specificity of 0.703 (correctly identifying healthy controls). Results demonstrated the usefulness of combined statistical techniques in creating a multivariate composite that improves diagnostic accuracy.
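
    A compact sketch of the same pipeline with scikit-learn; the data here are random placeholders (the study used 20 ERP indices from 71 subjects), and the simple univariate AUC threshold stands in for the formal significance test.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(71, 20))           # placeholder for the 20 ERP indices
        y = rng.integers(0, 2, size=71)         # 0 = healthy control, 1 = patient (placeholder)

        # Step 1: univariate ROC screening (AUC > 0.5 stands in for significance testing).
        keep = [j for j in range(X.shape[1]) if roc_auc_score(y, X[:, j]) > 0.5]
        keep = keep or list(range(X.shape[1]))

        # Step 2: PCA of the retained indices (three components, as in the study).
        pcs = PCA(n_components=min(3, len(keep))).fit_transform(X[:, keep])

        # Step 3: logistic regression of the components yields the multivariate composite.
        composite = LogisticRegression().fit(pcs, y).predict_proba(pcs)[:, 1]
        print("composite AUC:", round(roc_auc_score(y, composite), 3))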

  17. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    PubMed Central

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-01-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl− electrodes, 10 F− electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity. PMID:28303939
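
    A back-of-the-envelope sketch of one way to read the reported slopes (the standard potential and the series-addition interpretation are assumptions, not taken from the paper): if the potentials of n identical electrodes connected electrically in series add, the per-decade response approaches n times the single-electrode slope, consistent with roughly 30 × 57.2 mV for the 30 Cl− electrodes.

        import numpy as np

        def series_emf(activity, n_electrodes, slope_mv=57.2, e0_mv=220.0):
            """Total EMF (mV) of n identical ISEs in series; e0_mv is an arbitrary standard potential."""
            return n_electrodes * (e0_mv + slope_mv * np.log10(activity))

        # Response to a ten-fold activity step, i.e. the effective slope per decade:
        slope_1 = series_emf(1e-2, 1) - series_emf(1e-3, 1)
        slope_30 = series_emf(1e-2, 30) - series_emf(1e-3, 30)
        print(slope_1, slope_30)   # 57.2 mV/decade vs. ~1716 mV/decade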

  18. Case studies on forecasting for innovative technologies: frequent revisions improve accuracy.

    PubMed

    Lerner, Jeffrey C; Robertson, Diane C; Goldstein, Sara M

    2015-02-01

    Health technology forecasting is designed to provide reliable predictions about costs, utilization, diffusion, and other market realities before the technologies enter routine clinical use. In this article we address three questions central to forecasting's usefulness: Are early forecasts sufficiently accurate to help providers acquire the most promising technology and payers to set effective coverage policies? What variables contribute to inaccurate forecasts? How can forecasters manage the variables to improve accuracy? We analyzed forecasts published between 2007 and 2010 by the ECRI Institute on four technologies: single-room proton beam radiation therapy for various cancers; digital breast tomosynthesis imaging technology for breast cancer screening; transcatheter aortic valve replacement for serious heart valve disease; and minimally invasive robot-assisted surgery for various cancers. We then examined revised ECRI forecasts published in 2013 (digital breast tomosynthesis) and 2014 (the other three topics) to identify inaccuracies in the earlier forecasts and explore why they occurred. We found that five of twenty early predictions were inaccurate when compared with the updated forecasts. The inaccuracies pertained to two technologies that had more time-sensitive variables to consider. The case studies suggest that frequent revision of forecasts could improve accuracy, especially for complex technologies whose eventual use is governed by multiple interactive factors.

  19. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    NASA Astrophysics Data System (ADS)

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl‑ electrodes, 10 F‑ electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.

  20. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision.

    PubMed

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-17

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl(-) electrodes, 10 F(-) electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.

  1. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines

    NASA Astrophysics Data System (ADS)

    Sahadevan, Anand S.; Routray, Aurobinda; Das, Bhabani S.; Ahmad, Saquib

    2016-04-01

    Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS-Salinas, ROSIS-Pavia University, and AVIRIS-Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.
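
    A simplified sketch of the two-stage framework (per-band OpenCV bilateral filtering stands in for the joint spatial-spectral filter of the paper, and the parameter values are guesses): smooth the cube, then classify each pixel's spectrum with an RBF SVM trained on the labeled pixels.

        import numpy as np
        import cv2
        from sklearn.svm import SVC

        def classify_hsi(cube, labels, d=5, sigma_color=0.1, sigma_space=3.0):
            """cube: (rows, cols, bands) float32 reflectance; labels: (rows, cols) ints, 0 = unlabeled."""
            smoothed = np.stack(
                [cv2.bilateralFilter(np.ascontiguousarray(cube[:, :, b]), d, sigma_color, sigma_space)
                 for b in range(cube.shape[2])], axis=2)
            X = smoothed.reshape(-1, cube.shape[2])
            y = labels.reshape(-1)
            train = y > 0                                   # labeled pixels only
            clf = SVC(kernel="rbf", C=100.0, gamma="scale").fit(X[train], y[train])
            return clf.predict(X).reshape(labels.shape)

    In practice the smoothing parameters and the SVM hyperparameters would be tuned on the training samples rather than fixed as above.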

  2. Optimizing stepwise rotation of dodecahedron sound source to improve the accuracy of room acoustic measures.

    PubMed

    Martellotta, Francesco

    2013-09-01

    Dodecahedron sound sources are widely used for acoustical measurement purposes as they produce a good approximation of omnidirectional radiation. Evidence shows that such an assumption is acceptable only in the low-frequency range (namely below 1 kHz), while at higher frequencies sound radiation is far from being uniform. In order to improve the accuracy of acoustical measurements obtained from dodecahedron sources, international standard ISO 3382 suggests an averaging of results after a source rotation. This paper investigates the effects of such rotations, both in terms of variations in acoustical parameters and spatial distribution of sound reflections. Taking advantage of a spherical microphone array, the different reflection patterns were mapped as a function of source rotation, showing that some reflections may be considerably attenuated for different aiming directions. This paper investigates the concept of averaging results while changing rotation angles and the minimum number of rotations required to improve the accuracy of the average value. Results show that averages of three measurements carried out at 30° angular steps are closer to actual values and show much less fluctuation. In addition, an averaging of the directional intensity components of the selected responses stabilizes the spatial distribution of the reflections.

  3. Improving the accuracy of mirror measurements by removing noise and lens distortion

    NASA Astrophysics Data System (ADS)

    Wang, Zhenzhou

    2016-11-01

    Telescope mirrors determine the imaging quality and observation ability of telescopes. Unfortunately, manufacturing highly accurate mirrors remains a bottleneck problem in space optics. One main factor is the lack of a technique for measuring the 3D shapes of mirrors accurately for inverse engineering. Researchers have studied and developed techniques for testing the quality of telescope mirrors and methods for measuring the 3D shapes of mirrors for centuries. Among these, interferometers have become popular in evaluating the surface errors of manufactured mirrors. However, interferometers are unable to measure some important mirror parameters directly and accurately, e.g. the paraxial radius, geometry dimension and eccentric errors, and these parameters are essential for mirror manufacturing. In this paper, we aim to remove the noise and lens distortion inherent in the system to improve the accuracy of a previously proposed one-shot projection mirror measurement method. To this end, we propose a ray modeling and a pattern modeling method. The experimental results show that the proposed ray modeling and pattern modeling method can improve the accuracy of the one-shot projection method significantly, making it feasible as a commercial device to measure the shapes of mirrors quantitatively and accurately.

  4. Improving accuracy of markerless tracking of lung tumours in fluoroscopic video by incorporating diaphragm motion

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Teske, H.; Stoll, M.; Bendl, Rolf

    2014-03-01

    Purpose: Conformal radiation of moving tumours is a challenging task in radiotherapy. Tumour motion induced by respiration can be visualized in fluoroscopic images recorded during patient breathing. Markerless methods based on registration techniques can be used to estimate tumour motion; however, registration may fail when the tumour is hidden by ribs. Using the motion of anatomical surrogates, such as the diaphragm, is a promising way to model tumour motion. Methods: A sequence of 116 fluoroscopic images was analyzed and the tumour positions were manually defined by three experts. A block matching (BM) technique is used to calculate the displacement vector relative to a selected reference image of the first breathing cycle. An enhanced method was developed: positions at which the tumour is not located behind a rib are taken as valid estimates of the tumour position, and these valid estimates are used to establish a linear model relating tumour position to diaphragm motion. For invalid estimates, the calculated tumour positions are discarded and the model is used instead to determine tumour motion. Results: Enhancing BM with a model of tumour motion derived from diaphragm motion improves the tracking accuracy when the tumour moves behind a rib. The error (mean ± SD) in the longitudinal dimension was 2.0 ± 1.5 mm using only BM and 1.0 ± 1.1 mm when the enhanced approach was used. Conclusion: The enhanced tracking technique improves tracking accuracy compared to BM when the tumour is occluded by ribs.
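
    A toy sketch of the fallback logic (frame data and names are invented): a linear model of tumour position versus diaphragm position is fitted on the frames where the block-matching estimate is valid and substituted on the frames where the tumour is occluded.

        import numpy as np

        def fuse_tracking(bm_positions, diaphragm_positions, visible):
            bm = np.asarray(bm_positions, float)
            dia = np.asarray(diaphragm_positions, float)
            vis = np.asarray(visible, bool)
            slope, intercept = np.polyfit(dia[vis], bm[vis], 1)   # linear model from valid frames
            fused = bm.copy()
            fused[~vis] = slope * dia[~vis] + intercept           # model replaces occluded frames
            return fused

        # Toy example: 8 frames, tumour occluded (and BM unreliable) in frames 3-4.
        dia = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 6.0, 4.0, 2.0])
        bm  = np.array([0.1, 1.1, 2.0, 9.9, 9.5, 3.1, 2.0, 1.0])
        vis = np.array([1, 1, 1, 0, 0, 1, 1, 1], bool)
        print(fuse_tracking(bm, dia, vis))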

  5. Dual-wavelength retinal images denoising algorithm for improving the accuracy of oxygen saturation calculation

    NASA Astrophysics Data System (ADS)

    Xian, Yong-Li; Dai, Yun; Gao, Chun-Ming; Du, Rui

    2017-01-01

    Noninvasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and spectral absorption characteristics of tissue. Retinal images at 570 and 600 nm are simultaneously captured by dual-wavelength retinal oximetry based on fundus camera. SO2 is finally measured after vessel segmentation, image registration, and calculation of optical density ratio of two images. However, image noise can dramatically affect subsequent image processing and SO2 calculation accuracy. The aforementioned problem remains to be addressed. The purpose of this study was to improve image quality and SO2 calculation accuracy by noise analysis and denoising algorithm for dual-wavelength images. First, noise parameters were estimated by mixed Poisson-Gaussian (MPG) noise model. Second, an MPG denoising algorithm which we called variance stabilizing transform (VST) + dual-domain image denoising (DDID) was proposed based on VST and improved dual-domain filter. The results show that VST + DDID is able to effectively remove MPG noise and preserve image edge details. VST + DDID is better than VST + block-matching and three-dimensional filtering, especially in preserving low-contrast details. The following simulation and analysis indicate that MPG noise in the retinal images can lead to erroneously low measurement for SO2, and the denoised images can provide more accurate grayscale values for retinal oximetry.
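
    A sketch of the variance-stabilizing step only, under the usual mixed Poisson-Gaussian model (the generalized Anscombe transform is used here as an assumed stand-in for the paper's VST, and the dual-domain denoiser itself is not reproduced): after stabilization the noise is approximately Gaussian with unit variance, so a Gaussian denoiser can be applied before inverting the transform.

        import numpy as np

        def gat(z, alpha, sigma, g0=0.0):
            """Generalized Anscombe transform for z = alpha*Poisson + N(g0, sigma^2)."""
            arg = alpha * z + 0.375 * alpha**2 + sigma**2 - alpha * g0
            return 2.0 / alpha * np.sqrt(np.maximum(arg, 0.0))

        def gat_inverse_naive(f, alpha, sigma, g0=0.0):
            # Simple algebraic inverse; an exact unbiased inverse would be used in practice.
            return ((alpha * f / 2.0) ** 2 - 0.375 * alpha**2 - sigma**2 + alpha * g0) / alpha

        noisy = np.random.default_rng(1).poisson(40.0, size=(8, 8)).astype(float)
        stabilized = gat(noisy, alpha=1.0, sigma=0.0)
        print(stabilized.std())    # roughly 1 after stabilization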

  6. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  7. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations

    PubMed Central

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-01-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364

  8. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations.

    PubMed

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-05-26

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices.

  9. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    PubMed

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.

  10. Improvement of Field Accuracy and Plasma Performance in the RT-1 Device

    NASA Astrophysics Data System (ADS)

    Yano, Yoshihisa; Yoshida, Zensho; Morikawa, Junji; Saitoh, Haruhiko; Hayashi, Hiroyuki; Mizushima, Tatsunori

    To improve the accuracy of the magnetic field of Ring Trap-1 (RT-1), we have constructed a system of correction coils to cancel the geomagnetic field and control the attitude of the floating magnet. Without the geomagnetic field cancellation, the floating magnet tilts about 1.4 degrees. The previous prototype correction coils have been replaced by new coils that are much larger and farther from the chamber, so the error field due to the multipole components of the correction field is reduced by a factor of 30 (from 2.6% to 0.1% of the confinement field near the edge region). A significant improvement in plasma confinement has been observed (the stored energy of the plasma has been increased by a factor of 1.5).

  11. Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy

    PubMed Central

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715

  12. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously as a bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use

  13. Achieving Coherence in District Improvement: Managing the Relationship between the Central Office and Schools

    ERIC Educational Resources Information Center

    Johnson, Susan Moore; Marietta, Geoff; Higgins, Monica C.; Mapp, Karen L.; Grossman, Allen

    2015-01-01

    "Achieving Coherence in District Improvement" focuses on a problem of practice faced by educational leaders across the nation: how to effectively manage the relationship between the central office and schools. The book is based on a study of five large urban districts that have demonstrated improvement in student achievement. The…

  14. Does probabilistic modelling of linkage disequilibrium evolution improve the accuracy of QTL location in animal pedigree?

    PubMed Central

    2010-01-01

    Background: Since 2001, the use of more and more dense maps has made researchers aware that combining linkage and linkage disequilibrium enhances the feasibility of fine-mapping genes of interest. So, various method types have been derived to include concepts of population genetics in the analyses. One major drawback of many of these methods is their computational cost, which is very significant when many markers are considered. Recent advances in technology, such as SNP genotyping, have made it possible to deal with huge amounts of data. Thus the challenge that remains is to find accurate and efficient methods that are not too time consuming. The study reported here specifically focuses on the half-sib family animal design. Our objective was to determine whether modelling of linkage disequilibrium evolution improved the mapping accuracy of a quantitative trait locus of agricultural interest in these populations. We compared two methods of fine-mapping. The first one was an association analysis. In this method, we did not model linkage disequilibrium evolution. Therefore, the modelling of the evolution of linkage disequilibrium was a deterministic process; it was complete at time 0 and remained complete during the following generations. In the second method, the modelling of the evolution of population allele frequencies was derived from a Wright-Fisher model. We simulated a wide range of scenarios adapted to animal populations and compared these two methods for each scenario. Results: Our results indicated that the improvement produced by probabilistic modelling of linkage disequilibrium evolution was not significant. Both methods led to similar results concerning the location accuracy of quantitative trait loci which appeared to be mainly improved by using four flanking markers instead of two. Conclusions: Therefore, in animal half-sib designs, modelling linkage disequilibrium evolution using a Wright-Fisher model does not significantly improve the accuracy of the
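
    For concreteness, a minimal forward Wright-Fisher simulation of allele-frequency drift, the kind of process used in the second method to model the evolution of population allele frequencies (population size, starting frequency, and seed are illustrative):

        import numpy as np

        def wright_fisher(p0, n_diploid, n_generations, seed=0):
            """Allele-frequency trajectory under binomial resampling of 2N gametes per generation."""
            rng = np.random.default_rng(seed)
            freqs = [p0]
            p = p0
            for _ in range(n_generations):
                p = rng.binomial(2 * n_diploid, p) / (2.0 * n_diploid)
                freqs.append(p)
            return np.array(freqs)

        print(wright_fisher(0.3, n_diploid=100, n_generations=20))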

  15. SU-E-J-101: Improved CT to CBCT Deformable Registration Accuracy by Incorporating Multiple CBCTs

    SciTech Connect

    Godley, A; Stephans, K; Olsen, L Sheplan

    2015-06-15

    Purpose: Combining prior day CBCT contours with STAPLE was previously shown to improve automated prostate contouring. These accurate STAPLE contours are now used to guide the planning CT to pre-treatment CBCT deformable registration. Methods: Six IGRT prostate patients with daily kilovoltage CBCT had their original planning CT and 9 CBCTs contoured by the same physician. These physician contours for the planning CT and each prior CBCT are deformed to match the current CBCT anatomy, producing multiple contour sets. These sets are then combined using STAPLE into one optimal set (e.g. for day 3 CBCT, combine contours produced using the plan plus day 1 and 2 CBCTs). STAPLE computes a probabilistic estimate of the true contour from this collection of contours by maximizing sensitivity and specificity. The deformation field from planning CT to CBCT registration is then refined by matching its deformed contours to the STAPLE contours. ADMIRE (Elekta Inc.) was used for this. The refinement does not force perfect agreement of the contours, typically Dice’s Coefficient (DC) of > 0.9 is obtained, and the image difference metric remains in the optimization of the deformable registration. Results: The average DC between physician delineated CBCT contours and deformed planning CT contours for the bladder, rectum and prostate was 0.80, 0.79 and 0.75, respectively. The accuracy significantly improved to 0.89, 0.84 and 0.84 (P<0.001 for all) when using the refined deformation field. The average time to run STAPLE with five scans and refine the planning CT deformation was 66 seconds on a Tesla K20c GPU. Conclusion: Accurate contours generated from multiple CBCTs provided guidance for CT to CBCT deformable registration, significantly improving registration accuracy as measured by contour DC. A more accurate deformation field is now available for transferring dose or electron density to the CBCT for adaptive planning. Research grant from Elekta.
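
    The Dice coefficient used above to score contour agreement can be computed directly from binary masks. The sketch below is a generic illustration with placeholder arrays; it is not the study's software.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """Dice's coefficient between two binary masks: 2|A intersect B| / (|A| + |B|)."""
            a = np.asarray(mask_a, dtype=bool)
            b = np.asarray(mask_b, dtype=bool)
            total = a.sum() + b.sum()
            if total == 0:
                return 1.0  # both masks empty: define as perfect agreement
            return 2.0 * np.logical_and(a, b).sum() / total

        if __name__ == "__main__":
            physician = np.zeros((64, 64), dtype=bool)
            deformed = np.zeros((64, 64), dtype=bool)
            physician[20:40, 20:40] = True
            deformed[22:42, 20:40] = True  # slightly shifted contour
            print("DC =", round(dice_coefficient(physician, deformed), 3))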

  16. The Effect of Written Corrective Feedback on Grammatical Accuracy of EFL Students: An Improvement over Previous Unfocused Designs

    ERIC Educational Resources Information Center

    Khanlarzadeh, Mobin; Nemati, Majid

    2016-01-01

    The effectiveness of written corrective feedback (WCF) in the improvement of language learners' grammatical accuracy has been a topic of interest in SLA studies for the past couple of decades. The present study reports the findings of a three-month study investigating the effect of direct unfocused WCF on the grammatical accuracy of elementary…

  17. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  18. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well established switched-capacitor circuit technique. High speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  19. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
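
    A minimal sketch of the sub-model idea, training separate regressions on low/high composition ranges and blending their predictions, is shown below. It uses scikit-learn's PLSRegression on synthetic data; the split point, blending width, and component counts are illustrative assumptions and not the ChemCam calibration.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)

        # Synthetic stand-in for LIBS spectra (rows) and one element concentration (y).
        X = rng.normal(size=(200, 50))
        y = X[:, 0] * 5.0 + X[:, 1] ** 2 + rng.normal(scale=0.5, size=200)
        y -= y.min()  # keep concentrations non-negative

        # A "full" model plus sub-models trained on restricted composition ranges.
        split = np.median(y)
        low, high = y <= split, y > split
        full_model = PLSRegression(n_components=5).fit(X, y)
        low_model = PLSRegression(n_components=5).fit(X[low], y[low])
        high_model = PLSRegression(n_components=5).fit(X[high], y[high])

        def blended_predict(X_new):
            """Route each spectrum with the full model, then blend sub-model predictions.

            Spectra whose full-model estimate falls near the split get a weighted average
            of the low and high sub-models; the blending width is an arbitrary choice here.
            """
            first_pass = full_model.predict(X_new).ravel()
            width = 0.25 * (y.max() - y.min())
            w_high = np.clip((first_pass - (split - width)) / (2 * width), 0.0, 1.0)
            pred_low = low_model.predict(X_new).ravel()
            pred_high = high_model.predict(X_new).ravel()
            return (1.0 - w_high) * pred_low + w_high * pred_high

        print(blended_predict(X[:5]))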

  20. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  1. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    SciTech Connect

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “submodel” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  2. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott; Morris, Richard V.; Ehlmann, Bethany; Dyar, M. Darby

    2017-03-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the laser-induced breakdown spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares (PLS) regression, is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  3. Improving Low-Achieving Schools: Building State Capacity to Support School Improvement through Race to the Top

    ERIC Educational Resources Information Center

    Childs, Joshua; Russell, Jennifer Lin

    2017-01-01

    Improving low-achieving schools is a critical challenge facing urban education. Recent national policy shifts have pressed states to take an expanded role in school improvement efforts. In 2009, a federal grant competition called Race to the Top (RttT) compelled states to improve their capacity to implement ambitious education reform agendas.…

  4. METACOGNITIVE SCAFFOLDS IMPROVE SELF-JUDGMENTS OF ACCURACY IN A MEDICAL INTELLIGENT TUTORING SYSTEM

    PubMed Central

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S.

    2013-01-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were selected from the domain of Nodular and Diffuse Dermatitides. Both groups worked with a version of SlideTutor that provided immediate feedback on their actions for two hours before proceeding to solve cases in either the Considering Alternatives or Playback condition. No immediate feedback was provided on actions performed by participants in the scaffolding mode. Measurements included learning gains (pre-test and post-test), as well as metacognitive performance, including Goodman-Kruskal Gamma correlation, bias, and discrimination. Results showed that participants in both conditions improved significantly in terms of their diagnostic scores from pre-test to post-test. More importantly, participants in the Considering Alternatives condition outperformed those in the Playback condition in the accuracy of their confidence judgments and the discrimination of the correctness of their assertions while solving cases. The results suggested that presenting participants with their diagnostic decision paths and highlighting correct and incorrect paths helps them to become more metacognitively accurate in their confidence judgments. PMID:24532850
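
    The Goodman-Kruskal gamma reported above measures how well confidence judgments order correct versus incorrect responses. A generic computation of gamma from concordant and discordant pairs is sketched below; it is not the SlideTutor code, and the example values are invented.

        from itertools import combinations

        def goodman_kruskal_gamma(confidence, correctness):
            """Gamma = (concordant - discordant) / (concordant + discordant).

            confidence  -- numeric confidence judgments
            correctness -- 0/1 (or graded) accuracy of the corresponding responses
            Tied pairs are ignored, as in the standard definition.
            """
            concordant = discordant = 0
            for (c1, k1), (c2, k2) in combinations(zip(confidence, correctness), 2):
                dc, dk = c1 - c2, k1 - k2
                if dc == 0 or dk == 0:
                    continue  # tie on either variable
                if dc * dk > 0:
                    concordant += 1
                else:
                    discordant += 1
            if concordant + discordant == 0:
                return 0.0
            return (concordant - discordant) / (concordant + discordant)

        print(goodman_kruskal_gamma([80, 60, 90, 40], [1, 0, 1, 0]))  # -> 1.0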

  5. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Its powerful nondestructive characteristics are attracting more and more research into the study of computed tomography (CT) for dimensional metrology, which offers a practical alternative to the common measurement methods. However, the inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect in the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility.

  6. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    PubMed Central

    Kerr, Deborah A.; Wright, Janine L.; Dhaliwal, Satvinder S.; Boushey, Carol J.

    2015-01-01

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years and lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05 quick list; p < 0.01 forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy in recalling food items declined with a second 24-h recall when repeated over two non-consecutive days. PMID:25984743

  7. Using skin impedance to improve prediction accuracy of continuous glucose monitoring system

    NASA Astrophysics Data System (ADS)

    Yu, Haixia; Liu, Jin; Shi, Ting; Li, Dachao; Du, Zhenhui; Xu, Kexin

    2008-02-01

    A continuous blood glucose monitoring system using interstitial fluid (ISF) extracted by ultrasound and vacuum is proposed in this paper. The skin impedance measurement is introduced into the system to monitor the skin permeability variation. Low-frequency ultrasound is applied on the skin surface to enhance the skin permeability by disrupting the lipid bilayers of the stratum corneum (SC), and then ISF is extracted out of the skin continuously by vacuum. The extracted ISF is diluted and the concentration of glucose in it is detected by a biosensor and used to predict the blood glucose concentration. The skin permeability is variable during the extraction, and its variation affects the prediction accuracy. The skin impedance is an excellent indicator of skin permeability in that the lipid bilayers of the SC, which offer electrical resistance to the skin, retard transdermal transport of molecules. So the skin impedance measured during the extraction is transformed to skin conductivity to estimate the correlation coefficient between skin conductivity and permeability. Skin conductivity correlates well with skin permeability. The method and experiment system described above may be useful for improving the prediction accuracy of continuous blood glucose monitoring systems.

  8. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, this is the first work to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
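
    The contrast between time-only prediction and prediction that exploits correlated sensor inputs can be illustrated with a short regression sketch. The synthetic readings and their correlation structure below are assumptions for illustration, not data from the paper.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(1)
        n = 300

        # Synthetic stand-ins for correlated sensor readings at one node.
        temperature = 20 + 5 * rng.random(n)
        humidity = 80 - 1.5 * temperature + rng.normal(scale=1.0, size=n)
        time_index = np.arange(n, dtype=float)

        # Simple linear regression: predict humidity from time only (the common baseline).
        simple = LinearRegression().fit(time_index.reshape(-1, 1), humidity)

        # Multiple linear regression: also use the co-located temperature reading,
        # i.e., exploit the multivariate correlation between gathered inputs.
        X_multi = np.column_stack([time_index, temperature])
        multi = LinearRegression().fit(X_multi, humidity)

        def rmse(y_true, y_pred):
            return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

        print("simple-regression RMSE:", rmse(humidity, simple.predict(time_index.reshape(-1, 1))))
        print("multiple-regression RMSE:", rmse(humidity, multi.predict(X_multi)))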

  9. Improvement on object detection accuracy by using two compound eye systems

    NASA Astrophysics Data System (ADS)

    Ma, Mengchao; Wang, Keyi

    2014-09-01

    A compound eye is a multi-aperture imaging device, which means it can be applied to three-dimensional object detection. In our previous report, an artificial compound eye system was developed for 3D object detection. The system consists of a layer of plano-convex microlenses and a prism-like beam steering lens. An innovative multi-position calibration method is developed to relate the incident light rays and the relevant image points. Theoretically, one compound eye system alone is capable of 3D object detection. However, the detection accuracy is limited due to the relatively small baseline between the adjacent microlenses. In this work, an equivalent large baseline is obtained by using a system of two compound eyes. Preliminary experiments were performed to verify the improvement in the accuracy of 3D object detection. The experimental results with two compound eyes are compared with those obtained by only one compound eye. Experimental results show that the system with two compound eyes can detect an object much more accurately, indicating the feasibility and flexibility of the proposed method.

  10. Accuracy improvement in the TDR-based localization of water leaks

    NASA Astrophysics Data System (ADS)

    Cataldo, Andrea; De Benedetto, Egidio; Cannazza, Giuseppe; Monti, Giuseppina; Demitri, Christian

    A time domain reflectometry (TDR)-based system for the localization of water leaks has been recently developed by the authors. This system, which employs wire-like sensing elements to be installed along the underground pipes, has proven immune to the limitations that affect the traditional, acoustic leak-detection systems. Starting from the positive results obtained thus far, in this work, an improvement of this TDR-based system is proposed. More specifically, the possibility of employing a low-cost, water-absorbing sponge to be placed around the sensing element for enhancing the accuracy in the localization of the leak is addressed. To this purpose, laboratory experiments were carried out mimicking a water leakage condition, and two sensing elements (one embedded in a sponge and one without sponge) were comparatively used to identify the position of the leak through TDR measurements. Results showed that, thanks to the water retention capability of the sponge (which maintains the leaked water more localized), the sensing element embedded in the sponge leads to a higher accuracy in the evaluation of the position of the leak.
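
    The basic TDR relation behind the localization, distance recovered from the round-trip delay of the reflected pulse and the propagation velocity along the sensing element, amounts to a two-line calculation. The velocity factor and delay used here are assumptions for illustration, not values from the paper.

        C = 299_792_458.0  # speed of light in vacuum (m/s)

        def leak_distance(reflection_delay_s, velocity_factor=0.7):
            """Distance to the impedance discontinuity (e.g., a wetted section) along the line.

            reflection_delay_s -- round-trip delay of the reflected TDR pulse, in seconds
            velocity_factor    -- propagation velocity as a fraction of c (illustrative value)
            """
            return velocity_factor * C * reflection_delay_s / 2.0

        # Example: a reflection arriving 250 ns after the launched pulse.
        print(f"estimated leak position: {leak_distance(250e-9):.1f} m")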

  11. A metrological approach to improve accuracy and reliability of ammonia measurements in ambient air

    NASA Astrophysics Data System (ADS)

    Pogány, Andrea; Balslev-Harder, David; Braban, Christine F.; Cassidy, Nathan; Ebert, Volker; Ferracci, Valerio; Hieta, Tuomas; Leuenberger, Daiana; Martin, Nicholas A.; Pascale, Céline; Peltola, Jari; Persijn, Stefan; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard

    2016-11-01

    The environmental impacts of ammonia (NH3) in ambient air have become more evident in the recent decades, leading to intensifying research in this field. A number of novel analytical techniques and monitoring instruments have been developed, and the quality and availability of reference gas mixtures used for the calibration of measuring instruments has also increased significantly. However, recent inter-comparison measurements show significant discrepancies, indicating that the majority of the newly developed devices and reference materials require further thorough validation. There is a clear need for more intensive metrological research focusing on quality assurance, intercomparability and validations. MetNH3 (Metrology for ammonia in ambient air) is a three-year project within the framework of the European Metrology Research Programme (EMRP), which aims to bring metrological traceability to ambient ammonia measurements in the 0.5-500 nmol mol-1 amount fraction range. This is addressed by working in three areas: (1) improving accuracy and stability of static and dynamic reference gas mixtures, (2) developing an optical transfer standard and (3) establishing the link between high-accuracy metrological standards and field measurements. In this article we describe the concept, aims and first results of the project.

  12. A Bayesian statistical model for hybrid metrology to improve measurement accuracy

    NASA Astrophysics Data System (ADS)

    Silver, R. M.; Zhang, N. F.; Barnes, B. M.; Qin, J.; Zhou, H.; Dixson, R.

    2011-05-01

    We present a method to combine measurements from different techniques that reduces uncertainties and can improve measurement throughput. The approach directly integrates the measurement analysis of multiple techniques that can include different configurations or platforms. This approach has immediate application when performing model-based optical critical dimension (OCD) measurements. When modeling optical measurements, a library of curves is assembled through the simulation of a multi-dimensional parameter space. Parametric correlation and measurement noise lead to measurement uncertainty in the fitting process with fundamental limitations resulting from the parametric correlations. A strategy to decouple parametric correlation and reduce measurement uncertainties is described. We develop the rigorous underlying Bayesian statistical model and apply this methodology to OCD metrology. We then introduce an approach to damp the regression process to achieve more stable and rapid regression fitting. These methods that use a priori information are shown to reduce measurement uncertainty and improve throughput while also providing an improved foundation for comprehensive reference metrology.
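
    The core idea of combining measurements with prior information can be illustrated with a Gaussian inverse-variance fusion sketch. This is a generic textbook form, not the paper's full Bayesian OCD model, and the example values are invented.

        import numpy as np

        def gaussian_fusion(estimates, variances):
            """Combine independent Gaussian estimates of the same quantity.

            The posterior mean is the inverse-variance-weighted average and the posterior
            variance is the reciprocal of the summed precisions. Any of the inputs may play
            the role of an a priori value supplied by a reference technique.
            """
            estimates = np.asarray(estimates, dtype=float)
            precisions = 1.0 / np.asarray(variances, dtype=float)
            posterior_var = 1.0 / precisions.sum()
            posterior_mean = posterior_var * (precisions * estimates).sum()
            return posterior_mean, posterior_var

        # Example: an optical linewidth estimate fused with a higher-uncertainty reference value.
        mean, var = gaussian_fusion([32.4, 33.1], [0.04, 0.25])
        print(f"fused estimate: {mean:.2f} nm +/- {var ** 0.5:.2f} nm (1 sigma)")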

  13. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  14. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors.

    PubMed

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

    Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communities for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02.

  15. How could the replica method improve accuracy of performance assessment of channel coding?

    NASA Astrophysics Data System (ADS)

    Kabashima, Yoshiyuki

    2009-12-01

    We explore the relation between the techniques of statistical mechanics and information theory for assessing the performance of channel coding. We base our study on a framework developed by Gallager in IEEE Trans. Inform. Theory IT-11, 3 (1965), where the minimum decoding error probability is upper-bounded by an average of a generalized Chernoff's bound over a code ensemble. We show that the resulting bound in the framework can be directly assessed by the replica method, which has been developed in statistical mechanics of disordered systems, whereas in Gallager's original methodology further replacement by another bound utilizing Jensen's inequality is necessary. Our approach associates a seemingly ad hoc restriction with respect to an adjustable parameter for optimizing the bound with a phase transition between two replica symmetric solutions, and can improve the accuracy of performance assessments of general code ensembles including low density parity check codes, although its mathematical justification is still open.
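
    The Gallager-type bound referred to above upper-bounds the block error probability as P_e <= 2^(-N*E_r(R)), with E_r(R) obtained by optimizing an adjustable parameter rho. The numerical sketch below evaluates this standard textbook form for a binary symmetric channel with uniform inputs; the channel and rate values are illustrative, and the replica calculation itself is not reproduced.

        import numpy as np

        def bsc_random_coding_exponent(rate_bits, crossover, rhos=None):
            """Gallager random-coding exponent E_r(R) for a binary symmetric channel.

            E_0(rho) = -log2( sum_y [ sum_x Q(x) P(y|x)^(1/(1+rho)) ]^(1+rho) ), uniform Q,
            E_r(R)   = max over 0 <= rho <= 1 of [ E_0(rho) - rho * R ].
            """
            rhos = np.linspace(0.0, 1.0, 1001) if rhos is None else rhos
            p = crossover
            inner = 0.5 * ((1 - p) ** (1.0 / (1.0 + rhos)) + p ** (1.0 / (1.0 + rhos)))
            e0 = -np.log2(2.0 * inner ** (1.0 + rhos))
            return float(np.max(e0 - rhos * rate_bits))

        # Example: block error bound for an illustrative rate/channel pair.
        N, R, p = 1000, 0.3, 0.05
        Er = bsc_random_coding_exponent(R, p)
        print(f"E_r(R) = {Er:.4f} bits, bound: P_e <= 2^(-{N * Er:.1f})")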

  16. Complete Dentures: Designing Occlusal Registration Blocks to Save Clinical Time and Improve Accuracy.

    PubMed

    Bishop, Mark; Johnson, Tony

    2015-04-01

    The techniques described in this article are based on facial measurements and an analysis of the patient's existing dentures to provide measurements that will enable registration blocks to be constructed for individual patients rather than the arbitrarily produced block more commonly seen. Employing the methods shown will lead to a saving in clinical time and contribute to a more accurate registration. It is important to remember that the technician can only provide occlusal registration blocks of the appropriate dimensions if the clinician has assessed the patient and existing dentures and then passed this information to the laboratory. Clinical Relevances: Being able to assess the clinical suitability of a patient's existing dentures and then take measurements from those dentures will allow occlusal registration blocks to be constructed that have the correct dimensions and anatomical features for a particular patient. This will save time during the registration stage and help to improve accuracy.

  17. Method for improving terahertz band absorption spectrum measurement accuracy using noncontact sample thickness measurement.

    PubMed

    Li, Zhi; Zhang, Zhaohui; Zhao, Xiaoyan; Su, Haixia; Yan, Fang; Zhang, Han

    2012-07-10

    The terahertz absorption spectrum has a complex nonlinear relationship with sample thickness, which is normally measured mechanically with limited accuracy. As a result, the terahertz absorption spectrum is usually determined incorrectly. In this paper, an iterative algorithm is proposed to accurately determine sample thickness. This algorithm is independent of the initial value used and results in convergent calculations. Precision in sample thickness can be improved up to 0.1 μm. A more precise absorption spectrum can then be extracted. By comparing the proposed method with the traditional method based on mechanical thickness measurements, quantitative analysis experiments on a three-component amino acid mixture show that the global error decreased from 0.0338 to 0.0301.

  18. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which are calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  19. Combining Ground-Truthing and Technology to Improve Accuracy in Establishing Children's Food Purchasing Behaviors

    PubMed Central

    Coakley, Hannah Lee; Steeves, Elizabeth Anderson; Jones-Smith, Jessica C; Hopkins, Laura; Braunstein, Nadine; Mui, Yeeli; Gittelsohn, Joel

    2015-01-01

    Developing nutrition-focused environmental interventions for youth requires accurate assessment of where they purchase food. We have developed an innovative, technology-based method to improve the accuracy of food source recall among children using a tablet PC and ground-truthing methodologies. As part of the B'more Healthy Communities for Kids study, we mapped and digitally photographed every food source within a half-mile radius of 14 Baltimore City recreation centers. This food source database was then used with children from the surrounding neighborhoods to search for and identify the food sources they frequent. This novel integration of traditional data collection and technology enables researchers to gather highly accurate information on food source usage among children in Baltimore City. Funding is provided by the NICHD U-54 Grant #1U54HD070725-02. PMID:25729465

  20. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use leave-one-pair-out cross-validation method, instead of re-substitution method, which is overoptimistic and hence might lead to wrong conclusion, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
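
    For the two-marker case, the search for the linear combination that maximizes the empirical area under the ROC curve can be sketched with a simple grid over combination angles, as below. The synthetic markers are invented for illustration, and this is not the stepwise procedure proposed in the article.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)

        # Synthetic biomarkers: diseased cases (label 1) shifted relative to healthy (label 0).
        n = 200
        labels = np.r_[np.zeros(n), np.ones(n)]
        marker1 = np.r_[rng.normal(0.0, 1.0, n), rng.normal(0.8, 1.0, n)]
        marker2 = np.r_[rng.normal(0.0, 1.0, n), rng.normal(0.5, 1.0, n)]

        def best_linear_combination(x1, x2, y, n_angles=720):
            """Search combinations cos(t)*x1 + sin(t)*x2 and keep the one with largest AUC."""
            best_auc, best_theta = 0.0, 0.0
            for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
                score = np.cos(theta) * x1 + np.sin(theta) * x2
                auc = roc_auc_score(y, score)
                if auc > best_auc:
                    best_auc, best_theta = auc, theta
            return best_theta, best_auc

        theta, auc = best_linear_combination(marker1, marker2, labels)
        print(f"weights ({np.cos(theta):.2f}, {np.sin(theta):.2f}), AUC = {auc:.3f}")

    Note that this is a re-substitution estimate, which the abstract warns is overoptimistic; a leave-one-pair-out cross-validated AUC would be the appropriate way to compare combination methods.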

  1. Evaluating an educational intervention to improve the accuracy of death certification among trainees from various specialties

    PubMed Central

    Villar, Jesús; Pérez-Méndez, Lina

    2007-01-01

    Background The inaccuracy of death certification can lead to the misallocation of resources in health care programs and research. We evaluated the rate of errors in the completion of death certificates among medical residents from various specialties, before and after an educational intervention which was designed to improve the accuracy in the certification of the cause of death. Methods A 90-min seminar was delivered to seven mixed groups of medical trainees (n = 166) from several health care institutions in Spain. Physicians were asked to read and anonymously complete a same case-scenario of death certification before and after the seminar. We compared the rates of errors and the impact of the educational intervention before and after the seminar. Results A total of 332 death certificates (166 completed before and 166 completed after the intervention) were audited. Death certificates were completed with errors by 71.1% of the physicians before the educational intervention. Following the seminar, the proportion of death certificates with errors decreased to 9% (p < 0.0001). The most common error in the completion of death certificates was the listing of the mechanism of death instead of the cause of death. Before the seminar, 56.8% listed respiratory or cardiac arrest as the immediate cause of death. None of the participants listed any mechanism of death after the educational intervention (p < 0.0001). Conclusion Major errors in the completion of the correct cause of death on death certificates are common among medical residents. A simple educational intervention can dramatically improve the accuracy in the completion of death certificates by physicians. PMID:18005414

  2. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage.

    PubMed

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-11

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D-long range stage during its design phase. The nanopositioning platform NanoPla is here presented. Its specifications (e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer and two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose improved design features. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated after a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in X-, Y- and Z-axis, respectively.

  3. Multibody kinematics optimization with marker projection improves the accuracy of the humerus rotational kinematics.

    PubMed

    Begon, Mickaël; Bélaise, Colombe; Naaim, Alexandre; Lundberg, Arne; Chèze, Laurence

    2016-10-20

    Markers put on the arm undergo large soft tissue artefact (STA). Using markers on the forearm, multibody kinematics optimization (MKO) helps improve the accuracy of the arm kinematics, especially its longitudinal rotation. However, deleterious effects of STA may persist and affect the estimates of other segments. The objective was to present an innovative multibody kinematics optimization algorithm with projection of markers onto a requested axis of the local system of coordinates, to cancel their deleterious effect on this degree-of-freedom. Four subjects equipped with markers put on intracortical pins inserted into the humerus, on skin (scapula, arm and forearm) and subsequently on rigid cuffs (arm and forearm) performed analytic, daily-living, sports and range-of-motion tasks. Scapulohumeral kinematics was estimated using 1) pin markers (reference), 2) single-body optimization, 3) MKO, 4) MKO with projection of all arm markers and 5) MKO with projection of a selection of arm markers. Approaches 2-4 were applied to markers put on the skin and the cuff. The main findings were that multibody kinematics optimization improved the accuracy by 40-50%, and the projection algorithm added an extra 20% when applied to cuff markers or a selection of skin markers (all but the medial epicondyle). Therefore, the projection algorithm performed better than multibody and single-body optimizations, especially when using markers put on a cuff. Error of humerus orientation was reduced by half to finally be less than 5°. To conclude, this innovative algorithm is a promising approach for estimating accurate upper-limb kinematics.

  4. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    NASA Astrophysics Data System (ADS)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18% were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorously determined experimental and Monte Carlo correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a

  5. Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.

    PubMed

    Pearce, John A

    2015-12-01

    The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. Additionally, evidence is presented
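
    A schematic of the delayed-onset Arrhenius calculation described above is given below: the damage integral accumulates at the usual constant rate only after a temperature-dependent delay that represents the shoulder region. The rate coefficients and the delay function are placeholders for illustration, not the fitted values from the paper.

        import numpy as np

        R_GAS = 8.314  # J/(mol*K)

        def surviving_fraction(times_s, temps_k, A, Ea, delay_fn):
            """Arrhenius cell-survival estimate with a temperature-dependent onset delay.

            Omega(t) = integral of A*exp(-Ea/(R*T)) over the time spent beyond the delay;
            S(t) = exp(-Omega(t)). A (1/s), Ea (J/mol) and delay_fn are illustrative inputs.
            """
            rates = A * np.exp(-Ea / (R_GAS * temps_k))
            delays = delay_fn(temps_k)
            active = times_s > delays  # no damage accumulates during the shoulder region
            omega = np.cumsum(np.where(active, rates, 0.0) * np.gradient(times_s))
            return np.exp(-omega)

        def delay(temp_k):
            # Assumed constant 10-minute shoulder; a fitted model would make this T-dependent.
            return np.full_like(temp_k, 600.0)

        # Placeholder parameters for a constant 47 C exposure (not fitted values).
        t = np.linspace(0.0, 3600.0, 3601)
        T = np.full_like(t, 47.0 + 273.15)
        S = surviving_fraction(t, T, A=1.0e62, Ea=4.0e5, delay_fn=delay)
        print(f"surviving fraction after 1 h: {S[-1]:.3f}")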

  6. Galectin-3 and HBME-1 improve the accuracy of core biopsy in indeterminate thyroid nodules.

    PubMed

    Trimboli, Pierpaolo; Guidobaldi, Leo; Amendola, Stefano; Nasrollah, Naim; Romanelli, Francesco; Attanasio, Daniela; Ramacciato, Giovanni; Saggiorato, Enrico; Valabrega, Stefano; Crescenzi, Anna

    2016-04-01

    Core needle biopsy (CNB) has been recently described as an accurate second-line test in thyroid inconclusive cytology (FNA). Here we retrospectively investigated the potential improvement given by Galectin-3, Cytokeratin-19, and HBME-1 on the accuracy of CNB in thyroid nodules with prior indeterminate FNA report. The study included 74 nodules. At CNB diagnosis, 15 were cancers, 40 were benign, and 19 had uncertain/non-diagnostic CNB report. The above immunohistochemical (IHC) panel was analyzed in all cases. After surgery, 19 malignant and 55 benign lesions were found. All 15 cancers and all 40 benign nodules diagnosed at CNB were confirmed at final histology. Regarding the uncertain CNB group, 4 (21 %) were malignant and 15 (79 %) benign. When we considered all the series, the most accurate IHC combination was Galectin-3 plus HBME-1, while HBME-1 was the most sensitive marker in those nodules with uncertain CNB report. The combination of CNB plus IHC could identify 19/19 cancers and 53/55 benign lesions. Sensitivity and specificity of CNB increased from 79 to 100 % and from 73 to 96 %, respectively, by adding IHC. CNB can diagnose the majority of thyroid nodules with previous indeterminate FNA cytology, while the accuracy of CNB is increased by adding the Galectin-3, Cytokeratin-19, and HBME-1 panel. We suggest adopting CNB as a second-line approach to indeterminate thyroid FNA, and applying IHC in those lesions with uncertain/non-diagnostic CNB report. This approach should improve the pre-surgical diagnosis of patients. These results should be confirmed in larger prospective series.

  7. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D-long range stage during its design phase. The nanopositioning platform NanoPla is here presented. Its specifications (e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer and two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose improved design features. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated after a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in X-, Y- and Z-axis, respectively. PMID:26761014

  8. Improving the SNO calibration accuracy for the reflective solar bands of AVHRR and MODIS

    NASA Astrophysics Data System (ADS)

    Cao, Changyong; Wu, Xiangqian; Wu, Aisheng; Xiong, Xiaoxiong

    2007-09-01

    Analyses of a 4.5 year SNO (Simultaneous Nadir Overpass) time series between AVHRR on NOAA-16 and -17 suggest that the AVHRR observations based on operational vicarious calibration have become very consistent since mid 2004. This study also suggests that the SNO method has reached a high level of relative accuracy (~1.5%, 1 sigma) for both the 0.63 and 0.84 μm bands, which outperforms many other vicarious methods for satellite radiometer calibration. Meanwhile, for AVHRR and MODIS, a 3.5 year SNO time series suggests that the SNO method has achieved a 0.9% relative accuracy (1 sigma) for the 0.63 μm band, while the relative accuracy for the 0.84 μm band is on the order of +/- 5% and significantly affected by the spectral response differences between AVHRR and MODIS. Although the AVHRR observations from NOAA-16 and -17 agree well, they significantly disagree with MODIS observations according to the SNO time series. A 9% difference was found for the 0.63 μm band (estimated uncertainty of 0.9%, 1 sigma), and the difference is even larger if the spectral response differences are taken into account. Similar bias for the 0.84 μm band is also found with a larger uncertainty due to major differences in the spectral response functions between MODIS and AVHRR. It is expected that further studies with Hyperion observations at the SNOs would help us estimate the biases and uncertainty due to spectral differences between AVHRR and MODIS. It is expected that in the near future, the calibration of the AVHRR type of instruments can be made consistent through rigorous cross-calibration using the SNO method. These efforts will contribute to the generation of fundamental climate data records (FCDRs) from the nearly 30 years of AVHRR data for a variety of geophysical products including aerosol, vegetation, and surface albedo, in support of global climate change detection studies.

  9. Improved coded exposure for enhancing imaging quality and detection accuracy of moving targets

    NASA Astrophysics Data System (ADS)

    Mao, Baoqi; Chen, Li; Han, Lin; Shen, Weimin

    2016-09-01

    The blur due to rapid relative motion between scene and camera during exposure has a well-known influence on the quality of the acquired image and, in turn, on target detection. An improved coded exposure is introduced in this paper to remove the image blur and obtain a high quality image, so that the test accuracy of the surface defects and edge contours of moving objects can be enhanced. The improved exposure method takes advantage of a code look-up table to control the exposure process and image restoration. The restored images have higher Peak Signal-to-Noise Ratio (PSNR) and Structure SIMilarity (SSIM) than traditional deblur algorithms such as the Wiener and regularization filter methods. The edge contours and defects of part samples, which move at constant speed relative to the industrial camera used in our experiment, are detected with the Sobel operator from the restored images. Experimental results verify that the improved coded exposure is better suited to imaging moving objects and detecting moving targets than the traditional methods.
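
    The traditional Wiener baseline that the coded-exposure method is compared against can be reproduced in a few lines with scikit-image, together with the PSNR and SSIM scores mentioned above. The motion-blur kernel, noise level and regularization below are illustrative assumptions, not the paper's settings.

        import numpy as np
        from scipy.signal import convolve2d
        from skimage import data, img_as_float
        from skimage.metrics import peak_signal_noise_ratio, structural_similarity
        from skimage.restoration import wiener

        # Simulate horizontal motion blur of a test image with a flat (uncoded) exposure PSF.
        image = img_as_float(data.camera())
        psf = np.ones((1, 15)) / 15.0
        blurred = convolve2d(image, psf, mode="same", boundary="wrap")
        blurred += 0.005 * np.random.default_rng(0).normal(size=blurred.shape)

        # Traditional Wiener restoration (the baseline the abstract compares against).
        restored = np.clip(wiener(blurred, psf, balance=0.01), 0.0, 1.0)

        print("PSNR:", peak_signal_noise_ratio(image, restored, data_range=1.0))
        print("SSIM:", structural_similarity(image, restored, data_range=1.0))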

  10. Yemen: DOD Should Improve Accuracy of Its Data on Congressional Clearance of Projects as It Reevaluates Counterterrorism Assistance

    DTIC Science & Technology

    2015-04-01


  11. Qmerit-calibrated overlay to improve overlay accuracy and device performance

    NASA Astrophysics Data System (ADS)

    Ullah, Md Zakir; Jazim, Mohamed Fazly Mohamed; Sim, Stella; Lim, Alan; Hiem, Biow; Chuen, Lieu Chia; Ang, Jesline; Lim, Ek Chow; Klein, Dana; Amit, Eran; Volkovitch, Roie; Tien, David; Choi, DongSub

    2015-03-01

    In advanced semiconductor industries, the overlay error budget is getting tighter due to shrinkage in technology. To fulfill the tighter overlay requirements, gaining every nanometer of improved overlay is very important in order to accelerate yield in high-volume manufacturing (HVM) fabs. To meet the stringent overlay requirements and to overcome other unforeseen situations, it is becoming critical to eliminate the smallest imperfections in the metrology targets used for overlay metrology. For standard cases, the overlay metrology recipe is selected based on total measurement uncertainty (TMU). However, under certain circumstances, inaccuracy due to target imperfections can become the dominant contributor to the metrology uncertainty and cannot be detected and quantified by the standard TMU. For optical-based overlay (OBO) metrology targets, mark asymmetry is a common issue which can cause measurement inaccuracy, and it is not captured by standard TMU. In this paper, a new calibration method, Archer Self-Calibration (ASC), has been established successfully in HVM fabs to improve overlay accuracy on image-based overlay (IBO) metrology targets. Additionally, a new color selection methodology has been developed for the overlay metrology recipe as part of this calibration method. In this study, Qmerit-calibrated data has been used for run-to-run control loop at multiple devices. This study shows that color filter can be chosen more precisely with the help of Qmerit data. Overlay stability improved by 10~20% with best color selection, without causing any negative impact to the products. Residual error, as well as overlay mean plus 3-sigma, showed an improvement of up to 20% when Qmerit-calibrated data was used. A 30% improvement was seen in certain electrical data associated with tested process layers.

  12. Accuracy of Suture Passage During Arthroscopic Remplissage—What Anatomic Landmarks Can Improve It?

    PubMed Central

    Garcia, Grant H.; Degen, Ryan M.; Liu, Joseph N.; Kahlenberg, Cynthia A.; Dines, Joshua S.

    2016-01-01

    Background: Recent data suggest that inaccurate suture passage during remplissage may contribute to a loss of external rotation, with the potential to cause posterior shoulder pain because of the proximity to the musculotendinous junction. Purpose: To evaluate the accuracy of suture passage during remplissage and identify surface landmarks to improve accuracy. Study Design: Descriptive laboratory study. Methods: Arthroscopic remplissage was performed on 6 cadaveric shoulder specimens. Two single-loaded suture anchors were used for each remplissage. After suture passage, position was recorded in reference to the posterolateral acromion (PLA), with entry perpendicular to the humeral surface. After these measurements, the location of posterior cuff penetration was identified by careful surgical dissection. Results: Twenty-four sutures were passed in 6 specimens: 6 sutures (25.0%) were correctly passed through the infraspinatus tendon, 12 (50%) were through the infraspinatus muscle or musculotendinous junction (MTJ), and 6 (25%) were through the teres minor. Sutures passing through the infraspinatus were on average 25 ± 5.4 mm inferior to the PLA, while sutures passing through the teres minor were on average 35.8 ± 5.7 mm inferior to the PLA. There was an odds ratio of 25 (95% CI, 2.1-298.3; P < .001) that the suture would be through the infraspinatus if the passes were less than 3 cm inferior to the PLA. Sutures passing through muscle and the MTJ were significantly more medial than those passing through tendon, measuring on average 8.1 ± 5.1 mm lateral to the PLA compared with 14.5 ± 5.5 mm (P < .02). If suture passes were greater than 1 cm lateral to the PLA, it was significantly more likely to be in tendon (P = .013). Conclusion: We found remplissage suture passage was inaccurate, with only 25% of sutures penetrating the infraspinatus tendon. Passing sutures 1 cm lateral and within 3 cm inferior of the PLA improves the odds of successful infraspinatus tenodesis

  13. On Improvement of the Accuracy and Speed in the Process of Measuring Characteristics of Thermoelectric Materials

    NASA Astrophysics Data System (ADS)

    Anatychuk, L. I.; Lysko, V. V.

    2014-10-01

    Results are presented on the creation of novel methods for reducing errors in the measured properties of thermoelectric materials, obtained by using object-oriented computer simulation to study real physical models of the absolute method. The effects of radiation, heat losses along the electrodes, design elements of the measurement setup, the non-point (finite) size of probes and sensors, and imperfection of thermal and electric contacts have been determined. Methods of eliminating errors due to these effects have been developed. Automated measuring equipment for comprehensive study of thermoelectric material properties has been created, offering accuracy in determining the thermoelectric figure of merit several times higher than that of conventional analogs. The errors obtained during measurements of Bi-Te-based materials within the temperature range from 30°C to 500°C are ~0.5% for electrical conductivity, ~0.7% for thermoelectromotive force, ~3% for thermal conductivity, and ~4.7% for figure of merit (Z). The dynamic processes of achieving steady-state measurement conditions and possible errors due to deviations from these conditions are investigated. Functions of the current through the sample, the reference heater, and the radiation shield heater are determined, whereby measurement speed is increased, which is of particular importance for the investigation of large samples, such as parts of thermoelectric material ingots.
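
    The quoted uncertainties are consistent with simple propagation of relative errors through the standard figure-of-merit formula Z = σS²/κ; the sketch below shows that propagation under two common combination rules (it is an assumption, not stated in the abstract, that the 4.7% figure was obtained this way).

    ```python
    # Propagate relative measurement errors into the thermoelectric figure of
    # merit Z = sigma * S**2 / kappa, using the error values quoted above. Which
    # combination rule the authors used is an assumption.
    import math

    rel_err_sigma = 0.005   # electrical conductivity, ~0.5%
    rel_err_S     = 0.007   # thermoEMF (Seebeck coefficient), ~0.7%
    rel_err_kappa = 0.030   # thermal conductivity, ~3%

    linear     = rel_err_sigma + 2 * rel_err_S + rel_err_kappa
    quadrature = math.sqrt(rel_err_sigma**2 + (2 * rel_err_S)**2 + rel_err_kappa**2)

    print(f"worst-case: {linear:.1%}, quadrature: {quadrature:.1%}")
    # -> worst-case: 4.9%, quadrature: 3.3%
    ```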

  14. Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen

    2017-03-01

    smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.

  15. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy.

    PubMed

    Smith, Tom; Heger, Andreas; Sudbery, Ian

    2017-03-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and on real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package.
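
    As an illustration of the network idea (a simplified sketch inspired by, but not identical to, the UMI-tools implementation), the code below links UMIs that differ by one base when the more abundant UMI could plausibly be the parent of the less abundant one, then keeps one representative per group.

    ```python
    # Simplified sketch of network-based UMI collapsing: link UMIs at Hamming
    # distance 1 when the count of the more abundant UMI makes it a plausible
    # source of the less abundant one, then keep one representative per group.
    # This is an illustration, not the exact UMI-tools algorithm.
    from collections import Counter

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def collapse_umis(umi_counts):
        umis = sorted(umi_counts, key=umi_counts.get, reverse=True)
        assigned = {}
        for u in umis:
            if u in assigned:
                continue
            assigned[u] = u                      # u starts a new group
            for v in umis:
                if v not in assigned and hamming(u, v) == 1 \
                   and umi_counts[u] >= 2 * umi_counts[v] - 1:
                    assigned[v] = u              # v treated as an error of u
        return assigned

    counts = Counter({"ACGT": 100, "ACGA": 3, "TTGC": 50, "TTGA": 49})
    print(sorted(set(collapse_umis(counts).values())))  # ['ACGT', 'TTGA', 'TTGC']
    ```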

  16. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and on real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584

  17. Coval: Improving Alignment Quality and Variant Calling Accuracy for Next-Generation Sequencing Data

    PubMed Central

    Kosugi, Shunichi; Natsume, Satoshi; Yoshida, Kentaro; MacLean, Daniel; Cano, Liliana; Kamoun, Sophien; Terauchi, Ryohei

    2013-01-01

    Accurate identification of DNA polymorphisms using next-generation sequencing technology is challenging because of a high rate of sequencing error and incorrect mapping of reads to reference genomes. Currently available short-read aligners and DNA variant callers suffer from these problems. We developed the Coval software to improve the quality of short-read alignments. Coval is designed to minimize the incidence of spurious alignments of short reads by filtering out mismatched reads that remain in alignments after local realignment and by correcting errors in mismatched reads. The error correction is executed based on the base quality and allele frequency at non-reference positions for an individual or pooled sample. We demonstrated the utility of Coval by applying it to simulated genomes and experimentally obtained short-read data from rice, nematode, and mouse. Moreover, we found an unexpectedly large number of incorrectly mapped reads in ‘targeted’ alignments, where whole-genome sequencing reads had been aligned to a local genomic segment, and showed that Coval effectively eliminated such spurious alignments. We conclude that Coval significantly improves the quality of short-read sequence alignments, thereby increasing the calling accuracy of currently available tools for SNP and indel identification. Coval is available at http://sourceforge.net/projects/coval105/. PMID:24116042
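
    Coval's actual filtering criteria are more involved (they include local realignment and base-quality-aware error correction), but the general idea of discarding alignments with too many mismatches can be sketched with pysam as below; the file names and threshold are placeholders.

    ```python
    # Sketch of mismatch-count filtering of a BAM file (the general idea only,
    # not Coval's full pipeline). File names and the threshold are placeholders.
    import pysam

    MAX_MISMATCHES = 2

    with pysam.AlignmentFile("input.bam", "rb") as bam_in, \
         pysam.AlignmentFile("filtered.bam", "wb", template=bam_in) as bam_out:
        for read in bam_in:
            if read.is_unmapped:
                continue
            # NM tag = edit distance of this alignment to the reference
            if read.has_tag("NM") and read.get_tag("NM") <= MAX_MISMATCHES:
                bam_out.write(read)
    ```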

  18. Optimal algorithm to improve the calculation accuracy of energy deposition for betavoltaic MEMS batteries design

    NASA Astrophysics Data System (ADS)

    Li, Sui-xian; Chen, Haiyang; Sun, Min; Cheng, Zaijun

    2009-11-01

    Aiming to improve the accuracy of calculating the energy deposition of electrons traveling in solids, a method we call the optimal subdivision number searching algorithm is proposed. When treating the energy deposition of electrons traveling in solids, large calculation errors are found; we recognized that these errors result from the dividing and summing used to approximate the integral. Based on the results of former research, we propose a further subdividing-and-summing method. For β particles with energies spanning the entire spectrum, the energy values are set to integer multiples of keV and the subdivision number is varied from 1 to 30, from which collections of energy-deposition calculation errors are obtained. Searching for the minimum error in these collections yields the corresponding energy and subdivision number pairs, as well as the optimal subdivision number. The method is applied to calculating energy deposition in four solid materials: Al, Si, Ni, and Au. The results show that the calculation error is reduced by one order of magnitude with the improved algorithm.
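
    A schematic version of the search over subdivision numbers is shown below: each candidate subdivision count is used to approximate an integral, the result is compared against a fine-grained reference, and the count with the smallest error is kept. The integrand is a stand-in, not the actual electron energy-deposition profile.

    ```python
    # Sketch of the optimal-subdivision-number search: approximate an integral
    # with n subintervals for n = 1..30 and keep the n with the smallest error
    # relative to a fine reference. The integrand is a placeholder.
    import numpy as np

    def integrand(x):
        return np.exp(-3.0 * x) * x**2          # stand-in depth-dose-like curve

    def trapezoid_integral(f, a, b, n):
        x = np.linspace(a, b, n + 1)
        y = f(x)
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

    reference = trapezoid_integral(integrand, 0.0, 1.0, 100_000)

    errors = {n: abs(trapezoid_integral(integrand, 0.0, 1.0, n) - reference)
              for n in range(1, 31)}
    best_n = min(errors, key=errors.get)
    print(best_n, errors[best_n])
    ```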

  19. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    NASA Astrophysics Data System (ADS)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these points represent the points at which contact forces are exerted. In order to obtain accurate contact points and an in-depth description of wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, taking into account the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve computational efficiency and reduce the required calculation. The present computation method is applied to the analysis of contact between Chinese (CHN) 75 kg/m rails and the worn-type tread wheel sets of China's freight cars. The results of the proposed method are shown to be consistent with those of Kalker's program CONTACT, with a maximum deviation in the wheel/rail contact patch area between the two methods of approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Some wheel/rail contact points and contact patch distributions are discussed and assessed for both non-worn and worn wheel and rail profiles.
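
    A drastically simplified, two-dimensional illustration of the contact-point search is given below: both profiles are sampled on a common lateral grid and the contact point is taken where the vertical gap between wheel and rail is smallest. The profile functions are invented, and the paper's full 3D method additionally accounts for the yaw and roll of the wheel set.

    ```python
    # Very simplified 2D sketch of a wheel/rail contact-point search: sample both
    # profiles on a common lateral grid and take the point of minimum vertical gap.
    # The profiles are invented for illustration only.
    import numpy as np

    y = np.linspace(-0.04, 0.04, 2001)                        # lateral coordinate, m

    rail_top  = -8.0 * y**2                                   # idealized rail head crown
    wheel_bot = 0.01 - 0.05 * y - 6.0 * (y - 0.002)**2        # idealized (shifted) wheel tread

    gap = wheel_bot - rail_top
    i = int(np.argmin(gap))
    print(f"contact near y = {y[i] * 1000:.2f} mm, minimum gap = {gap[i] * 1000:.3f} mm")
    ```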

  20. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    PubMed Central

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  1. Improving the accuracy of macromolecular structure refinement at 7 Å resolution.

    PubMed

    Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F

    2012-06-06

    In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution.

  2. Automatic digital filtering for the accuracy improving of a digital holographic measurement system

    NASA Astrophysics Data System (ADS)

    Matrecano, Marcella; Miccio, Lisa; Persano, Anna; Quaranta, Fabio; Siciliano, Pietro; Ferraro, Pietro

    2014-05-01

    Digital holography (DH) is a well-established interferometric tool in optical metrology, allowing the investigation of engineered surface shapes with microscale lateral resolution and nanoscale axial precision. With the advent of charge-coupled devices (CCDs) with smaller pixel sizes and greater pixel counts, together with high-speed computers, DH has become a very feasible technology that offers new possibilities for a large variety of applications. DH presents numerous advantages, such as direct access to the phase information, numerical correction of optical aberrations, and the ability to refocus numerically from a single hologram. Furthermore, as an interferometric method, DH offers a nondestructive, non-contact approach suitable for very fragile objects, combined with flexibility and a high sensitivity to geometric quantities such as thicknesses and displacements. These features recommend it for the solution of many imaging and measurement problems, such as micro-electro-optomechanical systems (MEMS/MOEMS) inspection and characterization. In this work, we propose to improve the performance of DH measurements on MEMS devices through digital filters. We have developed an automatic procedure, inserted in the hologram reconstruction process, to selectively filter the hologram spectrum. The purpose is to produce reconstructed images with very little noise, thus increasing the accuracy of the conveyed information and of the measurements performed on the images. Furthermore, by improving the image quality, we aim to make the application of this technique as simple and as accurate as possible.
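
    The spectral filtering step described above can be pictured with the following minimal sketch: the hologram is Fourier transformed, a window around the selected sideband is kept, and the rest of the spectrum is zeroed before the inverse transform. The window centre and radius are placeholders for what the automatic selection procedure would determine.

    ```python
    # Minimal sketch of spectral filtering in digital holography: FFT the
    # hologram, keep a circular window around one sideband (its centre and
    # radius stand in for the output of an automatic selection step), zero the
    # rest, and inverse-FFT to obtain the filtered complex field.
    import numpy as np

    def filter_hologram(hologram, center, radius):
        spec = np.fft.fftshift(np.fft.fft2(hologram))
        ny, nx = hologram.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        mask = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2
        return np.fft.ifft2(np.fft.ifftshift(spec * mask))

    rng = np.random.default_rng(0)
    holo = rng.standard_normal((256, 256))           # stand-in for a recorded hologram
    field = filter_hologram(holo, center=(96, 160), radius=30)
    phase = np.angle(field)                          # phase map used for metrology
    print(phase.shape)
    ```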

  3. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy.

    PubMed

    Mugge, Winfred; Kuling, Irene A; Brenner, Eli; Smeets, Jeroen B J

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects' errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints.

  4. Improving diagnostic accuracy using agent-based distributed data mining system.

    PubMed

    Sridhar, S

    2013-09-01

    The use of data mining techniques to improve diagnostic system accuracy is investigated in this paper. Data mining algorithms aim to discover patterns and extract useful knowledge from facts recorded in databases. Generally, expert systems are constructed for automating diagnostic procedures. The learning component uses data mining algorithms to extract the expert system rules from the database automatically, and such learning algorithms can assist clinicians in extracting knowledge automatically. As the number and variety of data sources are dramatically increasing, another way to acquire knowledge from databases is to apply various data mining algorithms that extract knowledge from data. Because data sets are inherently distributed, the distributed system uses agents to transport the trained classifiers and uses meta-learning to combine the knowledge. Commonsense reasoning is also used in association with distributed data mining to obtain better results. Combining human expert knowledge with data mining knowledge improves the performance of the diagnostic system. This work suggests a framework for combining human knowledge with knowledge gained by improved data mining algorithms, applied to renal and gallstone data sets.

  5. What Districts Can Do To Improve Instruction and Achievement in All Schools.

    ERIC Educational Resources Information Center

    Togneri, Wendy

    2003-01-01

    A study of five high-poverty districts making strides in improving student achievement revealed that these districts focused on systemwide strategies including new approaches to professional development; making decisions based on data, not instinct; and redefining leadership roles. (MLF)

  6. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    PubMed

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy.

  7. Strategies for Achieving High Sequencing Accuracy for Low Diversity Samples and Avoiding Sample Bleeding Using Illumina Platform

    PubMed Central

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy. PMID:25860802

  8. Study on Improvement of Accuracy in Inertial Photogrammetry by Combining Images with Inertial Measurement Unit

    NASA Astrophysics Data System (ADS)

    Kawasaki, Hideaki; Anzai, Shojiro; Koizumi, Toshio

    2016-06-01

    Inertial photogrammetry is defined as photogrammetry that uses a camera on which an inertial measurement unit (IMU) is mounted. In inertial photogrammetry, the position and inclination of the shooting camera are calculated using the IMU. An IMU is characterized by error growth over time because acceleration is integrated with respect to time. This study examines a procedure to estimate the position of the camera accurately while shooting, using the IMU together with structure from motion (SfM) technology, which is applied in many fields such as computer vision. When neither the coordinates of the camera position nor those of the feature points are known, SfM recovers the positional relationship between the camera and the feature points only up to an unknown scale, so the actual metric length of the positional coordinates is not determined. Even if the actual scale of the camera position is unknown, the camera acceleration can be obtained by calculating the second-order time derivative of the camera position with respect to the shooting time. The authors had previously determined the actual scale by assigning the IMU position to the SfM-calculated position; however, accuracy decreased because of the error growth characteristic of the IMU. In order to solve this problem, a new calculation method is proposed. Using this method, the difference between the IMU-measured acceleration and the camera-derived acceleration is minimized by the method of least squares, yielding the magnification required to convert the camera position to actual dimensions. The actual lengths are then calculated by multiplying all the SfM point groups by the obtained magnification factor. This calculation method suppresses the error growth due to time accumulation in the IMU and improves the accuracy of inertial photogrammetry.
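
    In essence, the magnification is the least-squares factor relating the IMU-measured acceleration to the acceleration obtained by twice differentiating the unscaled SfM camera trajectory; a toy version of that estimate, on synthetic data, is sketched below.

    ```python
    # Toy sketch of recovering the metric scale of an SfM trajectory: the
    # magnification is the least-squares factor relating IMU acceleration to the
    # second time derivative of the unscaled SfM camera positions. Synthetic data.
    import numpy as np

    dt = 0.05
    t = np.arange(0.0, 10.0, dt)
    true_pos = np.c_[np.sin(t), 0.5 * t**2, np.zeros_like(t)]   # metric camera path, m

    scale_true = 0.37                        # unknown scale of the SfM reconstruction
    sfm_pos = true_pos * scale_true          # SfM positions in arbitrary units

    imu_acc = np.gradient(np.gradient(true_pos, dt, axis=0), dt, axis=0)
    sfm_acc = np.gradient(np.gradient(sfm_pos, dt, axis=0), dt, axis=0)

    # least-squares magnification m minimizing || imu_acc - m * sfm_acc ||^2
    m = np.sum(sfm_acc * imu_acc) / np.sum(sfm_acc**2)
    print(m, 1.0 / scale_true)               # both approximately 2.70
    metric_sfm_pos = sfm_pos * m             # SfM trajectory rescaled to metres
    ```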

  9. Reduction of Residual Stress and Improvement of Dimensional Accuracy by Uphill Quenching for Al6061 Tube

    NASA Astrophysics Data System (ADS)

    Lim, Hak-Jin; Ko, Dae-Hoon; Ko, Dae-Cheol; Kim, Byung-Min

    2014-04-01

    The purpose of this study is to reduce the residual stress and machining distortion of an Al6061 tube by using uphill quenching. During uphill quenching, solid-solution heat-treated aluminum parts are usually immersed in LN2 at 77 K (-196 °C) and then rapidly heated, producing a new residual stress that is opposite in sign to the original. The uphill quenching method used in this study employed two heating methods: boiling water at 373 K (100 °C) and high-velocity steam at 448 K (175 °C). First, FE simulation coupled with a CFD analysis was performed to predict the residual stress of the backward hot-extruded Al6061 tube with dimensions of Ø200 mm × h200 mm × t10 mm. Uphill quenching experiments were also conducted to measure the residual stress for both the boiling-water and the high-velocity-steam methods. The predicted residual stresses were compared with the experimental results obtained via micro-indentation and saw-cutting tests, and a deviation of about 10.4 pct was found. In addition, the experimental results showed that uphill quenching could relieve up to 91 pct of the residual stress induced by water quenching. Finally, the dimensional accuracy of uphill-quenched tubes was evaluated by measuring the roundness after machining, which showed that the uphill quenching method can improve the dimensional accuracy of an Al6061 tube by reducing the residual stress.

  10. Improving the accuracy of protein stability predictions with multistate design using a variety of backbone ensembles.

    PubMed

    Davey, James A; Chica, Roberto A

    2014-05-01

    Multistate computational protein design (MSD) with backbone ensembles approximating conformational flexibility can predict higher quality sequences than single-state design with a single fixed backbone. However, it is currently unclear what characteristics of backbone ensembles are required for the accurate prediction of protein sequence stability. In this study, we aimed to improve the accuracy of protein stability predictions made with MSD by using a variety of backbone ensembles to recapitulate the experimentally measured stability of 85 Streptococcal protein G domain β1 sequences. Ensembles tested here include an NMR ensemble as well as those generated by molecular dynamics (MD) simulations, by Backrub motions, and by PertMin, a new method that we developed involving the perturbation of atomic coordinates followed by energy minimization. MSD with the PertMin ensembles resulted in the most accurate predictions by providing the highest number of stable sequences in the top 25, and by correctly binning sequences as stable or unstable with the highest success rate (≈90%) and the lowest number of false positives. The performance of PertMin ensembles is due to the fact that their members closely resemble the input crystal structure and have low potential energy. Conversely, the NMR ensemble as well as those generated by MD simulations at 500 or 1000 K reduced prediction accuracy due to their low structural similarity to the crystal structure. The ensembles tested herein thus represent on- or off-target models of the native protein fold and could be used in future studies to design for desired properties other than stability.

  11. Improving Student Motivation and Achievement in Mathematics through Teaching to the Multiple Intelligences.

    ERIC Educational Resources Information Center

    Bednar, Janet; Coughlin, Jane; Evans, Elizabeth; Sievers, Theresa

    This action research project described strategies for improving student motivation and achievement in mathematics through multiple intelligences. The targeted population consisted of kindergarten, third, fourth, and fifth grade students located in two major Midwestern cities. Documentation proving low student motivation and achievement in…

  12. Effective Strategies Urban Superintendents Utilize That Improve the Academic Achievement for African American Males

    ERIC Educational Resources Information Center

    Prioleau, Lushandra

    2013-01-01

    This study examined the effective strategies, resources, and programs urban superintendents utilize to improve the academic achievement for African-American males. This study employed a mixed-methods approach to answer the following research questions regarding urban superintendents and the academic achievement for African-American males: What…

  13. Improving the accuracy of estimates of animal path and travel distance using GPS drift-corrected dead reckoning.

    PubMed

    Dewhirst, Oliver P; Evans, Hannah K; Roskilly, Kyle; Harvey, Richard J; Hubel, Tatjana Y; Wilson, Alan M

    2016-09-01

    Route taken and distance travelled are important parameters for studies of animal locomotion. They are often measured using a collar equipped with GPS. Collar weight restrictions limit battery size, which leads to a compromise between collar operating life and GPS fix rate. In studies that rely on linear interpolation between intermittent GPS fixes, path tortuosity will often lead to inaccurate path and distance travelled estimates. Here, we investigate whether GPS-corrected dead reckoning can improve the accuracy of localization and distance travelled estimates while maximizing collar operating life. Custom-built tracking collars were deployed on nine freely exercising domestic dogs to collect high fix rate GPS data. Simulations were carried out to measure the extent to which combining accelerometer-based speed and magnetometer heading estimates (dead reckoning) with low fix rate GPS drift correction could improve the accuracy of path and distance travelled estimates. In our study, median 2-dimensional root-mean-squared (2D-RMS) position error was between 158 and 463 m (median path length 16.43 km) and distance travelled was underestimated by between 30% and 64% when a GPS position fix was taken every 5 min. Dead reckoning with GPS drift correction (1 GPS fix every 5 min) reduced 2D-RMS position error to between 15 and 38 m and distance travelled to between an underestimation of 2% and an overestimation of 5%. Achieving this accuracy from GPS alone would require approximately 12 fixes every minute and result in a battery life of approximately 11 days; dead reckoning reduces the number of fixes required, enabling a collar life of approximately 10 months. Our results are generally applicable to GPS-based tracking studies of quadrupedal animals and could be applied to studies of energetics, behavioral ecology, and locomotion. This low-cost approach overcomes the limitation of low fix rate GPS and enables the long-term deployment of lightweight GPS collars.
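
    A stripped-down version of drift-corrected dead reckoning is sketched below: positions are advanced from speed and heading estimates, and the error observed at each GPS fix is spread linearly back over the preceding segment. The data are synthetic and the correction scheme is an illustration of the idea, not the authors' exact algorithm.

    ```python
    # Stripped-down sketch of dead reckoning with GPS drift correction:
    # integrate speed and heading between fixes, then redistribute the position
    # error observed at each fix linearly over the preceding segment.
    # Synthetic data; not the authors' exact algorithm.
    import numpy as np

    dt, n = 1.0, 600
    speed = 3.0 + 0.5 * np.sin(np.linspace(0.0, 6.0 * np.pi, n))     # m/s
    true_heading = np.linspace(0.0, np.pi / 2, n)                    # rad
    meas_heading = true_heading + 0.03                               # heading bias -> drift

    true_path = np.cumsum(np.c_[speed * np.cos(true_heading),
                                speed * np.sin(true_heading)] * dt, axis=0)
    dr_path   = np.cumsum(np.c_[speed * np.cos(meas_heading),
                                speed * np.sin(meas_heading)] * dt, axis=0)

    fix_every = 300                                                  # one GPS fix per 5 min
    corrected = dr_path.copy()
    prev = 0
    for i in range(fix_every - 1, n, fix_every):
        drift = true_path[i] - corrected[i]                          # error seen at the fix
        w = np.linspace(0.0, 1.0, i - prev + 1)[1:, None]
        corrected[prev + 1:i + 1] += w * drift
        prev = i

    print(np.abs(dr_path - true_path).max(), np.abs(corrected - true_path).max())
    ```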

  14. DOD SCHOOLS: Additional Reporting Could Improve Accountability for Academic Achievement of Students with Dyslexia

    DTIC Science & Technology

    2007-12-01

    Highlights of GAO-08-70, a report to the Chairman, Committee on Science and Technology, House of Representatives.

  15. Breaking through barriers: using technology to address executive function weaknesses and improve student achievement.

    PubMed

    Schwartz, David M

    2014-01-01

    Assistive technologies provide significant capabilities for improving student achievement. Improved accessibility, cost, and diversity of applications make integration of technology a powerful tool to compensate for executive function weaknesses and deficits and their impact on student performance, learning, and achievement. These tools can be used to compensate for decreased working memory, poor time management, poor planning and organization, poor initiation, and decreased memory. Assistive technology provides mechanisms to assist students with diverse strengths and weaknesses in mastering core curricular concepts.

  16. Teachers' Perception of Their Principal's Leadership Style and the Effects on Student Achievement in Improving and Non-Improving Schools

    ERIC Educational Resources Information Center

    Hardman, Brenda Kay

    2011-01-01

    Teachers' perceptions of their school leaders influence student achievement in their schools. The extent of this influence is examined in this study. This quantitative study examined teachers' perceptions of the leadership style of their principals as transformational, transactional or passive-avoidant in improving and non-improving schools in…

  17. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes.

    PubMed

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-10-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement.

  18. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes

    PubMed Central

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-01-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. PMID:26178332

  19. Can Practice Calibrating by Test Topic Improve Public School Students' Calibration Accuracy and Performance on Tests?

    ERIC Educational Resources Information Center

    Riggs, Rose M.

    2012-01-01

    The effect of a calibration strategy requiring students to predict their scores for each topic on a high stakes test was investigated. The utility of self-efficacy towards predicting achievement and calibration accuracy was also explored. One hundred and ten sixth grade math students enrolled in an urban middle school participated. Students were…

  20. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence

    PubMed Central

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-01-01

    Motivation: Medical Subject Headings (MeSH) are used by the National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates applications of biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching, and indexing rules. Other types of information, such as predictions by separately trained MeSH classifiers, can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple sources of evidence for MeSH annotation. Methods: We propose a novel framework, MeSHLabeler, that integrates multiple sources of evidence for accurate MeSH annotation by using ‘learning to rank’. The evidence includes predictions from MeSH classifiers, KNN, pattern matching, and MTI, as well as correlations between different MeSH terms. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve the prediction accuracy. Results: MeSHLabeler won first place in Task 2A of the 2014 BioASQ challenge, achieving a Micro F-measure of 0.6248 for 9,040 citations provided by the BioASQ challenge. Note that this accuracy is around 9.15% higher than the 0.5724 obtained by MTI. Availability and implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn PMID:26072501

  1. Improving accuracy for identifying related PubMed queries by an integrated approach.

    PubMed

    Lu, Zhiyong; Wilbur, W John

    2009-10-01

    PubMed is the most widely used tool for searching biomedical literature online. As with many other online search tools, a user often types a series of multiple related queries before retrieving satisfactory results to fulfill a single information need. Meanwhile, it is also a common phenomenon to see a user type queries on unrelated topics in a single session. In order to study PubMed users' search strategies, it is necessary to be able to automatically separate unrelated queries and group together related queries. Here, we report a novel approach combining both lexical and contextual analyses for segmenting PubMed query sessions and identifying related queries and compare its performance with the previous approach based solely on concept mapping. We experimented with our integrated approach on sample data consisting of 1539 pairs of consecutive user queries in 351 user sessions. The prediction results of 1396 pairs agreed with the gold-standard annotations, achieving an overall accuracy of 90.7%. This demonstrates that our approach is significantly better than the previously published method. By applying this approach to a one day query log of PubMed, we found that a significant proportion of information needs involved more than one PubMed query, and that most of the consecutive queries for the same information need are lexically related. Finally, the proposed PubMed distance is shown to be an accurate and meaningful measure for determining the contextual similarity between biological terms. The integrated approach can play a critical role in handling real-world PubMed query log data as is demonstrated in our experiments.

  2. School Improvement Plans and Student Achievement: Preliminary Evidence from the Quality and Merit Project in Italy

    ERIC Educational Resources Information Center

    Caputo, Andrea; Rastelli, Valentina

    2014-01-01

    This study provides preliminary evidence from an Italian in-service training program addressed to lower secondary school teachers which supports school improvement plans (SIPs). It aims at exploring the association between characteristics/contents of SIPs and student improvement in math achievement. Pre-post standardized tests and text analysis of…

  3. Training Theory of Mind and Executive Control: A Tool for Improving School Achievement?

    ERIC Educational Resources Information Center

    Kloo, Daniela; Perner, Josef

    2008-01-01

    In the preschool years, there are marked improvements in theory of mind (ToM) and executive functions. And, children's competence in these two core cognitive domains is associated with their academic achievement. Therefore, training ToM and executive control could be a valuable tool for improving children's success in school. This article reviews…

  4. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study

    PubMed Central

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-01-01

    Endoscopy has been widely used to diagnose gastrointestinal mucosal lesions. However, objective endoscopic criteria are still lacking. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. We therefore investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All lesions were observed by white light endoscopy (WLE), LCI, and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G), and blue (B). In the endoscopic images of the lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was highly consistent with the pathological diagnosis (P = 0.000) and that the color could predict certain kinds of lesions. The ROC curve demonstrated that at a cutoff of R/(G + B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit targeted biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images. PMID:27641243
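
    The composite marker is straightforward to compute from an RGB endoscopic image; a minimal sketch follows, using the 0.646 cutoff quoted above and the mean channel brightness over a region of interest as the per-image summary (the latter is an assumption).

    ```python
    # Minimal sketch of the R/(G + B) composite marker from an RGB image region.
    # Using the mean channel brightness is an assumption; the 0.646 cutoff is
    # the value quoted in the abstract.
    import numpy as np

    def r_over_gb(image_rgb):
        """image_rgb: H x W x 3 array of pixel brightness (R, G, B)."""
        r, g, b = (image_rgb[..., k].astype(float).mean() for k in range(3))
        return r / (g + b)

    rng = np.random.default_rng(0)
    roi = rng.integers(0, 256, size=(64, 64, 3))     # stand-in for an LCI image ROI
    score = r_over_gb(roi)
    print(score, "above cutoff" if score >= 0.646 else "below cutoff")
    ```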

  5. Free Form Deformation–Based Image Registration Improves Accuracy of Traction Force Microscopy

    PubMed Central

    Jorge-Peñas, Alvaro; Izquierdo-Alvarez, Alicia; Aguilar-Cuenca, Rocio; Vicente-Manzanares, Miguel; Garcia-Aznar, José Manuel; Van Oosterwyck, Hans; de-Juan-Pardo, Elena M.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate

    2015-01-01

    Traction Force Microscopy (TFM) is a widespread method used to recover cellular tractions from the deformation that they cause in their surrounding substrate. Particle Image Velocimetry (PIV) is commonly used to quantify the substrate’s deformations, due to its simplicity and efficiency. However, PIV relies on a block-matching scheme that easily underestimates the deformations. This is especially relevant in the case of large, locally non-uniform deformations as those usually found in the vicinity of a cell’s adhesions to the substrate. To overcome these limitations, we formulate the calculation of the deformation of the substrate in TFM as a non-rigid image registration process that warps the image of the unstressed material to match the image of the stressed one. In particular, we propose to use a B-spline -based Free Form Deformation (FFD) algorithm that uses a connected deformable mesh to model a wide range of flexible deformations caused by cellular tractions. Our FFD approach is validated in 3D fields using synthetic (simulated) data as well as with experimental data obtained using isolated endothelial cells lying on a deformable, polyacrylamide substrate. Our results show that FFD outperforms PIV providing a deformation field that allows a better recovery of the magnitude and orientation of tractions. Together, these results demonstrate the added value of the FFD algorithm for improving the accuracy of traction recovery. PMID:26641883

  6. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≈ 0.26 ± 0.02. We also discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞.
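
    The reweighting idea itself is compact: samples generated at one inverse temperature β0 can be reused to estimate averages at a nearby β by weighting each sample with exp[-(β - β0)E]. A single-histogram toy example is shown below; the multihistogram method of Ferrenberg and Swendsen combines several such runs, which this sketch does not attempt.

    ```python
    # Toy single-histogram reweighting: reuse energy samples drawn at beta0 to
    # estimate the mean energy at a nearby beta. The samples are synthetic; the
    # multihistogram (Ferrenberg-Swendsen) method combines several runs and is
    # not reproduced here.
    import numpy as np

    rng = np.random.default_rng(0)
    beta0 = 1.0
    energies = rng.normal(loc=-1.6, scale=0.3, size=50_000)   # stand-in samples at beta0

    def reweighted_mean_energy(E, beta0, beta):
        logw = -(beta - beta0) * E
        logw -= logw.max()                   # avoid overflow
        w = np.exp(logw)
        return np.sum(w * E) / np.sum(w)

    for beta in (0.95, 1.00, 1.05):
        print(beta, reweighted_mean_energy(energies, beta0, beta))
    ```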

  7. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.

    PubMed

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-09-19

    Endoscopy has been widely used to diagnose gastrointestinal mucosal lesions. However, objective endoscopic criteria are still lacking. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. We therefore investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All lesions were observed by white light endoscopy (WLE), LCI, and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for red (R), green (G), and blue (B). In the endoscopic images of the lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was highly consistent with the pathological diagnosis (P = 0.000) and that the color could predict certain kinds of lesions. The ROC curve demonstrated that at a cutoff of R/(G + B) = 0.646, the area under the curve was 0.646, and the sensitivity and specificity were 0.514 and 0.773, respectively. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit targeted biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images.

  8. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    NASA Astrophysics Data System (ADS)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements were carried out with a vision system based on a low-frequency camera, in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device was built and operated to evaluate the metrological performance of the vision system, which showed satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to match the reference acceleration information at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  9. Improving the local solution accuracy of large-scale digital image-based finite element analyses.

    PubMed

    Charras, G T; Guldberg, R E

    2000-02-01

    Digital image-based finite element modeling (DIBFEM) has become a widely utilized approach for efficiently meshing complex biological structures such as trabecular bone. While DIBFEM can provide accurate predictions of apparent mechanical properties, its application to simulate local phenomena such as tissue failure or adaptation has been limited by high local solution errors at digital model boundaries. Furthermore, refinement of digital meshes does not necessarily reduce local maximum errors. The purpose of this study was to evaluate the potential to reduce local mean and maximum solution errors in digital meshes using a post-processing filtration method. The effectiveness of a three-dimensional, boundary-specific filtering algorithm was found to be mesh size dependent. Mean absolute and maximum errors were reduced for meshes with more than five elements through the diameter of a cantilever beam considered representative of a single trabecula. Furthermore, mesh refinement consistently decreased errors for filtered solutions but not necessarily for non-filtered solutions. Models with more than five elements through the beam diameter yielded absolute mean errors of less than 15% for both Von Mises stress and maximum principal strain. When applied to a high-resolution model of trabecular bone microstructure, boundary filtering produced a more continuous solution distribution and reduced the predicted maximum stress by 30%. Boundary-specific filtering provides a simple means of improving local solution accuracy while retaining the model generation and numerical storage efficiency of the DIBFEM technique.

  10. Multimodal nonlinear optical microscopy improves the accuracy of early diagnosis of squamous intraepithelial neoplasia

    NASA Astrophysics Data System (ADS)

    Teh, Seng Khoon; Zheng, Wei; Li, Shuxia; Li, Dong; Zeng, Yan; Yang, Yanqi; Qu, Jianan Y.

    2013-03-01

    We explore the diagnostic utility of multicolor-excitation multimodal nonlinear optical (NLO) microscopy for noninvasive detection of squamous epithelial precancer in vivo. The 7,12-dimethylbenz(a)anthracene-treated hamster cheek pouch was used as an animal model of carcinogenesis. The NLO microscope system employed was able to collect multiple tissue endogenous NLO signals, such as two-photon excited fluorescence of keratin, nicotinamide adenine dinucleotide, collagen, and tryptophan, and second harmonic generation of collagen, in spectral and time domains simultaneously. A total of 34 (11 control and 23 treated) Golden Syrian hamsters with 62 in vivo spatially distinct measurement sites were assessed in this study. High-resolution label-free NLO images were acquired from the stratum corneum, the stratum granulosum-stratum basale, and the stroma for all tissue measurement sites. A total of nine and eight features from the 745 and 600 nm excitation wavelengths, respectively, involving tissue structural and intrinsic biochemical properties, were found to contain significant diagnostic information for precancer detection (p < 0.05). In particular, the 600 nm-excited tryptophan fluorescence signal emanating from the stratum corneum was found to provide remarkable diagnostic utility. Multivariate statistical techniques confirmed that integrating the diagnostically significant features from the multiple excitation wavelengths yielded improved diagnostic accuracy as compared to using either wavelength alone.

  11. A novel method for crosstalk analysis of biological networks: improving accuracy of pathway annotation

    PubMed Central

    Ogris, Christoph; Guala, Dimitri; Helleday, Thomas; Sonnhammer, Erik L. L.

    2017-01-01

    Analyzing gene expression patterns is a mainstay of gaining functional insight into biological systems. A plethora of tools exist to identify significant enrichment of pathways for a set of differentially expressed genes. Most tools analyze gene overlap between gene sets and are therefore severely hampered by the current state of pathway annotation, while at the same time running a high risk of false assignments. A way to improve both the true positive rate and the false positive rate (FPR) is to use a functional association network and instead look for enrichment of network connections between gene sets. We present a new network crosstalk analysis method, BinoX, that determines the statistical significance of network link enrichment or depletion between gene sets using the binomial distribution. This is a much more appropriate statistical model than previous methods have employed, and as a result BinoX yields substantially better true positive rates and FPRs than were possible before. A number of benchmarks were performed to assess the accuracy of BinoX and competing methods. We demonstrate examples of how BinoX finds many biologically meaningful pathway annotations for gene sets from cancer and other diseases that are not found by other methods. BinoX is available at http://sonnhammer.org/BinoX. PMID:27664219
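
    The core statistical step can be pictured as follows: count the links observed between two gene sets, estimate the probability of a link under a random-network null, and evaluate the binomial tail. The sketch below uses scipy's binomtest with entirely synthetic numbers and a null model simpler than the one BinoX actually employs.

    ```python
    # Sketch of binomial network-crosstalk testing: compare the observed number
    # of links between two gene sets with the number expected if links occurred
    # at the network's overall link density. Numbers are synthetic and the null
    # model is a simplification of BinoX's.
    from scipy.stats import binomtest

    n_genes_a, n_genes_b = 120, 85
    possible_pairs = n_genes_a * n_genes_b          # potential cross-set links
    link_density = 0.004                            # fraction of all gene pairs linked
    observed_links = 73

    result = binomtest(observed_links, possible_pairs, link_density,
                       alternative="greater")       # test for enrichment
    print(f"expected ~{possible_pairs * link_density:.1f} links, "
          f"observed {observed_links}, p = {result.pvalue:.2e}")
    ```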

  12. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of particle diameter measurements from shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting from a collimated laser beam and from diffuse white light separately. The diameter and intensity of each dot are measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of a dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and the apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane over which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
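
    The edge-fitting step can be illustrated with a small sketch: a radial intensity profile across a dot's edge is fitted with a smooth step function, and the fitted half-intensity location gives the apparent radius. The profile is synthetic, and a tanh step is used here as a stand-in for the paper's inverse-hyperbolic-tangent form.

    ```python
    # Sketch of edge-profile fitting for the apparent dot radius: fit a smooth
    # step to a radial intensity profile and read off the half-intensity radius.
    # Synthetic profile; a tanh step stands in for the paper's functional form.
    import numpy as np
    from scipy.optimize import curve_fit

    def step(r, r_edge, width, i_in, i_out):
        return i_in + (i_out - i_in) * 0.5 * (1.0 + np.tanh((r - r_edge) / width))

    r = np.linspace(0.0, 50.0, 200)                           # radius, pixels
    rng = np.random.default_rng(0)
    profile = step(r, 20.0, 2.5, 10.0, 200.0) + rng.normal(0.0, 3.0, r.size)

    popt, _ = curve_fit(step, r, profile, p0=(15.0, 1.0, 0.0, 255.0))
    print(f"apparent radius = {popt[0]:.2f} px, edge width = {popt[1]:.2f} px")
    ```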

  13. Utilisation of multi-frequency VEMPs improves diagnostic accuracy for Meniere's disease.

    PubMed

    Maxwell, Rebecca; Jerin, Claudia; Gürkov, Robert

    2017-01-01

    To determine whether vestibular evoked myogenic potential (VEMP) measurements that combine the VEMP 500/1000 Hz frequency tuning ratio and the inter-aural asymmetry ratio can reliably detect unilateral Meniere's disease ears as compared to healthy controls. Forty-two consecutive patients with certain unilateral Meniere's disease (as confirmed using a locally enhanced inner ear MRI (LEIM)) were assessed. Cervical vestibular evoked myogenic potentials (cVEMP) and ocular vestibular evoked myogenic potentials (oVEMP) were recorded at 500 and 1000 Hz. The VEMP amplitudes, asymmetry ratios, and the 500/1000 Hz amplitude ratios were compared with those of 21 age-matched healthy controls. A multi-frequency VEMPs score that combined: (1) the cVEMP 500/1000 Hz amplitude ratio, (2) the oVEMP 500/1000 Hz amplitude ratio, (3) the 500 Hz cVEMP asymmetry ratio, (4) the 1000 Hz cVEMP asymmetry ratio, produced a ROC curve with an area under the curve (AUC) of 0.814. The inclusion of audiology data further improved the result to 0.906. This score can be used to discriminate with a good degree of clinical accuracy between Meniere's ears (unilateral) and those of healthy controls. Multi-frequency VEMP analysis offers a simple, cost-effective solution to the diagnostic difficulties presented by Meniere's disease.

  14. Improving the Accuracy of Urban Environmental Quality Assessment Using Geographically-Weighted Regression Techniques

    PubMed Central

    Faisal, Kamil; Shaker, Ahmed

    2017-01-01

    Urban Environmental Quality (UEQ) can be treated as a generic indicator that objectively represents the physical and socio-economic condition of the urban and built environment. The value of UEQ reflects the population's sense of satisfaction, assessed through different environmental, urban, and socio-economic parameters. This paper elucidates the use of Geographic Information System (GIS), Principal Component Analysis (PCA), and Geographically-Weighted Regression (GWR) techniques to integrate various parameters and estimate the UEQ of two major cities in Ontario, Canada. Remote sensing, GIS, and census data were first obtained to derive various environmental, urban, and socio-economic parameters, and the aforementioned techniques were then used to integrate them. Three key indicators, including family income, higher education level, and land value, were used as a reference to validate the outcomes derived from the integration techniques. The results were evaluated by assessing the relationship between the extracted UEQ results and the reference layers. Initial findings showed that GWR with the spatial lag model improves precision and accuracy by up to 20% with respect to results derived using GIS overlay and PCA techniques for the City of Toronto and the City of Ottawa. The findings of the research can help authorities and decision makers understand the empirical relationships among environmental factors, urban morphology, and real estate, and make decisions that promote environmental justice. PMID:28272334

  15. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    SciTech Connect

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-11-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

  16. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    PubMed Central

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-01-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel. PMID:25370644

  17. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    PubMed

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving the gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This reveals that both models can improve the angular rate accuracy and perform similarly under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model reduced the dynamic errors by about 20% more than the random walk model.
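
    As an illustration of the approach just described, the following is a minimal sketch, under stated assumptions, of fusing a gyroscope array with a scalar-state Kalman filter in which the common rate is modeled as a first-order (Gauss-)Markov process. The parameter values, the identical white-noise assumption for every gyro, and the constant-rate test signal are all illustrative, not taken from the paper.

        import numpy as np

        # Sketch: N gyroscopes observe one true angular rate, which is modeled
        # as a first-order Markov process. All numerical values are assumptions.
        N = 6            # gyroscopes in the array
        dt = 0.01        # sample period [s]
        tau = 100.0      # correlation time of the Markov rate model [s]
        q = 0.05 ** 2    # process noise variance of the rate model
        r = 0.5 ** 2     # measurement noise variance of each gyro

        phi = np.exp(-dt / tau)      # state transition of the Markov rate
        H = np.ones((N, 1))          # every gyro measures the same rate
        R = r * np.eye(N)

        def fuse(measurements):
            """Scalar-state Kalman filter over a (T, N) array of gyro outputs."""
            x, P = 0.0, 1.0
            estimates = []
            for z in measurements:
                x, P = phi * x, phi * P * phi + q                 # time update
                S = H @ H.T * P + R                               # innovation covariance
                K = (P * H.T) @ np.linalg.inv(S)                  # 1 x N Kalman gain
                x = x + (K @ (z.reshape(-1, 1) - H * x)).item()   # measurement update
                P = ((1.0 - K @ H) * P).item()
                estimates.append(x)
            return np.array(estimates)

        rng = np.random.default_rng(0)
        z = 1.0 + np.sqrt(r) * rng.standard_normal((1000, N))     # constant true rate of 1.0
        print(fuse(z)[-200:].std())   # fused noise is well below that of any single gyro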

  18. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Shen, Liangbo; Todorich, Bozho; Shieh, Christine; Kuo, Anthony; Toth, Cynthia; Izatt, Joseph A.

    2016-03-01

    Ophthalmic surgeons manipulate micron-scale tissues using stereopsis through an operating microscope and instrument shadowing for depth perception. While ophthalmic microsurgery has benefitted from rapid advances in instrumentation and techniques, the basic principles of the stereo operating microscope have not changed since the 1930s. Optical Coherence Tomography (OCT) has revolutionized ophthalmic imaging and is now the gold standard for preoperative and postoperative evaluation of most retinal and many corneal procedures. We and others have developed initial microscope-integrated OCT (MIOCT) systems for concurrent OCT and operating microscope imaging, but these are limited to 2D real-time imaging and require offline post-processing for 3D rendering and visualization. Our previously presented 4D MIOCT system can record and display the 3D surgical field stereoscopically through the microscope oculars using a dual-channel heads-up display (HUD) at up to 10 micron-scale volumes per second. In this work, we show that 4D MIOCT guidance improves the accuracy of depth-based microsurgical maneuvers (with statistical significance) in mock surgery trials in a wet lab environment. Additionally, 4D MIOCT was successfully performed in 38/45 (84%) posterior and 14/14 (100%) anterior eye human surgeries, and revealed previously unrecognized lesions that were invisible through the operating microscope. These lesions, such as residual and potentially damaging retinal deformation during pathologic membrane peeling, were visualized in real-time by the surgeon. Our integrated system provides an enhanced 4D surgical visualization platform that can improve current ophthalmic surgical practice and may help develop and refine future microsurgical techniques.

  19. Improving the accuracy of vehicle emissions profiles for urban transportation greenhouse gas and air pollution inventories.

    PubMed

    Reyna, Janet L; Chester, Mikhail V; Ahn, Soyoung; Fraser, Andrew M

    2015-01-06

    Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions, to develop more accurate information for environmental goals. With emerging access to high quality data, new methods are needed for informing transportation emissions assessment practitioners of the relevant vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. By creating upper and lower bounds for each factor, the potential variability which could exist in transportation emissions assessments is estimated. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found to exist. GHGs from light-duty vehicles could vary by -2% to 11% and CAP by -47% to 228% when compared to the baseline. For heavy-duty vehicles, the variability is -21% to 55% and -32% to 174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling, and the integration of these data is likely to impact GHG and CAP inventories and how aggressively policies should be implemented to meet reductions. A web-tool is developed to aid cities in improving emissions uncertainty.

  20. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and Mission requirement of 13-cm to an expected accuracy of about 1.5-cm with today's latest orbits will be discussed. The latest orbits with 1.5 cm RMS radial accuracy represent a significant improvement to the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  1. How to achieve and prove performance improvement - 15 years of experience in German wastewater benchmarking.

    PubMed

    Bertzbach, F; Franz, T; Möller, K

    2012-01-01

    This paper shows the results of performance improvement, which have been achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A huge number of changes in operational practice and also in achieved annual savings can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which is still a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on one hand, on a well conducted benchmarking programme and, on the other, on the individual situation within each participating utility.

  2. Improving the accuracy of the discrete gradient method in the one-dimensional case.

    PubMed

    Cieśliński, Jan L; Ratkiewicz, Bogusław

    2010-01-01

    We present two numerical schemes of high accuracy for one-dimensional dynamical systems. They are modifications of the discrete gradient method and keep its advantages, including stability and conservation of the energy integral. However, their accuracy is higher by several orders of magnitude.
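
    For context, the sketch below shows a basic discrete gradient step for a one-dimensional mechanical system with H(q, p) = p^2/2 + V(q); it is a hedged illustration of the class of scheme being modified, not the authors' higher-accuracy variants. The pendulum potential, step size, and fixed-point solve are illustrative choices; the scheme conserves the energy integral up to the tolerance of the implicit solve.

        import numpy as np

        def V(q):          # illustrative pendulum potential
            return -np.cos(q)

        def dV(q):
            return np.sin(q)

        def discrete_grad_V(q_new, q_old):
            """Mean-value discrete gradient of V: (V(q_new) - V(q_old)) / (q_new - q_old)."""
            dq = q_new - q_old
            return dV(q_old) if abs(dq) < 1e-12 else (V(q_new) - V(q_old)) / dq

        def step(q, p, dt, iters=50):
            """One implicit discrete-gradient step, solved by fixed-point iteration."""
            q_new, p_new = q, p
            for _ in range(iters):
                p_new = p - dt * discrete_grad_V(q_new, q)
                q_new = q + dt * 0.5 * (p_new + p)
            return q_new, p_new

        q, p, dt = 1.0, 0.0, 0.1
        H0 = 0.5 * p ** 2 + V(q)
        for _ in range(1000):
            q, p = step(q, p, dt)
        print(abs(0.5 * p ** 2 + V(q) - H0))   # energy error stays at solver tolerance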

  3. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    ERIC Educational Resources Information Center

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skill development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  4. Sampling multiple scoring functions can improve protein loop structure prediction accuracy.

    PubMed

    Li, Yaohang; Rata, Ionel; Jakobsson, Eric

    2011-07-25

    Accurately predicting loop structures is important for understanding the functions of many proteins. In order to obtain loop models with high accuracy, efficiently sampling the loop conformation space to discover reasonable structures is a critical step. In loop conformation sampling, coarse-grain energy (scoring) functions coupled with reduced protein representations are often used to reduce the number of degrees of freedom as well as sampling computational time. However, because reduced representations implicitly account for many factors, coarse-grain scoring functions may be insensitive or inaccurate, which can mislead the sampling process and consequently miss important loop conformations. In this paper, we present a new computational sampling approach to obtain reasonable loop backbone models, the so-called Pareto optimal sampling (POS) method. The rationale of the POS method is to sample the function space of multiple, carefully selected scoring functions to discover an ensemble of diversified structures yielding Pareto optimality over all sampled conformations. The POS method can efficiently tolerate insensitivity and inaccuracy in individual scoring functions and thereby lead to significant accuracy improvement in loop structure prediction. We apply the POS method to a set of 4-12-residue loop targets using a function space composed of backbone-only Rosetta, distance-scale finite ideal-gas reference (DFIRE), and a triplet backbone dihedral potential developed in our lab. Our computational results show that in 501 out of 502 targets, the model sets generated by POS contain structure models within subangstrom resolution. Moreover, the top-ranked models have a root mean square deviation (rmsd) of less than 1 Å in 96.8, 84.1, and 72.2% of the short (4-6 residues), medium (7-9 residues), and long (10-12 residues) targets, respectively, when the all-atom models are generated by local optimization from the backbone models and are ranked by our
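
    To make the Pareto-optimality step concrete, here is a hedged sketch of selecting non-dominated conformations across several scoring functions; random numbers stand in for the actual Rosetta, DFIRE, and dihedral-potential scores, and the conformational sampling itself is not shown.

        import numpy as np

        def pareto_front(scores):
            """scores: (n_conformations, n_scoring_functions), lower is better.
            Returns a boolean mask marking non-dominated conformations."""
            n = scores.shape[0]
            keep = np.ones(n, dtype=bool)
            for i in range(n):
                # j dominates i if j is no worse in every score and strictly better in one
                dominated_by = (np.all(scores <= scores[i], axis=1) &
                                np.any(scores < scores[i], axis=1))
                if dominated_by.any():
                    keep[i] = False
            return keep

        rng = np.random.default_rng(1)
        scores = rng.random((500, 3))        # 500 sampled loops scored by 3 functions
        front = pareto_front(scores)
        print(front.sum(), "conformations retained on the Pareto front")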

  5. Cost-effective improvements of a rotating platform by integration of a high-accuracy inclinometer and encoders for attitude evaluation

    NASA Astrophysics Data System (ADS)

    Wen, Chenyang; He, Shengyang; Bu, Changgen; Hu, Peida

    2017-01-01

    Attitude heading reference systems (AHRSs) based on micro-electromechanical system (MEMS) inertial sensors are widely used because of their low cost, light weight, and low power. However, low-cost AHRSs suffer from large inertial sensor errors. Therefore, experimental performance evaluation of MEMS-based AHRSs after system implementation is necessary. High-accuracy turntables can be used to verify the performance of MEMS-based AHRSs indoors, but they are expensive and unsuitable for outdoor tests. This study developed a low-cost two-axis rotating platform for indoor and outdoor attitude determination. A high-accuracy inclinometer and encoders were integrated into the platform to improve the achievable attitude test accuracy. An attitude error compensation method was proposed to calibrate the initial attitude errors caused by the movements and misalignment angles of the platform. The proposed attitude error determination method was examined through rotating experiments, which showed that the standard deviations of the pitch and roll errors were 0.050° and 0.090°, respectively. The pitch and roll errors both decreased to 0.024° when the proposed attitude error determination method was used. This decrease validates the effectiveness of the compensation method. Experimental results demonstrated that the integration of the inclinometer and encoders improved the performance of the low-cost, two-axis, rotating platform in terms of attitude accuracy.

  6. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
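
    The selection and blending logic can be sketched as below; the submodel ranges follow the SiO2 example in the abstract, while the prediction values and the linear weighting across overlaps are illustrative assumptions rather than the ChemCam implementation.

        def blend_submodels(full_pred, low_pred, mid_pred, high_pred,
                            low=(0, 50), mid=(30, 70), high=(60, 100)):
            """Pick or linearly blend submodel predictions based on the full-model value (wt.%)."""
            x = full_pred
            if x < mid[0]:                  # clearly in the "low" range
                return low_pred
            if x < low[1]:                  # low/mid overlap: weight by position in the overlap
                w = (x - mid[0]) / (low[1] - mid[0])
                return (1 - w) * low_pred + w * mid_pred
            if x < high[0]:                 # clearly in the "mid" range
                return mid_pred
            if x < mid[1]:                  # mid/high overlap
                w = (x - high[0]) / (mid[1] - high[0])
                return (1 - w) * mid_pred + w * high_pred
            return high_pred                # clearly in the "high" range

        # Example: a full-model SiO2 prediction of 42 wt.% lies in the 30-50 wt.% overlap,
        # so the low and mid submodel predictions are blended.
        print(blend_submodels(full_pred=42.0, low_pred=40.5, mid_pred=43.2, high_pred=55.0))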

  7. Improvements in dose accuracy delivered with static-MLC IMRT on an integrated linear accelerator control system

    SciTech Connect

    Li Ji; Wiersma, Rodney D.; Stepaniak, Christopher J.; Farrey, Karl J.; Al-Hallaq, Hania A.

    2012-05-15

    Purpose: Dose accuracy has been shown to vary with dose per segment and dose rate when delivered with static multileaf collimator (SMLC) intensity modulated radiation therapy (IMRT) by Varian C-series MLC controllers. The authors investigated the impact of monitor units (MUs) per segment and dose rate on the dose delivery accuracy of SMLC-IMRT fields on a Varian TrueBeam linear accelerator (LINAC), which delivers dose and manages motion of all components using a single integrated controller. Methods: An SMLC sequence was created consisting of ten identical 10 × 10 cm² segments with identical MUs. Beam holding between segments was achieved by moving one out-of-field MLC leaf pair. Measurements were repeated for various combinations of MU/segment ranging from 1 to 40 and dose rates of 100-600 MU/min for a 6 MV photon beam (6X) and dose rates of 800-2400 MU/min for a 10 MV flattening-filter-free photon (10XFFF) beam. All measurements were made with a Farmer (0.6 cm³) ionization chamber placed at the isocenter in a solid-water phantom at 10 cm depth. The measurements were performed on two Varian LINACs: C-series Trilogy and TrueBeam. Each sequence was delivered three times and the dose readings for the corresponding segments were averaged. The effects of MU/segment, dose rate, and LINAC type on the relative dose variation (Δi) were compared using F-tests (α = 0.05). Results: On the Trilogy, large Δi was observed in small MU segments: at 1 MU/segment, the maximum Δi was 10.1% and 57.9% at 100 MU/min and 600 MU/min, respectively. Also, the first segment of each sequence consistently overshot (Δi > 0), while the last segment consistently undershot (Δi < 0). On the TrueBeam, at 1 MU/segment, Δi ranged from 3.0% to 4.5% at 100 and 600 MU/min; no obvious overshoot/undershoot trend was observed. F-tests showed a statistically significant difference [(1 − β) = 1.0000] between the
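
    For readers unfamiliar with the Δi notation, the sketch below computes a per-segment relative dose variation from chamber readings of nominally identical segments. Normalizing to the mean of the sequence is an assumption made here for illustration (the excerpt does not give the paper's exact definition), and the readings are made up.

        import numpy as np

        def relative_dose_variation(readings):
            """Percent deviation of each segment's reading from the sequence mean (assumed definition)."""
            readings = np.asarray(readings, dtype=float)
            return (readings - readings.mean()) / readings.mean() * 100.0

        # Made-up chamber readings for ten nominally identical segments:
        segment_readings = [1.052, 0.998, 1.001, 0.999, 1.000, 1.000, 1.001, 0.999, 1.000, 0.950]
        print(relative_dose_variation(segment_readings).round(1))
        # In this fabricated example the first segment overshoots and the last undershoots.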

  8. Compensation of Environment and Motion Error for Accuracy Improvement of Ultra-Precision Lathe

    NASA Astrophysics Data System (ADS)

    Kwac, Lee-Ku; Kim, Jae-Yeol; Kim, Hong-Gun

    The technological manipulation of a piezo-electric actuator can compensate for machining errors during the machining process, leading to an improvement in overall precision. This approach is a convenient way for nations without solid expertise in ultra-precision machining technology to advance their precision capabilities. Two research tasks were conducted to develop the UPCU for precision enhancement of the current lathe and compensation for environmental errors, as described below. The first task was designed to measure and correct, in real time, deviations in a variety of areas, achieving a compensation system based on an optical fiber laser encoder more effective than the encoder resolution currently used in the existing lathe. The deviations targeted for real-time correction comprised the surrounding air temperature, the thermal deviations of the machined materials, the thermal deviations in the spindles, and the overall thermal deviation arising from the machine structure. The second task was to develop the UPCU and to improve the machining precision through ultra-precision positioning and real-time operative error compensation. The ultimate goal was to improve the machining precision of the existing lathe by completing the two research tasks described above.

  9. Improved EPMA Trace Element Accuracy Using a Matrix Iterated Quantitative Blank Correction

    NASA Astrophysics Data System (ADS)

    Donovan, J. J.; Wark, D. A.; Jercinovic, M. J.

    2007-12-01

    At trace element levels below several hundred PPM, accuracy is more often the limiting factor for EPMA quantification than precision. Modern EPMA instruments equipped with low-noise detectors, counting electronics and large-area analyzing crystals can now routinely achieve sensitivities for most elements at the 10 to 100 PPM level (or even lower). But due to various sample and instrumental artifacts in the x-ray continuum, absolute accuracy is often the limiting factor for ultra trace element quantification. These artifacts have various mechanisms, but are usually attributed to sample artifacts (e.g., sample matrix absorption edges) [1], detector artifacts (e.g., Ar or Xe absorption edges) [2] and analyzing crystal artifacts (extended peak tails preventing accurate determination of the true background, and "negative peaks" or "holes" in the x-ray continuum). The latter were first described [3] by Self et al. and recently documented for the Ti Kα in quartz geothermometer [4]. The general magnitude of these artifacts can be seen in analyses of Ti Kα in a synthetic quartz standard (five spectrometers; values in wt.%): averages of -0.00146, -0.00031, -0.00180, 0.00013 and 0.00240 for Ti Kα, 46.7430 for Si and 53.2563 for O, with a total of 99.9983; the corresponding standard deviations are 0.00069, 0.00075, 0.00036, 0.00190, 0.00117, 0.00000, 0.00168 and 0.00419. The values for each spectrometer/crystal vary systematically from -18 PPM to +24 PPM. The exact mechanism for these continuum "holes" is not known but may be related to secondary lattice diffraction occurring at certain Bragg angles depending on crystal mounting orientation for non-isometric analyzing crystals [5]. These x-ray continuum artifacts can produce systematic errors at levels up to 100 PPM or more depending on the particular analytical situation. In order to correct for these inaccuracies, a "blank" correction has been developed that applies a quantitative correction to the measured x-ray intensities during the matrix iteration, by calculating the intensity

  10. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  11. Improvement of brain segmentation accuracy by optimizing non-uniformity correction using N3.

    PubMed

    Zheng, Weili; Chee, Michael W L; Zagorodnov, Vitali

    2009-10-15

    Smoothly varying and multiplicative intensity variations within MR images that are artifactual, can reduce the accuracy of automated brain segmentation. Fortunately, these can be corrected. Among existing correction approaches, the nonparametric non-uniformity intensity normalization method N3 (Sled, J.G., Zijdenbos, A.P., Evans, A.C., 1998. Nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans. Med. Imag. 17, 87-97.) is one of the most frequently used. However, at least one recent study (Boyes, R.G., Gunter, J.L., Frost, C., Janke, A.L., Yeatman, T., Hill, D.L.G., Bernstein, M.A., Thompson, P.M., Weiner, M.W., Schuff, N., Alexander, G.E., Killiany, R.J., DeCarli, C., Jack, C.R., Fox, N.C., 2008. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils. NeuroImage 39, 1752-1762.) suggests that its performance on 3 T scanners with multichannel phased-array receiver coils can be improved by optimizing a parameter that controls the smoothness of the estimated bias field. The present study not only confirms this finding, but additionally demonstrates the benefit of reducing the relevant parameter values to 30-50 mm (default value is 200 mm), on white matter surface estimation as well as the measurement of cortical and subcortical structures using FreeSurfer (Martinos Imaging Centre, Boston, MA). This finding can help enhance precision in studies where estimation of cerebral cortex thickness is critical for making inferences.

  12. A new way in intelligent recognition improves control accuracy and efficiency for spacecrafts' rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Wang, JiaQing; Lu, Yaodong; Wang, JiaFa

    2013-08-01

    Spacecraft rendezvous and docking (RVD) under human or autonomous control is a complicated and difficult problem, especially in the final approach stage, and present control methods have key technological weaknesses. A necessary, important and difficult step in RVD is aiming the chaser spacecraft at the target spacecraft along a coaxial line using a three-dimensional bulge cross target. At present, there is no technique for quantifying this alignment by image recognition. We present a new practical autonomous method to improve the accuracy and efficiency of RVD control by adding an image recognition algorithm in place of human aiming and control. The target spacecraft carries a bulge cross target, designed for accurate aiming by the chaser spacecraft, which has two center points: a plate surface center point (PSCP) and a bulge cross center point (BCCP); the chaser spacecraft has a monitoring ruler cross center point (RCCP) in its video telescope optical system for aiming. If the three center points coincide in the monitoring image, the two spacecraft remain aligned, which is suitable for closing to docking. The chaser spacecraft's video telescope optical system acquires a real-time monitoring image of the target spacecraft's bulge cross target. Image processing and intelligent recognition algorithms remove interference sources and compute, in real time, the coordinates of the three center points and the exact digital offset of the two spacecraft's relative position and attitude, which is used to control the chaser spacecraft's pneumatic driving system to adjust the spacecraft's position and attitude precisely in six directions: up-down, front-back, left-right, pitch, drift and roll. This approach is also practical and economical because it requires no additional hardware; only real-time image recognition software is added to the spacecraft's existing video system. It is suitable for both autonomous control and human control.

  13. Improving accuracy in the MPM method using a null space filter

    NASA Astrophysics Data System (ADS)

    Gritton, Chris; Berzins, Martin

    2017-01-01

    The material point method (MPM) has been very successful in providing solutions to many challenging problems involving large deformations. Nevertheless, there are some important issues that remain to be resolved with regard to its analysis. One key challenge applies to both MPM and particle-in-cell (PIC) methods and arises from the difference between the number of particles and the number of nodal grid points to which the particles are mapped. This difference between the number of particles and the number of grid points gives rise to a non-trivial null space of the linear operator that maps particle values onto nodal grid point values. In other words, there are non-zero particle values that when mapped to the grid point nodes result in a zero value there. Moreover, when the nodal values at the grid points are mapped back to particles, part of those particle values may be in that same null space. Given positive mapping weights from particles to nodes, such null space values are oscillatory in nature. Although this problem has been observed almost since the beginning of PIC methods, elements of it remain problematic today, as do methods that transcend it. The null space may be viewed as being connected to the ringing instability identified by Brackbill for PIC methods. It will be shown that it is possible to remove these null space values from the solution using a null space filter. This filter improves the accuracy of the MPM method using an approach that is based upon a local singular value decomposition (SVD) calculation. This local SVD approach is compared against the global SVD approach previously considered by the authors and to a recent MPM method by Zhang and colleagues.
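
    As a concrete illustration of the filtering idea, the hedged sketch below removes the null-space component of a particle field using an SVD of the particle-to-grid mapping. A small random mapping stands in for the MPM shape-function weights, and the SVD is applied to one global matrix for brevity, whereas the paper's filter works locally.

        import numpy as np

        rng = np.random.default_rng(2)
        n_nodes, n_particles = 4, 8
        S = np.abs(rng.random((n_nodes, n_particles)))   # grid values = S @ particle values
        S /= S.sum(axis=0)                               # positive, column-normalized weights

        p_values = rng.random(n_particles)               # particle quantity (e.g., a velocity component)

        # Right singular vectors with (near-)zero singular values span the null space of S.
        U, s, Vt = np.linalg.svd(S)
        rank = int(np.sum(s > 1e-10 * s[0]))
        row_space = Vt[:rank]                            # basis of the part the grid can "see"

        # Filter: keep only the component of the particle field visible to the grid.
        p_filtered = row_space.T @ (row_space @ p_values)

        print(np.allclose(S @ p_values, S @ p_filtered))   # grid values unchanged: True
        print(np.linalg.norm(p_values - p_filtered))        # size of the removed null-space part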

  14. Improved reliability, accuracy and quality in automated NMR structure calculation with ARIA.

    PubMed

    Mareuil, Fabien; Malliavin, Thérèse E; Nilges, Michael; Bardiaux, Benjamin

    2015-08-01

    In biological NMR, assignment of NOE cross-peaks and calculation of atomic conformations are critical steps in the determination of reliable high-resolution structures. ARIA is an automated approach that performs NOE assignment and structure calculation in a concomitant manner in an iterative procedure. The log-harmonic shape for distance restraint potential and the Bayesian weighting of distance restraints, recently introduced in ARIA, were shown to significantly improve the quality and the accuracy of determined structures. In this paper, we propose two modifications of the ARIA protocol: (1) the softening of the force field together with adapted hydrogen radii, which is meaningful in the context of the log-harmonic potential with Bayesian weighting, (2) a procedure that automatically adjusts the violation tolerance used in the selection of active restraints, based on the fitting of the structure to the input data sets. The new ARIA protocols were fine-tuned on a set of eight protein targets from the CASD-NMR initiative. As a result, the convergence problems previously observed for some targets were resolved and the obtained structures exhibited better quality. In addition, the new ARIA protocols were applied for the structure calculation of ten new CASD-NMR targets in a blind fashion, i.e. without knowing the actual solution. Even though optimisation of parameters and pre-filtering of unrefined NOE peak lists were necessary for half of the targets, ARIA consistently and reliably determined very precise and highly accurate structures for all cases. In the context of integrative structural biology, an increasing number of experimental methods are used that produce distance data for the determination of 3D structures of macromolecules, stressing the importance of methods that successfully make use of ambiguous and noisy distance data.

  15. Effects of simulated interventions to improve school entry academic skills on socioeconomic inequalities in educational achievement.

    PubMed

    Chittleborough, Catherine R; Mittinty, Murthy N; Lawlor, Debbie A; Lynch, John W

    2014-01-01

    Randomized controlled trial evidence shows that interventions before age 5 can improve skills necessary for educational success; the effect of these interventions on socioeconomic inequalities is unknown. Using trial effect estimates, and marginal structural models with data from the Avon Longitudinal Study of Parents and Children (n = 11,764, imputed), simulated effects of plausible interventions to improve school entry academic skills on socioeconomic inequality in educational achievement at age 16 were examined. Progressive universal interventions (i.e., more intense intervention for those with greater need) to improve school entry academic skills could raise population levels of educational achievement by 5% and reduce absolute socioeconomic inequality in poor educational achievement by 15%.

  16. Effects of Simulated Interventions to Improve School Entry Academic Skills on Socioeconomic Inequalities in Educational Achievement

    PubMed Central

    Chittleborough, Catherine R; Mittinty, Murthy N; Lawlor, Debbie A; Lynch, John W

    2014-01-01

    Randomized controlled trial evidence shows that interventions before age 5 can improve skills necessary for educational success; the effect of these interventions on socioeconomic inequalities is unknown. Using trial effect estimates, and marginal structural models with data from the Avon Longitudinal Study of Parents and Children (n = 11,764, imputed), simulated effects of plausible interventions to improve school entry academic skills on socioeconomic inequality in educational achievement at age 16 were examined. Progressive universal interventions (i.e., more intense intervention for those with greater need) to improve school entry academic skills could raise population levels of educational achievement by 5% and reduce absolute socioeconomic inequality in poor educational achievement by 15%. PMID:25327718

  17. The Consequences of "School Improvement": Examining the Association between Two Standardized Assessments Measuring School Improvement and Student Science Achievement

    ERIC Educational Resources Information Center

    Maltese, Adam V.; Hochbein, Craig D.

    2012-01-01

    For more than half a century concerns about the ability of American students to compete in a global workplace focused policymakers' attention on improving school performance generally, and student achievement in science, technology, engineering, and mathematics (STEM) specifically. In its most recent form--No Child Left Behind--there is evidence…

  18. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  19. Improvement in absolute calibration accuracy of Landsat-5 TM with Landsat-7 ETM+ data

    USGS Publications Warehouse

    Chander, G.; Markham, B.L.; Micijevic, E.; Teillet, P.M.; Helder, D.L.; ,

    2005-01-01

    The ability to detect and quantify changes in the Earth's environment depends on satellite sensors that can provide calibrated, consistent measurements of Earth's surface features through time. A critical step in this process is to put image data from subsequent generations of sensors onto a common radiometric scale. To evaluate Landsat-5 (L5) Thematic Mapper's (TM) utility in this role, image pairs from the L5 TM and Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors were compared. This approach involves comparison of surface observations based on image statistics from large common areas observed eight days apart by the two sensors. The results indicate a significant improvement in the consistency of L5 TM data with respect to L7 ETM+ data, achieved using a revised Look-Up-Table (LUT) procedure as opposed to the historical Internal Calibrator (IC) procedure previously used in the L5 TM product generation system. The average percent difference in reflectance estimates obtained from the L5 TM agree with those from the L7 ETM+ in the Visible and Near Infrared (VNIR) bands to within four percent and in the Short Wave Infrared (SWIR) bands to within six percent.

  20. Improved accuracy of acute graft-versus-host disease staging among multiple centers.

    PubMed

    Levine, John E; Hogan, William J; Harris, Andrew C; Litzow, Mark R; Efebera, Yvonne A; Devine, Steven M; Reshef, Ran; Ferrara, James L M

    2014-01-01

    The clinical staging of acute graft-versus-host disease (GVHD) varies significantly among bone marrow transplant (BMT) centers, but adherence to long-standing practices poses formidable barriers to standardization among centers. We have analyzed the sources of variability and developed a web-based remote data entry system that can be used by multiple centers simultaneously and that standardizes data collection in key areas. This user-friendly, intuitive interface resembles an online shopping site and eliminates error-prone entry of free text with drop-down menus and pop-up detailed guidance available at the point of data entry. Standardized documentation of symptoms and therapeutic response reduces errors in grade assignment and allows creation of confidence levels regarding the diagnosis. Early review and adjudication of borderline cases improves consistency of grading and further enhances consistency among centers. If this system achieves widespread use it may enhance the quality of data in multicenter trials to prevent and treat acute GVHD.

  1. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy

    PubMed Central

    Monaghan, Kieran A.

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes are a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g. mesh-size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an

  2. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    ERIC Educational Resources Information Center

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  3. Why Does Rereading Improve Metacomprehension Accuracy? Evaluating the Levels-of-Disruption Hypothesis for the Rereading Effect

    ERIC Educational Resources Information Center

    Dunlosky, John; Rawson, Katherine A.

    2005-01-01

    Rereading can improve the accuracy of people's predictions of future test performance for text material. This research investigated this rereading effect by evaluating 2 predictions from the levels-of-disruption hypothesis: (a) The rereading effect will occur when the criterion test measures comprehension of the text, and (b) the rereading effect…

  4. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of the adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.

  5. Improving accuracy of cell and chromophore concentration measurements using optical density

    PubMed Central

    2013-01-01

    Background UV–vis spectrophotometric optical density (OD) is the most commonly-used technique for estimating chromophore formation and cell concentration in liquid culture. OD wavelength is often chosen with little thought given to its effect on the quality of the measurement. Analysis of the contributions of absorption and scattering to the measured optical density provides a basis for understanding variability among spectrophotometers and enables a quantitative evaluation of the applicability of the Beer-Lambert law. This provides a rational approach for improving the accuracy of OD measurements used as a proxy for direct dry weight (DW), cell count, and pigment levels. Results For pigmented organisms, the choice of OD wavelength presents a tradeoff between the robustness and the sensitivity of the measurement. The OD at a robust wavelength is primarily the result of light scattering and does not vary with culture conditions; whereas, the OD at a sensitive wavelength is additionally dependent on light absorption by the organism’s pigments. Suitably robust and sensitive wavelengths are identified for a wide range of organisms by comparing their spectra to the true absorption spectra of dyes. The relative scattering contribution can be reduced either by measurement at higher OD, or by the addition of bovine serum albumin. Reduction of scattering or correlation with off-peak light attenuation provides for more accurate assessment of chromophore levels within cells. Conversion factors between DW, OD, and colony-forming unit density are tabulated for 17 diverse organisms to illustrate the scope of variability of these correlations. Finally, an inexpensive short pathlength LED-based flow cell is demonstrated for the online monitoring of growth in a bioreactor at culture concentrations greater than 5 grams dry weight per liter which would otherwise require off-line dilutions to obtain non-saturated OD measurements. Conclusions OD is most accurate as a time
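
    A hedged sketch of the robust/sensitive wavelength idea follows. The wavelengths (750 nm treated as scattering-dominated, 680 nm as pigment-sensitive), the dry-weight conversion factor, and the pigment coefficient are generic illustrative assumptions, not values tabulated in the paper.

        def dry_weight_g_per_L(od_robust, factor_gdw_per_od=0.4):
            """Biomass from a scattering-dominated (robust) OD, assuming linearity after dilution."""
            return factor_gdw_per_od * od_robust

        def pigment_proxy(od_sensitive, od_robust, eps_per_cm=0.02):
            """Crude pigment estimate from the absorption-driven excess OD at the sensitive wavelength."""
            excess = od_sensitive - od_robust      # subtract the scattering baseline
            return excess / eps_per_cm             # Beer-Lambert-style conversion, 1 cm path

        od_750, od_680 = 1.2, 1.9                  # example readings taken within the linear range
        print(dry_weight_g_per_L(od_750))          # ~0.48 g DW / L under the assumed factor
        print(pigment_proxy(od_680, od_750))       # arbitrary pigment units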

  6. Instructional Leadership Influence on Collective Teacher Efficacy to Improve School Achievement

    ERIC Educational Resources Information Center

    Fancera, Samuel F.; Bliss, James R.

    2011-01-01

    The purpose of this study was to examine whether instructional leadership functions, as defined in Hallinger's Principal Instructional Management Rating Scale, positively influence collective teacher efficacy to improve school achievement. Teachers from sample schools provided data for measures of collective teacher efficacy and instructional…

  7. Investing in Educator Data Literacy Improves Student Achievement. Evidence of Impact: The Oregon Data Project

    ERIC Educational Resources Information Center

    Data Quality Campaign, 2012

    2012-01-01

    Since 2007 the Oregon DATA Project has been investing resources to provide educators on-the-job training around effective data use to improve student achievement. New evidence shows that their efforts are paying off. A 2011 Oregon DATA Project report detailed the impact of their investment in the state's educators, finding the following: (1)…

  8. Improving Achievement in Low-Performing Schools: Key Results for School Leaders

    ERIC Educational Resources Information Center

    Ward, Randolph E.; Burke, Mary Ann

    2004-01-01

    As accountability in schools becomes more crucial, educators are looking for comprehensive and innovative management practices that respond to challenges and realities of student academic achievement. In order to improve academic performance and the quality of instruction, the entire school community needs to be involved. This book provides six…

  9. Expansion of Out-of-School Programs Aims at Improving Student Achievement. Report.

    ERIC Educational Resources Information Center

    Perry, Mary; Teague, Jackie; Frey, Susan

    There is a growing conviction that out-of-school programs can play an important role in improving student achievement. Both government and private sources are investing in them. This report focuses on the expanding prevalence of after-school programs in California, and profiles their nature and the demands that they face. Funding has been…

  10. Promoting Student Achievement through Improved Health Policy. Policy Update. Vol. 22, No. 11

    ERIC Educational Resources Information Center

    Fobbs, Erima

    2015-01-01

    "Promoting Student Achievement through Improved Health Policy" is a quick primer of the [Centers for Disease Control and Prevention] CDC's "Whole School, Whole Community, Whole Child" model, which highlights 10 important areas for connecting health and learning: health education; physical education and physical activity;…

  11. Improving High School Students' Mathematics Achievement through the Use of Motivational Strategies.

    ERIC Educational Resources Information Center

    Portal, Jamie; Sampson, Lisa

    This report describes a program for motivating students in mathematics in order to improve achievement at the high school level. The targeted population consisted of high school students in a middle class community located in a suburb of a large metropolitan area. The problems of underachievement were documented through data collected from surveys…

  12. Analyzing Academic Achievement of Junior High School Students by an Improved Rough Set Model

    ERIC Educational Resources Information Center

    Pai, Ping-Feng; Lyu, Yi-Jia; Wang, Yu-Min

    2010-01-01

    Rough set theory (RST) is an emerging technique used to deal with problems in data mining and knowledge acquisition. However, the RST approach has not been widely explored in the field of academic achievement. This investigation developed an improved RST (IMRST) model, which employs linear discriminant analysis to determine a reduct of RST, and…

  13. Using Cooperative Learning To Improve the Academic Achievements of Inner-City Middle School Students.

    ERIC Educational Resources Information Center

    Holliday, Dwight C.

    Whether using cooperative learning can improve the academic achievement of inner city middle school students was studied in Gary, Indiana at a school with a population of 503 students. Two seventh-grade classes taught by 1 African American male teacher served as 1 treatment group of 20 at-risk students and one nontreatment group of 24 high…

  14. A Better Return on Investment: Reallocating Resources To Improve Student Achievement. [Booklet with Audiotapes].

    ERIC Educational Resources Information Center

    North Central Regional Educational Lab., Oak Brook, IL.

    Standards-based educational reform has prompted the education system as a whole to examine whether the dollars put into the system reflect an investment in meeting the overarching goals of school reform. Driven by a common goal of improving the achievement of all students to increase the productivity of society in general, the education industry…

  15. Improving Students' Creative Thinking and Achievement through the Implementation of Multiple Intelligence Approach with Mind Mapping

    ERIC Educational Resources Information Center

    Widiana, I. Wayan; Jampel, I. Nyoman

    2016-01-01

    This classroom action research aimed to improve the students' creative thinking and achievement in learning science. It conducted through the implementation of multiple intelligences with mind mapping approach and describing the students' responses. The subjects of this research were the fifth grade students of SD 8 Tianyar Barat, Kubu, and…

  16. Geometry-Related Children's Literature Improves the Geometry Achievement and Attitudes of Second-Grade Students

    ERIC Educational Resources Information Center

    McAndrew, Erica M.; Morris, Wendy L.; Fennell, Francis

    2017-01-01

    Use of mathematics-related literature can engage students' interest and increase their understanding of mathematical concepts. A quasi-experimental study of two second-grade classrooms assessed whether daily inclusion of geometry-related literature in the classroom improved attitudes toward geometry and achievement in geometry. Consistent with the…

  17. The Effectiveness of the SSHA in Improving Prediction of Academic Achievement.

    ERIC Educational Resources Information Center

    Wikoff, Richard L.; Kafka, Gene F.

    1981-01-01

    Investigated the effectiveness of the Survey of Study Habits (SSHA) in improving prediction of achievement. The American College Testing Program English and mathematics subtests were good predictors of gradepoint average. The SSHA subtests accounted for an additional 3 percent of the variance. Sex differences were noted. (Author)

  18. Improving Student Academic Reading Achievement through the Use of Multiple Intelligence Teaching Strategies.

    ERIC Educational Resources Information Center

    Uhlir, Pamela

    This report describes an action research project improving student academic reading achievement. The targeted population consisted of fifth grade students in a growing suburb of a major midwestern metropolitan area. The evidence for existence of the problem included student surveys, assessments, teacher observations and checklists. Analysis of…

  19. Improving Teaching Capacity to Increase Student Achievement: The Key Role of Data Interpretation by School Leaders

    ERIC Educational Resources Information Center

    Lynch, David; Smith, Richard; Provost, Steven; Madden, Jake

    2016-01-01

    Purpose: This paper argues that in a well-organised school with strong leadership and vision coupled with a concerted effort to improve the teaching performance of each teacher, student achievement can be enhanced. The purpose of this paper is to demonstrate that while macro-effect sizes such as "whole of school" metrics are useful for…

  20. VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data

    DTIC Science & Technology

    2014-11-01

    VA Health Care: Improvements Needed in Monitoring Antidepressant Use for Major Depressive Disorder and in Increasing Accuracy of Suicide Data. Why GAO Did This Study: In 2013, VA estimated that about 1.5 million

  1. High-resolution traction force microscopy on small focal adhesions - improved accuracy through optimal marker distribution and optical flow tracking

    PubMed Central

    Holenstein, Claude N.; Silvan, Unai; Snedeker, Jess G.

    2017-01-01

    The accurate determination of cellular forces using Traction Force Microscopy at increasingly small focal attachments to the extracellular environment presents an important yet substantial technical challenge. In these measurements, uncertainty regarding accuracy is prominent since experimental calibration frameworks at this size scale are fraught with errors – denying a gold standard against which accuracy of TFM methods can be judged. Therefore, we have developed a simulation platform for generating synthetic traction images that can be used as a benchmark to quantify the influence of critical experimental parameters and the associated errors. Using this approach, we show that TFM accuracy can be improved >35% compared to the standard approach by placing fluorescent beads as densely and closely as possible to the site of applied traction. Moreover, we use the platform to test tracking algorithms based on optical flow that measure deformation directly at the beads and show that these can dramatically outperform classical particle image velocimetry algorithms in terms of noise sensitivity and error. We then report how optimized experimental and numerical strategy can improve traction map accuracy, and further provide the best available benchmark to date for defining practical limits to TFM accuracy as a function of focal adhesion size. PMID:28164999

  2. Utilizing artificial neural networks in MATLAB to achieve parts-per-billion mass measurement accuracy with a fourier transform ion cyclotron resonance mass spectrometer.

    PubMed

    Williams, D Keith; Kovach, Alexander L; Muddiman, David C; Hanck, Kenneth W

    2009-07-01

    Fourier transform ion cyclotron resonance mass spectrometry has the ability to realize exceptional mass measurement accuracy (MMA); MMA is one of the most significant attributes of mass spectrometric measurements as it affords extraordinary molecular specificity. However, due to space-charge effects, the achievable MMA significantly depends on the total number of ions trapped in the ICR cell for a particular measurement, as well as relative ion abundance of a given species. Artificial neural network calibration in conjunction with automatic gain control (AGC) is utilized in these experiments to formally account for the differences in total ion population in the ICR cell between the external calibration spectra and experimental spectra. In addition, artificial neural network calibration is used to account for both differences in total ion population in the ICR cell as well as relative ion abundance of a given species, which also affords mean MMA values at the parts-per-billion level.
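
    The underlying idea can be sketched as follows: learn a correction to a conventional two-term FT-ICR calibration (m/z = A/f + B/f^2) from cyclotron frequency, total trapped ion population, and relative abundance. Synthetic data and scikit-learn's MLPRegressor stand in for the MATLAB networks used in the paper; the calibration constants and the space-charge error model are invented for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        A, B = 1.0e8, -5.0e8                       # assumed two-term calibration constants

        n = 2000
        freq = rng.uniform(5e4, 5e5, n)            # cyclotron frequency [Hz]
        tic = rng.uniform(1e5, 1e7, n)             # total ions trapped (AGC target)
        rel_ab = rng.uniform(0.01, 1.0, n)         # relative abundance of the species

        mz_true = A / freq + B / freq ** 2
        # Invented space-charge-like systematic error that grows with ion population:
        ppm_err = 2.0 * np.log10(tic / 1e5) + 0.5 * rel_ab + 0.1 * rng.standard_normal(n)
        mz_meas = mz_true * (1.0 + ppm_err * 1e-6)

        X = np.column_stack([freq, np.log10(tic), rel_ab])
        y = (mz_meas - mz_true) / mz_true * 1e6    # target: mass error in ppm

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(16, 16),
                                           max_iter=5000, random_state=0)).fit(X, y)

        mz_corr = mz_meas / (1.0 + model.predict(X) * 1e-6)
        print(np.abs((mz_corr - mz_true) / mz_true * 1e6).mean())   # residual error in ppm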

  3. Metacognitive Scaffolds Improve Self-Judgments of Accuracy in a Medical Intelligent Tutoring System

    ERIC Educational Resources Information Center

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S.

    2014-01-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were…

  4. Knowing What You Know: Improving Metacomprehension and Calibration Accuracy in Digital Text

    ERIC Educational Resources Information Center

    Reid, Alan J.; Morrison, Gary R.; Bol, Linda

    2017-01-01

    This paper presents results from an experimental study that examined embedded strategy prompts in digital text and their effects on calibration and metacomprehension accuracies. A sample population of 80 college undergraduates read a digital expository text on the basics of photography. The most robust treatment (mixed) read the text, generated a…

  5. Bureau of Indian Affairs Schools: New Facilities Management Information System Promising, but Improved Data Accuracy Needed.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    A General Accounting Office (GAO) study evaluated the Bureau of Indian Affairs' (BIA) new facilities management information system (FMIS). Specifically, the study examined whether the new FMIS addresses the old system's weaknesses and meets BIA's management needs, whether BIA has finished validating the accuracy of data transferred from the old…

  6. Improving Accuracy Is Not the Only Reason for Writing, and Even If It Were...

    ERIC Educational Resources Information Center

    Bruton, Anthony

    2009-01-01

    For research into language development in L2 writing to have any relevance, it has to be situated within a framework of decisions in writing pedagogy. Furthermore, a perspective on L2 language development cannot be limited only to accuracy levels. Even if this is the case, it is counter-intuitive that further input may be detrimental to language…

  7. Limitations and strategies to improve measurement accuracy in differential pulse-width pair Brillouin optical time-domain analysis sensing.

    PubMed

    Minardo, Aldo; Bernini, Romeo; Zeni, Luigi

    2013-05-01

    In this work, we analyze the effects of Brillouin gain and Brillouin frequency drifts on the accuracy of the differential pulse-width pair Brillouin optical time-domain analysis (DPP-BOTDA). In particular, we demonstrate numerically that the differential gain is highly sensitive to variations in the Brillouin gain and/or Brillouin shift occurring during the acquisition process, especially when operating with a small pulse pair duration difference. We also propose and demonstrate experimentally a method to compensate for these drifts and consequently improve measurement accuracy.

  8. Does tumor size improve the accuracy of prognostic prediction in patients with esophageal squamous cell carcinoma after surgical resection?

    PubMed Central

    Gao, Yongyin; Shang, Xiaobin; Gong, Lei; Ma, Zhao; Yang, Mingjian; Jiang, Hongjing; Zhan, Zhongli; Meng, Bin; Yu, Zhentao

    2016-01-01

    This study aimed to investigate whether the inclusion of tumor size could improve the prognostic accuracy in patients with esophageal squamous cell cancer (ESCC). A total of 387 patients with ESCC who underwent curative resection were enrolled in this analysis. The patients were categorized into small-sized tumors (SSTs) and large-sized tumors (LSTs) using an appropriate cut-off point for tumor size. Kaplan–Meier survival curves and the log-rank test were used to evaluate the prognostic value of tumor size. A Cox regression model was adopted for multivariate analysis. The predictive accuracy of the staging models with and without tumor size was then compared. Using 3.5 cm as the optimal cut-off point, 228 and 159 patients presented with LSTs (≥ 3.5 cm) and SSTs (< 3.5 cm), respectively. The patients with LSTs had significantly worse prognoses than patients with SSTs (23.9% vs. 43.2%, P < 0.001). Multivariate analysis revealed that tumor size, histological type, invasion depth, and lymph node metastasis were independent predictors of overall survival. The addition of tumor size to the AJCC TNM staging improved the predictive accuracy of the 5-year survival rate by 3.9%. Further study showed that tumor size and T stage were independent predictors of the prognosis of node-negative patients, and the combination of tumor size and T stage improved the predictive accuracy by 3.7%. In conclusion, tumor size is indeed a simple and practical prognostic factor in patients with ESCC. It can be used to improve the prognostic accuracy of the current TNM staging, especially for patients with node-negative disease. PMID:27579613
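
    The analysis pattern described above (Kaplan–Meier curves by tumor-size group, a log-rank test, and a Cox model that adds tumor size to the usual staging variables) can be sketched with the lifelines package; the data file and column names below are hypothetical:

      # Survival analysis with tumor size added to staging variables (illustrative only).
      import pandas as pd
      from lifelines import KaplanMeierFitter, CoxPHFitter
      from lifelines.statistics import logrank_test

      df = pd.read_csv("escc_cohort.csv")          # hypothetical file: time, event, size_cm, t_stage, n_stage
      large = df["size_cm"] >= 3.5                 # cut-off point from the abstract

      km = KaplanMeierFitter()
      km.fit(df.loc[large, "time"], df.loc[large, "event"], label="LST (>=3.5 cm)")

      res = logrank_test(df.loc[large, "time"], df.loc[~large, "time"],
                         df.loc[large, "event"], df.loc[~large, "event"])
      print("log-rank p =", res.p_value)

      cph = CoxPHFitter()
      cph.fit(df[["time", "event", "size_cm", "t_stage", "n_stage"]],
              duration_col="time", event_col="event")
      cph.print_summary()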

  9. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2009-01-01

    …at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai (2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  10. Power outage estimation for tropical cyclones: improved accuracy with simpler models.

    PubMed

    Nateghi, Roshanak; Guikema, Seth; Quiring, Steven M

    2014-06-01

    In this article, we discuss an outage-forecasting model that we have developed. This model uses very few input variables to estimate hurricane-induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the method of random forest, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.
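
    As a rough illustration of the modeling approach named above, the following sketch fits a random forest to a handful of publicly available predictors; the data file and column names are assumptions, not the authors' utility data:

      # Random-forest outage model on a few pre-landfall predictors (illustrative only).
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      df = pd.read_csv("hurricane_grid_cells.csv")   # hypothetical: one row per grid cell per storm
      features = ["max_gust_ms", "duration_strong_wind_h", "soil_moisture", "customers"]
      X, y = df[features], df["outages"]

      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      print("cross-validated R^2:", cross_val_score(rf, X, y, cv=5).mean())

      rf.fit(X, y)
      df["predicted_outages"] = rf.predict(X)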

  11. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance to the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors of calculated travel times may have the effect of shifting the computed epicenters far from the real locations by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, as they may determine whether a possible On-Site Inspection (OSI) is triggered. In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2, and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network and using the standard IASPEI91 travel times can be effectively removed by applying source-specific station corrections.
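
    Conceptually, a source-specific station correction is the mean travel-time residual (observed minus model-predicted, e.g. IASPEI91) of well-located reference events at each station and phase, which is then subtracted from the arrivals of new events in the same source region. A hedged sketch with hypothetical column names:

      # Compute per-station, per-phase corrections from reference events and apply them.
      import pandas as pd

      picks = pd.read_csv("reference_event_picks.csv")   # hypothetical: station, phase, t_obs, t_iaspei91
      picks["residual"] = picks["t_obs"] - picks["t_iaspei91"]

      # Source-specific station correction = mean residual per station and phase.
      corrections = (picks.groupby(["station", "phase"])["residual"]
                          .mean().rename("correction").reset_index())

      new_picks = pd.read_csv("new_event_picks.csv")      # hypothetical picks for an event to relocate
      new_picks = new_picks.merge(corrections, on=["station", "phase"], how="left")
      new_picks["t_corrected"] = new_picks["t_obs"] - new_picks["correction"].fillna(0.0)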

  12. Improving Mars-GRAM: Increasing the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    Extensively utilized for numerous mission applications, the Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model. In a Monte-Carlo mode, Mars-GRAM's perturbation modeling capability is used to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM has been found to be inexact when used during the Mars Science Laboratory (MSL) site selection process for sensitivity studies for MapYear=0 and large optical depth values such as tau=3. Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM) from the surface to 80 km altitude. Mars-GRAM with the MapYear parameter set to 0 utilizes results from a MGCM run with a fixed value of tau=3 at all locations for the entire year. Imprecise atmospheric density and pressure at all altitudes is a consequence of this use of MGCM with tau=3. Density factor values have been determined for tau=0.3, 1 and 3 as a preliminary fix to this pressure-density problem. These factors adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. These density factors are fixed values for all latitudes and Ls and are included in Mars-GRAM Release 1.3. Work currently being done, to derive better multipliers by including variations with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data, will be highlighted in the presentation. The TES limb data utilized in this process have been validated by a comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS). This comparison study was undertaken for locations on Mars of varying latitudes, Ls, and LTST. The more precise density factors will be included in Mars-GRAM 2005 Release 1.4 and thus improve the results of future sensitivity studies done for large optical depths.

  13. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections.

    PubMed

    Linsenmeyer, Katherine; Strymish, Judith; Gupta, Kalpana

    2015-12-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules.
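
    The two rules can be written down almost literally; the toy function below (not a clinical tool; the agent lists and fallback are illustrative assumptions) prefers an agent concordant with the most recent prior culture and treats GU-directed agents as acceptable as broad-spectrum ones:

      # Toy encoding of the study's two rules; agent lists and the fallback are illustrative only.
      GU_DIRECTED = ["nitrofurantoin", "trimethoprim-sulfamethoxazole"]
      BROAD_SPECTRUM = ["ciprofloxacin", "piperacillin-tazobactam", "meropenem"]

      def choose_empiric_agent(prior_susceptible_agents):
          """prior_susceptible_agents: agents the prior uropathogen was susceptible to (may be empty)."""
          for agent in GU_DIRECTED + BROAD_SPECTRUM:       # GU-directed agents are tried first
              if agent in prior_susceptible_agents:
                  return agent                             # concordant with prior culture data
          return None                                      # no prior data: defer to local guidelines

      print(choose_empiric_agent(["nitrofurantoin", "meropenem"]))   # -> nitrofurantoin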

  14. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections

    PubMed Central

    Strymish, Judith; Gupta, Kalpana

    2015-01-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. PMID:26416859

  15. Recipe for Success: An Updated Parents' Guide to Improving Colorado Schools and Student Achievement. Second Edition.

    ERIC Educational Resources Information Center

    Taher, Bonnie; Durr, Pamela

    This guide describes ways that parents can help improve student achievement and school quality. It answers such questions as how to choose the right early-education opportunity for a preschooler, how to make sure a 5-year-old is ready for school, how to help a daughter do well in school, how to work with a daughter's or son's teachers, how to help…

  16. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    PubMed

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited the medical notes and coding to assess the accuracy, documentation by the junior doctors and also to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August-October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit but with 5% documentation errors. Physician involvement needs enhancing to improve the effectiveness and to ensure clinical safety.

  17. Intensive Care Unit Admission Parameters Improve the Accuracy of Operative Mortality Predictive Models in Cardiac Surgery

    PubMed Central

    Ranucci, Marco; Ballotta, Andrea; Castelvecchio, Serenella; Baryshnikova, Ekaterina; Brozzi, Simonetta; Boncilli, Alessandra

    2010-01-01

    Background Operative mortality risk in cardiac surgery is usually assessed using preoperative risk models. However, intraoperative factors may change the risk profile of the patients, and parameters on admission to the intensive care unit may be relevant in determining the operative mortality. This study investigates the association between a number of parameters on admission to the intensive care unit and the operative mortality, and verifies the hypothesis that including these parameters in the preoperative risk models may increase the accuracy of prediction of the operative mortality. Methodology A total of 929 adult patients who underwent cardiac surgery were enrolled in the study. The preoperative risk profile was assessed using the logistic EuroSCORE and the ACEF score. A number of parameters recorded on admission to the intensive care unit were explored for univariate and multivariable association with the operative mortality. Principal Findings A heart rate higher than 120 beats per minute and a blood lactate value higher than 4 mmol/L on admission to the intensive care unit were independent predictors of operative mortality, with odds ratios of 6.7 and 13.4, respectively. Including these parameters in the logistic EuroSCORE and the ACEF score increased their accuracy (area under the curve 0.85 to 0.88 for the logistic EuroSCORE and 0.81 to 0.86 for the ACEF score). Conclusions A double-stage assessment of operative mortality risk provides a higher accuracy of prediction. Elevated blood lactate levels and tachycardia reflect a condition of inadequate cardiac output. Their inclusion in the assessment of the severity of the clinical conditions after cardiac surgery may offer a useful tool to introduce more sophisticated hemodynamic monitoring techniques. Comparison between the predicted operative mortality risk before and after the operation may offer an assessment of the operative performance. PMID:21042411
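
    The two-stage idea lends itself to a simple check: augment a preoperative score with the two ICU-admission flags and compare discrimination. A hedged scikit-learn sketch with hypothetical file and column names:

      # Compare AUC of the preoperative score alone vs. the score plus ICU-admission flags.
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      df = pd.read_csv("cardiac_surgery.csv")       # hypothetical: euroscore_logit, hr_icu, lactate_icu, died
      df["hr_gt_120"] = (df["hr_icu"] > 120).astype(int)       # tachycardia flag
      df["lactate_gt_4"] = (df["lactate_icu"] > 4).astype(int)  # elevated lactate flag

      base_cols = ["euroscore_logit"]
      full_cols = ["euroscore_logit", "hr_gt_120", "lactate_gt_4"]

      base = LogisticRegression().fit(df[base_cols], df["died"])
      full = LogisticRegression().fit(df[full_cols], df["died"])

      print("AUC, preoperative score only :", roc_auc_score(df["died"], base.predict_proba(df[base_cols])[:, 1]))
      print("AUC, with ICU-admission data :", roc_auc_score(df["died"], full.predict_proba(df[full_cols])[:, 1]))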

  18. Significant Improvement of Puncture Accuracy and Fluoroscopy Reduction in Percutaneous Transforaminal Endoscopic Discectomy With Novel Lumbar Location System

    PubMed Central

    Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng

    2015-01-01

    Abstract Prospective nonrandomized controlled study. The study aimed to investigate the application of the HE's Lumbar LOcation (HELLO) system in improving the puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries that heavily depend on repeated fluoroscopy. Increased fluoroscopy will induce higher radiation exposure to surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and a definite trajectory. The HELLO system mainly consists of a self-made surface locator and a puncture-assisted device. The surface locator was used to identify the exact puncture target and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included in the study. Patients receiving the HELLO system were assigned to Group A, and those undergoing the conventional method were assigned to Group B. The primary endpoints were puncture times and fluoroscopy time, and the secondary endpoints were location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative time, and surgical segment between the 2 groups (P > 0.05). The puncture times were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B (P < 0.001). The fluoroscopic times were 14.03 ± 2.54 in Group A and 25.19 ± 4.28 in Group B (P < 0.001). The preoperative location time was 4.67 ± 1.41 minutes in Group A and 6.98 ± 0.94 minutes in Group B (P < 0.001). The operation time was 79.42 ± 10.15 minutes in Group A and 89.65 ± 14.06 minutes in Group B (P

  19. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, the authors describe three types of grid generation methods used in the front tracking method. One is the unstructured surface grid. The second is a structured grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  20. Does achieving the best practice tariff improve outcomes in hip fracture patients? An observational cohort study

    PubMed Central

    Oakley, B; Nightingale, J; Moran, CG; Moppett, IK

    2017-01-01

    Objectives To determine if the introduction of the best practice tariff (BPT) has improved survival of the elderly hip fracture population, or if achieving BPT results in improved survival for an individual. Setting A single university-affiliated teaching hospital. Participants 2541 patients aged over 60 admitted with a neck of femur fracture between 2008 and 2010 and from 2012 to 2014 were included, to create two cohorts of patients, before and after the introduction of BPT. The post-BPT cohort was divided into two groups, those who achieved the criteria and those who did not. Primary and secondary outcome measures Primary outcomes of interest were differences in mortality across cohorts. Secondary analysis was performed to identify associations between individual BPT criteria and mortality. Results The introduction of BPT did not significantly alter overall 30-day mortality in the hip fracture population (8.3% pre-BPT vs 10.0% post-BPT; p=0.128). Neither was there a significant reduction in length of stay (15 days (IQR 9–21) pre-BPT vs 14 days (IQR 11–22); p=0.236). However, the introduction of BPT was associated with a reduction in the time from admission to theatre (median 44 hours pre-BPT (IQR 24–44) vs 23 hours post-BPT (IQR 17–30); p<0.005). 30-day mortality in those who achieved BPT was significantly lower (6.0% vs 21.0% in those who did not achieve BPT; p<0.005). There was a survival benefit at 1 year for those who achieved BPT (28.6% vs 42.0% in those who did not achieve BPT; p<0.005). Multivariate logistic regression revealed that AMT monitoring and expedited surgery were the only BPT criteria that significantly influenced survival. Conclusions The introduction of the BPT has not led to a demonstrable improvement in outcomes at organisational level, though other factors may have confounded any benefits. However, patients for whom BPT criteria are met appear to have improved outcomes. PMID:28167748

  1. Improving the accuracy of hohlraum simulations by calibrating the `SNB' multigroup diffusion model for nonlocal heat transport against a VFP code

    NASA Astrophysics Data System (ADS)

    Brodrick, Jonathan; Ridgers, Christopher; Dudson, Ben; Kingham, Robert; Marinak, Marty; Patel, Mehul; Umansky, Maxim; Chankin, Alex; Omotani, John

    2016-10-01

    Nonlocal heat transport, occurring when temperature gradients become steep on the scale of the electron mean free path (mfp), has proven critical in accurately predicting ignition-scale hohlraum energetics. A popular approach, and modern alternative to flux limiters, is the `SNB' model. This is implemented in both the HYDRA code used for simulating National Ignition Facility experiments and the CHIC code developed at the CELIA laboratory. We have performed extensive comparisons of the SNB heat flow predictions with two VFP codes, IMPACT and KIPP, and found that calibrating the mfp to achieve agreement for a linear problem also improves nonlinear accuracy. Furthermore, we identify that using distinct electron-ion and electron-electron mfps instead of a geometrically averaged one improves predictive capability when there are strong ionisation (Z) gradients. This work is funded by EPSRC Grant EP/K504178/1.

  2. Bloch-Siegert B1-Mapping Improves Accuracy and Precision of Longitudinal Relaxation Measurements in the Breast at 3 T.

    PubMed

    Whisenant, Jennifer G; Dortch, Richard D; Grissom, William; Kang, Hakmook; Arlinghaus, Lori R; Yankeelov, Thomas E

    2016-12-01

    Variable flip angle (VFA) sequences are a popular method of calculating T1 values, which are required in a quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI). B1 inhomogeneities are substantial in the breast at 3 T, and these errors negatively impact the accuracy of the VFA approach, thus leading to large errors in the DCE-MRI parameters that could limit clinical adoption of the technique. This study evaluated the ability of Bloch-Siegert B1 mapping to improve the accuracy and precision of VFA-derived T1 measurements in the breast. Test-retest MRI sessions were performed on 16 women with no history of breast disease. T1 was calculated using the VFA sequence, and B1 field variations were measured using the Bloch-Siegert methodology. As a gold standard, inversion recovery (IR) measurements of T1 were performed. Fibroglandular tissue and adipose tissue from each breast were segmented using the IR images, and the mean T1 was calculated for each tissue. Accuracy was evaluated by percent error (%err). Reproducibility was assessed via the 95% confidence interval (CI) of the mean difference and repeatability coefficient (r). After B1 correction, %err significantly (P < .001) decreased from 17% to 8.6%, and the 95% CI and r decreased from ±94 to ±38 milliseconds and from 276 to 111 milliseconds, respectively. Similar accuracy and reproducibility results were observed in the adipose tissue of the right breast and in both tissues of the left breast. Our data show that Bloch-Siegert B1 mapping improves accuracy and precision of VFA-derived T1 measurements in the breast.
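
    The reason B1 mapping matters can be made concrete with the variable flip angle signal equation S(a) = M0 sin(a)(1 - E1)/(1 - E1 cos(a)), with E1 = exp(-TR/T1): the realized flip angle is the nominal angle scaled by the relative B1, so fitting with nominal angles biases T1. A small numpy illustration with synthetic values (not data from the study):

      # VFA T1 estimation with and without a B1 correction factor (synthetic example).
      import numpy as np

      def vfa_t1(signals, flip_deg, tr_ms, b1_rel=1.0):
          """Linearized VFA fit: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), E1 = exp(-TR/T1)."""
          a = np.deg2rad(flip_deg) * b1_rel              # B1-corrected flip angles
          y, x = signals / np.sin(a), signals / np.tan(a)
          slope = np.polyfit(x, y, 1)[0]                 # slope estimates E1
          return -tr_ms / np.log(slope)

      # Simulate a breast-like T1 of 1400 ms at TR = 7 ms with a 20% B1 error.
      tr, t1_true, b1 = 7.0, 1400.0, 0.8
      flips = np.array([5.0, 15.0])                      # nominal flip angles (degrees)
      a_true = np.deg2rad(flips) * b1
      e1 = np.exp(-tr / t1_true)
      s = np.sin(a_true) * (1 - e1) / (1 - e1 * np.cos(a_true))

      print("uncorrected T1 (ms):", round(vfa_t1(s, flips, tr), 1))        # biased estimate
      print("B1-corrected T1 (ms):", round(vfa_t1(s, flips, tr, b1), 1))   # recovers ~1400 ms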

  3. CT-based surgical planning software improves the accuracy of total hip replacement preoperative planning.

    PubMed

    Viceconti, M; Lattanzi, R; Antonietti, B; Paderni, S; Olmi, R; Sudanese, A; Toni, A

    2003-06-01

    The present study aimed to compare the accuracy and repeatability of planning total hip replacements with conventional templates on radiographs with those attainable on the same clinical cases when using CT-based planning software. The sizes of the cementless components planned with the new computer-aided preoperative planning system called Hip-Op and with standard templates were compared to those effectively implanted. The study group intentionally included only difficult clinical cases. The most common aetiology was congenital dysplasia of the hip (65.6%). The Hip-Op planning system allowed the surgeons to obtain a more accurate preoperative plan than with templates, especially for the socket. Assuming as correct a size planned within one calliper size above or below that implanted, the accuracy increased from 83% for the stem and 69% for the socket when using templates to 86% for the stem and 93% for the socket when using the Hip-Op system. The repeatability of the Hip-Op system was found to be comparable to that of the template procedure, which is much more familiar to the surgeons. Furthermore, the repeatability of the preoperative planning with the Hip-Op system was consistent between surgeons, independently of their level of experience. The study clearly shows the advantages of a three-dimensional computer-based preoperative planning over the traditional template planning, especially when deformed anatomies are involved. The surgical planning performed with the Hip-Op system is accurate and repeatable, especially for the socket and for less experienced surgeons.

  4. Quantification of terrestrial laser scanner (TLS) elevation accuracy in oil palm plantation for IFSAR improvement

    NASA Astrophysics Data System (ADS)

    Muhadi, N. A.; Abdullah, A. F.; Kassim, M. S. M.

    2016-06-01

    To ensure high oil palm productivity, the plantation site should be chosen wisely. Slope is one of the essential factors that need to be taken into consideration during site selection. A high-quality map of the plantation area with elevation information is needed for decision-making, especially when dealing with hilly and steep areas. Therefore, accurate digital elevation models (DEMs) are required. This research aims to increase the accuracy of Interferometric Synthetic Aperture Radar (IFSAR) by integrating Terrestrial Laser Scanner (TLS) data to generate DEMs. However, the focus of this paper is to evaluate the z-value accuracy of TLS data against Real-Time Kinematic GPS (RTK-GPS) as a reference. Besides, this paper studied the importance of the filtering process in developing accurate DEMs. From this study, it has been concluded that the differences in z-values between TLS and IFSAR were small if the points were located on the route and when the TLS data had been filtered. This paper also concludes that the laser scanner (TLS) should be set up on the route to reduce elevation error.

  5. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  6. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    NASA Astrophysics Data System (ADS)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error, the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools, since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20 nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process, specifically on defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and actual wafer prints.

  7. Impact of improved models for precise orbits of altimetry satellites on the orbit accuracy and regional mean sea level trends

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Esselborn, Saskia; Dettmering, Denise; Schöne, Tilo; Neumayer, Karl-Hans

    2015-04-01

    Precise orbits of altimetry satellites are a prerequisite for investigations of global and regional sea level changes. We show the significant progress made in recent decades in the modeling and determination of the orbits of altimetry satellites. This progress was achieved through improved knowledge of the Earth gravity field obtained by using CHAMP (CHAllenging Mini-Satellite Payload), GRACE (Gravity Recovery and Climate Experiment) and GOCE (Gravity field and steady-state Ocean Circulation Explorer) data, improved realizations of the terrestrial and celestial reference frames and transformations between these reference frames, improved modeling of ocean and solid Earth tides, improved accuracy of observations, and other effects. New precise orbits of altimetry satellites ERS-1 (1991-1996), TOPEX/Poseidon (1992-2005), ERS-2 (1995-2006), Envisat (2002-2012) and Jason-1 (2002-2012) have recently been derived for the time intervals given, within the DFG UHR-GravDat project and the ESA Climate Change Initiative Sea Level project, using satellite laser ranging (SLR), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), Precise Range And Range-Rate Equipment (PRARE) and altimetry single-satellite crossover data (various observation types were used for various satellites). We show the current state of the orbit accuracy and the improvements obtained in recent years. In particular, we demonstrate the impact of recently developed time-variable Earth gravity field models, improved tropospheric refraction models for DORIS observations, the latest release 05 of the atmosphere-ocean dealiasing product (AOD1B) and some other models on the orbit accuracy of these altimetry satellites and on regional mean sea level trends computed using these new orbit solutions.

  8. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection is an attractive technology to generate rapid genetic gains in switchgrass and ...

  9. Use of the isabel decision support system to improve diagnostic accuracy of pediatric nurse practitioner and family nurse practitioner students.

    PubMed

    John, Rita Marie; Hall, Elizabeth; Bakken, Suzanne

    2012-01-01

    Patient safety is a priority for healthcare today. Although a large proportion of malpractice claims result from diagnostic error, diagnostic decision support to improve diagnostic accuracy has not been widely adopted among healthcare professionals. Moreover, while the use of diagnostic decision support has been studied in attending physicians, residents, medical students and advanced practice nurses, its use among Advanced Practice Nurse (APN) students has not been studied. The authors have implemented the Isabel diagnostic decision support system into the curriculum and are evaluating its impact. The goals of the evaluation study are to describe the diagnostic accuracy and self-reported confidence levels of Pediatric Nurse Practitioner (PNP) and Family Nurse Practitioner (FNP) students over the course of their programs, to examine changes in diagnostic accuracy and self-reported confidence levels over the study period, and to evaluate differences between FNP and PNP students in diagnostic accuracy and self-reported confidence levels for pediatric cases. This paper summarizes the establishment of the academic/industry collaboration, case generation, integration of Isabel into the curriculum, and the evaluation design.

  10. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  11. Reconciling multiple data sources to improve accuracy of large-scale prediction of forest disease incidence

    USGS Publications Warehouse

    Hanks, E.M.; Hooten, M.B.; Baker, F.A.

    2011-01-01

    Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
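
    One way to picture the reconciliation is a small hierarchical model in which a common true incidence underlies both surveys, the operational inventory detects infestation imperfectly, and the specific-pest survey is treated as (nearly) complete. The PyMC sketch below uses illustrative counts and priors, not the paper's data or its actual model:

      # Two surveys, one latent incidence: a minimal Bayesian reconciliation sketch.
      import pymc as pm

      n_op, y_op = 1000, 110      # operational inventory: stands surveyed, stands with mistletoe (illustrative)
      n_sp, y_sp = 50, 28         # specific-pest survey (illustrative)

      with pm.Model():
          theta = pm.Beta("theta", 1, 1)                  # true proportion of infested stands
          sens = pm.Beta("sens_operational", 2, 2)        # detection probability of the operational survey
          pm.Binomial("op_obs", n=n_op, p=theta * sens, observed=y_op)
          pm.Binomial("sp_obs", n=n_sp, p=theta, observed=y_sp)
          idata = pm.sample(2000, tune=1000, chains=2, random_seed=0)

      print("posterior mean incidence:", idata.posterior["theta"].mean().item())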

  12. Methods to improve pressure, temperature and velocity accuracies of filtered Rayleigh scattering measurements in gaseous flows

    NASA Astrophysics Data System (ADS)

    Doll, Ulrich; Burow, Eike; Stockhausen, Guido; Willert, Christian

    2016-12-01

    Frequency scanning filtered Rayleigh scattering is able to simultaneously provide time-averaged measurements of pressure, temperature and velocity in gaseous flows. By extending the underlying mathematical model, a robust alternative to existing approaches is introduced. Present and proposed model functions are then characterized during a detailed uncertainty analysis. Deviations between the analytical solution of a jet flow experiment and measured results could be related to laser-induced background radiation as well as the Rayleigh scattering’s spectral distribution. In applying a background correction method and by replacing the standard lineshape model by an empirical formulation, detrimental effects on pressure, temperature and velocity accuracies could be reduced below 15 hPa, 2.5 K and 2.7 m/s.

  13. Laser Measurements Based for Volumetric Accuracy Improvement of Multi-axis Systems

    NASA Astrophysics Data System (ADS)

    Vladimir, Sokolov; Konstantin, Basalaev

    The paper describes a newly developed approach to compensating the geometric errors of CNC-controlled multi-axis systems, based on an optimal error correction strategy. Multi-axis CNC-controlled systems (machine tools and CMMs) are the basis of the modern engineering industry. Similar design principles of both technological and measurement equipment allow the use of similar approaches to precision management. Approaches based on geometric error compensation are widely used at present. The paper describes a system for compensation of geometric errors of multi-axis equipment based on the new approach. The hardware basis of the developed system is a multi-function laser interferometer. The principles of the system's implementation, results of measurements and simulation of the system's functioning are described. The effectiveness of applying the described principles to multi-axis equipment of different sizes and purposes, for different machining directions and zones within the workspace, is presented. The concept of an optimal correction strategy is introduced and dynamic accuracy control is proposed.

  14. Use of the fulcrum axis improves the accuracy of true anteroposterior radiographs of the shoulder.

    PubMed

    Braunstein, V; Kirchhoff, C; Ockert, B; Sprecher, C M; Korner, M; Mutschler, W; Wiedemann, E; Biberthaler, P

    2009-08-01

    In 100 patients, the fulcrum axis, which is the line connecting the anterior tip of the coracoid and the posterolateral angle of the acromion, was used to position true anteroposterior radiographs of the shoulder. This method was then compared with the conventional radiological technique in a further 100 patients. Three orthopaedic surgeons counted the number of images without overlap between the humeral head and glenoid and calculated the amount of the glenoid surface visible in each radiograph. The analysis was repeated for intraobserver reliability. The amount of free visible glenoid space was significantly higher using the fulcrum-axis method (64 vs 31) and the comparable glenoid size increased significantly (8.56 vs 6.47). Thus, the accuracy of anteroposterior radiographs of the shoulder is improved by using the fulcrum-axis technique. The intra- and interobserver reliability showed high consistency. No learning curve was observed for either technique.

  15. The Application of Digital Pathology to Improve Accuracy in Glomerular Enumeration in Renal Biopsies

    PubMed Central

    Troost, Jonathan P.; Gasim, Adil; Bagnasco, Serena; Avila-Casado, Carmen; Johnstone, Duncan; Hodgin, Jeffrey B.; Conway, Catherine; Gillespie, Brenda W.; Nast, Cynthia C.; Barisoni, Laura; Hewitt, Stephen M.

    2016-01-01

    Background In renal biopsy reporting, quantitative measurements, such as glomerular number and percentage of globally sclerotic glomeruli, are central to diagnostic accuracy and prognosis. The aim of this study is to determine the number of glomeruli and percent globally sclerotic in renal biopsies by means of registration of serial tissue sections and manual enumeration, and to compare these with the numbers in pathology reports from routine light microscopic assessment. Design We reviewed 277 biopsies from the Nephrotic Syndrome Study Network (NEPTUNE) digital pathology repository, enumerating 9,379 glomeruli by means of whole slide imaging (WSI). Glomerular number and the percentage of globally sclerotic glomeruli are values routinely recorded in the official renal biopsy pathology report from the 25 participating centers. Two general trends in reporting were noted: total number per biopsy or average number per level/section. Both of these approaches were assessed for their accuracy in comparison to the analogous numbers of annotated glomeruli on WSI. Results The number of glomeruli annotated was consistently higher than those reported (p<0.001); this difference was proportional to the number of glomeruli. In contrast, percent globally sclerotic was similar when calculated on total glomeruli, but greater in FSGS when calculated on the average number of glomeruli (p<0.01). The difference in percent globally sclerotic between annotated glomeruli and those recorded in pathology reports was significant when global sclerosis was greater than 40%. Conclusions Although glass slides were not available for direct comparison to whole slide image annotation, this study indicates that routine manual light microscopy assessment of the number of glomeruli is inaccurate, and the magnitude of this error is proportional to the total number of glomeruli. PMID:27310011

  16. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    SciTech Connect

    Olivi, Alessandro, M.D.

    2010-08-28

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  17. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    ScienceCinema

    Olivi, Alessandro, M.D.

    2016-07-12

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  18. Assessing and improving the spatial accuracy in MEG source localization by depth-weighted minimum-norm estimates.

    PubMed

    Lin, Fa-Hsuan; Witzel, Thomas; Ahlfors, Seppo P; Stufflebeam, Steven M; Belliveau, John W; Hämäläinen, Matti S

    2006-05-15

    Cerebral currents responsible for the extra-cranially recorded magnetoencephalography (MEG) data can be estimated by applying a suitable source model. A popular choice is the distributed minimum-norm estimate (MNE) which minimizes the l2-norm of the estimated current. Under the l2-norm constraint, the current estimate is related to the measurements by a linear inverse operator. However, the MNE has a bias towards superficial sources, which can be reduced by applying depth weighting. We studied the effect of depth weighting in MNE using a shift metric. We assessed the localization performance of the depth-weighted MNE as well as depth-weighted noise-normalized MNE solutions under different cortical orientation constraints, source space densities, and signal-to-noise ratios (SNRs) in multiple subjects. We found that MNE with depth weighting parameter between 0.6 and 0.8 showed improved localization accuracy, reducing the mean displacement error from 12 mm to 7 mm. The noise-normalized MNE was insensitive to depth weighting. A similar investigation of EEG data indicated that depth weighting parameter between 2.0 and 5.0 resulted in an improved localization accuracy. The application of depth weighting to auditory and somatosensory experimental data illustrated the beneficial effect of depth weighting on the accuracy of spatiotemporal mapping of neuronal sources.
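
    In matrix form, depth weighting simply rescales the source covariance by the gain-column norms raised to -2p before forming the linear MNE operator W = R L^T (L R L^T + lambda^2 C)^-1, with p around 0.6-0.8 per the abstract. A synthetic numpy sketch (dimensions and noise covariance are placeholders):

      # Depth-weighted l2 minimum-norm inverse operator on synthetic data.
      import numpy as np

      rng = np.random.default_rng(0)
      n_sensors, n_sources, p, lam = 64, 500, 0.8, 0.1

      L = rng.standard_normal((n_sensors, n_sources))       # gain (lead field) matrix
      C = np.eye(n_sensors)                                  # noise covariance (whitened data)

      # Depth weighting: boost deep (weak-gain) sources by ||L_i||^(-2p).
      R = np.diag(np.sum(L**2, axis=0) ** (-p))

      W = R @ L.T @ np.linalg.inv(L @ R @ L.T + lam**2 * C)  # depth-weighted MNE operator

      b = rng.standard_normal(n_sensors)                     # one measurement vector
      j_hat = W @ b                                          # estimated source amplitudes
      print(j_hat.shape)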

  19. Intellijoint HIP®: a 3D mini-optical navigation tool for improving intraoperative accuracy during total hip arthroplasty

    PubMed Central

    Paprosky, Wayne G; Muir, Jeffrey M

    2016-01-01

    Total hip arthroplasty is an increasingly common procedure used to address degenerative changes in the hip joint due to osteoarthritis. Although generally associated with good results, among the challenges associated with hip arthroplasty are accurate measurement of biomechanical parameters such as leg length, offset, and cup position, discrepancies of which can lead to significant long-term consequences such as pain, instability, neurological deficits, dislocation, and revision surgery, as well as patient dissatisfaction and, increasingly, litigation. Current methods of managing these parameters are limited, with manual methods such as outriggers or calipers being used to monitor leg length; however, these are susceptible to small intraoperative changes in patient position and are therefore inaccurate. Computer-assisted navigation, while offering improved accuracy, is expensive and cumbersome, in addition to adding significantly to procedural time. To address the technological gap in hip arthroplasty, a new intraoperative navigation tool (Intellijoint HIP®) has been developed. This innovative, 3D mini-optical navigation tool provides real-time, intraoperative data on leg length, offset, and cup position and allows for improved accuracy and precision in component selection and alignment. Benchtop and simulated clinical use testing have demonstrated excellent accuracy, with the navigation tool able to measure leg length and offset to within <1 mm and cup position to within <1° in both anteversion and inclination. This study describes the indications, procedural technique, and early accuracy results of the Intellijoint HIP surgical tool, which offers an accurate and easy-to-use option for hip surgeons to manage leg length, offset, and cup position intraoperatively. PMID:27920583

  20. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    PubMed

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.
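
    A generalized additive model of the kind named above can be sketched with the pygam package (a library choice assumed here, not taken from the article); the data file and columns are placeholders:

      # Poisson GAM for outage counts with one smooth term per covariate (illustrative only).
      import pandas as pd
      from pygam import PoissonGAM, s

      df = pd.read_csv("gulf_coast_outages.csv")    # hypothetical outage data set
      X = df[["max_wind_ms", "rainfall_mm", "soil_moisture"]].to_numpy()
      y = df["outages"].to_numpy()

      gam = PoissonGAM(s(0) + s(1) + s(2)).fit(X, y)   # smooth spline for each covariate
      gam.summary()
      df["predicted_outages"] = gam.predict(X)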

  1. Fusion of range camera and photogrammetry: a systematic procedure for improving 3-D models metric accuracy.

    PubMed

    Guidi, G; Beraldin, J A; Ciofi, S; Atzeni, C

    2003-01-01

    The generation of three-dimensional (3-D) digital models produced by optical technologies in some cases involves metric errors. This happens when small high-resolution 3-D images are assembled together in order to model a large object. In some applications, as for example 3-D modeling of Cultural Heritage, the problem of metric accuracy is a major issue and no methods are currently available for enhancing it. The authors present a procedure by which the metric reliability of the 3-D model, obtained through iterative alignments of many range maps, can be guaranteed to a known acceptable level. The goal is the integration of the 3-D range camera system with a close range digital photogrammetry technique. The basic idea is to generate a global coordinate system determined by the digital photogrammetric procedure, measuring the spatial coordinates of optical targets placed around the object to be modeled. Such coordinates, set as reference points, allow the proper rigid motion of few key range maps, including a portion of the targets, in the global reference system defined by photogrammetry. The other 3-D images are normally aligned around these locked images with usual iterative algorithms. Experimental results on an anthropomorphic test object, comparing the conventional and the proposed alignment method, are finally reported.
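
    The key step described above, locking a range map into the global photogrammetric frame via targets seen in both systems, is a rigid least-squares (Kabsch/SVD) fit. A compact numpy sketch with placeholder coordinate files:

      # Estimate the rigid motion mapping range-camera target coordinates onto
      # photogrammetric target coordinates, then apply it to the full range map.
      import numpy as np

      def rigid_fit(src, dst):
          """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 over corresponding 3-D points."""
          src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
          U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
          R = Vt.T @ D @ U.T
          t = dst.mean(0) - R @ src.mean(0)
          return R, t

      targets_range = np.loadtxt("targets_in_range_map.txt")       # Nx3, range-camera frame (placeholder)
      targets_photo = np.loadtxt("targets_photogrammetry.txt")     # Nx3, global frame (placeholder)
      R, t = rigid_fit(targets_range, targets_photo)

      range_map_points = np.loadtxt("range_map.xyz")               # full 3-D image to re-locate
      aligned = range_map_points @ R.T + t                         # range map in the global frame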

  2. An extended dynamometer setup to improve the accuracy of knee joint moment assessment.

    PubMed

    Van Campen, Anke; De Groote, Friedl; Jonkers, Ilse; De Schutter, Joris

    2013-05-01

    This paper analyzes an extended dynamometry setup that aims at obtaining accurate knee joint moments. The main problem of the standard setup is the misalignment of the joint and the dynamometer axes of rotation due to nonrigid fixation, and the determination of the joint axis of rotation by palpation. The proposed approach 1) combines 6-D registration of the contact forces with 3-D motion capturing (which is a contribution to the design of the setup); 2) includes a functional axis of rotation in the model to describe the knee joint (which is a contribution to the modeling); and 3) calculates joint moments by a model-based 3-D inverse dynamic analysis. Through a sensitivity analysis, the influence of the accuracy of all model parameters is evaluated. Dynamics resulting from the extended setup are quantified, and are compared to those provided by the dynamometer. Maximal differences between the 3-D joint moment resulting from the inverse dynamics and measured by the dynamometer were 16.4 N·m (16.9%) isokinetically and 18.3 N·m (21.6%) isometrically. The calculated moment is most sensitive to the orientation and location of the axis of rotation. In conclusion, more accurate experimental joint moments are obtained using a model-based 3-D inverse dynamic approach that includes a good estimate of the pose of the joint axis.

  3. Iterative error correction of long sequencing reads maximizes accuracy and improves contig assembly.

    PubMed

    Sameith, Katrin; Roscito, Juliana G; Hiller, Michael

    2017-01-01

    Next-generation sequencers such as Illumina can now produce reads up to 300 bp with high throughput, which is attractive for genome assembly. A first step in genome assembly is to computationally correct sequencing errors. However, correcting all errors in these longer reads is challenging. Here, we show that reads with remaining errors after correction often overlap repeats, where short erroneous k-mers occur in other copies of the repeat. We developed an iterative error correction pipeline that runs the previously published String Graph Assembler (SGA) in multiple rounds of k-mer-based correction with an increasing k-mer size, followed by a final round of overlap-based correction. By combining the advantages of small and large k-mers, this approach corrects more errors in repeats and minimizes the total amount of erroneous reads. We show that higher read accuracy increases contig lengths two to three times. We provide SGA-Iteratively Correcting Errors (https://github.com/hillerlab/IterativeErrorCorrection/) that implements iterative error correction by using modules from SGA.

  4. Investigation of smoothness-increasing accuracy-conserving filters for improving streamline integration through discontinuous fields.

    PubMed

    Steffen, Michael; Curtis, Sean; Kirby, Robert M; Ryan, Jennifer K

    2008-01-01

    Streamline integration of fields produced by computational fluid mechanics simulations is a commonly used tool for the investigation and analysis of fluid flow phenomena. Integration is often accomplished through the application of ordinary differential equation (ODE) integrators--integrators whose error characteristics are predicated on the smoothness of the field through which the streamline is being integrated--smoothness which is not available at the inter-element level of finite volume and finite element data. Adaptive error control techniques are often used to ameliorate the challenge posed by inter-element discontinuities. As the root of the difficulties is the discontinuous nature of the data, we present a complementary approach of applying smoothness-enhancing accuracy-conserving filters to the data prior to streamline integration. We investigate whether such an approach applied to uniform quadrilateral discontinuous Galerkin (high-order finite volume) data can be used to augment current adaptive error control approaches. We discuss and demonstrate through numerical example the computational trade-offs exhibited when one applies such a strategy.

  5. Modified surface loading process for achieving improved performance of the quantum dot-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Jin, Zhongxiu; Zhu, Jun; Xu, Yafeng; Zhou, Li; Dai, Songyuan

    2016-06-01

    Achieving high surface coverage of the colloidal quantum dots (QDs) on TiO2 films has been challenging for quantum dot-sensitized solar cells (QDSCs). Herein, a general surface engineering approach was proposed to increase the loading of these QDs. It was found that the S2- treatment/QD re-uptake process can significantly improve the attachment of the QDs to TiO2 films. The surface concentration of the QDs was improved by ∼60%, which in turn greatly enhances light absorption and decreases carrier recombination in QDSCs. The resulting QDSCs with optimized QD loading exhibit a power conversion efficiency of 3.66%, 83% higher than that of cells fabricated with standard procedures.

  6. US objectives generally achieved at broadcasting satellite international conference. Improvements can help in future conferences

    NASA Astrophysics Data System (ADS)

    1984-08-01

    The conference planned the implementation of broadcasting satellite service for the Western Hemisphere. Broadcasting satellites transmit television programs and other information services from Earth orbit to home or office antennas. At the request of the Senate Appropriations Subcommittee on Commerce, Justice, State and the Judiciary, GAO reviewed conference results as compared to established conference objectives and examined the interagency coordination of U.S. participation in this international conference. The United States basically achieved its two most important conference objectives: adopting a technically and procedurally flexible plan for broadcasting satellite service and obtaining a sufficient allocation of satellite orbit slots and frequencies to meet domestic needs. The U.S. was unable, however, to obtain agreement on adopting a maximum signal power level for satellites. GAO found that the Department of State could improve its preparation, internal coordination, and administrative support for future international conferences, and recommends actions to the Secretary of State to improve the Department's international telecommunications activities.

  7. Improving the Grammatical Accuracy of the Spoken English of Indonesian International Kindergarten Students

    ERIC Educational Resources Information Center

    Gozali, Imelda; Harjanto, Ignatius

    2014-01-01

    The need to improve the spoken English of kindergarten students in an international preschool in Surabaya prompted this Classroom Action Research (CAR). It involved the implementation of Form-Focused Instruction (FFI) strategy coupled with Corrective Feedback (CF) in Grammar lessons. Four grammar topics were selected, namely Regular Plural form,…

  8. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), consisting of fixing the last observed data point on the LS extrapolation curve, which customarily includes a polynomial and a few sinusoids. For the polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data over a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments, and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
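
    The core of the approach, fitting a polynomial-plus-sinusoid model and then pinning the extrapolation to the last observed value, can be sketched as follows. The basis (a linear trend plus annual and Chandler sinusoids) and the periods are illustrative assumptions, and the ARIMA step on the residuals is only indicated in a comment.

    ```python
    # Least-squares (LS) extrapolation of an EOP series with the last observed
    # point fixed on the extrapolation curve. The basis (linear trend plus
    # annual and Chandler sinusoids) and the periods are illustrative assumptions.
    import numpy as np

    ANNUAL_DAYS = 365.25
    CHANDLER_DAYS = 433.0   # approximate Chandler period

    def design_matrix(t):
        t = np.asarray(t, dtype=float)
        cols = [np.ones_like(t), t]
        for period in (ANNUAL_DAYS, CHANDLER_DAYS):
            w = 2.0 * np.pi / period
            cols += [np.cos(w * t), np.sin(w * t)]
        return np.column_stack(cols)

    def ls_extrapolate_fixed_last(t_obs, y_obs, t_pred):
        t_obs, y_obs = np.asarray(t_obs, float), np.asarray(y_obs, float)
        coeffs, *_ = np.linalg.lstsq(design_matrix(t_obs), y_obs, rcond=None)
        # Shift the fitted curve so that it passes exactly through the last datum.
        offset = y_obs[-1] - (design_matrix(t_obs[-1:]) @ coeffs)[0]
        return design_matrix(t_pred) @ coeffs + offset

    # Short-term predictions could additionally model the fit residuals
    # (y_obs - LS curve) with ARIMA, as described above.
    ```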

  9. Improving the Quality of Nursing Home Care and Medical-Record Accuracy with Direct Observational Technologies

    ERIC Educational Resources Information Center

    Schnelle, John F.; Osterweil, Dan; Simmons, Sandra F.

    2005-01-01

    Nursing home medical-record documentation of daily-care occurrence may be inaccurate, and information is not documented about important quality-of-life domains. The inadequacy of medical record data creates a barrier to improving care quality, because it supports an illusion of care consistent with regulations, which reduces the motivation and…

  10. Electron Microprobe Analysis of Hf in Zircon: Suggestions for Improved Accuracy of a Difficult Measurement

    NASA Astrophysics Data System (ADS)

    Fournelle, J.; Hanchar, J. M.

    2013-12-01

    It is not commonly recognized as such, but the accurate measurement of Hf in zircon is not a trivial analytical issue. This is important to assess because Hf is often used as an internal standard for trace element analyses of zircon by LA-ICPMS. The issues pertaining to accuracy revolve around: (1) whether the Hf Mα or the Lα line is used; (2) what accelerating voltage is applied if Zr Lα is also measured; and (3) what standard for Hf is used. Weidenbach et al.'s (2004) study of the 91500 zircon demonstrated the spread (in accuracy) of possible EPMA values for six EPMA labs, 2 of which used Hf Mα, 3 used Hf Lα, and one used Hf Lβ, with standards ranging across HfO2, a ZrO2-HfO2 compound, Hf metal, and hafnon. Weidenbach et al. used the ID-TIMS value (0.695 wt.% Hf) as the correct value, and not one of the EPMA labs came close to it (3 were low and 3 were high). Those data suggest: (1) that there is a systematic underestimation of the 0.695 wt.% Hf (ID-TIMS) value if Hf Mα is used, most likely an issue with the matrix correction, as the analytical lines and absorption edges of Zr Lα, Si Kα, and Hf Mα are rather tightly packed in the electromagnetic spectrum; mass absorption coefficients are easily in error (e.g., Donovan's determination of the MAC of Hf for Si Kα of 5061 differs from the typically used Henke value of 5449 (Donovan et al., 2002)); and (2) that for utilization of the Hf Lα line, the second-order Zr Kα line interferes with Hf Lα if the accelerating voltage is greater than 17.99 kV. If this higher voltage is used and differential-mode PHA is applied, only a portion of the interference is removed (e.g., removal of escape peaks), causing an overestimation of Hf content. Unfortunately, it is virtually impossible to apply an interference correction in this case, as it is impossible to locate an Hf-free Zr probe standard. We have examined many of the combinations used by those six EPMA labs and concluded that the optimal EPMA is done with Hf

  11. Improved solution accuracy for TDRSS-based TOPEX/Poseidon orbit determination

    NASA Astrophysics Data System (ADS)

    Doll, C. E.; Mistretta, G. D.; Hart, R. C.; Oza, D. H.; Bolvin, D. T.; Cox, C. M.; Nemesure, M.; Niklewski, D. J.; Samii, M. V.

    1994-05-01

    Orbit determination results are obtained by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) using a batch-least-squares estimator available in the Goddard Trajectory Determination System (GTDS) and an extended Kalman filter estimation system to process Tracking and Data Relay Satellite (TDRS) System (TDRSS) measurements. GTDS is the operational orbit determination system used by the FDD in support of the Ocean Topography Experiment (TOPEX)/Poseidon spacecraft navigation and health and safety operations. The extended Kalman filter was implemented in an orbit determination analysis prototype system, closely related to the Real-Time Orbit Determination System/Enhanced (RTOD/E) system. In addition, the Precision Orbit Determination (POD) team within the GSFC Space Geodesy Branch generated an independent set of high-accuracy trajectories to support the TOPEX/Poseidon scientific data. These latter solutions use the geodynamics (GEODYN) orbit determination system with laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) tracking measurements. The TOPEX/Poseidon trajectories were estimated for November 7 through November 11, 1992, the timeframe under study. Independent assessments were made of the consistencies of solutions produced by the batch and sequential methods. The batch-least-squares solutions were assessed based on the solution residuals, while the sequential solutions were assessed based primarily on the estimated covariances. The batch-least-squares and sequential orbit solutions were compared with the definitive POD orbit solutions. The solution differences were generally less than 2 meters for the batch-least-squares and less than 13 meters for the sequential estimation solutions. After the sequential estimation solutions were processed with a smoother algorithm, position differences with POD orbit solutions of less than 7 meters were obtained. The differences among the POD, GTDS, and filter

  12. Quality improvement in diabetes--successful in achieving better care with hopes for prevention.

    PubMed

    Haw, J Sonya; Narayan, K M Venkat; Ali, Mohammed K

    2015-09-01

    Diabetes affects 29 million Americans and is associated with billions of dollars in health expenditures and lost productivity. Robust evidence has shown that lifestyle interventions in people at high risk for diabetes and comprehensive management of cardiometabolic risk factors like glucose, blood pressure, and lipids can delay the onset of diabetes and its complications, respectively. However, realizing the "triple aim" of better health, better care, and lower cost in diabetes has been hampered by low adoption of lifestyle interventions to prevent diabetes and poor achievement of care goals for those with diabetes. To achieve better care, a number of quality improvement (QI) strategies targeting the health system, healthcare providers, and/or patients have been evaluated in both controlled trials and real-world programs, and have shown some successes, though barriers still impede wider adoption, effectiveness, real-world feasibility, and scalability. Here, we summarize the effectiveness and cost-effectiveness data regarding QI strategies in diabetes care and discuss the potential role of quality monitoring and QI in trying to implement primary prevention of diabetes more widely and effectively. Over time, achieving better care and better health will likely help bend the ever-growing cost curve.

  13. Interprofessional Curbside Consults to Develop Team Communication and Improve Student Achievement of Learning Outcomes.

    PubMed

    Kirwin, Jennifer; Greenwood, Kristin Curry; Rico, Janet; Nalliah, Romesh; DiVall, Margarita

    2017-02-25

    Objective. To design and implement a series of activities focused on developing interprofessional communication skills and to assess the impact of the activities on students' attitudes and achievement of educational goals. Design. Prior to the first pharmacy practice skills laboratory session, pharmacy students listened to a classroom lecture about team communication and viewed short videos describing the roles, responsibilities, and usual work environments of four types of health care professionals. In each of four subsequent laboratory sessions, students interacted with a different standardized health care professional role-played by a pharmacy faculty member who asked them a medication-related question. Students responded in verbal and written formats. Assessment. Student performance was assessed with a three-part rubric. The impact of the exercise was assessed by conducting pre- and post-intervention surveys and analyzing students' performance on relevant Center for the Advancement of Pharmacy Education (CAPE) outcomes. Survey results showed improvement in student attitudes related to team-delivered care. Students' performance on the problem solver and collaborator CAPE outcomes improved, while performance on the educator outcome worsened. Conclusions. The addition of an interprofessional communication activity with standardized health care professionals provided the opportunity for students to develop skills related to team communication. Students felt the activity was valuable and realistic; however, analysis of outcome achievement from the exercise revealed a need for more exposure to team communication skills.

  14. Interprofessional Curbside Consults to Develop Team Communication and Improve Student Achievement of Learning Outcomes

    PubMed Central

    Greenwood, Kristin Curry; Rico, Janet; Nalliah, Romesh; DiVall, Margarita

    2017-01-01

    Objective. To design and implement a series of activities focused on developing interprofessional communication skills and to assess the impact of the activities on students’ attitudes and achievement of educational goals. Design. Prior to the first pharmacy practice skills laboratory session, pharmacy students listened to a classroom lecture about team communication and viewed short videos describing the roles, responsibilities, and usual work environments of four types of health care professionals. In each of four subsequent laboratory sessions, students interacted with a different standardized health care professional role-played by a pharmacy faculty member who asked them a medication-related question. Students responded in verbal and written formats. Assessment. Student performance was assessed with a three-part rubric. The impact of the exercise was assessed by conducting pre- and post-intervention surveys and analyzing students’ performance on relevant Center for the Advancement of Pharmacy Education (CAPE) outcomes. Survey results showed improvement in student attitudes related to team-delivered care. Students’ performance on the problem solver and collaborator CAPE outcomes improved, while performance on the educator outcome worsened. Conclusions. The addition of an interprofessional communication activity with standardized health care professionals provided the opportunity for students to develop skills related to team communication. Students felt the activity was valuable and realistic; however, analysis of outcome achievement from the exercise revealed a need for more exposure to team communication skills. PMID:28289305

  15. Medication Harmony: A Framework to Save Time, Improve Accuracy and Increase Patient Activation

    PubMed Central

    Pandolfe, Frank; Crotty, Bradley H; Safran, Charles

    2016-01-01

    Incompletely reconciled medication lists contribute to prescribing errors and adverse drug events. Providers expend time and effort at every point of patient contact attempting to curate a best possible medication list, and yet often the list is incomplete or inaccurate. We propose a framework that builds upon the existing infrastructure of a health information exchange (HIE), centralizes data and encourages patient activation. The solution is a constantly accessible, singular, patient-adjudicated medication list that incorporates useful information and features into the list itself. We aim to decrease medication errors across transitions of care, increase awareness of potential drug-drug interactions, improve patient knowledge and self-efficacy regarding medications, decrease polypharmacy, improve prescribing safety and ultimately decrease cost to the health-care system. PMID:28269955

  16. Medication Harmony: A Framework to Save Time, Improve Accuracy and Increase Patient Activation.

    PubMed

    Pandolfe, Frank; Crotty, Bradley H; Safran, Charles

    2016-01-01

    Incompletely reconciled medication lists contribute to prescribing errors and adverse drug events. Providers expend time and effort at every point of patient contact attempting to curate a best possible medication list, and yet often the list is incomplete or inaccurate. We propose a framework that builds upon the existing infrastructure of a health information exchange (HIE), centralizes data and encourages patient activation. The solution is a constantly accessible, singular, patient-adjudicated medication list that incorporates useful information and features into the list itself. We aim to decrease medication errors across transitions of care, increase awareness of potential drug-drug interactions, improve patient knowledge and self-efficacy regarding medications, decrease polypharmacy, improve prescribing safety and ultimately decrease cost to the health-care system.

  17. A case control study to improve accuracy of an electronic fall prevention toolkit.

    PubMed

    Dykes, Patricia C; I-Ching, Evita Hou; Soukup, Jane R; Chang, Frank; Lipsitz, Stuart

    2012-01-01

    Patient falls are a serious and commonly reported adverse event in hospitals. In 2009, our team conducted the first randomized controlled trial of a health information technology-based intervention that significantly reduced falls in acute care hospitals. However, some patients on intervention units with access to the electronic toolkit fell. The purpose of this case control study was to use data mining and modeling techniques to identify the factors associated with falls in hospitalized patients when the toolkit was in place. Our ultimate aim was to apply our findings to improve the toolkit logic and to generate practice recommendations. The results of our evaluation suggest that the fall prevention toolkit logic is accurate, but strategies are needed to improve adherence to the fall prevention intervention recommendations generated by the electronic toolkit.

  18. ANS shell elements with improved transverse shear accuracy. [Assumed Natural Coordinate Strain

    NASA Technical Reports Server (NTRS)

    Jensen, Daniel D.; Park, K. C.

    1992-01-01

    A method of forming assumed natural coordinate strain (ANS) plate and shell elements is presented. The ANS method uses equilibrium based constraints and kinematic constraints to eliminate hierarchical degrees of freedom which results in lower order elements with improved stress recovery and displacement convergence. These techniques make it possible to easily implement the element into the standard finite element software structure, and a modified shape function matrix can be used to create consistent nodal loads.

  19. Improving quality and reducing inequities: a challenge in achieving best care

    PubMed Central

    Nicewander, David A.; Qin, Huanying; Ballard, David J.

    2006-01-01

    The health care quality chasm is better described as a gulf for certain segments of the population, such as racial and ethnic minority groups, given the gap between actual care received and ideal or best care quality. The landmark Institute of Medicine report Crossing the Quality Chasm: A New Health System for the 21st Century challenges all health care organizations to pursue six major aims of health care improvement: safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness. “Equity” aims to ensure that quality care is available to all and that the quality of care provided does not differ by race, ethnicity, or other personal characteristics unrelated to a patient's reason for seeking care. Baylor Health Care System is in the unique position of being able to examine the current state of equity in a typical health care delivery system and to lead the way in health equity research. Its organizational vision, “culture of quality,” and involved leadership bode well for achieving equitable best care. However, inequities in access, use, and outcomes of health care must be scrutinized; the moral, ethical, and economic issues they raise and the critical injustice they create must be remedied if this goal is to be achieved. Eliminating any observed inequities in health care must be synergistically integrated with quality improvement. Quality performance indicators currently collected and evaluated indicate that Baylor Health Care System often performs better than the national average. However, there are significant variations in care by age, gender, race/ethnicity, and socioeconomic status that indicate the many remaining challenges in achieving “best care” for all. PMID:16609733

  20. Employee Perceptions of Progress with Implementing a Student-Centered Model of Institutional Improvement: An Achieving the Dream Case Study

    ERIC Educational Resources Information Center

    Cheek, Annesa LeShawn

    2011-01-01

    Achieving the Dream is a national initiative focused on helping more community college students succeed, particularly students of color and low-income students. Achieving the Dream's student-centered model of institutional improvement focuses on eliminating gaps and raising student achievement by helping institutions build a culture of evidence…

  1. From Guide to Practice: Improving Your After School Science Program to Increase Student Academic Achievement

    NASA Astrophysics Data System (ADS)

    Taylor, J.

    2013-12-01

    Numerous science organizations, such as NASA, offer educational outreach activities geared towards after school. For some programs, the primary goal is to grow students' love of science. For others, the programs are also intended to increase academic achievement. For those programs looking to support student learning in out-of-school time environments, aligning the program with learning during the classroom day can be a challenge. The Institute of Education Sciences' What Works Clearinghouse put together a 'Practice Guide' for maximizing learning time beyond the regular school day. These practice guides provide concrete recommendations for educators supported by research. While this guide is not specific to any content or subject area, the recommendations provided align very well with science education. After school science is often viewed as a fun, dynamic environment for students. Indeed, one of the recommendations to ensure time is structured according to students' needs is to provide relevant and interesting experiences. Given that our after school programs provide such creative environments for students, what other components are needed to promote increased academic achievement? The recommendations provided to increase academic achievement include: 1. Align Instruction, 2. Maximize Attendance and Participation, 3. Adapt Instruction, 4. Provide Engaging Experiences, and 5. Evaluate Program. In this session we will examine these five recommendations presented in the Practice Guide, discuss how these strategies align with science programs, and examine what questions each program should address in order to provide experiences that lend themselves to maximizing instruction. Roadblocks and solutions for overcoming challenges in each of the five areas will be presented. Jessica Taylor will present this research based on her role as an author on the Practice Guide, 'Improving Academic Achievement in Out-of-School Time' and her experience working in various informal science

  2. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements to interpolate a three-dimensional spline to model the patient's skull. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided through the use of a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm able to calculate the contour of the skull directly from an image set and implement the measured beamlet data in shot time calculations was developed. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDDs of the beamlets were slight, with a maximum difference of 1.6% found with the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%. The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  3. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of i) quantifying the ability of backscatter to resolve grain size distributions, ii) understanding complex patterns influenced by factors other than grain size variations, and iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the Jade Bay (German North Sea), a shallow upper-mesotidal inlet, made it possible to map 100 km² of surficial sediment at a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed for modelling the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  4. Progress in Improving the Accuracy of Hugoniot Equation-of-State Measurements at the AWE Helen Laser.

    NASA Astrophysics Data System (ADS)

    Rothman, Stephen; Evans, Andrew; Graham, Peter; Horsfield, Colin

    1998-11-01

    For several years we have been conducting a series of equation-of-state (EOS) experiments using the Helen laser at AWE with the aim of an accuracy of 1% in shock velocity measurements (A.M. Evans, N.J. Freeman, P. Graham, C.J. Horsfield, S.D. Rothman, B.R. Thomas and A.J. Tyrrell, Laser and Particle Beams, vol. 14, no. 2, pp. 113-123, 1996). Our best results to date are 1.2% in velocity on copper and aluminium double-step targets which lead to 4% in copper principal Hugoniot pressures. The accuracy in pressure depends not only on two measured shock velocities but also target density and the EOS of Al which is used here as a standard. In order to quantify sources of error and to improve accuracy we have measured the preheat-induced expansion of target surfaces using a Michelson interferometer. Analysis of streaks from this has also given reflectivity measurements. We are also investigating the use of a shaped laser pulse designed to give constant pressure for 2.5 ns which will reduce the fractional errors in both step transit time and height by allowing the use of a thicker step.

  5. New measures improve the accuracy of the directed-lie test when detecting deception using a mock crime.

    PubMed

    Bell, Brian G; Kircher, John C; Bernhardt, Paul C

    2008-06-09

    The present study tested the accuracy of probable-lie and directed-lie polygraph tests. One hundred and twenty men and women were recruited from the general community and paid $30 to participate in a mock crime experiment. Equal numbers of males and females were assigned to either the guilty or innocent condition with equal numbers in each group receiving either a probable-lie or a directed-lie polygraph test resulting in a 2 x 2 design with two experimental factors (test type and deceptive condition). Half of the participants were guilty and half were innocent of committing a mock theft of $20 from a purse. All participants were paid a $50 bonus if they could convince the polygraph examiner that they were innocent. There were no significant differences in decision accuracy between probable-lie and directed-lie tests, but respiration measures were more diagnostic for the probable-lie test. New physiological measures, skin potential excursion and a new respiratory measure improved the accuracy of the directed-lie test such that 86% of the innocent participants and 93% of the guilty participants were correctly classified.

  6. A simple modification to improve the accuracy of methylation-sensitive restriction enzyme quantitative polymerase chain reaction.

    PubMed

    Krygier, Magdalena; Podolak-Popinigis, Justyna; Limon, Janusz; Sachadyn, Paweł; Stanisławska-Sachadyn, Anna

    2016-05-01

    DNA digestion with endonucleases sensitive to CpG methylation such as HpaII followed by polymerase chain reaction (PCR) quantitation is commonly used in molecular studies as a simple and inexpensive solution for assessment of region-specific DNA methylation. We observed that the results of such analyses were highly overestimated if mock-digested samples were applied as the reference. We determined DNA methylation levels in several promoter regions in two setups implementing different references: mock-digested and treated with a restriction enzyme that has no recognition sites within examined amplicons. Fragmentation of reference templates allowed removing the overestimation effect, thereby improving measurement accuracy.

  7. Educating Everybody's Children: Diverse Teaching Strategies for Diverse Learners. What Research and Practice Say about Improving Achievement.

    ERIC Educational Resources Information Center

    Cole, Robert W., Ed.

    The culmination of work by the Association for Supervision and Curriculum Development's (ASCD) Urban Middle Grades Network, a special Advisory Panel on Improving Student Achievement, and the Improving Student Achievement Research Panel, this book proposes a repertoire of tools for educators meeting the needs of an increasingly diverse student…

  8. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2010-01-01

    The poster provides an overview of techniques to improve the sensitivity studies performed with the Mars Global Reference Atmospheric Model (Mars-GRAM). It was discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM, when used for sensitivity studies with TES MapYear = 0 and large optical depth values such as tau = 3, is less than realistic. A preliminary fix has been made to Mars-GRAM by adding a density factor value that was determined for tau = 0.3, 1, and 3.

  9. Evaluation of Doppler shifts to improve the accuracy of primary atomic fountain clocks.

    PubMed

    Guéna, Jocelyne; Li, Ruoxin; Gibble, Kurt; Bize, Sébastien; Clairon, André

    2011-04-01

    We demonstrate agreement between measurements and ab initio calculations of the frequency shifts caused by distributed cavity phase variations in the microwave cavity of a primary atomic fountain clock. Experimental verification of the finite element models of the cavities gives the first quantitative evaluation of this leading uncertainty and allows it to be reduced to δν/ν = ±8.4 × 10⁻¹⁷. Applying these experimental techniques to clocks with improved microwave cavities will yield negligible distributed cavity phase uncertainties, less than ±1 × 10⁻¹⁷.

  10. Improving GlobalLand30 Artificial Type Extraction Accuracy in Low-Density Residents

    NASA Astrophysics Data System (ADS)

    Hou, Lili; Zhu, Ling; Peng, Shu; Xie, Zhenlei; Chen, Xu

    2016-06-01

    GlobalLand30 is the first 30 m resolution land cover product in the world. It covers the area within 80°N and 80°S. There are ten classes, including artificial cover, water bodies, woodland, lawn, bare land, cultivated land, wetland, sea area, shrub and snow. TM imagery from Landsat is the main data source of GlobalLand30. Within the artificial surface type, one of the omission errors occurs in low-density residential areas. In TM images, a scattered distribution is one of the typical characteristics of low-density residential areas, and another is that they are surrounded by large areas of cultivated land. This causes low-density residential areas to be confused with cultivated land. In order to solve this problem, nighttime light remote sensing imagery is used as reference data, and, on the basis of NDBI, we add the TM6 thermal band to calculate a surface thermal radiation index, TR-NDBI (Thermal Radiation Normalized Difference Building Index), for the purpose of extracting low-density residential areas. The result shows that using TR-NDBI together with nighttime light remote sensing imagery is a feasible and effective method for extracting low-density residential areas.
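
    As a rough illustration of the index idea, the sketch below computes the standard NDBI from Landsat TM bands 5 (SWIR) and 4 (NIR) and then folds in the TM6 thermal band. The exact TR-NDBI formula is not given in the abstract, so the thermal weighting shown here is a hypothetical illustration only.

    ```python
    # NDBI below is the standard (SWIR - NIR) / (SWIR + NIR) form using Landsat
    # TM bands 5 and 4; the way the TM6 thermal band is folded into TR-NDBI here
    # is a hypothetical illustration, since the exact TR-NDBI formula is not
    # given in the abstract.
    import numpy as np

    def ndbi(tm5_swir, tm4_nir):
        return (tm5_swir - tm4_nir) / (tm5_swir + tm4_nir + 1e-12)

    def tr_ndbi(tm5_swir, tm4_nir, tm6_thermal):
        # Hypothetical combination: weight NDBI by a min-max normalized thermal
        # band so that warm, built-up pixels stand out from cultivated land.
        t = (tm6_thermal - tm6_thermal.min()) / (np.ptp(tm6_thermal) + 1e-12)
        return ndbi(tm5_swir, tm4_nir) * t
    ```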

  11. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    PubMed

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-09

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic- and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior quality of the mass resolution in a TOF-SIMS spectrum obtained by using Ar-GCIB compared to the one obtained by a bismuth liquid metal cluster ion beam and others makes it difficult to identify unknown peaks because of the mass interference from the neighboring peaks. However, in this study, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectra analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.

  12. An improving fringe analysis method based on the accuracy of S-transform profilometry

    NASA Astrophysics Data System (ADS)

    Shen, Qiuju; Chen, Wenjing; Zhong, Min; Su, Xianyu

    2014-07-01

    The S transform, as a simple and popular technique for space-time-frequency analysis, has been introduced into optical three-dimensional surface shape measurement in recent years. Based on the S transform, the S transform “ridge” method (STR) and the S transform filtering method (STF) have been proposed to extract the phase information from a single deformed fringe pattern. This paper focuses on the STR in fringe pattern analysis. In previous research on the STR, a linear constraint, which assumed that the phase could be locally expressed as a first-order Taylor expansion with respect to the x and y directions, was implicitly added. In fact, at least the second-order partial derivatives of the phase in each local area should be taken into account, because they are related to the local curvature of the height distribution of the tested object. Therefore, the traditional STR has larger phase measurement errors in areas with rapid height variation on the tested object. This paper proposes an improved STR method, in which the phase is approximately expressed as a quadric in each local area. The phase extraction formula based on the quadric is derived, and the phase correction is carried out as well. Both the simulations and the experiments verify that a more accurate phase map can be obtained by the improved method compared with the traditional STR, especially in areas where the height variation is steep.
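
    To make the distinction concrete, the two local phase models can be written as follows; the abstract does not give the exact expressions, so the notation is illustrative.

    ```latex
    % Local phase models around a point (x_0, y_0); \xi and \eta are local offsets.
    \varphi(x_0+\xi,\, y_0+\eta) \approx \varphi_0 + \varphi_x\,\xi + \varphi_y\,\eta
        \qquad \text{(traditional STR: first-order model)}
    \varphi(x_0+\xi,\, y_0+\eta) \approx \varphi_0 + \varphi_x\,\xi + \varphi_y\,\eta
        + \tfrac{1}{2}\!\left(\varphi_{xx}\,\xi^{2} + 2\,\varphi_{xy}\,\xi\eta + \varphi_{yy}\,\eta^{2}\right)
        \qquad \text{(improved STR: quadric model)}
    ```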

  13. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    NASA Astrophysics Data System (ADS)

    Starn, J. J.

    2013-12-01

    -flow finite-difference transport simulations (MT3DMS). Results show more accurate simulation of pumping-well BTCs for a given grid cell size when using analytical solutions. The code base is extended to transient flow and BTCs are compared to results from MT3DMS simulations. Results show the particle-based solutions can resolve transient behavior using coarser model grids with far less computational effort than MT3DMS. The effect of simulation accuracy on parameter estimates (porosity) is also investigated. Porosity estimates obtained using the more accurate analytical solutions are less biased than those from synthetic finite-difference transport simulations, which tend to be biased by the coarseness of the grid. Eliminating the bias by using a finer grid comes at the expense of much larger computational effort. Finally, the code base was applied to an actual groundwater-flow model of Salt Lake Valley, Utah. Particle simulations using the Python code base compare well with finite-difference simulations, but with less computational effort, and have the added advantage of delineating flow paths, thus explicitly connecting solute source areas with receptors, and producing complete particle-age distributions. Knowledge of source areas and age distribution greatly enhances the analysis of dissolved solids data in Salt Lake Valley.
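
    A generic sketch of the convolution idea behind convolution-based particle tracking is shown below: particle travel times are binned into a unit-area transfer function and convolved with a time-varying source concentration to produce a breakthrough curve (BTC) at a receptor. This is an illustration under simplified assumptions, not the authors' code base, and the example travel times are placeholders.

    ```python
    # Generic sketch of the convolution step in convolution-based particle
    # tracking: particle travel times are binned into a unit-area transfer
    # function that is convolved with a time-varying source concentration to
    # give a breakthrough curve (BTC) at a receptor.
    import numpy as np

    def breakthrough_curve(travel_times, source_conc, dt):
        """travel_times: particle travel times; source_conc: input concentration
        sampled every dt; returns the BTC on the same time step."""
        n = len(source_conc)
        edges = np.arange(n + 1) * dt
        g, _ = np.histogram(travel_times, bins=edges)
        g = g / (len(travel_times) * dt)        # unit-area travel-time density
        return np.convolve(source_conc, g)[:n] * dt

    # Example: 10,000 particles with placeholder lognormal travel times and a
    # unit step input.
    rng = np.random.default_rng(0)
    tt = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)
    btc = breakthrough_curve(tt, np.ones(400), dt=1.0)
    ```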

  14. Improving measurement accuracy by optimum data acquisition for Nd:YAG Thomson scattering system.

    PubMed

    Minami, T; Itoh, Y; Yamada, I; Yasuhara, R; Funaba, H; Nakanishi, H; Hatae, T

    2014-11-01

    A new high-speed Nd:YAG Thomson scattering AD converter (HYADC) that can directly convert the detected scattered light signal into a digital signal is under development. The HYADC is expected to improve the signal-to-noise ratio of the Nd:YAG Thomson scattering measurement. The data storage of the HYADC, which is required for direct conversion over a whole plasma discharge, is drastically reduced by a ring buffer memory and a stop trigger system. Data transfer from the HYADC is performed by SiTCP. The HYADC is easily expandable to a multi-channel system through distributed data processing, and is very compact and easy to implement as a built-in system of the polychromators.

  15. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, perform better on the same task. Experimental results support our claim that this additional knowledge about the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization.
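
    The central modification, giving the combiner both the base classifiers' outputs and the raw input pattern, can be sketched with scikit-learn's StackingClassifier, whose passthrough=True option appends the original features to the meta-learner's input. The base learners below are generic stand-ins for the paper's neural network modules, and the synthetic data set is only for illustration.

    ```python
    # "Input pattern + base outputs" stacking: passthrough=True feeds the
    # original features to the combiner alongside the base classifiers' outputs.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                               n_classes=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    base = [
        ("mlp1", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=1)),
        ("mlp2", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=2)),
    ]
    stack = StackingClassifier(estimators=base,
                               final_estimator=LogisticRegression(max_iter=1000),
                               passthrough=True)   # combiner also sees the raw input
    print("held-out accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))
    ```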

  16. Combined assessment of midbrain hyperechogenicity, hyposmia and motor asymmetry improves diagnostic accuracy in early Parkinson's disease.

    PubMed

    Poewe, Werner; Mahlknecht, Philipp

    2012-08-01

    The differential diagnosis of Parkinsonian syndromes can be challenging, particularly in early disease stages, when overlapping clinical signs and symptoms may lead to erroneous classification. However, an early differentiation between Parkinson's disease (PD) and other diseases causing Parkinsonism is crucial for prognostic and therapeutic reasons and is essential for clinical research. In a recent study, Busse et al. investigated the diagnostic utility of a set of tests to improve diagnostic differentiation between PD, essential tremor and other Parkinsonian disorders. The authors studied a total of 632 patients divided into a retrospective (n = 517) and a prospective (n = 115) group. Diagnostic anchors were based on clinical criteria. Combining midbrain hyperechogenicity, hyposmia and motor asymmetry increased specificity and positive predictive value for diagnosis of PD up to 98% at the expense of sensitivity, whereas two features provided 91% sensitivity with 77% specificity. The results of this study further support the diagnostic utility of transcranial sonography in diagnosing PD.

  17. Multiobjective guided priors improve the accuracy of near-infrared spectral tomography for breast imaging

    NASA Astrophysics Data System (ADS)

    Feng, Jinchao; Jiang, Shudong; Xu, Junqing; Zhao, Yan; Pogue, Brian W.; Paulsen, Keith D.

    2016-09-01

    An image reconstruction regularization approach for magnetic resonance imaging-guided near-infrared spectral tomography has been developed to improve quantification of total hemoglobin (HbT) and water. By combining prior information from dynamic contrast enhanced (DCE) and diffusion weighted (DW) MR images, the absolute bias errors of HbT and water in the tumor were reduced by 22% and 18%, 21% and 6%, and 10% and 11%, compared to that in the no-prior, DCE- or DW-guided reconstructed images in three-dimensional simulations, respectively. In addition, the apparent contrast values of HbT and water were increased in patient image reconstruction from 1.4 and 1.4 (DCE) or 1.8 and 1.4 (DW) to 4.6 and 1.6.

  18. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  19. Improving the Accuracy of a Heliocentric Potential (HCP) Prediction Model for the Aviation Radiation Dose

    NASA Astrophysics Data System (ADS)

    Hwang, Junga; Yoon, Kyoung-Won; Jo, Gyeongbok; Noh, Sung-Jun

    2016-12-01

    The space radiation dose over air routes, including polar routes, should be carefully considered, especially when space weather shows sudden disturbances such as coronal mass ejections (CMEs), flares, and accompanying solar energetic particle events. We recently established a heliocentric potential (HCP) prediction model for real-time operation of the CARI-6 and CARI-6M programs. Specifically, the HCP value is used as a critical input value in the CARI-6/6M programs, which estimate the aviation route dose based on the effective dose rate. The CARI-6/6M approach is the most widely used technique, and the programs can be obtained from the U.S. Federal Aviation Administration (FAA). However, HCP values are published on the FAA official webpage with a one-month delay, which makes it difficult to obtain real-time information on the aviation route dose. In order to overcome this critical limitation for space weather customers, we developed an HCP prediction model based on sunspot number variations (Hwang et al. 2015). In this paper, we focus on improvements to our HCP prediction model and update it with neutron monitoring data. We found that the most accurate method to derive the HCP value involves (1) real-time daily sunspot assessments, (2) predictions of the daily HCP by our prediction algorithm, and (3) calculations of the resultant daily effective dose rate. Additionally, we derived an HCP prediction algorithm using ground neutron counts. With the compensation provided by the ground neutron count data, the newly developed HCP prediction model was further improved.
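
    As a rough illustration of step (2), the sketch below regresses daily HCP values on daily sunspot numbers and ground neutron counts and uses the fit to predict the current day's HCP for input to CARI-6/6M. The linear functional form, the choice of predictors, and the variable names are assumptions for illustration; the paper derives its own algorithm.

    ```python
    # Illustrative regression of daily heliocentric potential (HCP) on daily
    # sunspot number and ground neutron counts; the linear form and predictor
    # choice are assumptions, not the published algorithm.
    import numpy as np

    def fit_hcp_model(sunspot, neutron, hcp):
        sunspot, neutron = np.asarray(sunspot, float), np.asarray(neutron, float)
        X = np.column_stack([np.ones_like(sunspot), sunspot, neutron])
        coef, *_ = np.linalg.lstsq(X, np.asarray(hcp, float), rcond=None)
        return coef

    def predict_hcp(coef, sunspot_today, neutron_today):
        return coef[0] + coef[1] * sunspot_today + coef[2] * neutron_today
    ```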

  20. Improving Delivery Accuracy of Stereotactic Body Radiotherapy to a Moving Tumor Using Simplified Volumetric Modulated Arc Therapy

    PubMed Central

    Ko, Young Eun; Cho, Byungchul; Kim, Su Ssan; Song, Si Yeol; Choi, Eun Kyung; Ahn, Seung Do; Yi, Byongyong

    2016-01-01

    Purpose To develop a simplified volumetric modulated arc therapy (VMAT) technique for more accurate dose delivery in thoracic stereotactic body radiation therapy (SBRT). Methods and Materials For each of the 22 lung SBRT cases treated with respiratory-gated VMAT, a dose rate modulated arc therapy (DrMAT) plan was retrospectively generated. A dynamic conformal arc therapy plan with 33 adjoining coplanar arcs was designed and their beam weights were optimized by an inverse planning process. All sub-arc beams were converted into a series of control points with varying MLC segment and dose rates and merged into an arc beam for a DrMAT plan. The plan quality of original VMAT and DrMAT was compared in terms of target coverage, compactness of dose distribution, and dose sparing of organs at risk. To assess the delivery accuracy, the VMAT and DrMAT plans were delivered to a motion phantom programmed with the corresponding patients’ respiratory signal; results were compared using film dosimetry with gamma analysis. Results The plan quality of DrMAT was equivalent to that of VMAT in terms of target coverage, dose compactness, and dose sparing for the normal lung. In dose sparing for other critical organs, DrMAT was less effective than VMAT for the spinal cord, heart, and esophagus while being well within the limits specified by the Radiation Therapy Oncology Group. Delivery accuracy of DrMAT to a moving target was similar to that of VMAT using a gamma criterion of 2%/2mm but was significantly better using a 2%/1mm criterion, implying the superiority of DrMAT over VMAT in SBRT for thoracic/abdominal tumors with respiratory movement. Conclusion We developed a DrMAT technique for SBRT that produces plans of a quality similar to that achieved with VMAT but with better delivery accuracy. This technique is well-suited for small tumors with motion uncertainty. PMID:27333199

  1. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating images similar to the soft tissue images obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules, and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that in conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential measure of respiratory displacement of the target. This work was presented at RSNA 2013 and was carried out at Kanazawa University, Japan.
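
    The tracking step itself can be sketched with normalized cross-correlation template matching, for example with OpenCV as below. Frame loading, the commercial bone-suppression preprocessing, and the manual ROI selection are assumed to happen elsewhere; the function and variable names are illustrative.

    ```python
    # Normalized cross-correlation template matching as a sketch of the
    # tracking step; frames are assumed to be bone-suppressed fluoroscopic
    # images prepared elsewhere.
    import cv2
    import numpy as np

    def track_target(frames, roi):
        """frames: list of 2-D images (ideally bone-suppressed);
        roi: (x, y, w, h) of the nodule template in the first frame."""
        x, y, w, h = roi
        template = frames[0][y:y + h, x:x + w].astype(np.float32)
        positions = []
        for frame in frames:
            res = cv2.matchTemplate(frame.astype(np.float32), template,
                                    cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            positions.append(max_loc)           # top-left corner of best match
        return np.array(positions)
    ```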

  2. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    PubMed

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers.

  3. Improvement of orbit determination accuracy for Beidou Navigation Satellite System with Two-way Satellite Time Frequency Transfer

    NASA Astrophysics Data System (ADS)

    Tang, Chengpan; Hu, Xiaogong; Zhou, Shanshi; Guo, Rui; He, Feng; Liu, Li; Zhu, Lingfeng; Li, Xiaojie; Wu, Shan; Zhao, Gang; Yu, Yang; Cao, Yueling

    2016-10-01

    The Beidou Navigation Satellite System (BDS) estimates the orbits and clock offsets of its navigation satellites simultaneously, using code and carrier phase measurements of a regional network within China. The satellite clock offsets are also directly measured with Two-way Satellite Time Frequency Transfer (TWSTFT). Satellite laser ranging (SLR) residuals and comparisons with the precise ephemeris indicate that the radial error of GEO satellites is much larger than that of IGSO and MEO satellites and that the BDS orbit accuracy is worse than that of GPS. In order to improve the orbit determination accuracy for BDS, a new orbit determination strategy is proposed, in which the satellite clock measurements from TWSTFT are fixed as known values and only the orbits of the satellites are solved. However, a constant systematic error at the nanosecond level can be found in the clock measurements; it is estimated and then corrected by differencing the clock measurements and the clock estimates from orbit determination. The effectiveness of the new strategy is verified by a GPS regional network orbit determination experiment. With the IGS final clock products fixed, the orbit determination and prediction accuracy for GPS satellites improves by more than 50%, and the 12-h prediction User Range Error (URE) is better than 0.12 m. By processing 25 days of measurements from the BDS regional network, an optimal strategy for the satellite-clock-fixed orbit determination is identified. The User Equivalent Ranging Error is reduced by 27.6% for GEO satellites, but no apparent reduction is found for IGSO/MEO satellites. The SLR residuals exhibit reductions of 59% and 32% for IGSO satellites but no reductions for GEO and MEO satellites.

  4. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    SciTech Connect

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
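
    The compounding step described above amounts to combining two co-registered series acquired with opposite read-gradient polarity so that sequence-dependent shifts of opposite sign cancel. A naive voxel-wise averaging sketch is shown below; the file names are placeholders, nibabel is assumed as the I/O library, and the paper's actual compounding procedure may differ in detail.

    ```python
    # Naive voxel-wise averaging of two co-registered MR series acquired with
    # opposite read-gradient polarity. File names are placeholders and nibabel
    # is assumed as the I/O library.
    import nibabel as nib
    import numpy as np

    fwd = nib.load("t1_read_forward.nii.gz")
    rev = nib.load("t1_read_reversed.nii.gz")

    avg = 0.5 * (fwd.get_fdata() + rev.get_fdata())
    nib.save(nib.Nifti1Image(avg.astype(np.float32), fwd.affine, fwd.header),
             "t1_average_for_planning.nii.gz")
    ```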

  5. Sharp Chandra View of ROSAT All-Sky Survey Bright Sources — I. Improvement of Positional Accuracy

    NASA Astrophysics Data System (ADS)

    Gao, Shuang; Wang, Song; Liu, Ji-Feng

    2016-12-01

    The ROSAT All-Sky Survey (RASS) represents one of the most complete and sensitive soft X-ray all-sky surveys to date. However, the deficient positional accuracy of the RASS Bright Source Catalog (BSC) and subsequent lack of firm optical identifications affect multi-wavelength studies of X-ray sources. The widely used positional errors σpos based on the Tycho Reference Catalog (Tycho-1) have previously been applied for identifying objects in the optical band. The considerably sharper Chandra view covers a fraction of RASS sources, whose σpos could be improved by utilizing the sub-arcsec positional accuracy of Chandra observations. We cross-match X-ray objects between the BSC and Chandra sources extracted from the Advanced CCD Imaging Spectrometer (ACIS) archival observations. A combined list of counterparts (BSCxACIS) with Chandra spatial positions weighted by the X-ray flux of multiple counterparts is employed to evaluate and improve the former identifications of BSC when used with other surveys. Based on these identification evaluations, we suggest that the point-source likeness of BSC sources and INS (isolated neutron star) candidates should be carefully reconsidered.
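
    The flux-weighted combination of multiple Chandra counterparts into a single reference position can be sketched in a few lines; the inputs below are hypothetical, and a simple small-field average is used rather than a full spherical mean:

    ```python
    import numpy as np

    def flux_weighted_position(ra_deg, dec_deg, flux):
        """Flux-weighted mean position of several Chandra/ACIS counterparts of one
        BSC source (simple small-field average; hypothetical inputs)."""
        w = np.asarray(flux, dtype=float)
        ra0 = np.average(np.asarray(ra_deg, dtype=float), weights=w)
        dec0 = np.average(np.asarray(dec_deg, dtype=float), weights=w)
        return ra0, dec0

    # Three hypothetical counterparts inside one BSC error circle (deg, deg, erg/s/cm2)
    print(flux_weighted_position([150.0012, 150.0031, 150.0020],
                                 [2.1345, 2.1360, 2.1351],
                                 [3.2e-13, 1.1e-13, 0.6e-13]))
    ```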

  6. A New Multi-Sensor Fusion Scheme to Improve the Accuracy of Knee Flexion Kinematics for Functional Rehabilitation Movements

    PubMed Central

    Tannous, Halim; Istrate, Dan; Benlarbi-Delai, Aziz; Sarrazin, Julien; Gamet, Didier; Ho Ba Tho, Marie Christine; Dao, Tien Tuan

    2016-01-01

    Exergames have been proposed as a potential tool to improve the current practice of musculoskeletal rehabilitation. Inertial or optical motion capture sensors are commonly used to track the subject's movements. However, the use of these motion capture tools suffers from a lack of accuracy in estimating joint angles, which could lead to wrong data interpretation. In this study, we proposed a real-time quaternion-based fusion scheme, based on the extended Kalman filter, that combines inertial and visual motion capture sensors to improve the estimation accuracy of joint angles. The fusion outcome was compared to angles measured using a goniometer. The fusion output provides a better estimate than either the inertial measurement units or the Kinect alone. We noted a smaller error (3.96°) than that obtained using the inertial sensors alone (5.04°). The proposed multi-sensor fusion system is therefore accurate enough to be applied, in future work, to our serious game for musculoskeletal rehabilitation. PMID:27854288

  7. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    SciTech Connect

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving mass calibration accuracy of linear delayed extraction matrix assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
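
    The quadratic correction itself is easy to reproduce: fit a parabola to the calibrant mass errors and subtract the fitted deviation from each measured mass. A minimal sketch with hypothetical calibrant values (not the data from the paper):

    ```python
    import numpy as np

    def quadratic_mass_correction(calib_measured, calib_theoretical, unknown_measured):
        """Fit a parabola to dm = measured - theoretical for the calibrants and
        subtract the predicted deviation from the measured masses of unknowns."""
        calib_measured = np.asarray(calib_measured, float)
        dm = calib_measured - np.asarray(calib_theoretical, float)
        coeffs = np.polyfit(calib_measured, dm, deg=2)   # parabola: dm ~ a*m^2 + b*m + c
        correction = np.polyval(coeffs, np.asarray(unknown_measured, float))
        return np.asarray(unknown_measured, float) - correction

    # Hypothetical peptide calibrants (Da): measured vs. theoretical monoisotopic masses
    measured = [1046.62, 1296.78, 1672.99, 2093.17, 2465.30]
    theoretical = [1046.54, 1296.68, 1672.92, 2093.09, 2465.20]
    print(quadratic_mass_correction(measured, theoretical, [1500.10, 2200.25]))
    ```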

  8. Classification algorithms to improve the accuracy of identifying patients hospitalized with community-acquired pneumonia using administrative data.

    PubMed

    Yu, O; Nelson, J C; Bounds, L; Jackson, L A

    2011-09-01

    In epidemiological studies of community-acquired pneumonia (CAP) that utilize administrative data, cases are typically defined by the presence of a pneumonia hospital discharge diagnosis code. However, not all such hospitalizations represent true CAP cases. We identified 3991 hospitalizations during 1997-2005 in a managed care organization, and validated them as CAP or not by reviewing medical records. To improve the accuracy of CAP identification, classification algorithms that incorporated additional administrative information associated with the hospitalization were developed using classification and regression tree (CART) analysis. We found that a pneumonia code designated as the primary discharge diagnosis, together with the duration of hospital stay, improved the classification of CAP hospitalizations. Compared to the commonly used method that is based on the presence of a primary discharge diagnosis code of pneumonia alone, these algorithms had higher sensitivity (81-98%) and positive predictive values (82-84%) with only modest decreases in specificity (48-82%) and negative predictive values (75-90%).
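
    A toy version of a classification tree built from the two administrative predictors highlighted above (primary-diagnosis designation and length of stay) can be sketched with scikit-learn; the records, labels, and resulting splits below are illustrative only, not those derived in the study:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical administrative records: [pneumonia_is_primary_dx (0/1), length_of_stay_days]
    X = np.array([[1, 4], [1, 9], [0, 3], [0, 12], [1, 2], [0, 5], [1, 7], [0, 1]])
    # Chart-review validation labels: 1 = true CAP hospitalization, 0 = not CAP
    y = np.array([1, 1, 0, 1, 1, 0, 1, 0])

    # CART-style tree, limited in depth to keep the resulting rules interpretable
    tree = DecisionTreeClassifier(criterion="gini", max_depth=2, random_state=0)
    tree.fit(X, y)
    print(tree.predict([[1, 6], [0, 2]]))   # classify two new hospitalizations
    ```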

  9. New procedure for improving precision and accuracy of instrumental color measurements of beef.

    PubMed

    Khatri, Mamata; Phung, Vinh T; Isaksson, Tomas; Sørheim, Oddvin; Slinde, Erik; Egelandsdal, Bjørg

    2012-07-01

    The surface layers of steaks from bovine M. semimembranosus were prepared to have deoxy- (DMb), oxy- (OMb) and metmyoglobin (MMb) states using either chemicals (CHEM) or oxygen partial pressure packaging (OPP). Ninety-six different meat surface areas were measured in reflectance mode (400-1100 nm) for each preparation method. Reflectance spectra were converted to absorbance (A) and then transformed by Kubelka-Munk transformation (K/S) and/or extended multiplicative scatter correction (EMSC). Transformed spectra of prepared pure states were used to build calibration models of MMb, DMb and OMb using either selected wavelengths (SW) or partial least squares (PLS) regression. Finally, the predicted myoglobin states were normalized to ensure that no state was <0 or >1 and that the sum of all states equaled 1. Multivariate calibrations (i.e. PLS) outperformed the univariate calibrations (i.e. SW). The OPP method of preparing pure states was clearly best for OMb, while the CHEM method was best for preparing MMb on fresh meat surfaces. Both preparation methods needed improvement concerning DMb. The CHEM(K/S) SW and the OPP EMSC(A) PLS methods predicted MMb, DMb and OMb with root-mean-square errors of cross validation (RMSECV) equal to 0.08, 0.16 and 0.18 (range 0-1) and 0.04, 0.04 and 0.04 (range 0-1), respectively. This new reflectance protocol has potential for routine meat color measurements.
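
    The multivariate calibration and normalization steps can be sketched with scikit-learn's PLS regression; the spectra and reference state fractions below are synthetic placeholders standing in for the measured reflectance data:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.random((96, 351))         # 96 surfaces x transformed spectra (400-1100 nm), synthetic
    Y = rng.dirichlet([1, 1, 1], 96)  # reference fractions of DMb, OMb, MMb (sum to 1)

    pls = PLSRegression(n_components=8)   # number of latent variables is an assumption
    pls.fit(X, Y)

    pred = pls.predict(X)
    pred = np.clip(pred, 0.0, 1.0)                  # no state < 0 or > 1
    pred = pred / pred.sum(axis=1, keepdims=True)   # states sum to 1
    ```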

  10. [Improving laser center wavelength detection accuracy based on multi-level combination prisms].

    PubMed

    Liu, Xiao-Dong; Zhang, Zhi-Jie

    2011-08-01

    In order to improve the spectral resolution of a birefringent prism while preserving the quality of the interference-fringe image, the system used multi-level combination prisms and a dedicated method for splicing the interference fringes. From calculations of the interference-fringe intensity, the optical path difference function, and the spectral resolution of multi-level combination prisms, the analysis showed that the minimum spectral resolution is 2.875 cm(-1) for a four-prism multi-level combination. The fringe-splicing method was designed to join the sectional interference fringes; in the experiment, the multi-level combination prism measured 30 mm x 28 mm x 10 mm. A standard 635 nm laser was used to produce the interference fringes. The experimental data show that the detected spectrum of the 635.0 nm laser was distorted when the interference fringes were spliced directly, whereas it was consistent with the standard spectrum when the proposed fringe-splicing method was used. The method therefore effectively avoids spectral distortion when splicing interference fringes from multi-level combination prisms.

  11. Using standards to improve middle school students' accuracy at evaluating the quality of their recall.

    PubMed

    Lipko, Amanda R; Dunlosky, John; Hartwig, Marissa K; Rawson, Katherine A; Swan, Karen; Cook, Dale

    2009-12-01

    When recalling key term definitions from class materials, students may recall entirely incorrect definitions, yet will often claim that these commission errors are entirely correct; that is, they are overconfident in the quality of their recall responses. We investigated whether this overconfidence could be reduced by providing various standards to middle school students as they evaluated their recall responses. Students studied key term definitions, attempted to recall each one, and then were asked to score the quality of their recall. In Experiment 1, they evaluated their recall responses by rating each response as fully correct, partially correct, or incorrect. Most important, as they evaluated a particular response, it was presented either alone (i.e., without a standard) or with the correct definition present. Providing this full-definition standard reduced overconfidence in commission errors: Students assigned full or partial credit to 73% of their commission errors when they received no standard, whereas they assigned credit to only 44% of these errors when receiving the full-definition standard. In Experiment 2, a new standard was introduced: Idea units from each definition were presented, and students indicated whether each idea unit was in their response. After making these idea-unit judgments, the students then evaluated the quality of their entire response. Idea-unit standards further reduced overconfidence. Thus, although middle school students are overconfident in evaluating the quality of their recall responses, using standards substantially reduces this overconfidence and promises to improve the efficacy of their self-regulated learning.

  12. Radiotherapy dosimetry audit: three decades of improving standards and accuracy in UK clinical practice and trials.

    PubMed

    Clark, Catharine H; Aird, Edwin G A; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia A D; Thomas, Russell A S; Venables, Karen; Thwaites, David I

    2015-01-01

    Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK over the past 30 years, including the involvement of the UK in international audits. A summary of audit results is given, with an overview of methodologies employed and lessons learnt. Recent and forthcoming more complex audits are considered, with a focus on future needs including the arrival of proton therapy in the UK and other advanced techniques such as four-dimensional radiotherapy delivery and verification, stereotactic radiotherapy and MR linear accelerators. The work of the main quality assurance and auditing bodies is discussed, including how they are working together to streamline audit and to ensure that all radiotherapy centres are involved. Undertaking regular external audit motivates centres to modernize and develop techniques and provides assurance, not only that radiotherapy is planned and delivered accurately but also that the patient dose delivered is as prescribed.

  13. Improving accuracy and reliability of 186-keV measurements for unattended enrichment monitoring

    SciTech Connect

    Ianakiev, Kiril D; Boyer, Brian D; Swinhoe, Martyn T; Moss, Calvin E; Goda, Joetta M; Favalli, Andrea; Lombardi, Marcie; Paffett, Mark T; Hill, Thomas R; MacArthur, Duncan W; Smith, Morag K

    2010-04-13

    Improving the quality of safeguards measurements at Gas Centrifuge Enrichment Plants (GCEPs), whilst reducing the inspection effort, is an important objective given the number of existing and new plants that need to be safeguarded. A useful tool in many safeguards approaches is the on-line monitoring of enrichment in process pipes. One aspect of this measurement is a simple, reliable and precise passive measurement of the 186-keV line from ²³⁵U. (The other information required is the amount of gas in the pipe. This can be obtained by transmission measurements or pressure measurements). In this paper we describe our research efforts towards such a passive measurement system. The system includes redundant measurements of the 186-keV line from the gas and separately from the wall deposits. The design also includes measures to reduce the effect of the potentially important background. Such an approach would practically eliminate false alarms and can maintain the operation of the system even with a hardware malfunction in one of the channels. The work involves Monte Carlo modeling and the construction of a proof-of-principle prototype. We will carry out experimental tests with UF₆ gas in pipes with and without deposits in order to demonstrate the deposit correction.

  14. Radiotherapy dosimetry audit: three decades of improving standards and accuracy in UK clinical practice and trials

    PubMed Central

    Aird, Edwin GA; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia AD; Thomas, Russell AS; Venables, Karen; Thwaites, David I

    2015-01-01

    Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK over the past 30 years, including the involvement of the UK in international audits. A summary of audit results is given, with an overview of methodologies employed and lessons learnt. Recent and forthcoming more complex audits are considered, with a focus on future needs including the arrival of proton therapy in the UK and other advanced techniques such as four-dimensional radiotherapy delivery and verification, stereotactic radiotherapy and MR linear accelerators. The work of the main quality assurance and auditing bodies is discussed, including how they are working together to streamline audit and to ensure that all radiotherapy centres are involved. Undertaking regular external audit motivates centres to modernize and develop techniques and provides assurance, not only that radiotherapy is planned and delivered accurately but also that the patient dose delivered is as prescribed. PMID:26329469

  15. Electronic prescribing: improving the efficiency and accuracy of prescribing in the ambulatory care setting.

    PubMed

    Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto

    2014-01-01

    Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues.

  16. Improving the accuracy of the gradient method for determining soil carbon dioxide efflux

    NASA Astrophysics Data System (ADS)

    Sánchez-Cañete, Enrique P.; Scott, Russell L.; Haren, Joost; Barron-Gafford, Greg A.

    2017-01-01

    Soil CO2 efflux (Fsoil) represents a significant source of ecosystem CO2 emissions that is rarely quantified with high-temporal-resolution data in carbon flux studies. Fsoil estimates can be obtained by the low-cost gradient method (GM), but the utility of the method is hindered by uncertainties in the application of published models for the diffusion coefficient. Therefore, to address and resolve these uncertainties, we compared Fsoil measured by 2 soil CO2 efflux chambers and Fsoil estimated by 16 gas transport models using the GM across 1 year. We used 14 published empirical gas diffusion models and 2 in situ models: (1) a gas transfer model called "Chamber model" obtained using a calibration between the chamber and the gradient method and (2) a diffusion model called "SF6 model" obtained through an interwell conservative tracer experiment. Most of the published models using the GM underestimated cumulative annual Fsoil by 55% to 361%, while the Chamber model closely approximated cumulative Fsoil (0.6% error). Surprisingly, the SF6 model combined with the GM underestimated Fsoil by 32%. Differences between in situ models could stem from the Chamber model implicitly accounting for production of soil CO2, while the conservative tracer model does not. Therefore, we recommend using the GM only after calibration with chamber measurements to generate reliable long-term ecosystem Fsoil measurements. Accurate estimates of Fsoil will improve our understanding of soil respiration's contribution to ecosystem fluxes.
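
    The gradient method itself is an application of Fick's first law, with the flux given by an effective diffusion coefficient times the vertical CO2 concentration gradient. A minimal sketch under that formulation (sensor depths, concentrations, and the calibrated Ds value are hypothetical):

    ```python
    def gradient_method_flux(c_upper, c_lower, z_upper, z_lower, ds):
        """Soil CO2 efflux from Fick's first law, F = -Ds * dC/dz.

        c_upper, c_lower : CO2 concentrations (mol m-3) at two depths
        z_upper, z_lower : sensor depths (m, positive downward, z_upper < z_lower)
        ds               : effective gas diffusion coefficient (m2 s-1), e.g. obtained
                           by calibrating the gradient method against chamber fluxes
        Returns the efflux in mol m-2 s-1 (positive = upward, out of the soil).
        """
        dC_dz = (c_lower - c_upper) / (z_lower - z_upper)
        return ds * dC_dz   # upward flux when concentration increases with depth

    # Example: probes at 0.02 m and 0.10 m, Ds calibrated against a chamber
    flux = gradient_method_flux(c_upper=0.45, c_lower=0.90, z_upper=0.02,
                                z_lower=0.10, ds=2.0e-6)
    print(f"Fsoil ~ {flux:.2e} mol m-2 s-1")
    ```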

  17. Drift Removal for Improving the Accuracy of Gait Parameters Using Wearable Sensor Systems

    PubMed Central

    Takeda, Ryo; Lisco, Giulia; Fujisawa, Tadashi; Gastaldi, Laura; Tohyama, Harukazu; Tadano, Shigeru

    2014-01-01

    Accumulated signal noise will cause the integrated values to drift from the true value when measuring orientation angles with wearable sensors. This work proposes a novel method to reduce the effect of this drift and accurately measure human gait using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th order Butterworth filter was implemented to remove the noise from the raw gyro sensor data. Secondly, the mode value of the static state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data recorded in upright standing and sitting postures. The proposed improvements removed the drift effect and yielded average differences of 2.1°, 33.3° and 15.6° for the hip, knee and ankle joint flexion/extension angles, respectively, compared with processing that omitted these steps. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work showed the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases. PMID:25490587
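
    The first two preprocessing steps (a 4th-order IIR Butterworth low-pass on the raw gyro data, then subtraction of the static-state mode) can be sketched with scipy; the sampling rate, cut-off frequency, and rounding used to form the mode are assumptions, not values from the paper:

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt

    def preprocess_gyro(gyro, static_segment, fs=100.0, cutoff=10.0):
        """Low-pass filter the raw gyro signal and remove the static-state offset.

        gyro           : 1-D angular-rate signal (deg/s) from a hypothetical wearable sensor
        static_segment : samples recorded while the subject stood still
        fs, cutoff     : sampling rate and low-pass cut-off in Hz (assumed values)
        """
        # Step 1: 4th-order IIR Butterworth low-pass, applied forward-backward (zero phase)
        b, a = butter(N=4, Wn=cutoff / (fs / 2.0), btype="low")
        filtered = filtfilt(b, a, np.asarray(gyro, dtype=float))

        # Step 2: offset removal -- subtract the mode of the static-state data
        # (values are rounded so a mode is meaningful for a continuous signal)
        values, counts = np.unique(np.round(static_segment, 2), return_counts=True)
        offset = values[np.argmax(counts)]
        return filtered - offset

    # Usage with synthetic data: a noisy walking signal plus a 0.3 deg/s bias
    rng = np.random.default_rng(0)
    corrected = preprocess_gyro(rng.standard_normal(1000) + 0.3,
                                rng.normal(0.3, 0.01, 200))
    ```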

  18. Charting the course for home health care quality: action steps for achieving sustainable improvement: conference proceedings.

    PubMed

    Feldman, Penny Hollander; Peterson, Laura E; Reische, Laurie; Bruno, Lori; Clark, Amy

    2004-12-01

    On June 30 and July 1, 2003, the first national meeting Charting the Course for Home Health Care Quality: Action Steps for Achieving Sustainable Improvement convened in New York City. The Center for Home Care Policy & Research of the Visiting Nurse Service of New York (VNSNY) hosted the meeting with support from the Robert Wood Johnson Foundation. Fifty-seven attendees from throughout the United States participated. The participants included senior leaders and managers and nurses working directly in home care today. The meeting's objectives were to: 1. foster dialogue among key constituents influencing patient safety and home care, 2. promote information-sharing across sectors and identify areas where more information is needed, and, 3. develop an agenda and strategy for moving forward. This article reports the meeting's proceedings.

  19. Optimization of Oxidation Temperature for Commercially Pure Titanium to Achieve Improved Corrosion Resistance

    NASA Astrophysics Data System (ADS)

    Bansal, Rajesh; Singh, J. K.; Singh, Vakil; Singh, D. D. N.; Das, Parimal

    2017-03-01

    Thermal oxidation of commercially pure titanium (cp-Ti) was carried out at different temperatures, ranging from 200 to 900 °C to achieve optimum corrosion resistance of the thermally treated surface in simulated body fluid. Scanning electron microscopy, x-ray diffraction, Raman spectroscopy and electrochemical impedance spectroscopy techniques were used to characterize the oxides and assess their protective properties exposed in the test electrolyte. Maximum resistance toward corrosion was observed for samples oxidized at 500 °C. This was attributed to the formation of a composite layer of oxides at this temperature comprising Ti2O3 (titanium sesquioxide), anatase and rutile phases of TiO2 on the surface of cp-Ti. Formation of an intact and pore-free oxide-substrate interface also improved its corrosion resistance.

  20. An improvement in land cover classification achieved by merging microwave data with Landsat multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Wu, S. T.

    1980-01-01

    The improvement in land cover classification achieved by merging microwave data with Landsat MSS data is examined. To produce a merged data set for analysis and comparison, a registration procedure by which a set of Seasat SAR digital data was merged with the MSS data is described. The Landsat MSS data and the merged Landsat/Seasat data sets were processed using conventional multichannel spectral pattern recognition techniques. An analysis of the classified data sets indicates that while Landsat data delineate different forest types (i.e., deciduous/coniferous) and allow some species separation, SAR data provide additional information related to plant canopy configuration and vegetation density as associated with varying water regimes, and therefore allow for further subdivision in the classification of forested wetlands of the coastal region of the southern United States.

  1. Optimization of Oxidation Temperature for Commercially Pure Titanium to Achieve Improved Corrosion Resistance

    NASA Astrophysics Data System (ADS)

    Bansal, Rajesh; Singh, J. K.; Singh, Vakil; Singh, D. D. N.; Das, Parimal

    2017-02-01

    Thermal oxidation of commercially pure titanium (cp-Ti) was carried out at different temperatures, ranging from 200 to 900 °C to achieve optimum corrosion resistance of the thermally treated surface in simulated body fluid. Scanning electron microscopy, x-ray diffraction, Raman spectroscopy and electrochemical impedance spectroscopy techniques were used to characterize the oxides and assess their protective properties exposed in the test electrolyte. Maximum resistance toward corrosion was observed for samples oxidized at 500 °C. This was attributed to the formation of a composite layer of oxides at this temperature comprising Ti2O3 (titanium sesquioxide), anatase and rutile phases of TiO2 on the surface of cp-Ti. Formation of an intact and pore-free oxide-substrate interface also improved its corrosion resistance.

  2. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using conventional same-year ground-truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
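
    The majority-rule reclassification within CLU polygons amounts to a per-polygon vote over the pixel classes. A minimal sketch, assuming the classified raster and a raster of polygon IDs share the same 30 m grid (both arrays hypothetical):

    ```python
    import numpy as np

    def majority_reclassify(class_raster, polygon_ids, background=0):
        """Assign every pixel inside a CLU polygon the most frequent class of that polygon.

        class_raster : 2-D array of per-pixel class codes from maximum-likelihood classification
        polygon_ids  : 2-D array of CLU polygon IDs (background pixels = `background`)
        """
        out = class_raster.copy()
        for pid in np.unique(polygon_ids):
            if pid == background:
                continue                  # leave areas outside CLU polygons untouched
            mask = polygon_ids == pid
            classes, counts = np.unique(class_raster[mask], return_counts=True)
            out[mask] = classes[np.argmax(counts)]   # majority vote within the polygon
        return out
    ```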

  3. Analyses to Verify and Improve the Accuracy of the Manufactured Home Energy Audit (MHEA)

    SciTech Connect

    Ternes, Mark P; Gettings, Michael B

    2008-12-01

    A series of analyses was performed to determine the reasons that the Manufactured Home Energy Audit (MHEA) overpredicted space-heating energy savings as measured in a recent field test and to develop appropriate corrections to improve its performance. The study used the Home Energy Rating System (HERS) Building Energy Simulation Test (BESTEST) to verify that MHEA accurately calculates the UA-values of mobile home envelope components and space-heating energy loads as compared with other, well-accepted hourly energy simulation programs. The study also used the Procedures for Verification of RESNET Accredited HERS Software Tools to determine that MHEA accurately calculates space-heating energy consumptions for gas furnaces, heat pumps, and electric-resistance furnaces. Even though MHEA's calculations were shown to be correct from an engineering point of view, three modifications to MHEA's algorithms and use of a 0.6 correction factor were incorporated into MHEA to true-up its predicted savings to values measured in a recent field test. A simulated use of the revised version of MHEA in a weatherization program revealed that MHEA would likely still recommend a significant number of cost-effective weatherization measures in mobile homes (including ceiling, floor, and even wall insulation, but far fewer storm windows). Based on the findings from this study, it was recommended that a revised version of MHEA with all the changes and modifications outlined in this report should be finalized and made available to the weatherization community as soon as possible, preferably in time for use within the 2009 Program Year.

  4. Assessment and improvement of the accuracy of GPS terrestrial origin and scale (Invited)

    NASA Astrophysics Data System (ADS)

    Herring, T.

    2009-12-01

    We examine the sensitivity of the origin and scale determination to analysis models and estimation strategies for Global Positioning System (GPS) data collected since the mid-1990s. Initial analyses suggest that the Z location of the offset between the center of mass of the Earth system and the geometric center of the GPS network is most sensitive to the once-per-revolution orbit perturbations. These same perturbations also strongly affect estimates of the rates of change of Earth orientation parameters (EOP) in pole position and rotation rate, i.e., length-of-day. The scale of the GPS network is primarily affected by the phase center models of the GPS transmission and ground antennas. The common sensitivity of EOP rates and terrestrial origin suggests that external constraints on one of these parameters would lead to better determination of the other. We will examine the impact of constraining EOP rates using very-long-baseline interferometry (VLBI) data on the GPS-determined terrestrial origin. Initial tests suggest that the impact is not as large as might be expected given the common sensitivity to orbit modeling. We also explore the use of GPS orbital integrations longer than one day as a method of improving the orbit determination and thus the terrestrial origin. For scale determination, we will examine the impact of phase center models on GPS orbit determination. Preliminary results suggest that the orbit determinations are not very sensitive to phase center models and thus external constraints on orbits might not effectively resolve the terrestrial scale estimates. We will also discuss possible methods of better calibrating GPS phase center models.

  5. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    PubMed

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers.

  6. Predicting antimicrobial peptides with improved accuracy by incorporating the compositional, physico-chemical and structural features into Chou's general PseAAC.

    PubMed

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Saini, Varsha; Rao, Atmakuri Ramakrishna

    2017-02-13

    Antimicrobial peptides (AMPs) are important components of the innate immune system that have been found to be effective against disease-causing pathogens. Identification of AMPs through wet-lab experiments is expensive. Therefore, the development of an efficient computational tool is essential to identify the best candidate AMPs prior to in vitro experimentation. In this study, we made an attempt to develop a support vector machine (SVM) based computational approach for the prediction of AMPs with improved accuracy. Initially, compositional, physico-chemical and structural features of the peptides were generated and subsequently used as input to the SVM for prediction of AMPs. The proposed approach achieved higher accuracy than several existing approaches when compared on a benchmark dataset. Based on the proposed approach, an online prediction server, iAMPpred, has also been developed to help the scientific community in predicting AMPs, which is freely accessible at http://cabgrid.res.in:8080/amppred/. The proposed approach is believed to supplement the tools and techniques that have been developed in the past for the prediction of AMPs.
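
    A scaled-down illustration of the compositional part of this approach (amino-acid composition features fed to an SVM) is sketched below with scikit-learn; the toy peptides and labels are placeholders, and the actual iAMPpred model uses a much richer feature set:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def aa_composition(peptide):
        """Fraction of each of the 20 standard amino acids in the peptide."""
        peptide = peptide.upper()
        return np.array([peptide.count(a) / len(peptide) for a in AA])

    # Toy training set: labels 1 = AMP, 0 = non-AMP (illustrative only)
    peptides = ["GLFDIIKKIAESF", "KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK",
                "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "ACDEFGHIKLMNPQRSTVWY"]
    labels = [1, 1, 0, 0]

    X = np.vstack([aa_composition(p) for p in peptides])
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(X, labels)
    print(clf.predict([aa_composition("GIGKFLHSAKKFGKAFVGEIMNS")]))
    ```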

  7. Predicting antimicrobial peptides with improved accuracy by incorporating the compositional, physico-chemical and structural features into Chou’s general PseAAC

    PubMed Central

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Saini, Varsha; Rao, Atmakuri Ramakrishna

    2017-01-01

    Antimicrobial peptides (AMPs) are important components of the innate immune system that have been found to be effective against disease-causing pathogens. Identification of AMPs through wet-lab experiments is expensive. Therefore, the development of an efficient computational tool is essential to identify the best candidate AMPs prior to in vitro experimentation. In this study, we made an attempt to develop a support vector machine (SVM) based computational approach for the prediction of AMPs with improved accuracy. Initially, compositional, physico-chemical and structural features of the peptides were generated and subsequently used as input to the SVM for prediction of AMPs. The proposed approach achieved higher accuracy than several existing approaches when compared on a benchmark dataset. Based on the proposed approach, an online prediction server, iAMPpred, has also been developed to help the scientific community in predicting AMPs, which is freely accessible at http://cabgrid.res.in:8080/amppred/. The proposed approach is believed to supplement the tools and techniques that have been developed in the past for the prediction of AMPs. PMID:28205576

  8. Using Mini-RF To Improve Accuracy Of Lunar TiO2 Maps

    NASA Astrophysics Data System (ADS)

    Gillis-Davis, J. J.; Bussey, B.; Trang, D.; Carter, L. M.; Williams, K. K.

    2010-12-01

    The Mini-Radio Frequency (Mini-RF) instrument is on board the NASA Lunar Reconnaissance Orbiter, which has been in orbit around the Moon since June 2009. Mini-RF is capable of imaging in two bands, X-band (4.2-cm) and S-band (12.6-cm), at 150 m and 30 m (Zoom Mode) resolution respectively, and with an illumination incidence angle of ~48 degrees. The majority of its observations to date have been obtained in S-Band Zoom Mode. Mini-RF was designed to map the permanently dark areas of the lunar poles and characterize the nature of the deposits. In addition, to aid global analyses of mare composition, Mini-RF has acquired coverage for most of the maria. Mini-RF observations of geologic targets of interest are used to improve TiO2 mapping within the lunar maria. Targets of interest include basalt flows that were estimated to contain high-TiO2 compositions based on Clementine spectral reflectance data but had low-TiO2 compositions as measured by Lunar Prospector neutron spectrometer data (LPNS). Visible and near infrared spectral characteristics of lunar soils are controlled by multiple competing factors. Ilmenite, the principal oxide phase, is dark and spectrally neutral relative to the spectrally red mature lunar mare soils; thus causing soils to become spectrally bluer as ilmenite content increases. However, large uncertainties in ultraviolet-visible (UVVIS) based estimates of TiO2 are revealed when comparing LPNS TiO2 and Clementine UVVIS 415/750 ratio. Prime culprits identified as causing this effect are differential agglutinate formation on high-FeO flows, TiO2 in phases other than ilmenite, and surface contamination by highlands materials. Similar to UVVIS spectra, depolarized radar return is found to anticorrelate with titanium abundance - with higher TiO2 abundance leading to lower radar returns. The source of the titanium affecting the radar return is thought to be ilmenite. The possibility that the radar loss tangent is modulated by other mineralogic

  9. Improving accuracy in shallow-landslide susceptibility analyses at regional scale

    NASA Astrophysics Data System (ADS)

    Iovine, Giulio G. R.; Rago, Valeria; Frustaci, Francesco; Bruno, Claudia; Giordano, Stefania; Muto, Francesco; Gariano, Stefano L.; Pellegrino, Annamaria D.; Conforti, Massimo; Pascale, Stefania; Distilo, Daniela; Basile, Vincenzo; Soleri, Sergio; Terranova, Oreste G.

    2015-04-01

    Calabria (southern Italy) is particularly exposed to geo-hydrological risk. In the last decades, slope instabilities, mainly related to rainfall-induced landslides, have repeatedly affected its territory. Among these, shallow landslides, characterized by abrupt onset and extremely rapid movements, are some of the most destructive and dangerous phenomena for people and infrastructure. In this study, a susceptibility analysis for shallow landslides has been performed by refining a method recently applied in Costa Viola - central Calabria (Iovine et al., 2014), focusing only on landslide source activations (regardless of their possible evolution as debris flows). A multivariate approach has been applied to estimate the presence/absence of sources, based on linear statistical relationships with a set of causal variables. The classes of the numeric causal variables have been determined by means of a data clustering method, designed to determine the best arrangement. A multi-temporal inventory map of sources, mainly obtained from interpretation of air photographs taken in 1954-1955 and in 2000, has been adopted to select the training and the validation sets. Due to the wide extent of the territory, the analysis has been performed iteratively with a step-by-step decreasing cell-size approach, adopting greater spatial resolutions and thematic details (e.g. lithology, land-use, soil, morphometry, rainfall) for highly susceptible sectors. Through a sensitivity analysis, the weight of the considered factors in predisposing shallow landslides has been evaluated. The best set of variables has been identified by iteratively including one variable at a time and comparing the results in terms of performance. Furthermore, susceptibility evaluations obtained through logistic regression have been compared to those obtained by applying neural networks. The obtained results may be useful to improve land utilization planning, and to select proper mitigation measures in shallow

  10. Improvement of Accuracy of Proper Motions of Hipparcos Catalogue Stars Using Optical Latitude Observations

    NASA Astrophysics Data System (ADS)

    Damljanovic, G.

    2009-09-01

    ), Mizusawa (MZL FZT), Tuorla -- Turku (TT VZT), Mizusawa (MZP and MZQ PZT), Mount Stromlo (MS PZT), Ondřejov (OJP PZT), Punta Indio (PIP PZT), Richmond (RCP and RCQ PZT) and Washington (WA, W and WGQ PZT). The task is to improve the proper motions in declination of the observed Hipparcos stars. The original method was developed, and it consists of removing from the instantaneous observed latitudes all known effects (polar motion and some local instrumental errors). The corrected latitudes are then used to calculate the corrections of the Hipparcos proper motions in declination (Damljanović 2005). The Least Squares Method (LSM) is used with the linear model. We compared the calculated results with ARIHIP and EOC-2 data, and found a good agreement. The newly obtained values of proper motions in declination are substantially more precise than those of the Hipparcos Catalogue. It is because the time interval covered by the latitude observations (tens of years) is much longer than the Hipparcos one (less than four years), and because of the great number of observations made during this interval (Damljanović et al. 2006). Our method is completely different from the one used to compute the EOC-2 catalogue (Vondrák 2004). It was also an almost independent check of the proper motions of EOC-2. The catalogue EOC-2 is used in this thesis to distinguish the corrections of the two stars of a pair observed by using the Horrebow -- Talcott method. The difference between the two proper motions is constrained by the difference in the EOC-2 and Hipparcos catalogues (Damljanović and Pejović 2006). The main result of the thesis is the catalogue of proper motions in declination of 2347 Hipparcos stars.
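
    The least-squares step described here amounts to fitting a straight line to the corrected latitude (declination) residuals as a function of time, the slope being the correction to the catalogue proper motion in declination. A minimal sketch with synthetic residuals (not the observatory data):

    ```python
    import numpy as np

    def proper_motion_correction(epochs_yr, dec_residuals_mas):
        """Linear least-squares fit to declination residuals over time.

        epochs_yr         : observation epochs (decimal years)
        dec_residuals_mas : corrected latitude residuals (milliarcseconds)
        Returns the slope in mas/yr, i.e. the correction to the catalogue proper
        motion in declination, and its formal uncertainty.
        """
        t = np.asarray(epochs_yr, float) - np.mean(epochs_yr)
        y = np.asarray(dec_residuals_mas, float)
        coeffs, cov = np.polyfit(t, y, deg=1, cov=True)
        return coeffs[0], np.sqrt(cov[0, 0])

    # Synthetic 40-year series of annual residuals containing a 0.5 mas/yr drift
    t = np.arange(1950.0, 1990.0)
    resid = 0.5 * (t - t.mean()) + np.random.default_rng(2).normal(0, 3, t.size)
    print(proper_motion_correction(t, resid))
    ```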

  11. The Stories Clinicians Tell: Achieving High Reliability and Improving Patient Safety

    PubMed Central

    Cohen, Daniel L; Stewart, Kevin O

    2016-01-01

    The patient safety movement has been deeply affected by the stories patients have shared that have identified numerous opportunities for improvements in safety. These stories have identified system and/or human inefficiencies or dysfunctions, possibly even failures, often resulting in patient harm. Although patients’ stories tell us much, less commonly heard are the stories of clinicians and how their personal observations regarding the environments they work in and the circumstances and pressures under which they work may degrade patient safety and lead to harm. If the health care industry is to function like a high-reliability industry, to improve its processes and achieve the outcomes that patients rightly deserve, then leaders and managers must seek and value input from those on the front lines—both clinicians and patients. Stories from clinicians provided in this article address themes that include incident identification, disclosure and transparency, just culture, the impact of clinical workload pressures, human factors liabilities, clinicians as secondary victims, the impact of disruptive and punitive behaviors, factors affecting professional morale, and personal failings. PMID:26580146

  12. A New-Generation Continuous Glucose Monitoring System: Improved Accuracy and Reliability Compared with a Previous-Generation System

    PubMed Central

    Bailey, Timothy; Watkins, Elaine; Liljenquist, David; Price, David; Nakamura, Katherine; Boock, Robert; Peyser, Thomas

    2013-01-01

    Background: Use of continuous glucose monitoring (CGM) systems can improve glycemic control, but widespread adoption of CGM utilization has been limited, in part because of real and perceived problems with accuracy and reliability. This study compared accuracy and performance metrics for a new-generation CGM system with those of a previous-generation device. Subjects and Methods: Subjects were enrolled in a 7-day, open-label, multicenter pivotal study. Sensor readings were compared with venous YSI measurements (blood glucose analyzer from YSI Inc., Yellow Springs, OH) every 15 min (±5 min) during in-clinic visits. The aggregate and individual sensor accuracy and reliability of a new CGM system, the Dexcom® (San Diego, CA) G4™ PLATINUM (DG4P), were compared with those of the previous CGM system, the Dexcom SEVEN® PLUS (DSP). Results: Both study design and subject characteristics were similar. The aggregate mean absolute relative difference (MARD) for DG4P was 13% compared with 16% for DSP (P<0.0001), and 82% of DG4P readings were within ±20 mg/dL (for YSI ≤80 mg/dL) or 20% of YSI values (for YSI >80 mg/dL) compared with 76% for DSP (P<0.001). Ninety percent of the DG4P sensors had an individual MARD ≤20% compared with only 76% of DSP sensors (P=0.015). Half of DG4P sensors had a MARD less than 12.5% compared with 14% for the DSP sensors (P=0.028). The mean absolute difference for biochemical hypoglycemia (YSI <70 mg/dL) for DG4P was 11 mg/dL compared with 16 mg/dL for DSP (P<0.001). Conclusions: The performance of DG4P was significantly improved compared with that of DSP, which may increase routine clinical use of CGM and improve patient outcomes. PMID:23777402
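
    The two headline accuracy metrics (MARD and the share of readings within ±20 mg/dL or ±20% of the YSI reference) are straightforward to compute; a minimal sketch with hypothetical paired readings:

    ```python
    import numpy as np

    def cgm_accuracy(sensor, reference):
        """Mean absolute relative difference (MARD, %) and the share of sensor readings
        within ±20 mg/dL (reference <= 80 mg/dL) or ±20% (reference > 80 mg/dL)."""
        sensor = np.asarray(sensor, float)
        reference = np.asarray(reference, float)
        mard = 100.0 * np.mean(np.abs(sensor - reference) / reference)
        low = reference <= 80
        within = np.where(low,
                          np.abs(sensor - reference) <= 20,
                          np.abs(sensor - reference) <= 0.20 * reference)
        return mard, 100.0 * np.mean(within)

    # Hypothetical paired sensor/YSI readings (mg/dL)
    sensor = [72, 110, 145, 210, 60, 95]
    ysi = [80, 100, 150, 200, 55, 105]
    print(cgm_accuracy(sensor, ysi))
    ```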

  13. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    PubMed Central

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Tie points, one of the important factors in improving the accuracy of InSAR image co-registration, are reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie point matching, and the measurement of interferogram quality. PMID:22399966

  14. EEG activity during movement planning encodes upcoming peak speed and acceleration and improves the accuracy in predicting hand kinematics.

    PubMed

    Yang, Lingling; Leung, Howard; Plank, Markus; Snider, Joe; Poizner, Howard

    2015-01-01

    The relationship between movement kinematics and human brain activity is an important and fundamental question for the development of neural prostheses. The peak velocity and the peak acceleration best reflect feedforward-type movement; thus, they are worth investigating further. Most related studies have focused on the correlation between kinematics and brain activity during movement execution or imagery. However, human movement is the result of a motor planning phase as well as an execution phase, and researchers have demonstrated, using grand-average analysis, that statistical correlations exist between EEG activity during motor planning and the peak velocity and peak acceleration. In this paper, we examined whether these correlations could be recovered in trial-to-trial decoding despite the low signal-to-noise ratio of EEG activity. The alpha and beta powers from the movement planning phase were combined with the alpha and beta powers from the movement execution phase to predict the peak tangential speed and acceleration. The results showed that EEG activity from the motor planning phase could also predict the peak speed and the peak acceleration with reasonable accuracy. Furthermore, the decoding accuracy of the peak speed and the peak acceleration could both be improved by combining band powers from the motor planning phase with band powers from the movement execution phase.
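
    The decoding step reduces to regressing peak speed (or acceleration) on alpha- and beta-band powers taken from both the planning and execution windows. The sketch below uses Welch band powers and ridge regression as stand-ins for whatever feature extraction and estimator the authors used; all signals are synthetic:

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import Ridge

    def band_power(eeg, fs, fmin, fmax):
        """Average power of a single-channel EEG segment in [fmin, fmax] Hz (Welch PSD)."""
        f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 256))
        return psd[(f >= fmin) & (f <= fmax)].mean()

    rng = np.random.default_rng(3)
    fs, n_trials = 256, 120
    X, y = [], []
    for _ in range(n_trials):
        plan = rng.standard_normal(fs)    # 1 s of planning-phase EEG (synthetic)
        execu = rng.standard_normal(fs)   # 1 s of execution-phase EEG (synthetic)
        feats = [band_power(seg, fs, lo, hi)
                 for seg in (plan, execu) for lo, hi in ((8, 13), (13, 30))]
        X.append(feats)
        y.append(rng.uniform(0.5, 2.0))   # peak tangential speed (m/s), synthetic target

    model = Ridge(alpha=1.0).fit(np.array(X), np.array(y))
    print(model.predict(np.array(X[:3])))
    ```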

  15. NREL Evaluates Thermal Performance of Uninsulated Walls to Improve Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    NREL researchers discover ways to increase the accuracy of building energy simulation tools to improve predictions of potential energy savings in homes. Uninsulated walls are typical in older U.S. homes where the wall cavities were not insulated during construction or where the insulating material has settled. Researchers at the National Renewable Energy Laboratory (NREL) are investigating ways to more accurately calculate heat transfer through building enclosures to verify the benefit of energy efficiency upgrades that reduce energy use in older homes. In this study, scientists used computational fluid dynamics (CFD) analysis to calculate the energy loss/gain through building walls and visualize different heat transfer regimes within the uninsulated cavities. The effects of ambient outdoor temperature, the radiative properties of building materials, insulation levels, and the temperature dependence of conduction through framing members were considered. The research showed that the temperature dependence of conduction through framing members dominated the differences between this study and previous results - an effect not accounted for in existing building energy simulation tools. The study provides correlations for the resistance of the uninsulated assemblies that can be implemented in building simulation tools to increase the accuracy of energy use estimates in older homes, which are currently over-predicted.

  16. Improving production of 11C to achieve high specific labelled radiopharmaceuticals

    NASA Astrophysics Data System (ADS)

    Savio, E.; García, O.; Trindade, V.; Buccino, P.; Giglio, J.; Balter, H.; Engler, H.

    2012-12-01

    Molecular imaging is usually based on the recognition by radiopharmaceuticals of specific sites that are present in limited number or density in cells or biological tissues. It is therefore of high importance to label radiopharmaceuticals with high specific activity in order to achieve a high target to non-target ratio. Carbon dioxide (CO2) from the air, containing 98.88% 12C and 1.12% 13C, competes with the 11CO2 produced at the cyclotron. In order to minimize the presence of these isotopes during the irradiation, transfer and synthesis of radiopharmaceuticals labelled with 11C, we applied the following method: prior to irradiation, the target was flushed 3-4 times with He (5.7) as a cold cleaning, followed by a similar conditioning of the line from the target up to the module, and finally a hot cleaning to desorb 12CO2 and 13CO2, performed by irradiating for 1 min at 5 µA (three times). In addition, with the aim of improving the quality of the gases in the target and in the modules, water traps (Agilent) were incorporated into the inlet lines of the target and modules. The target conditioning process (cold and hot flushings) and the line cleaning, which allow desorption of unlabelled CO2, together with the increased gas purity during irradiation and synthesis, were critical parameters that made it possible to obtain 11C-radiopharmaceuticals with high specific activity, particularly in the case of 11C-PIB.

  17. Using radar ground-truth to validate and improve the location accuracy of a lightning direction-finding network

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.

    1989-01-01

    A technique is described in which isolated radar echoes associated with clusters of lightning strikes are used to validate and improve the location accuracy of a lightning-direction-finding network. Using this technique, site errors of a magnetic direction-finding network for locating lightning strikes to ground were accurately determined. The technique offers advantages over existing techniques in that large sample sizes are readily attainable over a broad area on a regular basis; the technique can also provide additional constraints to redundant data methods such as that described by Orville (1987). Since most lightning strike networks have either partial or full weather radar coverage, the technique is practical for all but a few users.

  18. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal

    SciTech Connect

    Zhou Zhongxing; Gao Feng; Zhao Huijuan; Zhang Lixin

    2011-03-15

    Purpose: Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improving the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts are superimposed on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. Methods: In order to achieve the optimal background detrending technique for NPS estimation, four methods of artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was applied separately within the original region of interest or within its partitioned sub-blocks for all four methods. The performance of the background detrending techniques was compared in terms of the statistical variance of the NPS results and the suppression of the low-frequency systematic rise. Results: Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was the most effective in suppressing the low-frequency systematic rise and reducing the variance of the NPS estimate for the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance above the low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering
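
    The most effective variant reported, subtracting a 2-D second-order polynomial fit from each region of interest before Fourier analysis, can be sketched as follows; the flat-field ROI and pixel pitch are hypothetical, and the NPS shown is a single-realization estimate rather than an ensemble average:

    ```python
    import numpy as np

    def detrend_second_order(roi):
        """Subtract a least-squares 2-D second-order polynomial surface from an ROI."""
        ny, nx = roi.shape
        y, x = np.mgrid[0:ny, 0:nx]
        x, y, z = x.ravel(), y.ravel(), roi.ravel().astype(float)
        A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return roi - (A @ coeffs).reshape(ny, nx)

    def nps_2d(roi, pixel_pitch_mm):
        """2-D noise power spectrum of one detrended ROI (single-realization estimate)."""
        resid = detrend_second_order(roi)
        ny, nx = resid.shape
        return (np.abs(np.fft.fft2(resid)) ** 2) * pixel_pitch_mm ** 2 / (nx * ny)

    roi = np.random.default_rng(4).normal(1000.0, 10.0, (128, 128))  # synthetic flat field
    print(nps_2d(roi, pixel_pitch_mm=0.143).mean())
    ```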

  19. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    SciTech Connect

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-02-15

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  20. Systemic inaccuracies in the National Surgical Quality Improvement Program database: Implications for accuracy and validity for neurosurgery outcomes research.

    PubMed

    Rolston, John D; Han, Seunggu J; Chang, Edward F

    2017-03-01

    The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) provides a rich database of North American surgical procedures and their complications. Yet no external source has validated the accuracy of the information within this database. Drawing on records from the 2006 to 2013 NSQIP database, we used two methods to identify errors: (1) mismatches between the Current Procedural Terminology (CPT) code used to identify the surgical procedure and the International Classification of Diseases (ICD-9) post-operative diagnosis, i.e., a diagnosis that is incompatible with a given procedure; and (2) mismatches between the primary anesthetic and the CPT code, i.e., anesthesia not indicated for a particular procedure. Analyzing data for movement disorders, epilepsy, and tumor resection, we found evidence of CPT code and postoperative diagnosis mismatches in 0.4-100% of cases, depending on the CPT code examined. When analyzing anesthetic data from brain tumor, epilepsy, trauma, and spine surgery, we found evidence of miscoded anesthesia in 0.1-0.8% of cases. National databases like NSQIP are an important tool for quality improvement. Yet all databases are subject to errors, and measures of internal consistency show that errors affect up to 100% of case records for certain procedures in NSQIP. Steps should be taken to improve data collection on the front end of NSQIP, and also to ensure that future studies with NSQIP take steps to exclude erroneous cases from analysis.
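
    The first consistency check (flagging records whose post-operative ICD-9 diagnosis is incompatible with the CPT procedure code) can be expressed as a table lookup; the compatibility table below is a tiny illustrative stub, not the rule set used in the study:

    ```python
    # Illustrative-only compatibility table: CPT procedure code -> ICD-9 prefixes that are
    # plausible postoperative diagnoses for it (hypothetical and deliberately incomplete).
    COMPATIBLE_ICD9 = {
        "61510": ("191", "225.0", "237.5"),   # craniotomy for tumor -> brain neoplasm codes
        "61863": ("332", "333.1"),            # stereotactic DBS electrode -> movement disorders
    }

    def flag_mismatches(records):
        """Return records whose post-operative diagnosis does not match the CPT code."""
        flagged = []
        for rec in records:
            allowed = COMPATIBLE_ICD9.get(rec["cpt"], ())
            if allowed and not rec["icd9"].startswith(allowed):
                flagged.append(rec)   # internally inconsistent record
        return flagged

    cases = [{"cpt": "61510", "icd9": "191.9"},   # consistent
             {"cpt": "61863", "icd9": "724.02"}]  # spinal stenosis dx with a DBS code: flagged
    print(flag_mismatches(cases))
    ```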