Science.gov

Sample records for achieve improved accuracy

  1. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. 
Roadblocks to this vision of integration and interoperability include ethical, legal

  2. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

Improvements made in processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in classification of pixels, resulting in significantly reduced processing time. Improved system realizes cost reduction factor of 20 or more.

  3. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  4. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  5. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

The remote sensing signal which reaches the sensor on-board the satellite is the complex aggregation of signals (in an agriculture field, for example) from soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrition content, inorganic content, water content, etc.), plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status and atmospheric effects, and from this we want to find, say, characteristics of vegetation. If the sensor on-board the satellite makes measurements in n bands (n of n*1 dimension) and the number of classes in an image is c (f of c*1 dimension), then considering linear mixture modeling the pixel classification problem could be written as n = m*f + e, where m is the transformation matrix of (n*c) dimension and e represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions for such a problem are many. Thus, getting back individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this puts a limit on the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint mostly practiced in solving such a situation, which suffers from the handicaps of an assumed Gaussian distribution and the random nature of pixels (in fact there is high auto-correlation among the pixels of a specific class, and further high auto-correlation among the pixels in sub-classes where the homogeneity would be high among pixels). Due to this, achieving very high accuracy in the classification of remote sensing images is not a straight proposition. With the availability of GIS for the area under study (i) a priori probability for different classes could be assigned to the ML classifier in more realistic terms and (ii) the purity of training sets for different thematic classes could be better ascertained. 
To what extent this could improve the accuracy of classification in ML classifier
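The linear mixture model in this abstract, n = m*f + e, can be sketched as a least-squares inversion. This is a minimal illustration of why the problem is ill-posed rather than the authors' method; the mixing matrix and class fractions below are invented.

```python
import numpy as np

# Illustrative mixing matrix m (n bands x c classes): each column is a
# pure spectral signature of one class; the numbers are made up.
m = np.array([[0.8, 0.1, 0.3],
              [0.2, 0.7, 0.4],
              [0.1, 0.5, 0.9],
              [0.6, 0.2, 0.1]])          # n = 4 bands, c = 3 classes

f_true = np.array([0.5, 0.3, 0.2])       # true class fractions within a pixel
rng = np.random.default_rng(0)
n_obs = m @ f_true + rng.normal(0, 0.01, size=4)   # observed pixel + noise e

# Least-squares inversion: one of the many possible solutions the
# abstract alludes to, with no ML or GIS prior applied.
f_est, *_ = np.linalg.lstsq(m, n_obs, rcond=None)
print(np.round(f_est, 2))
```

With noisy observations the recovered fractions only approximate f_true, which is the accuracy limit the abstract attributes to the inverse problem.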

  6. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision is a critical issue for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, besides others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. 
In addition, standards for accuracy verifications are presented and demonstrated by practical examples

  7. Phase space correlation to improve detection accuracy.

    PubMed

    Carroll, T L; Rachford, F J

    2009-09-01

    The standard method used for detecting signals in radar or sonar is cross correlation. The accuracy of the detection with cross correlation is limited by the bandwidth of the signals. We show that by calculating the cross correlation based on points that are nearby in phase space rather than points that are simultaneous in time, the detection accuracy is improved. The phase space correlation technique works for some standard radar signals, but it is especially well suited to chaotic signals because trajectories that are adjacent in phase space move apart from each other at an exponential rate.

  8. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  9. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

As feature size shrinkage in semiconductor devices progresses, process fluctuation, especially focus, strongly affects device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors are due to wafers, exposure tools, reticles, QCs, and so on. Few studies have been performed to minimize the measurement errors of the auto focus (AF) sensors of the exposure tool, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor 1) has already been proposed; its basic principle is that the intensity of the diffracted light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can be easily measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interruptions caused by other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, where the shift of the PSG resist mark is measured by employing a critical dimension scanning electron microscope (CD-SEM) to measure the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  10. Can Judges Improve Academic Achievement?

    ERIC Educational Resources Information Center

    Greene, Jay P.; Trivitt, Julie R.

    2008-01-01

    Over the last 3 decades student achievement has remained essentially unchanged in the United States, but not for a lack of spending. Over the same period a myriad of education reforms have been suggested and per-pupil spending has more than doubled. Since the 1990s the education reform attempts have frequently included judicial decisions to revise…

  11. Improving the accuracy of death certification

    PubMed Central

    Myers, K A; Farquhar, D R

    1998-01-01

    BACKGROUND: Population-based mortality statistics are derived from the information recorded on death certificates. This information is used for many important purposes, such as the development of public health programs and the allocation of health care resources. Although most physicians are confronted with the task of completing death certificates, many do not receive adequate training in this skill. Resulting inaccuracies in information undermine the quality of the data derived from death certificates. METHODS: An educational intervention was designed and implemented to improve internal medicine residents' accuracy in death certificate completion. A total of 229 death certificates (146 completed before and 83 completed after the intervention) were audited for major and minor errors, and the rates of errors before and after the intervention were compared. RESULTS: Major errors were identified on 32.9% of the death certificates completed before the intervention, a rate comparable to previously reported rates for internal medicine services in teaching hospitals. Following the intervention the major error rate decreased to 15.7% (p = 0.01). The reduction in the major error rate was accounted for by significant reductions in the rate of listing of mechanism of death without a legitimate underlying cause of death (15.8% v. 4.8%) (p = 0.01) and the rate of improper sequencing of death certificate information (15.8% v. 6.0%) (p = 0.03). INTERPRETATION: Errors are common in the completion of death certificates in the inpatient teaching hospital setting. The accuracy of death certification can be improved with the implementation of a simple educational intervention. PMID:9614825

  12. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To understand the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
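The trilateration step mentioned in the abstract can be sketched as a linearized least-squares solve; subtracting the first range equation from the others removes the quadratic term in the unknown position. The transmitter layout and ranges below are invented for illustration, not taken from the IMES experiments.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Linearized least-squares trilateration: subtracting the first
    range equation from the rest leaves a linear system in the position."""
    a0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical transmitter positions (metres) and noiseless ranges
# to a receiver at (2, 3).
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([2.0, 3.0])
ranges = np.linalg.norm(anchors - target, axis=1)
print(np.round(trilaterate(anchors, ranges), 3))
```

With noisy ranges the same solve returns a least-squares estimate, which is what a filter such as the EKF in the abstract would then refine over time.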

  13. Simultaneously improving the sensitivity and absolute accuracy of CPT magnetometer.

    PubMed

    Liang, Shang-Qing; Yang, Guo-Qing; Xu, Yun-Fei; Lin, Qiang; Liu, Zhi-Heng; Chen, Zheng-Xiang

    2014-03-24

A new method to simultaneously improve the sensitivity and absolute accuracy of a coherent population trapping (CPT) magnetometer, based on the differential detection method, is presented. Two modulated optical beams with orthogonal circular polarizations are applied; in one of them, two magnetic resonances are excited simultaneously by modulating a 3.4 GHz microwave with the Larmor frequency. When a microwave frequency shift is introduced, the difference in the power transmitted through the cell in each beam shows a low-noise resonance. A sensitivity of 2 pT/√Hz @ 10 Hz is achieved. Meanwhile, an absolute accuracy of ±0.5 nT within the magnetic field ranging from 20000 nT to 100000 nT is realized.

  14. Improving the accuracy of walking piezo motors.

    PubMed

    den Heijer, M; Fokkema, V; Saedi, A; Schakel, P; Rost, M J

    2014-05-01

    Many application areas require ultraprecise, stiff, and compact actuator systems with a high positioning resolution in combination with a large range as well as a high holding and pushing force. One promising solution to meet these conflicting requirements is a walking piezo motor that works with two pairs of piezo elements such that the movement is taken over by one pair, once the other pair reaches its maximum travel distance. A resolution in the pm-range can be achieved, if operating the motor within the travel range of one piezo pair. However, applying the typical walking drive signals, we measure jumps in the displacement up to 2.4 μm, when the movement is given over from one piezo pair to the other. We analyze the reason for these large jumps and propose improved drive signals. The implementation of our new drive signals reduces the jumps to less than 42 nm and makes the motor ideally suitable to operate as a coarse approach motor in an ultra-high vacuum scanning tunneling microscope. The rigidity of the motor is reflected in its high pushing force of 6.4 N.

  15. Resist development modeling for OPC accuracy improvement

    NASA Astrophysics Data System (ADS)

    Fan, Yongfa; Zavyalova, Lena; Zhang, Yunqiang; Zhang, Charlie; Lucas, Kevin; Falch, Brad; Croffie, Ebo; Li, Jianliang; Melvin, Lawrence; Ward, Brian

    2009-03-01

in the same way that current model calibration is done. The method is validated with a rigorous lithography process simulation tool which is based on physical models to simulate and predict effects during the resist PEB and development process. Furthermore, an experimental lithographic process was modeled using this new methodology, showing significant improvement in modeling accuracy in comparison to a traditional model. Layout correction tests have shown that the new model form is equivalent to traditional model forms in terms of correction convergence and speed.

  16. Meteor orbit determination with improved accuracy

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria

    2015-08-01

Modern observational techniques make it possible to retrieve a meteor's trajectory and velocity with high accuracy. There has been a rapid rise in high-quality observational data accumulating yearly. This fact creates new challenges for solving the problem of meteor orbit determination. Currently, the traditional technique, based on including corrections to zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach relies on meteoroid trajectory correction using numerical integration of the equation of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we suggest a technique of meteor orbit determination based on strict coordinate transformation and integration of the differential equation of motion. We demonstrate the advantage of this method in comparison with the traditional technique. We provide results of calculations by different methods for real, recently occurred fireballs, as well as for simulated cases with a priori known retrieval parameters. Simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low-velocity meteoroids application of the traditional technique may lead to a dramatic loss of orbit precision (first of all, due to errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbations, allowing us to indicate quantitatively which factors have to be considered in orbit determination. In addition, the developed method includes analysis of observational error propagation based on strict covariance transition, which is also presented.
Acknowledgements. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197.
References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217
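Numerical integration of the equation of motion, the approach the abstract contrasts with the analytical Schiaparelli correction, can be sketched with a basic fourth-order Runge-Kutta step. Only the central Earth-gravity term is included here as an assumption for brevity; the paper's analysis also weighs further perturbations.

```python
import numpy as np

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def accel(r):
    """Point-mass gravitational acceleration toward Earth's centre."""
    return -MU_EARTH * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    """One fourth-order Runge-Kutta step of the equation of motion."""
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
    r_new = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
    v_new = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return r_new, v_new

# Sanity check on a circular low orbit: speed and radius stay constant.
r = np.array([6.771e6, 0.0, 0.0])                    # ~400 km altitude
v = np.array([0.0, np.sqrt(MU_EARTH / 6.771e6), 0.0])
for _ in range(600):
    r, v = rk4_step(r, v, 1.0)
print(round(np.linalg.norm(v), 1))
```

A trajectory correction scheme would integrate a state like this backwards along the observed atmospheric path before deriving orbital elements.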

  17. EGM improves speed, accuracy in gas measurement

    SciTech Connect

    Sqyres, M.

    1995-07-01

The natural gas industry's adoption of electronic gas measurement (EGM) as a way to increase speed and accuracy in obtaining measurement data also has created a need for an electronic data management system. These systems, if not properly designed and implemented, can potentially render the entire process useless. Therefore, it is essential that the system add functionality that complements the power of the hardware. With proper implementation, such a system will not only facilitate operations in today's fast-paced, post-FERC 636 environment, but also will establish a foundation for meeting tomorrow's measurement challenges. An effective EGM data editing software package can provide a suite of tools for accurate, timely data processing. This can be done in a structured, feature-rich, well-designed environment using a user-friendly graphical user interface (GUI). The program can include functions to perform the following tasks: import data; recognize, review, and correct anomalies; report; export; and provide advanced ad hoc query capabilities. Other considerations can include the developer's commitment, resources, and long-term strategy vis-à-vis EGM, as well as the industry's overall acceptance of the package.

  18. Monopulse feed improves radar tracking accuracy

    NASA Astrophysics Data System (ADS)

    Lee-Yow, Clancy; Ghosh, Subir

    1992-03-01

A monopulse antenna feed system based on a compact mode triplexer and a multimode corrugated horn is described. The system makes it possible to dramatically enhance the bandwidth and performance of space-fed array radars. Efficient microwave networking in the mode triplexer and a single aperture multimode corrugated horn provide independent control of RF excitation of the antenna aperture to achieve optimum distribution for the difference and sum beams simultaneously with a simple structure. The tracking accuracy is enhanced by the beam-dependent effective size of the aperture in the multimode horn.

  19. Generalized and Heuristic-Free Feature Construction for Improved Accuracy

    PubMed Central

    Fan, Wei; Zhong, Erheng; Peng, Jing; Verscheure, Olivier; Zhang, Kun; Ren, Jiangtao; Yan, Rong; Yang, Qiang

    2010-01-01

State-of-the-art learning algorithms accept data in feature vector format as input. Examples belonging to different classes may not always be easy to separate in the original feature space. One may ask: can transformation of existing features into a new space reveal significant discriminative information not obvious in the original space? First, since there can be an infinite number of ways to extend features, it is impractical to enumerate them and then perform feature selection. Second, evaluation of discriminative power on the complete dataset is not always optimal. This is because features highly discriminative on a subset of examples may not necessarily be significant when evaluated on the entire dataset. Third, feature construction ought to be automated and general, such that it doesn't require domain knowledge and its accuracy improvement is maintained over a large number of classification algorithms. In this paper, we propose a framework to address these problems through the following steps: (1) divide-conquer to avoid exhaustive enumeration; (2) local feature construction and evaluation within subspaces of examples where local error is still high and the features constructed thus far still do not predict well; (3) weighting-rule-based search that is domain-knowledge free and has a provable performance guarantee. Empirical studies indicate that significant improvement (as much as 9% in accuracy and 28% in AUC) is achieved using the newly constructed features over a variety of inductive learners evaluated against a number of balanced, skewed and high-dimensional datasets. Software and datasets are available from the authors. PMID:21544257

  20. Improving Student Achievement Using Expert Learning Systems

    ERIC Educational Resources Information Center

    Green, Ronny; Smith, Bob; Leech, Don

    2004-01-01

    Both educators and the public are demanding improvements in student achievement and school performance. However, students meeting the highest college admission standards are increasingly selecting fields of study other than teaching. How can we increase teacher competence when many of our brightest teacher prospects are going into other fields?…

  1. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of the positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss-Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
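The Yule-Walker fit and p-order prediction described above can be sketched as follows. The solver and the AR(1)-style position trace are illustrative assumptions, not the paper's vehicle datasets or code.

```python
import numpy as np

def yule_walker(x, p):
    """Estimate AR(p) coefficients from the sample autocorrelation
    by solving the Yule-Walker equations directly."""
    x = x - x.mean()
    r = np.array([np.dot(x[: len(x) - k], x[k:]) for k in range(p + 1)]) / len(x)
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def predict_next(x, coeffs):
    """Predict the next value from the past p values (p-order
    Gauss-Markov prediction)."""
    p = len(coeffs)
    return x.mean() + coeffs @ (x[-1 : -p - 1 : -1] - x.mean())

# Synthetic 1-D position-error trace with correlated successive errors.
rng = np.random.default_rng(2)
x = np.zeros(500)
for i in range(1, 500):
    x[i] = 0.9 * x[i - 1] + rng.normal(0, 0.1)

a = yule_walker(x, p=2)
print(np.round(a, 2), round(predict_next(x, a), 3))
```

Because the trace is generated as a first-order process, the fitted second coefficient comes out near zero, mirroring the paper's finding that a first-order model can already suffice.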

  3. Exemplar pediatric collaborative improvement networks: achieving results.

    PubMed

    Billett, Amy L; Colletti, Richard B; Mandel, Keith E; Miller, Marlene; Muething, Stephen E; Sharek, Paul J; Lannon, Carole M

    2013-06-01

    A number of pediatric collaborative improvement networks have demonstrated improved care and outcomes for children. Regionally, Cincinnati Children's Hospital Medical Center Physician Hospital Organization has sustained key asthma processes, substantially increased the percentage of their asthma population receiving "perfect care," and implemented an innovative pay-for-performance program with a large commercial payor based on asthma performance measures. The California Perinatal Quality Care Collaborative uses its outcomes database to improve care for infants in California NICUs. It has achieved reductions in central line-associated blood stream infections (CLABSI), increased breast-milk feeding rates at hospital discharge, and is now working to improve delivery room management. Solutions for Patient Safety (SPS) has achieved significant improvements in adverse drug events and surgical site infections across all 8 Ohio children's hospitals, with 7700 fewer children harmed and >$11.8 million in avoided costs. SPS is now expanding nationally, aiming to eliminate all events of serious harm at children's hospitals. National collaborative networks include ImproveCareNow, which aims to improve care and outcomes for children with inflammatory bowel disease. Reliable adherence to Model Care Guidelines has produced improved remission rates without using new medications and a significant increase in the proportion of Crohn disease patients not taking prednisone. Data-driven collaboratives of the Children's Hospital Association Quality Transformation Network initially focused on CLABSI in PICUs. By September 2011, they had prevented an estimated 2964 CLABSI, saving 355 lives and $103,722,423. Subsequent improvement efforts include CLABSI reductions in additional settings and populations.

  4. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  5. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a number line.…

  6. Improving Student Achievement in Science. Based on the "Handbook of Research on Improving Student Achievement."

    ERIC Educational Resources Information Center

    Educational Research Service, Arlington, VA.

    This booklet summarizes the science chapter from the "Handbook of Research on Improving Student Achievement," a report sponsored by the Alliance for Curriculum Reform and also published by the Educational Research Service. The handbook is based on the idea that efforts to improve instruction must focus on the existing knowledge base on effective…

  7. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    NASA Astrophysics Data System (ADS)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

    Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems that transmute these nuclides. To meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating the forefront knowledge and techniques of these fields, the team aims to improve the accuracy of the data. The background and research plan of the AIMAC project are presented.

  8. Improving Student Achievement in Math and Science

    NASA Technical Reports Server (NTRS)

    Sullivan, Nancy G.; Hamsa, Irene Schulz; Heath, Panagiota; Perry, Robert; White, Stacy J.

    1998-01-01

    As the new millennium approaches, a long-anticipated reckoning for the education system of the United States is forthcoming. Years of school reform initiatives have not yielded the anticipated results. A particularly perplexing problem involves the lack of significant improvement of student achievement in math and science. Three "Partnership" projects represent collaborative efforts between Xavier University (XU) of Louisiana, Southern University of New Orleans (SUNO), Mississippi Valley State University (MVSU), and the National Aeronautics and Space Administration (NASA), Stennis Space Center (SSC), to enhance student achievement in math and science. These "Partnerships" are focused on students and teachers in federally designated rural and urban empowerment zones and enterprise communities. The major goals of the "Partnerships" include: (1) The identification and dissemination of key indices of success that account for high performance in math and science; (2) The education of pre-service and in-service secondary teachers in knowledge, skills, and competencies that enhance the instruction of high school math and science; (3) The development of faculty to enhance the quality of math and science courses in institutions of higher education; and (4) The incorporation of technology-based instruction in institutions of higher education. These goals will be achieved by the accomplishment of the following objectives: (1) Delineate significant "best practices" that are responsible for enhancing student outcomes in math and science; (2) Recruit and retain pre-service teachers with undergraduate degrees in Biology, Math, Chemistry, or Physics in a graduate program, culminating with a Master of Arts in Curriculum and Instruction; (3) Provide faculty workshops and opportunities for travel to professional meetings for dissemination of NASA resources information; (4) Implement methodologies and assessment procedures utilizing performance-based applications of higher order

  9. An improved method for determining force balance calibration accuracy

    NASA Astrophysics Data System (ADS)

    Ferris, Alice T.

    The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.

  10. Operators, service companies improve horizontal drilling accuracy offshore

    SciTech Connect

    Lyle, D.

    1996-04-01

    Continuing efforts to get more and better measurement and logging equipment closer to the bit improve accuracy in offshore drilling. Using current technology, both in measurement while drilling and logging while drilling, a target can consistently be hit within five vertical feet.

  11. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
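
The abstract above reports Fleiss' and Cohen's κ as agreement statistics. As a minimal illustration (with hypothetical judgment data, not the study's), unweighted Cohen's κ for two raters over the same items can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters judging the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labelled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's marginal counts
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical exposure-category judgments (categories 0-4)
a = [0, 1, 2, 2, 3, 4, 1, 0, 2, 3]
b = [0, 1, 2, 3, 3, 4, 1, 1, 2, 3]
print(round(cohens_kappa(a, b), 3))   # → 0.747
```

κ corrects raw percent agreement for the agreement expected by chance alone, which is why it is preferred over simple accuracy for categorical judgments like exposure ratings.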

  13. Improve accuracy for automatic acetabulum segmentation in CT images.

    PubMed

    Liu, Hao; Zhao, Jianning; Dai, Ning; Qian, Hongbo; Tang, Yuehong

    2014-01-01

    Separation of the femur head and acetabulum is one of the main difficulties in the diseased hip joint due to deformed shapes and extreme narrowness of the joint space. Improving segmentation accuracy is the key challenge for existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) design a surface iterative process to obtain an optimized surface; (2) replace ellipsoid fitting with two-phase quadric surface fitting; (3) bring in a normal matching method and an optimization region method to capture edge points for the fitted quadric surface. Furthermore, this paper uses in vivo CT data sets from 40 patients (79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabulums without severe malformation and less than 30% for acetabulums with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
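
The paper's two-phase fitting procedure is not spelled out in the abstract; as a hedged sketch of the core ingredient, a general quadric can be fitted to edge points by algebraic least squares (validated here on synthetic points sampled from a sphere, not on CT data):

```python
import numpy as np

def fit_quadric(points):
    """Algebraic least-squares fit of a general quadric
    A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z = 1."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    coeffs, *_ = np.linalg.lstsq(M, np.ones(len(points)), rcond=None)
    return coeffs

# Synthetic check: points on a sphere of radius 2 centred at the origin,
# whose true quadric is (1/4)(x^2 + y^2 + z^2) = 1.
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
pts = 2 * v / np.linalg.norm(v, axis=1, keepdims=True)
c = fit_quadric(pts)
print(np.round(c[:3], 3))   # → [0.25 0.25 0.25]
```

With noisy edge points the same solve gives the best algebraic fit, which is the natural starting point for the iterative surface optimization the abstract describes.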

  14. Do Charter Schools Improve Student Achievement?

    ERIC Educational Resources Information Center

    Clark, Melissa A.; Gleason, Philip M.; Tuttle, Christina Clark; Silverberg, Marsha K.

    2015-01-01

    This article presents findings from a lottery-based study of the impacts of a broad set of 33 charter middle schools across 13 states on student achievement. To estimate charter school impacts, we compare test score outcomes of students admitted to these schools through the randomized admissions lotteries with outcomes of applicants who were not…

  15. Improving Achievement through Problem-Based Learning

    ERIC Educational Resources Information Center

    Sungur, Semra; Tekkaya, Ceren; Geban, Omer

    2006-01-01

    In this study, the effect of problem-based learning on students' academic achievement and performance skills in a unit on the human excretory system was investigated. Sixty-one 10th grade students, from two full classes instructed by the same biology teacher, were involved in the study. Classes were randomly assigned as either the experimental or…

  16. Accuracy improvement in digital holographic microtomography by multiple numerical reconstructions

    NASA Astrophysics Data System (ADS)

    Ma, Xichao; Xiao, Wen; Pan, Feng

    2016-11-01

    In this paper, we describe a method to improve the accuracy in digital holographic microtomography (DHMT) for measurement of thick samples. Two key factors impairing the accuracy, the deficiency of depth of focus and the rotational error, are considered and addressed simultaneously. The hologram is propagated to a series of distances by multiple numerical reconstructions so as to extend the depth of focus. The correction of the rotational error, implemented by numerical refocusing and image realigning, is merged into the computational process. The method is validated by tomographic results of a four-core optical fiber and a large mode optical crystal fiber. A sample as thick as 258 μm is accurately reconstructed and the quantitative three-dimensional distribution of refractive index is demonstrated.
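
The multiple numerical reconstructions mentioned above are typically implemented by propagating the recorded hologram to a stack of distances. A minimal sketch using the standard angular spectrum method (placeholder field, and assumed wavelength and pixel pitch rather than the paper's parameters):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex field by distance z
    using the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Refocus a (placeholder) hologram to several depths spanning a thick sample
hologram = np.ones((256, 256), dtype=complex)
stack = [angular_spectrum(hologram, 633e-9, 2e-6, z)
         for z in np.linspace(0, 258e-6, 5)]
print(len(stack), stack[0].shape)   # → 5 (256, 256)
```

Selecting, at each depth, the best-focused reconstruction from such a stack is what extends the effective depth of focus for thick samples.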

  17. Improving ASM stepper alignment accuracy by alignment signal intensity simulation

    NASA Astrophysics Data System (ADS)

    Li, Gerald; Pushpala, Sagar M.; Bradford, Bradley; Peng, Zezhong; Gottipati, Mohan

    1993-08-01

    As photolithography technology advances into the submicron regime, the requirement for alignment accuracy becomes much tighter. Alignment accuracy is a function of the strength of the alignment signal. Therefore, a detailed alignment signal intensity simulation for the 0.8 micrometer EPROM poly-1 layer on an ASM stepper was performed, based on the process of record in the fab, to reduce misalignment and improve die yield. Oxide thickness variation did not have a significant impact on the alignment signal intensity. However, poly-1 thickness was the most important parameter affecting optical alignment. Real alignment intensity data versus resist thickness was collected on production wafers and showed good agreement with the simulated results. Similar results were obtained for an ONO dielectric layer at a different fab.

  18. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).
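
The abstract does not give the FSC formula; one widely used NDSI-based regression (the Salomonson-Appel form, assumed here purely for illustration) converts band reflectances to fractional snow cover:

```python
import numpy as np

def ndsi(green, swir):
    """Normalized Difference Snow Index from green and SWIR reflectances."""
    return (green - swir) / (green + swir)

def fractional_snow_cover(green, swir):
    """Fractional snow cover via the Salomonson-Appel regression
    FSC = -0.01 + 1.45 * NDSI, clipped to [0, 1]."""
    return np.clip(-0.01 + 1.45 * ndsi(green, swir), 0.0, 1.0)

green = np.array([0.8, 0.5, 0.2])   # hypothetical band reflectances
swir  = np.array([0.1, 0.3, 0.2])
print(np.round(fractional_snow_cover(green, swir), 2))   # → [1.   0.35 0.  ]
```

The commission/omission screens described in the abstract would act on top of such a per-pixel estimate, masking pixels whose spectral context makes a snow detection implausible.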

  19. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher measurement accuracy for routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can effectively enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The experimental results show that the novel method can significantly improve the measurement accuracy of resistance without increasing the system cost and circuit complexity, which is valuable for application in electronic instruments.
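
The oversampling gain claimed above can be illustrated with a toy simulation (hypothetical resistance and noise figures, not the paper's): averaging N readings of uncorrelated noise reduces the RMS error by √N, so 16× oversampling buys roughly a 4× accuracy gain, i.e. two extra bits of effective resolution:

```python
import numpy as np

rng = np.random.default_rng(1)

def measure(true_value, noise_rms, n_samples):
    """Average n_samples noisy readings; RMS error falls as 1/sqrt(n)."""
    samples = true_value + rng.normal(0.0, noise_rms, n_samples)
    return samples.mean()

true_r = 1000.0   # ohms, hypothetical resistance under test
noise = 2.0       # RMS reading noise in ohms

single = [measure(true_r, noise, 1) for _ in range(2000)]
oversampled = [measure(true_r, noise, 16) for _ in range(2000)]
rms = lambda xs: np.sqrt(np.mean((np.array(xs) - true_r) ** 2))
print(round(rms(single) / rms(oversampled), 1))   # ≈ 4, i.e. sqrt(16)
```

This only works when the noise is decorrelated between samples, which is one reason a varying (sawtooth) excitation helps: it keeps quantization error from repeating identically on every sample.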

  20. A Method to Achieve Large Volume, High Accuracy Photogrammetric Measurements Through the Use of an Actively Deformable Sensor Mounting Platform

    NASA Astrophysics Data System (ADS)

    Sargeant, B.; Robson, S.; Szigeti, E.; Richardson, P.; El-Nounu, A.; Rafla, M.

    2016-06-01

    When using any optical measurement system, one important factor to consider is the placement of the sensors in relation to the workpiece being measured. When making decisions on sensor placement, compromises are necessary in selecting the best placement based on the shape and size of the object of interest and the desired resolution and accuracy. One such compromise is the distance the sensors are placed from the measurement surface: a smaller distance gives a higher spatial resolution and local accuracy, while a greater distance reduces the number of measurements necessary to cover a large area, reducing the build-up of errors between measurements and increasing global accuracy. This paper proposes a photogrammetric approach whereby a number of sensors on a continuously flexible mobile platform obtain local measurements while the position of the sensors is determined by a 6DoF tracking solution, and the results are combined to give a single set of measurement data within a continuous global coordinate system. The ability of this approach to achieve both high-accuracy measurement and results over a large volume is then tested, and areas of weakness to be improved upon are identified.
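
Fusing local sensor measurements into a global frame via a tracked 6DoF pose reduces, in the simplest case, to a rigid-body transform per sensor station. A minimal sketch with hypothetical poses and a single shared corner point:

```python
import numpy as np

def pose_to_global(points_local, rotation, translation):
    """Map sensor-frame points into the global frame given the tracked
    6DoF pose (3x3 rotation R, translation t): p_global = R @ p_local + t."""
    return points_local @ rotation.T + translation

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Station 1 at the global origin sees a corner point
corner_station1 = np.array([[0.1, 0.0, 0.5]])
g1 = pose_to_global(corner_station1, rot_z(0.0), np.zeros(3))

# Station 2 observes the same point from a rotated, shifted pose;
# its local coordinates are constructed accordingly
R2, t2 = rot_z(np.pi / 2), np.array([0.1, -0.1, 0.0])
corner_station2 = (np.linalg.inv(R2) @ (corner_station1[0] - t2)).reshape(1, 3)
g2 = pose_to_global(corner_station2, R2, t2)

print(np.allclose(g1, g2))   # → True: both map to one global coordinate
```

In practice the tracking solution's pose uncertainty dominates the global error budget, which is exactly the trade-off between local and global accuracy the abstract describes.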

  1. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification, which permits mixing continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.
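
Item (1) above, prior probabilities in maximum likelihood classification, amounts to adding log priors derived from collateral data to the per-class log likelihoods before taking the argmax. A minimal sketch with made-up numbers:

```python
import numpy as np

def classify_with_priors(log_likelihoods, priors):
    """Maximum a posteriori class: argmax of log-likelihood + log-prior.
    Collateral data (e.g. terrain or elevation class) supplies the priors."""
    return int(np.argmax(log_likelihoods + np.log(priors)))

# Hypothetical pixel: spectral evidence slightly favours class 0,
# but collateral data says class 1 is far more common in this stratum.
log_lik = np.array([-2.0, -2.3])
flat_priors = np.array([0.5, 0.5])
collateral_priors = np.array([0.1, 0.9])

print(classify_with_priors(log_lik, flat_priors))        # → 0
print(classify_with_priors(log_lik, collateral_priors))  # → 1
```

The spectral model is untouched; the collateral layer only re-weights the decision, which is why this is a clean way to fold discrete map data into a continuous-variable classifier.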

  2. Improving College Effectiveness: Raising Quality and Achievement.

    ERIC Educational Resources Information Center

    Somekh, Bridget; Convery, Andy; Delaney, Jean; Fisher, Roy; Gray, John; Gunn, Stan; Henworth, Andrew; Powell, Loraine

    1999-01-01

    Work undertaken to improve the effectiveness of the United Kingdom's schools and further education (FE) sectors was identified and assessed in a study entailing four data collection methods: literature review; questionnaire administered to all FE college principals in England and Wales; expert seminar and face-to-face interviews with high-level…

  3. Improving Reading Achievements of Struggling Learners

    ERIC Educational Resources Information Center

    Houtveen, Thoni; van de Grift, Wim

    2012-01-01

    In The Netherlands, the percentage of struggling readers in the 1st year of formal reading instruction is about 25%. This problem inspired us to develop the Reading Acceleration Programme. To evaluate the effectiveness of this programme, a quasi-experiment is carried out. The teachers in the experimental group have been trained to improve their…

  4. Strategic School Funding for Improved Student Achievement

    ERIC Educational Resources Information Center

    Chambers, Jay G.; Brown, James R.; Levin, Jesse; Jubb, Steve; Harper, Dorothy; Tolleson, Ray; Manship, Karen

    2010-01-01

    This article features Strategic School Funding for Results (SSFR) project, a new joint initiative of the American Institutes for Research (AIR) and Pivot Learning Partners (PLP) aimed at improving school finance, human resources, and management systems in large urban school districts. The goal of the project is to develop and implement more…

  5. How Patients Can Improve the Accuracy of their Medical Records

    PubMed Central

    Dullabh, Prashila M.; Sondheimer, Norman K.; Katsh, Ethan; Evans, Michael A.

    2014-01-01

    Objectives: Assess (1) whether patients can improve their medical records' accuracy if effectively engaged using a networked Personal Health Record; (2) workflow efficiency and reliability for receiving and processing patient feedback; and (3) patient feedback's impact on medical record accuracy. Background: Improving medical records' accuracy and the associated challenges have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records' accuracy and completeness. A new approach supporting online contributions to their medication lists by patients of Geisinger Health Systems, an online patient-engagement advocate, revealed this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on their medication lists' accuracy before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. Methods: The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists' medication reconciliation logs. Findings/Discussion: (1) Patients were eager to provide feedback on their medications and saw numerous advantages. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescriptions and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms

  6. A new technique to improve the accuracy of LDA tracker measurements

    NASA Astrophysics Data System (ADS)

    Vonka, V.; Hoornstra, J.; Oldengarm, J.

    1981-08-01

    A new technique that improves the measurement accuracy of a tracker type laser Doppler anemometer for time averaged velocity measurements in a stationary flow is presented. It is shown that the accuracy of the demodulation system is affected by a systematic error, which can be eliminated. The principle of the technique is based on taking two independent but coupled measurements such that the error appears in both results but with opposite sign. This is achieved by up- and down-shifting the Doppler frequency using a bidirectional optical frequency shifting device.
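
The error-cancellation principle described above can be shown with two lines of arithmetic (hypothetical frequencies, not from the paper): the systematic demodulation error enters the up-shifted and down-shifted measurements with opposite signs, so their mean is unbiased:

```python
# A tracker's demodulation adds a systematic offset e to the measured
# Doppler frequency. Up- and down-shifting the optical frequency makes
# e appear with opposite signs in the two coupled measurements.
f_doppler = 1.25e6   # Hz, true Doppler frequency (hypothetical)
e = 4.0e3            # Hz, systematic demodulation error (hypothetical)

f_up = f_doppler + e     # measurement with up-shift
f_down = f_doppler - e   # measurement with down-shift
estimate = 0.5 * (f_up + f_down)

print(estimate == f_doppler)   # → True: systematic error eliminated
```

Note the cancellation requires the error to be stable between the two measurements, which is why the abstract stresses that they are independent but coupled.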

  7. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have improved significantly. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  8. Improving spatial updating accuracy in absence of external feedback.

    PubMed

    Mackrous, I; Simoneau, M

    2015-08-01

    Updating the position of an earth-fixed target during whole-body rotation seems to rely on cognitive processes such as the utilization of external feedback. According to perceptual learning models, improvement in performance can also occur without external feedback. The aim of this study was to assess spatial updating improvement in the absence and in the presence of external feedback. While being rotated counterclockwise (CCW), participants had to predict when their body midline had crossed the position of a memorized target. Four experimental conditions were tested: (1) Pre-test: the target was presented 30° in the CCW direction from participant's midline. (2) Practice: the target was located 45° in the CCW direction from participant's midline. One group received external feedback about their spatial accuracy (Mackrous and Simoneau, 2014) while the other group did not. (3) Transfer T(30)CCW: the target was presented 30° in the CCW direction to evaluate whether improvement in performance, during practice, generalized to other target eccentricity. (4) Transfer T(30)CW: the target was presented 30° in the clockwise (CW) direction and participants were rotated CW. This transfer condition evaluated whether improvement in performance generalized to the untrained rotation direction. With practice, performance improved in the absence of external feedback (p=0.004). Nonetheless, larger improvement occurred when external feedback was provided (ps=0.002). During T(30)CCW, performance remained better for the feedback than the no-feedback group (p=0.005). However, no group difference was observed for the untrained direction (p=0.22). We demonstrated that spatial updating improved without external feedback but less than when external feedback was given. These observations are explained by a mixture of calibration processes and supervised vestibular learning.

  9. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  10. Cadastral Positioning Accuracy Improvement: a Case Study in Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, N. M.; Omar, A. H.; Omar, K. M.; Abdullah, N. M.; Yatim, M. H. M.

    2016-09-01

    A cadastral map is parcel-based information specifically designed to define the limits of boundaries. In Malaysia, the cadastral map is under the authority of the Department of Surveying and Mapping Malaysia (DSMM). With the growth of spatial technology, especially Geographical Information Systems (GIS), DSMM decided to modernize and reform its cadastral legacy datasets by generating an accurate digital representation of cadastral parcels. These legacy databases are usually derived from paper parcel maps known as certified plans. With cadastral modernization, the new cadastral database is no longer based on single, static parcel paper maps, but on a global digital map. Despite the strict process of cadastral modernization, this reform has raised unexpected queries that remain essential to address. The main focus of this study is to review the issues that have been generated by this transition. The transformed cadastral database should be additionally treated to minimize inherent errors and to fit it to the new satellite-based coordinate system with high positional accuracy. This review will serve as a foundation for investigating systematic and effective methods for Positional Accuracy Improvement (PAI) in cadastral database modernization.

  11. Selecting fillers on emotional appearance improves lineup identification accuracy.

    PubMed

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  12. Geometrical accuracy improvement in flexible roll forming lines

    NASA Astrophysics Data System (ADS)

    Larrañaga, J.; Berner, S.; Galdos, L.; Groche, P.

    2011-01-01

    The general interest in producing profiles with variable cross section in a cost-effective way has increased in the last few years. The flexible roll forming process allows profiles with lengthwise-variable cross sections to be produced in a continuous way. To date, only a few flexible roll forming lines have been developed and built. Apart from flange wrinkling along the transition zone of U-profiles with variable cross section, the process limits have not been investigated and solutions for shape deviations are unknown. During the PROFOM project, a flexible roll forming machine was developed with the objective of producing high-technology components for automotive body structures. In order to investigate the limits of the process, different profile geometries and steel grades, including high-strength steels, were applied. During the first experimental tests, several errors were identified as a result of the complex stress states generated during the forming process. In order to improve the accuracy of the target profiles and to meet the tolerance demands of the automotive industry, a thermo-mechanical solution has been proposed. Additional mechanical devices flexibly supporting the roll forming process have been implemented in the roll forming line together with local heating techniques. The combination of both methods shows a significant increase in accuracy. In the present investigation, the experimental results of the validation process are presented.

  13. Advantages of improved timing accuracy in PET cameras using LSO scintillator

    SciTech Connect

    Moses, William W.

    2002-12-02

    PET scanners based on LSO have the potential for significantly better coincidence timing resolution than the 6 ns fwhm typically achieved with BGO. This study analyzes the performance enhancements made possible by improved timing as a function of the coincidence time resolution. If 500 ps fwhm coincidence timing resolution can be achieved in a complete PET camera, the following four benefits can be realized for whole-body FDG imaging: 1) The random event rate can be reduced by using a narrower coincidence timing window, increasing the peak NECR by ~50 percent. 2) Using time-of-flight in the reconstruction algorithm will reduce the noise variance by a factor of 5. 3) Emission and transmission data can be acquired simultaneously, reducing the total scan time. 4) Axial blurring can be reduced by using time-of-flight to determine the correct axial plane that each event originated from. While time-of-flight was extensively studied in the 1980's, practical factors limited its effectiveness at that time and little attention has been paid to timing in PET since then. As these potential improvements are substantial and the advent of LSO PET cameras gives us the means to obtain them without other sacrifices, efforts to improve PET timing should resume after their long dormancy.
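
    The factor-of-5 noise-variance claim for benefit 2 can be sanity-checked with the standard time-of-flight rule of thumb, variance reduction ~ 2D/(c*dt); the 40 cm body diameter below is an assumed illustrative value:

```python
# Back-of-envelope check of the abstract's time-of-flight (TOF) gain.
# The TOF localization uncertainty along the line of response is
# c * dt / 2; the variance-reduction factor is roughly the object
# diameter divided by that localization length.
C_CM_PER_S = 2.998e10      # speed of light in cm/s

def tof_variance_reduction(diameter_cm, timing_fwhm_s):
    """Approximate factor by which TOF reconstruction reduces noise variance."""
    localization_cm = C_CM_PER_S * timing_fwhm_s / 2.0
    return diameter_cm / localization_cm

# 40 cm assumed whole-body diameter, 500 ps fwhm timing resolution
gain = tof_variance_reduction(40.0, 500e-12)
```

    The result is roughly 5, consistent with the abstract's stated factor.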

  14. Accuracy of Self-Reported College GPA: Gender-Moderated Differences by Achievement Level and Academic Self-Efficacy

    ERIC Educational Resources Information Center

    Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.

    2014-01-01

    Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…

  15. Stratified computed tomography findings improve diagnostic accuracy for appendicitis

    PubMed Central

    Park, Geon; Lee, Sang Chul; Choi, Byung-Jo; Kim, Say-June

    2014-01-01

    AIM: To improve the diagnostic accuracy in patients with symptoms and signs of appendicitis, but without confirmative computed tomography (CT) findings. METHODS: We retrospectively reviewed the database of 224 patients who had been operated on for the suspicion of appendicitis, but whose CT findings were negative or equivocal for appendicitis. The patient population was divided into two groups: a pathologically proven appendicitis group (n = 177) and a non-appendicitis group (n = 47). The CT images of these patients were re-evaluated according to the characteristic CT features as described in the literature. The re-evaluations and baseline characteristics of the two groups were compared. RESULTS: The two groups showed significant differences with respect to appendiceal diameter, and the presence of periappendiceal fat stranding and intraluminal air in the appendix. A larger proportion of patients in the appendicitis group showed distended appendices larger than 6.0 mm (66.3% vs 37.0%; P < 0.001), periappendiceal fat stranding (34.1% vs 8.9%; P = 0.001), and the absence of intraluminal air (67.6% vs 48.9%; P = 0.024) compared to the non-appendicitis group. Furthermore, the presence of two or more of these factors increased the odds ratio to 6.8 times higher than baseline (95%CI: 3.013-15.454; P < 0.001). CONCLUSION: Appendiceal diameter and wall thickening, fat stranding, and absence of intraluminal air can be used to increase diagnostic accuracy for appendicitis with equivocal CT findings. PMID:25320531
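
    The stratified criteria reduce to a simple count of positive findings. The sketch below encodes the abstract's thresholds for illustration; it is not the authors' published tool, and real use would of course require clinical validation:

```python
def appendicitis_risk_factors(diameter_mm, fat_stranding, intraluminal_air):
    """Count the three stratified CT findings reported in the abstract:
    appendiceal diameter > 6.0 mm, periappendiceal fat stranding, and
    ABSENCE of intraluminal air. The study reports an odds ratio of ~6.8
    when two or more findings are present."""
    factors = 0
    if diameter_mm > 6.0:
        factors += 1
    if fat_stranding:
        factors += 1
    if not intraluminal_air:   # absence of intraluminal air is the finding
        factors += 1
    return factors

def elevated_risk(diameter_mm, fat_stranding, intraluminal_air):
    """True when two or more of the stratified findings are present."""
    return appendicitis_risk_factors(
        diameter_mm, fat_stranding, intraluminal_air) >= 2
```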

  16. Accuracy Improvement on the Measurement of Human-Joint Angles.

    PubMed

    Meng, Dai; Shoepe, Todd; Vejarano, Gustavo

    2016-03-01

    A measurement technique that decreases the root mean square error (RMSE) of measurements of human-joint angles using a personal wireless sensor network is reported. Its operation is based on virtual rotations of wireless sensors worn by the user, and it focuses on the arm, whose position is measured on 5 degrees of freedom (DOF). The wireless sensors use inertial magnetic units that measure the alignment of the arm with the earth's gravity and magnetic fields. Due to the biomechanical properties of human tissue (e.g., skin's elasticity), the sensors' orientation is shifted, and this shift affects the accuracy of measurements. In the proposed technique, the change of orientation is first modeled from linear regressions of data collected from 15 participants at different arm positions. Then, out of eight body indices measured with dual-energy X-ray absorptiometry, the percentage of body fat is found to have the greatest correlation with the rate of change in sensors' orientation. This finding enables us to estimate the change in sensors' orientation from the user's body fat percentage. Finally, an algorithm virtually rotates the sensors using quaternion theory with the objective of reducing the error. The proposed technique is validated with experiments on five different participants. In the DOF whose error decreased the most, the RMSE decreased from 2.20° to 0.87°, an improvement of 60%; in the DOF whose error decreased the least, the RMSE decreased from 1.64° to 1.37°, an improvement of 16%. On average, the RMSE improved by 44%. PMID:25622331
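
    The virtual-rotation step rests on ordinary quaternion algebra: build a corrective unit quaternion and conjugate the sensor reading with it. A minimal sketch, in which the 2.2° corrective angle is a hypothetical value standing in for the body-fat-derived estimate:

```python
import math

def q_mul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_from_axis_angle(axis, angle_deg):
    """Unit quaternion for a rotation by angle_deg about a unit axis."""
    half = math.radians(angle_deg) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate_vector(q, v):
    """Rotate 3-vector v by unit quaternion q: q * (0, v) * q^-1."""
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = q_mul(q_mul(q, (0.0,) + tuple(v)), qc)
    return (x, y, z)

# Virtually "un-rotate" a sensor whose soft-tissue shift has been estimated
# (hypothetical: 2.2 degrees about the arm's longitudinal axis).
correction = q_from_axis_angle((0.0, 0.0, 1.0), -2.2)
```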

  17. An effective approach to improving low-cost GPS positioning accuracy in real-time navigation.

    PubMed

    Islam, Md Rashedul; Kim, Jong-Myon

    2014-01-01

    Positioning accuracy is a challenging issue for location-based applications using a low-cost global positioning system (GPS). This paper presents an effective approach to improving the positioning accuracy of a low-cost GPS receiver for real-time navigation. The proposed method precisely estimates position by combining vehicle movement direction, velocity averaging, and distance between waypoints using coordinate data (latitude, longitude, time, and velocity) from the GPS receiver. The previously estimated reference point, coordinate translation, and invalid data checks also improve accuracy. In order to evaluate the performance of the proposed method, we conducted an experiment using a GARMIN GPS 19xHVS receiver attached to a car and used Google Maps to plot the processed data. The proposed method achieved an improvement of 4-10 meters in several experiments. In addition, we compared the proposed approach with two other state-of-the-art methods: recursive averaging and ARMA interpolation. The experimental results show that the proposed approach outperforms both state-of-the-art methods in terms of positioning accuracy.
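
    The combination of movement direction, velocity, and the previous fix amounts to blending a dead-reckoned prediction with the raw reading. A simplified sketch of that idea (the 0.5 blending weight is an assumed value, not the paper's tuned parameter):

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, meters

def dead_reckon(lat_deg, lon_deg, heading_deg, speed_mps, dt_s):
    """Project a position forward along the current heading
    (flat-earth approximation, adequate over a short dt)."""
    d = speed_mps * dt_s
    dlat = d * math.cos(math.radians(heading_deg)) / EARTH_R
    dlon = d * math.sin(math.radians(heading_deg)) / (
        EARTH_R * math.cos(math.radians(lat_deg)))
    return lat_deg + math.degrees(dlat), lon_deg + math.degrees(dlon)

def blend_fix(raw_fix, predicted, weight=0.5):
    """Average the raw GPS fix with the dead-reckoned prediction.
    weight is the trust placed in the prediction (hypothetical value)."""
    (rlat, rlon), (plat, plon) = raw_fix, predicted
    return (weight * plat + (1 - weight) * rlat,
            weight * plon + (1 - weight) * rlon)
```

    A raw fix that jumps off the vehicle's track is then pulled back toward the kinematically plausible position, which is the intuition behind the paper's direction/velocity combination.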

  19. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

    The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware-related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal to noise ratio (SNR), the noise being determined by the LO power in the shot noise limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration, reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high-SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a
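
    The quoted inverse-pulse-duration frequency spread maps directly to a single-shot velocity error through the Doppler relation f_d = 2v/λ. A back-of-envelope estimate, with an assumed 2 µm wavelength and 0.5 µs pulse duration standing in for a typical coherent lidar (values are illustrative, not from the paper):

```python
def single_shot_velocity_error(wavelength_m, pulse_duration_s):
    """Order-of-magnitude single-shot velocity uncertainty for an
    aerosol return: frequency spread ~ 1/tau, inverted through the
    Doppler relation f_d = 2 * v / lambda."""
    delta_f = 1.0 / pulse_duration_s       # ~1/tau shot-to-shot spread
    return wavelength_m * delta_f / 2.0    # corresponding velocity spread

sigma_v = single_shot_velocity_error(2.0e-6, 0.5e-6)  # ~2 m/s
```

    Averaging over N independent range cells would shrink this spread by roughly sqrt(N), which is why the abstract notes the deviations diminish when the IF signal is averaged beyond a single resolution cell.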

  20. Improving Accuracy of Influenza-Associated Hospitalization Rate Estimates

    PubMed Central

    Reed, Carrie; Kirley, Pam Daily; Aragon, Deborah; Meek, James; Farley, Monica M.; Ryan, Patricia; Collins, Jim; Lynfield, Ruth; Baumbach, Joan; Zansky, Shelley; Bennett, Nancy M.; Fowler, Brian; Thomas, Ann; Lindegren, Mary L.; Atkinson, Annette; Finelli, Lyn; Chaves, Sandra S.

    2015-01-01

    Diagnostic test sensitivity affects rate estimates for laboratory-confirmed influenza–associated hospitalizations. We used data from FluSurv-NET, a national population-based surveillance system for laboratory-confirmed influenza hospitalizations, to capture diagnostic test type by patient age and influenza season. We calculated observed rates by age group and adjusted rates by test sensitivity. Test sensitivity was lowest in adults >65 years of age. For all ages, reverse transcription PCR was the most sensitive test, and use increased from <10% during 2003–2008 to ≈70% during 2009–2013. Observed hospitalization rates per 100,000 persons varied by season: 7.3–50.5 for children <18 years of age, 3.0–30.3 for adults 18–64 years, and 13.6–181.8 for adults >65 years. After 2009, hospitalization rates adjusted by test sensitivity were ≈15% higher for children <18 years, ≈20% higher for adults 18–64 years, and ≈55% higher for adults >65 years of age. Test sensitivity adjustments improve the accuracy of hospitalization rate estimates. PMID:26292017
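
    The sensitivity adjustment reduces to inflating each test type's case count by 1/sensitivity before computing the population rate. A sketch with illustrative test names, sensitivities, and counts (not FluSurv-NET data):

```python
def adjusted_rate(case_counts, sensitivities, population):
    """Adjust a laboratory-confirmed hospitalization rate for diagnostic
    test sensitivity: each test type's observed cases are inflated by
    1/sensitivity, then expressed per 100,000 population."""
    corrected = sum(case_counts[t] / sensitivities[t] for t in case_counts)
    return corrected / population * 100_000

# Hypothetical season: 200 confirmed cases in a catchment of 1 million,
# split across a sensitive and a less-sensitive assay.
rate = adjusted_rate(
    case_counts={"rt_pcr": 140, "rapid_antigen": 60},
    sensitivities={"rt_pcr": 0.95, "rapid_antigen": 0.60},
    population=1_000_000,
)
```

    The observed rate here would be 20.0 per 100,000; the adjusted rate is higher because the rapid antigen test misses a substantial fraction of true cases.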

  1. Improving the Accuracy of Stamping Analyses Including Springback Deformations

    NASA Astrophysics Data System (ADS)

    Firat, Mehmet; Karadeniz, Erdal; Yenice, Mustafa; Kaya, Mesut

    2013-02-01

    An accurate prediction of sheet metal deformation including springback is one of the main issues for efficient finite element (FE) simulation in the automotive and stamping industries. Considering tooling design for newer classes of high-strength steels in particular, this requirement has become an important aspect of springback compensation practice today. Sheet deformation modeling accounting for the Bauschinger effect is considered to be a key factor affecting the accuracy of FE simulations in this context. In this article, a rate-independent cyclic plasticity model is presented and implemented into LS-Dyna software for accurate modeling of sheet metal deformation in stamping simulations. The proposed model uses Hill's orthotropic yield surface in the description of yield loci of planar and transversely anisotropic sheets. The strain-hardening behavior is calculated based on an additive backstress form of the nonlinear kinematic hardening rule. The proposed model is applied in stamping simulations of a dual-phase steel automotive part, and comparisons are presented in terms of part strain and thickness distributions calculated with isotropic plasticity and the proposed model. It is observed that both models produce similar plastic strain and thickness distributions; however, there appeared to be considerable differences in computed springback deformations. Part shapes computed with both plasticity models were evaluated with surface scanning of manufactured parts. A comparison of FE-computed geometries with manufactured parts proved the improved performance of the proposed model over isotropic plasticity for this particular stamping application.

  2. Accuracy Improvement for Predicting Parkinson’s Disease Progression

    PubMed Central

    Nilashi, Mehrbakhsh; Ibrahim, Othman; Ahani, Ali

    2016-01-01

    Parkinson’s disease (PD) is a member of a larger group of neuromotor diseases marked by the progressive death of dopamine-producing cells in the brain. Computational tools that exploit medical datasets are highly desirable, since detecting the risk of the disease at an early stage can help alleviate symptoms for many people. This paper proposes a new hybrid intelligent system for the prediction of PD progression using noise removal, clustering and prediction methods. Principal Component Analysis (PCA) and Expectation Maximization (EM) are employed, respectively, to address multi-collinearity problems in the experimental datasets and to cluster the data. We then apply the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Support Vector Regression (SVR) to predict PD progression. Experimental results on public Parkinson’s datasets show that the proposed method remarkably improves the accuracy of prediction of PD progression. The hybrid intelligent system can assist medical practitioners in healthcare practice with early detection of Parkinson’s disease. PMID:27686748

  3. Science Achievement for All: Improving Science Performance and Closing Achievement Gaps

    ERIC Educational Resources Information Center

    Jackson, Julie K.; Ash, Gwynne

    2012-01-01

    This article addresses the serious and growing need to improve science instruction and science achievement for all students. We will describe the results of a 3-year study that transformed science instruction and student achievement at two high-poverty ethnically diverse public elementary schools in Texas. The school-wide intervention included…

  4. What level of accuracy is achievable for preclinical dose painting studies on a clinical irradiation platform?

    PubMed

    Trani, Daniela; Reniers, Brigitte; Persoon, Lucas; Podesta, Mark; Nalbantov, Georgi; Leijenaar, Ralph T H; Granzier, Marlies; Yaromina, Ala; Dubois, Ludwig; Verhaegen, Frank; Lambin, Philippe

    2015-05-01

    in a rat tumor model on a clinical platform, with a high accuracy achieved in the delivery of complex dose distributions. Our work demonstrates the technical feasibility of this approach and enables future investigations on the therapeutic effect of preclinical dose painting strategies using a state-of-the-art clinical platform.

  5. Science Achievement for All: Improving Science Performance and Closing Achievement Gaps

    NASA Astrophysics Data System (ADS)

    Jackson, Julie K.; Ash, Gwynne

    2012-11-01

    This article addresses the serious and growing need to improve science instruction and science achievement for all students. We will describe the results of a 3-year study that transformed science instruction and student achievement at two high-poverty ethnically diverse public elementary schools in Texas. The school-wide intervention included purposeful planning, inquiry science instruction, and contextually rich academic science vocabulary development. In combination, these instructional practices rapidly improved student-science learning outcomes and narrowed achievement gaps across diverse student populations.

  6. Achieving sub-pixel geolocation accuracy in support of MODIS land science

    USGS Publications Warehouse

    Wolfe, R.E.; Nishihama, M.; Fleig, A.J.; Kuyper, J.A.; Roy, D.P.; Storey, J.C.; Patt, F.S.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands: 29 with 1 km, five with 500 m, and two with 250 m nadir pixel dimensions. The Terra satellite has on-board exterior orientation (position and attitude) measurement systems designed to enable geolocation of MODIS data to approximately 150 m (1σ) at nadir. A global network of ground control points is being used to determine biases and trends in the sensor orientation. Biases have been removed by updating models of the spacecraft and instrument orientation in the MODIS geolocation software several times since launch and have improved the MODIS geolocation to approximately 50 m (1σ) at nadir. This paper overviews the geolocation approach, summarizes the first year of geolocation analysis, and overviews future work. The approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications. © 2002 Elsevier Science Inc. All rights reserved.
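
    At its simplest, bias removal from ground control points amounts to averaging the measured-minus-predicted offsets and folding that constant back into the orientation model. A minimal sketch with hypothetical offsets in meters (the real MODIS processing estimates attitude and alignment parameters, not just a planar shift):

```python
def estimate_geolocation_bias(predicted, measured):
    """Mean (dx, dy) offset in meters between sensor-predicted control
    point positions and their surveyed ground locations. A constant bias
    like this is what gets removed by updating the orientation model."""
    n = len(predicted)
    dx = sum(m[0] - p[0] for p, m in zip(predicted, measured)) / n
    dy = sum(m[1] - p[1] for p, m in zip(predicted, measured)) / n
    return dx, dy
```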

  7. How Much Can Spatial Training Improve STEM Achievement?

    ERIC Educational Resources Information Center

    Stieff, Mike; Uttal, David

    2015-01-01

    Spatial training has been indicated as a possible solution for improving Science, Technology, Engineering, and Mathematics (STEM) achievement and degree attainment. Advocates for this approach have noted that the correlation between spatial ability and several measures of STEM achievement suggests that spatial training should focus on improving…

  8. A computational approach for prediction of donor splice sites with improved accuracy.

    PubMed

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-01

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in an artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally well and better than RF when tested on the HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF was analyzed using an independent test set of 50 genes, and the prediction accuracy of ANN was found to be higher than that of SVM and RF. All the predictors achieved higher accuracy when compared with existing methods such as NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP on the independent test set. We have also developed an online prediction server (PreDOSS), available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
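
    Encoding splice sites "into numeric vectors" is typically done by one-hot encoding each base of a fixed window around the candidate site; the resulting flat vector is what classifiers like ANN, SVM or RF consume. A sketch (the window length and mapping here are illustrative, not the paper's exact scheme):

```python
def one_hot_encode(seq):
    """Encode a DNA window around a putative donor site as a flat numeric
    vector, one-hot per base. Ambiguous bases map to all zeros."""
    table = {"A": (1, 0, 0, 0), "C": (0, 1, 0, 0),
             "G": (0, 0, 1, 0), "T": (0, 0, 0, 1)}
    vec = []
    for base in seq.upper():
        vec.extend(table.get(base, (0, 0, 0, 0)))
    return vec

# Canonical donor sites have the dinucleotide "GT" just inside the intron;
# this 9-base window is a hypothetical example.
features = one_hot_encode("AAGGTAAGT")
```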

  9. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of magnetic field computation for an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
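
    The quoted advantage of a fourth-order scheme over a second-order one is easy to demonstrate on a first derivative; the test function and step size below are arbitrary choices for illustration:

```python
import math

def d1_second_order(f, x, h):
    """Second-order central difference for f'(x); truncation error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def d1_fourth_order(f, x, h):
    """Fourth-order central difference for f'(x); truncation error O(h^4)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

h = 1e-2
err2 = abs(d1_second_order(math.sin, 1.0, h) - math.cos(1.0))
err4 = abs(d1_fourth_order(math.sin, 1.0, h) - math.cos(1.0))
```

    At this step size the fourth-order stencil is several orders of magnitude more accurate, which is the behavior the abstract attributes to the high-accuracy regime.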

  10. An Action Plan for Improving Mediocre or Stagnant Student Achievement

    ERIC Educational Resources Information Center

    Redmond, Kimberley B.

    2013-01-01

    Although all of the schools in the target school system adhere to a school improvement process, achievement scores remain mediocre or stagnant within the overseas school in Italy that serves children of United States armed service members. To address this problem, this study explored the target school's improvement process to discover how…

  11. Professional Learning Communities That Initiate Improvement in Student Achievement

    ERIC Educational Resources Information Center

    Royer, Suzanne M.

    2012-01-01

    Quality teaching requires a strong practice of collaboration, an essential building block for educators to improve student achievement. Researchers have theorized that the implementation of a professional learning community (PLC) with resultant collaborative practices among teachers sustains academic improvement. The problem addressed specifically…

  12. Improving Literacy Achievement: An Effective Approach to Continuous Progress

    ERIC Educational Resources Information Center

    Haley, Carolyn E.

    2007-01-01

    Billions of dollars are spent searching for programs and strategic plans that will prove to be the panacea for improving literacy achievement. With all of the experimental and researched programs implemented in school districts, the overall results are still at a minimum and many improvement gains have been short term. This book focuses on…

  13. Does Children's Academic Achievement Improve when Single Mothers Marry?

    ERIC Educational Resources Information Center

    Wagmiller, Robert L., Jr.; Gershoff, Elizabeth; Veliz, Philip; Clements, Margaret

    2010-01-01

    Promoting marriage, especially among low-income single mothers with children, is increasingly viewed as a promising public policy strategy for improving developmental outcomes for disadvantaged children. Previous research suggests, however, that children's academic achievement either does not improve or declines when single mothers marry. In this…

  14. Correcting Memory Improves Accuracy of Predicted Task Duration

    ERIC Educational Resources Information Center

    Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.

    2008-01-01

    People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…

  15. Improving Accuracy of Sleep Self-Reports through Correspondence Training

    ERIC Educational Resources Information Center

    St. Peter, Claire C.; Montgomery-Downs, Hawley E.; Massullo, Joel P.

    2012-01-01

    Sleep insufficiency is a major public health concern, yet the accuracy of self-reported sleep measures is often poor. Self-report may be useful when direct measurement of nonverbal behavior is impossible, infeasible, or undesirable, as it may be with sleep measurement. We used feedback and positive reinforcement within a small-n multiple-baseline…

  16. Improving the accuracy of MTF measurement at low frequencies based on oversampled edge spread function deconvolution.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin; Ren, Liqiang; Li, Zheng; Ghani, Muhammad U; Hao, Ting; Liu, Hong

    2015-01-01

    The modulation transfer function (MTF) of a radiographic system is often evaluated by measuring the system's edge spread function (ESF) using an edge device. However, the numerical differentiation procedure of the traditional slanted edge method amplifies noise in the line spread function (LSF) and limits the accuracy of the MTF measurement at low frequencies. The purpose of this study is to improve the accuracy of low-frequency MTF measurement for digital x-ray imaging systems. An ESF deconvolution technique was developed for MTF measurement based on the degradation model of slanted edge images. Specifically, symmetric oversampled ESFs were constructed by subtracting a shifted version of the ESF from the original one. For validation, the proposed MTF technique was compared with the conventional slanted edge method through computer simulations as well as experiments on two digital radiography systems. The simulation results show that the average errors of the proposed ESF deconvolution technique were 0.11% ± 0.09% and 0.23% ± 0.14%, outperforming the conventional edge method (0.64% ± 0.57% and 1.04% ± 0.82%, respectively) at low frequencies. On the experimental edge images, the proposed technique achieved better uncertainty performance than the conventional method. As a result, both computer simulation and experiments have demonstrated that the accuracy of MTF measurement at low frequencies can be improved by using the proposed ESF deconvolution technique. PMID:26410662
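
    For context, the conventional slanted-edge chain the authors improve on differentiates the oversampled ESF into an LSF and takes the normalized DFT magnitude; it is this differentiation step that amplifies noise at low frequencies. A minimal baseline sketch (the conventional method, not the proposed deconvolution itself):

```python
import cmath

def mtf_from_esf(esf):
    """Conventional slanted-edge MTF: finite-difference the oversampled
    ESF to get the LSF, then return the DFT magnitude normalized to the
    zero-frequency (DC) value."""
    lsf = [b - a for a, b in zip(esf, esf[1:])]   # numerical differentiation
    n = len(lsf)
    mtf = []
    for k in range(n // 2 + 1):
        s = sum(lsf[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                for j in range(n))
        mtf.append(abs(s))
    dc = mtf[0]
    return [m / dc for m in mtf] if dc else mtf
```

    For an ideal noiseless step edge the LSF is an impulse and the MTF is flat at 1.0; with noise added to the ESF, the differentiation spreads that noise across all frequencies, which is the weakness the ESF-deconvolution technique targets.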

  17. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    PubMed Central

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619
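
    The marker-data transformation can be sketched as post-multiplying the marker matrix by its column (marker) correlation matrix, so each transformed feature pools information from markers in linkage disequilibrium with it. An illustrative toy implementation, not the paper's exact preprocessing:

```python
def column_correlation(M):
    """Pearson correlation matrix between the columns (markers) of M,
    given as a list of rows (families x markers). Assumes no column is
    constant (which would give a zero norm)."""
    n, p = len(M), len(M[0])
    means = [sum(row[j] for row in M) / n for j in range(p)]
    cent = [[row[j] - means[j] for j in range(p)] for row in M]
    norms = [sum(cent[i][j] ** 2 for i in range(n)) ** 0.5 for j in range(p)]
    return [[sum(cent[i][a] * cent[i][b] for i in range(n)) /
             (norms[a] * norms[b]) for b in range(p)] for a in range(p)]

def transform_markers(M):
    """Post-multiply the marker matrix by its marker correlation matrix,
    a sketch of the transformation the abstract credits with the
    prediction-accuracy gain."""
    C = column_correlation(M)
    p = len(C)
    return [[sum(row[k] * C[k][j] for k in range(p)) for j in range(p)]
            for row in M]
```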

  19. Improved time-domain accuracy standards for model gravitational waveforms

    SciTech Connect

    Lindblom, Lee; Baker, John G.

    2010-10-15

    Model gravitational waveforms must be accurate enough to be useful for detection of signals and measurement of their parameters, so appropriate accuracy standards are needed. Yet these standards should not be unnecessarily restrictive, making them impractical for the numerical and analytical modelers to meet. The work of Lindblom, Owen, and Brown [Phys. Rev. D 78, 124020 (2008)] is extended by deriving new waveform accuracy standards which are significantly less restrictive while still ensuring the quality needed for gravitational-wave data analysis. These new standards are formulated as bounds on certain norms of the time-domain waveform errors, which makes it possible to enforce them in situations where frequency-domain errors may be difficult or impossible to estimate reliably. These standards are less restrictive by about a factor of 20 than the previously published time-domain standards for detection, and up to a factor of 60 for measurement. These new standards should therefore be much easier to use effectively.

  20. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and for more reliable ambiguity resolution. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou), GNSS systems. The main aim was to determine the improvement both in the positioning accuracy achieved and in the time it takes to converge to geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented, and useful conclusions and recommendations for further research are drawn. As shown, fusing data from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in improved position accuracy (mostly in the less favorable East direction) and a large reduction in convergence time.
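    A core ingredient of dual-frequency PPP (standard in the field, though not specific to this paper's RTKLIB setup) is the ionosphere-free pseudorange combination, which cancels the first-order ionospheric delay because that delay scales as 1/f². A minimal sketch using the standard GPS L1/L2 frequencies:

```python
# First-order ionospheric delay scales as 1/f^2, so a dual-frequency
# combination can cancel it -- a standard element of PPP processing.
F1 = 1575.42e6  # GPS L1 frequency, Hz
F2 = 1227.60e6  # GPS L2 frequency, Hz

def ionosphere_free(p1, p2, f1=F1, f2=F2):
    """Ionosphere-free pseudorange combination (metres)."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# Example: a 20,000 km range observed with 5 m of ionospheric delay on L1.
rho = 20_000_000.0
iono_l1 = 5.0
iono_l2 = iono_l1 * (F1 / F2) ** 2   # delay is larger on the lower frequency
p1 = rho + iono_l1
p2 = rho + iono_l2
print(ionosphere_free(p1, p2))       # recovers the geometric range
```

    The combination amplifies noise somewhat, which is one reason multi-frequency, multi-constellation data helps shorten PPP convergence.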

  1. Referential Communication Accuracy of Mother-Child Pairs and Children's Later Scholastic Achievement: A Follow-Up Study.

    ERIC Educational Resources Information Center

    McDevitt, Teresa M.; And Others

    1987-01-01

    The relationship between the referential communication accuracy of mothers and their 4-year-old children and the children's achievement in vocabulary and mathematics at age 12 was examined in 47 American and 44 Japanese mother-child pairs. Positive correlations were found in both cultures. (Author/BN)

  2. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  3. Minimizing MRI Geometric Distortions for Improved Stereotactic Surgical Planning Accuracy: a Theoretical and Experimental Analysis

    NASA Astrophysics Data System (ADS)

    Bertolina, James A.

    1995-01-01

    addressed. From this analysis it is shown that significant geometric distortions often exist in MR images. It is pointed out, however, that the net geometric distortion in a stereotactic image can be minimized to an acceptable level (less than 1 mm error) with careful selection of MR sequences and parameters. This improved accuracy is accompanied by a reduction in the signal-to-noise ratio. This can be overcome by signal averaging, but this is clinically unattractive because it requires extra time. A novel method that yields a high degree of accuracy without sacrificing signal-to-noise is then introduced. Finally, the concepts developed throughout this dissertation are validated using stereotactic images of a cadaver head. It is concluded that by implementing the steps outlined in this dissertation, any MR site can achieve a high degree of image stereotactic accuracy without the cost, inconvenience, or patient discomfort associated with concurrent x-ray computed tomography (CT) imaging.

  4. Combining data fusion with multiresolution analysis for improving the classification accuracy of uterine EMG signals

    NASA Astrophysics Data System (ADS)

    Moslem, Bassam; Diab, Mohamad; Khalil, Mohamad; Marque, Catherine

    2012-12-01

    Multisensor data fusion is a powerful solution for solving difficult pattern recognition problems such as the classification of bioelectrical signals. It is the process of combining information from different sensors to provide a more stable and more robust classification decisions. We combine here data fusion with multiresolution analysis based on the wavelet packet transform (WPT) in order to classify real uterine electromyogram (EMG) signals recorded by 16 electrodes. Herein, the data fusion is done at the decision level by using a weighted majority voting (WMV) rule. On the other hand, the WPT is used to achieve significant enhancement in the classification performance of each channel by improving the discrimination power of the selected feature. We show that the proposed approach tested on our recorded data can improve the recognition accuracy in labor prediction and has a competitive and promising performance.
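    The decision-level fusion step described above can be sketched as a weighted majority vote. The channel labels and weights below are hypothetical, not values from the recorded uterine EMG data; in practice the weights might be each channel's validation accuracy.

```python
def weighted_majority_vote(decisions, weights):
    """Fuse per-channel class decisions with a weighted majority vote.

    decisions: list of class labels, one per channel (e.g. 16 EMG electrodes)
    weights:   per-channel weights, e.g. each channel's validation accuracy
    """
    scores = {}
    for label, w in zip(decisions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Hypothetical example: 5 channels vote 'labor' vs 'pregnancy'.
decisions = ['labor', 'pregnancy', 'labor', 'pregnancy', 'pregnancy']
weights   = [0.9, 0.6, 0.7, 0.8, 0.9]
print(weighted_majority_vote(decisions, weights))  # 'pregnancy' (2.3 vs 1.6)
```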

  5. Achieving Millennium Development Goal 5, the improvement of maternal health.

    PubMed

    Callister, Lynn Clark; Edwards, Joan E

    2010-01-01

    The purpose of this article is to describe the progress made toward the achievement of Millennium Development Goal 5, the improvement of maternal health. Maternal mortality rates (MMR) remain high globally, and in the United States there have been recent increases in MMR. Interventions to improve global maternal health are described. Nurses should be aware of the enduring epidemic of global maternal mortality, advocate for childbearing women, and contribute to implementing effective interventions to reduce maternal mortality. PMID:20673318

  6. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    SciTech Connect

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the most accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.
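    As a toy illustration of the entropy criterion mentioned above (not the paper's actual selection algorithm), one can compare candidate summary databases by the Shannon entropy of their cell-count distributions; the summary tables below are invented:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a summary table's cell counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# Hypothetical summary databases derived from one base cube,
# keyed by tuples of dimension categories.
summaries = {
    'A': {('x1', 'y1'): 10, ('x1', 'y2'): 30, ('x2', 'y1'): 25, ('x2', 'y2'): 35},
    'B': {('x1', 'z1'): 40, ('x2', 'z1'): 60},
    'C': {('y1', 'z1'): 35, ('y2', 'z1'): 65},
}

# Prefer the candidate with the highest-entropy cell distribution --
# a sketch of the entropy-based selection principle only.
best = max(summaries, key=lambda k: entropy(list(summaries[k].values())))
print(best)
```

    Here 'A', with the most cells and the most even distribution, has the highest entropy, echoing the paper's finding that sources sharing more cells yield better estimates.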

  7. Using Students' Cultural Heritage to Improve Academic Achievement in Writing

    ERIC Educational Resources Information Center

    Mendez, Gilbert

    2006-01-01

    This article discusses an approach to teaching used at Calexico Unified School District, a California-Mexican border high school, by a group of teachers working to make teaching and learning more relevant to Chicano and Mexican students' lives and to improve their academic achievement in writing. An off-shoot of a training program for English…

  8. Systems Thinking: A Skill to Improve Student Achievement

    ERIC Educational Resources Information Center

    Thornton, Bill; Peltier, Gary; Perreault, George

    2004-01-01

    This article examines how schools can avoid barriers to systems thinking in relation to improving student achievement. It then illustrates common errors associated with non-systems thinking and recommends solutions. Educators who understand that schools are complex interdependent social systems can move their organizations forward. Unfortunately,…

  9. New Directions in Social Psychological Interventions to Improve Academic Achievement

    ERIC Educational Resources Information Center

    Wilson, Timothy D.; Buttrick, Nicholas R.

    2016-01-01

    Attempts to improve student achievement typically focus on changing the educational environment (e.g., better schools, better teachers) or on personal characteristics of students (e.g., intelligence, self-control). The 6 articles in this special issue showcase an additional approach, emanating from social psychology, which focuses on students'…

  10. Improving Student Reading Achievement through the Use of Reading Strategies.

    ERIC Educational Resources Information Center

    Claussen, Jean; Ford, Linda; Mosley, Elizabeth

    This report describes a program to improve student reading achievement of the targeted first, second, and third grade classes in two west central Illinois schools. More than half of each schools' population was identified as low-income. Evidence for the existence of reading problems included student surveys, oral reading rubrics, phonemic…

  11. Understanding the Change Styles of Teachers to Improve Student Achievement

    ERIC Educational Resources Information Center

    Bigby, Arlene May Green

    2009-01-01

    The topic of this dissertation is the understanding of teacher change styles to improve student achievement. Teachers from public schools in a state located in the northern plains were surveyed regarding their Change Styles (preferred approaches to change) and flexibility scores. The results were statistically analyzed to determine if there were…

  12. Incorporating tracer-tracee differences into models to improve accuracy

    SciTech Connect

    Schoeller, D.A.

    1991-05-01

    The ideal tracer for metabolic studies is one that behaves exactly like the tracee. Compounds labeled with isotopes come the closest to this ideal because they are chemically identical to the tracee except for the substitution of a stable or radioisotope at one or more positions. Even this substitution, however, can introduce a difference in metabolism that may be quantitatively important with regard to the development of the mathematical model used to interpret the kinetic data. The doubly labeled water method for the measurement of carbon dioxide production and hence energy expenditure in free-living subjects is a good example of how differences between the metabolism of the tracers and the tracee can influence the accuracy of the carbon dioxide production rate determined from the kinetic data.
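    The doubly labeled water principle mentioned above can be sketched with the classic simplified relation: the oxygen label leaves the body as both water and CO2 while the hydrogen label leaves only as water, so the difference in elimination rates reflects CO2 production. The constants and fractionation/dilution corrections of the full method, which address exactly the tracer-tracee differences the abstract discusses, are omitted here, and the example values are hypothetical.

```python
def co2_production_rate(n_body_water_mol, k_oxygen, k_hydrogen):
    """Simplified doubly-labeled-water relation:
        rCO2 = (N / 2) * (kO - kH)
    where N is total body water (mol) and kO, kH are the isotope
    elimination rate constants (per day). Real analyses add isotope
    fractionation and dilution-space corrections.
    """
    return n_body_water_mol / 2.0 * (k_oxygen - k_hydrogen)

# Hypothetical adult: ~2200 mol body water, plausible rate constants.
print(co2_production_rate(2200.0, 0.12, 0.10))  # mol CO2 per day
```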

  13. Design considerations for achieving high accuracy with the SHOALS bathymetric lidar system

    NASA Astrophysics Data System (ADS)

    Guenther, Gary C.; Thomas, Robert W. L.; LaRocque, Paul E.

    1996-11-01

    The ultimate accuracy of depths from an airborne laser hydrography system depends both on careful hardware design aimed at producing the best possible accuracy and precision of recorded data, along with insensitivity to environmental effects, and on post-flight data processing software which corrects for a number of unavoidable biases and provides for flexible operator interaction to handle special cases. The generic procedure for obtaining a depth from an airborne lidar pulse involves measurement of the time between the surface return and the bottom return. In practice, because both of these return times are biased due to a number of environmental and hardware effects, it is necessary to apply various correctors in order to obtain depth estimates which are sufficiently accurate to meet International Hydrographic Office standards. Potential false targets, also of both environmental and hardware origin, must be discriminated, and wave heights must be removed. It is important to have a depth confidence value matched to accuracy and to have warnings about or automatic deletion of pulses with questionable characteristics. Techniques, procedures, and algorithms developed for the SHOALS systems are detailed here.
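    The generic depth-from-timing principle described above can be sketched as follows. This ignores all the bias correctors the abstract emphasizes (off-nadir geometry, propagation effects, wave heights) and uses an approximate refractive index for sea water:

```python
C = 299_792_458.0      # speed of light in vacuum, m/s
N_WATER = 1.33         # approximate refractive index of sea water

def depth_from_returns(t_surface_s, t_bottom_s, n_water=N_WATER):
    """Depth from the time between surface and bottom lidar returns.

    The pulse travels down and back at c/n in water, so:
        depth = (c / n) * (t_bottom - t_surface) / 2
    Systems like SHOALS additionally correct for beam geometry,
    environmental biases, and wave heights.
    """
    return (C / n_water) * (t_bottom_s - t_surface_s) / 2.0

# A ~10 m depth corresponds to roughly 88.7 ns between the two returns.
print(depth_from_returns(0.0, 88.7e-9))
```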

  14. Study on Improving the Accuracy of Satellite Measurement in Urban Areas

    NASA Astrophysics Data System (ADS)

    Matsushita, Takashi; Tanaka, Toshiyuki

    GPS/GNSS (Global Positioning System/Global Navigation Satellite System) is a 3D positioning system that uses satellites to measure a receiver's current position. Many people now use GPS for navigation in cars and cellular phones, so a positioning accuracy of several meters is required to satisfy users' needs. However, the measurement error can reach hundreds of meters in urban areas. One reason is that the receiver fails to measure the pseudorange accurately due to multipath from buildings and other structures. Another reason is that the satellite constellation is biased because of the reduced number of observable satellites. We therefore propose methods for reducing the multipath error and compensating for the lack of visible satellites. Although multipath error can currently be reduced with choke-ring antennas and special correlators, these approaches make the antenna expensive, large, or complex. We devised methods that reduce the multipath error using only measurement data, which allows a smaller receiver and lets satellites whose signals contain multipath error still be used in the measurement. With these methods we improved the 2drms from 35.3 m to 30.5 m; overall, we achieved about a 69% improvement in 2drms and about a 5% increase in the measurement rate. These results show that we succeeded not only in improving the measurement accuracy but also in increasing the measurement rate in urban areas, demonstrating that the proposed method is effective for urban-area measurement.
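    The 2drms accuracy metric used in the abstract above is twice the root-mean-square horizontal radial error, a radius containing roughly 95% of fixes for near-Gaussian errors. A minimal sketch with invented position errors:

```python
import math

def two_drms(east_errors_m, north_errors_m):
    """2drms horizontal accuracy: twice the RMS radial error (metres)."""
    n = len(east_errors_m)
    ms_radial = sum(e * e + v * v
                    for e, v in zip(east_errors_m, north_errors_m)) / n
    return 2.0 * math.sqrt(ms_radial)

# Hypothetical position errors (metres) from repeated fixes at a known point.
east = [3.0, -4.0, 1.5, -2.0]
north = [2.0, 1.0, -3.5, 4.0]
print(round(two_drms(east, north), 2))
```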

  15. Lucky imaging: improved localization accuracy for single molecule imaging.

    PubMed

    Cronin, Bríd; de Wet, Ben; Wallace, Mark I

    2009-04-01

    We apply the astronomical data-analysis technique, Lucky imaging, to improve resolution in single molecule fluorescence microscopy. We show that by selectively discarding data points from individual single-molecule trajectories, imaging resolution can be improved by a factor of 1.6 for individual fluorophores and up to 5.6 for more complex images. The method is illustrated using images of fluorescent dye molecules and quantum dots, and the in vivo imaging of fluorescently labeled linker for activation of T cells.
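    The frame-selection idea behind Lucky imaging, keeping only the best data points before averaging, can be sketched as below. The intensity-variance sharpness score and the simulated frames are stand-ins, not the quality metric or data used in the study:

```python
import numpy as np

def lucky_average(frames, keep_fraction=0.1):
    """Average only the sharpest frames, discarding the rest.

    Sharpness here is the variance of pixel intensities -- a simple
    stand-in for the image-quality metrics used in practice.
    """
    scores = [float(np.var(f)) for f in frames]
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]          # indices of sharpest frames
    return np.mean([frames[i] for i in best], axis=0)

rng = np.random.default_rng(1)
# 50 simulated 8x8 frames; the last 5 are 'sharp' (high contrast).
frames = [rng.normal(0, 0.1, (8, 8)) for _ in range(45)]
frames += [rng.normal(0, 1.0, (8, 8)) for _ in range(5)]
result = lucky_average(frames, keep_fraction=0.1)
print(result.shape)  # (8, 8)
```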

  16. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284

  17. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones.

    PubMed

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-06-18

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android's LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%-60%, thereby reducing the existing error of 3-4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings.

  18. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284
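    The first step described in the abstract above, forming a line-of-sight unit vector from elevation and azimuth and rotating it into Earth-centered, Earth-fixed (ECEF) axes, can be sketched with the standard East-North-Up (ENU) conventions. The satellite angles and receiver coordinates below are illustrative values only:

```python
import math

def los_unit_enu(elev_deg, azim_deg):
    """Line-of-sight unit vector in local East-North-Up coordinates
    (azimuth measured clockwise from north)."""
    el, az = math.radians(elev_deg), math.radians(azim_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el))

def enu_to_ecef(enu, lat_deg, lon_deg):
    """Rotate an ENU vector into ECEF axes at the given geodetic
    latitude and longitude (standard rotation; vector only, no origin shift)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    e, n, u = enu
    x = -math.sin(lon) * e - math.sin(lat) * math.cos(lon) * n + math.cos(lat) * math.cos(lon) * u
    y = math.cos(lon) * e - math.sin(lat) * math.sin(lon) * n + math.cos(lat) * math.sin(lon) * u
    z = math.cos(lat) * n + math.sin(lat) * u
    return (x, y, z)

# Hypothetical satellite at 45 deg elevation, 120 deg azimuth, seen from
# a receiver at 37.5 N, 127.0 E (illustrative, not from the paper's tests).
los = los_unit_enu(45.0, 120.0)
print(enu_to_ecef(los, 37.5, 127.0))
```

    The rotation is orthonormal, so the result remains a unit vector, as required for rows of the multi-constellation observation matrix.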

  19. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  20. You are so beautiful... to me: seeing beyond biases and achieving accuracy in romantic relationships.

    PubMed

    Solomon, Brittany C; Vazire, Simine

    2014-09-01

    Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically). PMID:25133729

  1. Singing Video Games May Help Improve Pitch-Matching Accuracy

    ERIC Educational Resources Information Center

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  2. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
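    One routine from the kind of image-processing toolbox mentioned above (thresholding plus centroid estimation) can be sketched as follows; averaging over many pupil pixels is what yields sub-pixel localization. The synthetic frame and threshold are invented for illustration:

```python
import numpy as np

def pupil_center(image, threshold):
    """Estimate the pupil center as the centroid of dark (below-threshold)
    pixels. Averaging many pixels gives sub-pixel localization -- one
    reason off-line processing can beat coarse real-time gaze estimates."""
    rows, cols = np.nonzero(image < threshold)
    return float(rows.mean()), float(cols.mean())

# Synthetic 64x64 frame: bright background with a dark disc at (30, 22).
yy, xx = np.mgrid[0:64, 0:64]
image = np.where((yy - 30.0) ** 2 + (xx - 22.0) ** 2 < 10.0 ** 2, 0.1, 0.9)
print(pupil_center(image, threshold=0.5))  # (30.0, 22.0)
```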

  3. Does naming accuracy improve through self-monitoring of errors?

    PubMed

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia. Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  4. Cocontraction of Pairs of Muscles around Joints May Improve an Accuracy of a Reaching Movement: a Numerical Simulation Study

    NASA Astrophysics Data System (ADS)

    Ueyama, Yuki; Miyashita, Eizo

    2011-06-01

    We have pair muscle groups on a joint; agonist and antagonist muscles. Simultaneous activation of agonist and antagonist muscles around a joint, which is called cocontraction, is suggested to take a role of increasing the joint stiffness in order to decelerate hand speed and improve movement accuracy. However, it has not been clear how cocontraction and the joint stiffness are varied during movements. In this study, muscle activation and the joint stiffness in reaching movements were studied under several requirements of end-point accuracy using a 2-joint 6-muscle model and an approximately optimal control. The time-varying cocontraction and the joint stiffness were showed by the numerically simulation study. It indicated that the strength of cocontraction and the joint stiffness increased synchronously as the required accuracy level increased. We conclude that cocontraction may get the joint stiffness increased to achieve higher requirement of the movement accuracy.

  5. Improving the Accuracy of Estimation of Climate Extremes

    NASA Astrophysics Data System (ADS)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  6. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    DOE PAGES

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-02-11

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.

  7. Improving the accuracy of maternal mortality and pregnancy related death.

    PubMed

    Schaible, Burk

    2014-01-01

    Comparing abortion-related death and pregnancy-related death remains difficult due to the limitations within the Abortion Mortality Surveillance System and the International Statistical Classification of Diseases and Related Health Problems (ICD). These methods lack a systematic and comprehensive method of collecting complete records regarding abortion outcomes in each state and fail to properly identify longitudinal cause of death related to induced abortion. This article seeks to analyze the current method of comparing abortion-related death with pregnancy-related death and provide solutions to improve data collection regarding these subjects.

  8. Using improvement science methods to increase accuracy of surgical consents.

    PubMed

    Mercurio, Patti; Shaffer Ellis, Andrea; Schoettker, Pamela J; Stone, Raymond; Lenk, Mary Anne; Ryckman, Frederick C

    2014-07-01

    The surgical consent serves as a key link in preventing breakdowns in communication that could lead to wrong-patient, wrong-site, or wrong-procedure events. We conducted a quality improvement initiative at a large, urban pediatric academic medical center to reliably increase the percentage of informed consents for surgical and medical procedures with accurate safety data information at the first point of perioperative contact. Improvement activities focused on awareness, education, standardization, real-time feedback and failure identification, and transparency. A total of 54,082 consent forms from 13 surgical divisions were reviewed between May 18, 2011, and November 30, 2012. Between May 2011 and June 2012, the percentage of consents without safety errors increased from a median of 95.4% to 99.7%. Since July 2012, the median has decreased slightly but has remained stable at 99.4%. Our results suggest that effective safety checks allow discovery and prevention of errors.

  9. SPHGal: smoothed particle hydrodynamics with improved accuracy for galaxy simulations

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Yu; Naab, Thorsten; Walch, Stefanie; Moster, Benjamin P.; Oser, Ludwig

    2014-09-01

We present the smoothed particle hydrodynamics (SPH) implementation SPHGal, which combines some recently proposed improvements in GADGET. This includes a pressure-entropy formulation with a Wendland kernel, a higher order estimate of velocity gradients, a modified artificial viscosity switch with a modified strong limiter, and artificial conduction of thermal energy. With a series of idealized hydrodynamic tests, we show that the pressure-entropy formulation is ideal for resolving fluid mixing at contact discontinuities but performs conspicuously worse at strong shocks due to the large entropy discontinuities. Including artificial conduction at shocks greatly improves the results. In simulations of Milky Way-like disc galaxies, a feedback-induced instability develops if too much artificial viscosity is introduced. Our modified artificial viscosity scheme prevents this instability and shows efficient shock capturing capability. We also investigate the star formation rate and the galactic outflow. The star formation rates vary slightly for different SPH schemes while the mass loading is sensitive to the SPH scheme and significantly reduced in our favoured implementation. We compare the accretion behaviour of the hot halo gas. The formation of cold blobs, an artefact of simple SPH implementations, can be eliminated efficiently with proper fluid mixing, either by conduction and/or by using a pressure-entropy formulation.

  10. Improved accuracy of 3D-printed navigational template during complicated tibial plateau fracture surgery.

    PubMed

    Huang, Huajun; Hsieh, Ming-Fa; Zhang, Guodong; Ouyang, Hanbin; Zeng, Canjun; Yan, Bin; Xu, Jing; Yang, Yang; Wu, Zhanglin; Huang, Wenhua

    2015-03-01

This study aimed to improve the surgical accuracy of plating and screwing for complicated tibial plateau fractures, assisted by a 3D implant library and 3D-printed navigational templates. Clinical cases were performed whereby complicated tibial plateau fractures were imaged using computed tomography and reconstructed into 3D fracture prototypes. The preoperative planning of an anatomically matching plate with appropriate screw trajectories was performed with the help of the library of 3D models of implants. According to the optimal planning, patient-specific navigational templates produced by a 3D printer were used to accurately guide the real surgical implantation. The fixation outcomes, in terms of the deviations of screw placement between preoperative and postoperative screw trajectories, were measured and compared, including the screw lengths, entry point locations and screw directions. With virtual preoperative planning, we achieved optimal and accurate fixation outcomes in the real clinical surgeries. The deviation of screw length was 1.57 ± 5.77 mm, P > 0.05. The displacements of the entry points in the x-, y-, and z-axis were 0.23 ± 0.62, 0.83 ± 1.91, and 0.46 ± 0.67 mm, respectively, P > 0.05. The deviations of projection angle in the coronal (x-y) and transverse (x-z) planes were 6.34 ± 3.42° and 4.68 ± 3.94°, respectively, P > 0.05. There was no significant difference in the deviations of screw length, entry point and projection angle between the ideal and real screw trajectories. The ideal and accurate preoperative planning of plating and screwing can be achieved in real surgery assisted by the 3D models library of implants and the patient-specific navigational template. This technology improves the accuracy and efficiency of personalized internal fixation surgery and we have proved this in our clinical applications.

  11. Resolution and quantitative accuracy improvements in ultrasound transmission imaging

    NASA Astrophysics Data System (ADS)

    Chenevert, T. L.

The type of ultrasound transmission imaging referred to as ultrasonic computed tomography (UCT) reconstructs distributions of tissue speed of sound and sound attenuation properties from measurements of acoustic pulse time of flight (TOF) and energy received through tissue. Although clinical studies with experimental UCT scanners have demonstrated that UCT is sensitive to certain tissue pathologies not easily detected with conventional ultrasound imaging, they have also shown UCT to suffer from artifacts due to physical differences between the acoustic beam and its ray model implicit in image reconstruction algorithms. Artifacts are expressed as large quantitative errors in attenuation images, and as poor spatial resolution and size distortion (exaggerated size of high speed-of-sound regions) in speed-of-sound images. Methods are introduced and investigated which alleviate these problems in UCT imaging by providing improved measurements of pulse TOF and energy.

  12. Strain mapping accuracy improvement using super-resolution techniques.

    PubMed

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Fernández-Reyes, D; González, D; Mayoral, A; Utrilla, A D; Ulloa, J M; Galindo, P L

    2016-04-01

Super-resolution (SR) software-based techniques aim at generating a final image by combining several noisy frames with lower resolution from the same scene. A comparative study on high-resolution high-angle annular dark field images of InAs/GaAs QDs has been carried out in order to evaluate the performance of the SR technique. The obtained SR images present enhanced resolution and a higher signal-to-noise ratio (SNR) and sharpness compared with the experimental images. In addition, SR is also applied in the field of strain analysis using digital image processing applications such as geometrical phase analysis and peak pairs analysis. The precision of the strain mappings can be improved when SR methodologies are applied to experimental images.
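
    The core of frame-combination SR (before any registration or grid refinement) is that averaging N aligned noisy frames reduces the noise standard deviation by roughly sqrt(N). A minimal synthetic sketch, with all data invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # A ground-truth "scene" and 16 independently noisy frames of it.
    scene = np.sin(np.linspace(0, 4 * np.pi, 256)).reshape(16, 16)
    noise_sigma = 0.5
    frames = [scene + rng.normal(0, noise_sigma, scene.shape) for _ in range(16)]

    # Averaging N aligned frames cuts noise std by ~sqrt(N); full SR methods
    # additionally register sub-pixel shifts and reconstruct on a finer grid.
    combined = np.mean(frames, axis=0)

    err_single = np.std(frames[0] - scene)
    err_combined = np.std(combined - scene)
    print(f"noise std, single frame:     {err_single:.3f}")
    print(f"noise std, 16-frame average: {err_combined:.3f}")  # ~ err_single / 4
    ```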

  13. Improved accuracy of quantification of analytes in human body fluids by near-IR laser Raman spectroscopy with new algorithms

    NASA Astrophysics Data System (ADS)

    Qu, Jianan Y.; Yau, On L.; Yau, SzeFong M.

    1999-07-01

Near-infrared Raman spectroscopy has been successfully used to quantitatively analyze ethanol and acetaminophen in human urine samples. New algorithms incorporating the intrinsic spectrum of the analyte of interest into the multivariate calibration were examined to improve the accuracy of the predicted concentrations. Compared with the commonly used partial least squares calibration, it was found that the methods using the intrinsic spectrum of the analyte of interest always achieved much higher accuracy, particularly when the interference from other undesired chemicals in the samples is severe.

  14. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, demonstrating that the method is not broadly accepted within the medical community. This is due to the fact that contrast agents are typically reserved for subjects whose initial evaluation produced very low quality recordings. Thus, high background noise and low contrast levels characterize most scans, which hinder echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.

  15. FISM 2.0: Improved Spectral Range, Resolution, and Accuracy

    NASA Technical Reports Server (NTRS)

    Chamberlin, Phillip C.

    2012-01-01

The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the Space Weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60-second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar cycle and solar rotational variations over months and decades back to 1947. This model has been highly successful in providing driving inputs to study the effect of solar irradiance variations on the Earth's ionosphere and thermosphere, lunar dust charging, as well as the Martian ionosphere. The second version of FISM, FISM2, is currently being updated to be based on the more accurate SDO/EVE data, which will provide much more accurate estimations in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1-300 nm will also be available at 0.1 nm resolution, allowing FISM2 to be improved to similar 0.1 nm spectral bins. FISM2 will also have a TSI component that will estimate the total radiated energy during flares based on the few TSI flares observed to date. Presented here will be initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome in order for FISM2 to accurately model the solar variations on time scales of seconds to decades.

  16. Improving the accuracy of canal seepage detection through geospatial techniques

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad

With climatic change, many western states in the United States are experiencing drought conditions. Numerous irrigation districts are losing significant amounts of water from their canal systems due to leakage. Every year, on average, 2 million acres of prime cropland in the US is lost to soil erosion, waterlogging and salinity. Lining of canals could save an enormous amount of water for irrigating crops, but at present, due to soaring costs of construction and environmental mitigation, adopting such a program on a large scale would be prohibitively expensive. Conventional techniques of seepage detection are expensive, time consuming and labor intensive, besides being not very accurate. Technological advancements in remote sensing have made it possible to investigate irrigation canals for seepage site identification. In this research, band 9 in the NIR region and band 45 in the TIR region of airborne MASTER data have been utilized to highlight anomalies along an irrigation canal at Phoenix, Arizona. High-resolution (1 to 4 meter pixel) satellite images provided by private companies for scientific research, and made available to the public by Google on Google Earth, were then successfully used to separate those anomalies into water-activity sites, natural vegetation, and man-made structures, thereby greatly improving the seepage detection ability of airborne remote sensing. This technique is much faster and more cost effective than conventional techniques and past airborne remote sensing techniques for verification of anomalies along irrigation canals. It also solves one of the long-standing problems of discriminating false impressions of seepage sites, caused by dense natural vegetation, terrain relief and low depressions of natural drainages, from true water-related activity sites.

  17. Two-step FEM-based Liver-CT registration: improving internal and external accuracy

    NASA Astrophysics Data System (ADS)

    Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2014-03-01

    To know the exact location of the internal structures of the organs, especially the vasculature, is of great importance for the clinicians. This information allows them to know which structures/vessels will be affected by certain therapy and therefore to better treat the patients. However the use of internal structures for registration is often disregarded especially in physical based registration methods. In this paper we propose an algorithm that uses finite element methods to carry out a registration of liver volumes that will not only have accuracy in the boundaries of the organ but also in the interior. Therefore a graph matching algorithm is used to find correspondences between the vessel trees of the two livers to be registered. In addition to this an adaptive volumetric mesh is generated that contains nodes in the locations in which correspondences were found. The displacements derived from those correspondences are the input for the initial deformation of the model. The first deformation brings the internal structures to their final deformed positions and the surfaces close to it. Finally, thin plate splines are used to refine the solution at the boundaries of the organ achieving an improvement in the accuracy of 71%. The algorithm has been evaluated in CT clinical images of the abdomen.

  18. Improvements in ECG accuracy for diagnosis of left ventricular hypertrophy in obesity

    PubMed Central

    Rider, Oliver J; Ntusi, Ntobeko; Bull, Sacha C; Nethononda, Richard; Ferreira, Vanessa; Holloway, Cameron J; Holdsworth, David; Mahmod, Masliza; Rayner, Jennifer J; Banerjee, Rajarshi; Myerson, Saul; Watkins, Hugh; Neubauer, Stefan

    2016-01-01

Objectives The electrocardiogram (ECG) is the most commonly used tool to screen for left ventricular hypertrophy (LVH), and yet current diagnostic criteria are insensitive in a modern, increasingly overweight society. We propose a simple adjustment to improve diagnostic accuracy in different body weights and improve the sensitivity of this universally available technique. Methods Overall, 1295 participants were included—821 with a wide range of body mass index (BMI 17.1–53.3 kg/m2) initially underwent cardiac magnetic resonance evaluation of anatomical left ventricular (LV) axis, LV mass and 12-lead surface ECG in order to generate an adjustment factor applied to the Sokolow–Lyon criteria. This factor was then validated in a second cohort (n=520, BMI 15.9–63.2 kg/m2). Results When matched for LV mass, the combination of leftward anatomical axis deviation and increased BMI resulted in a reduction of the Sokolow–Lyon index, by 4 mm in overweight and 8 mm in obesity. After adjusting for this in the initial cohort, the sensitivity of the Sokolow–Lyon index increased (overweight: 12.8% to 30.8%, obese: 3.1% to 27.2%) approaching that seen in normal weight (37.8%). Similar results were achieved in the validation cohort (sensitivity increased in overweight: 8.3% to 39.1%, obese: 9.4% to 25.0%) again approaching normal weight (39.0%). Importantly, specificity remained excellent (>93.1%). Conclusions Adjusting the Sokolow–Lyon index for BMI (overweight +4 mm, obesity +8 mm) improves the diagnostic accuracy for detecting LVH. As the ECG, worldwide, remains the most widely used screening tool for LVH, implementing these findings should translate into significant clinical benefit. PMID:27486142
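
    The proposed adjustment is simple enough to sketch directly. The +4 mm / +8 mm offsets follow the abstract; the BMI cut-offs (25 and 30 kg/m2) are the standard overweight/obese categories, and 35 mm is the conventional Sokolow-Lyon LVH cut-off, both assumed here rather than stated in the abstract:

    ```python
    def adjusted_sokolow_lyon(sv1_mm, r_v5v6_mm, bmi):
        """Sokolow-Lyon index (SV1 + taller of RV5/RV6, in mm) with the
        BMI adjustment described above: +4 mm if overweight, +8 mm if obese."""
        index = sv1_mm + r_v5v6_mm
        if bmi >= 30:      # obese
            index += 8
        elif bmi >= 25:    # overweight
            index += 4
        return index

    # Conventional LVH cut-off for the Sokolow-Lyon index (assumed).
    LVH_THRESHOLD_MM = 35

    # An obese patient whose raw index (30 mm) would be missed by the
    # unadjusted criteria is flagged once the +8 mm correction is applied.
    raw = 18 + 12                                   # SV1 + RV5/6 = 30 mm
    adj = adjusted_sokolow_lyon(18, 12, bmi=33.0)   # 30 + 8 = 38 mm
    print(raw >= LVH_THRESHOLD_MM, adj >= LVH_THRESHOLD_MM)  # False True
    ```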

  19. Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images.

    PubMed

    Chu, Carlton; Hsu, Ai-Ling; Chou, Kun-Hsien; Bandettini, Peter; Lin, Chingpo

    2012-03-01

There are growing numbers of studies using machine learning approaches to characterize patterns of anatomical difference discernible from neuroimaging data. The high dimensionality of image data often raises a concern that feature selection is needed to obtain optimal accuracy. Among previous studies, mostly using fixed sample sizes, some show greater predictive accuracies with feature selection, whereas others do not. In this study, we compared four common feature selection methods: (1) pre-selected regions of interest (ROIs) based on prior knowledge; (2) univariate t-test filtering; (3) recursive feature elimination (RFE); and (4) t-test filtering constrained by ROIs. The predictive accuracies achieved from different sample sizes, with and without feature selection, were compared statistically. To demonstrate the effect, we used grey matter segmented from the T1-weighted anatomical scans collected by the Alzheimer's disease Neuroimaging Initiative (ADNI) as the input features to a linear support vector machine classifier. The objective was to characterize the patterns of difference between Alzheimer's disease (AD) patients and cognitively normal subjects, and also to characterize the difference between mild cognitive impairment (MCI) patients and normal subjects. In addition, we also compared the classification accuracies between MCI patients who converted to AD and MCI patients who did not convert within the period of 12 months. Predictive accuracies from two data-driven feature selection methods (t-test filtering and RFE) were no better than those achieved using whole brain data. We showed that we could achieve the most accurate characterizations by using prior knowledge of where to expect neurodegeneration (hippocampus and parahippocampal gyrus). Therefore, feature selection does improve the classification accuracies, but it depends on the method adopted. In general, larger sample sizes yielded higher accuracies with less advantage obtained by using
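
    Univariate t-test filtering, the second method in the list above, can be sketched with synthetic data standing in for grey-matter voxel features; the group sizes, voxel counts, and effect size here are illustrative only:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic stand-in for voxel features: 30 subjects per group, 1000 voxels,
    # of which only the first 20 actually differ between the groups.
    n_per_group, n_vox = 30, 1000
    a = rng.normal(0, 1, (n_per_group, n_vox))
    b = rng.normal(0, 1, (n_per_group, n_vox))
    b[:, :20] += 1.5  # group difference confined to 20 "informative" voxels

    def t_filter(a, b, k):
        """Two-sample t statistic per feature (Welch form); keep the top k
        features by absolute t value."""
        na, nb = len(a), len(b)
        se = np.sqrt(a.var(axis=0, ddof=1) / na + b.var(axis=0, ddof=1) / nb)
        t = (a.mean(axis=0) - b.mean(axis=0)) / se
        return np.argsort(-np.abs(t))[:k]

    selected = t_filter(a, b, k=20)

    # With this effect size, most truly informative voxels should be recovered;
    # the selected set would then feed a classifier such as a linear SVM.
    hits = np.intersect1d(selected, np.arange(20)).size
    print(f"informative voxels recovered: {hits}/20")
    ```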

  20. Prediction of soil properties using imaging spectroscopy: Considering fractional vegetation cover to improve accuracy

    NASA Astrophysics Data System (ADS)

    Franceschini, M. H. D.; Demattê, J. A. M.; da Silva Terra, F.; Vicente, L. E.; Bartholomeus, H.; de Souza Filho, C. R.

    2015-06-01

Spectroscopic techniques have become attractive to assess soil properties because they are fast, require little labor and may reduce the amount of laboratory waste produced when compared to conventional methods. Imaging spectroscopy (IS) can have further advantages compared to laboratory or field proximal spectroscopic approaches, such as providing spatially continuous information with a high density. However, the accuracy of IS-derived predictions decreases when the spectral mixture of soil with other targets occurs. This paper evaluates the use of spectral data obtained by an airborne hyperspectral sensor (ProSpecTIR-VS - Aisa dual sensor) for prediction of physical and chemical properties of Brazilian highly weathered soils (i.e., Oxisols). A methodology to assess the soil spectral mixture is adapted and a progressive spectral dataset selection procedure, based on bare soil fractional cover, is proposed and tested. Satisfactory performances are obtained especially for the quantification of clay, sand and CEC using airborne sensor data (R2 of 0.77, 0.79 and 0.54; RPD of 2.14, 2.22 and 1.50, respectively), after spectral data selection is performed; although results obtained for laboratory data are more accurate (R2 of 0.92, 0.85 and 0.75; RPD of 3.52, 2.62 and 2.04, for clay, sand and CEC, respectively). Most importantly, predictions based on airborne-derived spectra for which the bare soil fractional cover is not taken into account show considerably lower accuracy, for example for clay, sand and CEC (RPD of 1.52, 1.64 and 1.16, respectively). Therefore, hyperspectral remotely sensed data can be used to predict topsoil properties of highly weathered soils, although spectral mixture of bare soil with vegetation must be considered in order to achieve an improved prediction accuracy.

  1. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    SciTech Connect

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  2. Improved accuracy of radar WPMM estimated rainfall upon application of objective classification criteria

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Daniel; Amitai, Eyal; Wolff, David B.

    1995-01-01

Application of the window probability matching method to radar and rain gauge data that have been objectively classified into different rain types resulted in distinctly different Z(sub e)-R relationships for the various classifications. These classification parameters, in addition to the range from the radar, are (a) the horizontal radial reflectivity gradients (dB/km); (b) the cloud depth, as scaled by the effective efficiency; (c) the brightband fraction within the radar field window; and (d) the height of the freezing level. Combining physical parameters to identify the type of precipitation and statistical relations most appropriate to the precipitation types results in considerable improvement of both point and areal rainfall measurements. A limiting factor in the assessment of the improved accuracy is the inherent variance between the true rain intensity at the radar measured volume and the rain intensity at the mouth of the rain gauge. Therefore, a very dense rain gauge network is required to validate most of the suggested realized improvement. A rather small sample size is required to achieve a stable Z(sub e)-R relationship (standard deviation of 15% of R for a given Z(sub e)) -- about 200 mm of rainfall accumulated in all gauges combined for each classification.
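
    The probability-matching idea (pairing equal percentiles of the reflectivity and rain-rate distributions rather than fitting a fixed power law) can be sketched as follows. The data are synthetic, generated from an assumed Marshall-Palmer-like relation Z = 200 R^1.6 plus noise; the real method additionally windows the data in space and time and applies the rain-type classification described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic gauge rain rates (mm/h) and co-located radar reflectivities (dBZ).
    R = rng.gamma(shape=2.0, scale=3.0, size=2000)
    Z_dbz = 10 * np.log10(200 * R ** 1.6) + rng.normal(0, 2, R.size)

    # Probability matching: pair equal percentiles of the Z and R distributions
    # to build an empirical Ze-R lookup table (no functional form assumed).
    pct = np.linspace(1, 99, 50)
    z_q = np.percentile(Z_dbz, pct)   # increasing, so usable with np.interp
    r_q = np.percentile(R, pct)

    def rain_rate_from_dbz(dbz):
        """Estimate R (mm/h) from reflectivity via the matched-percentile table."""
        return np.interp(dbz, z_q, r_q)

    # By construction the estimated rain-rate distribution matches the gauge
    # distribution, so areal totals are nearly unbiased on this synthetic data.
    est = rain_rate_from_dbz(Z_dbz)
    bias = est.mean() / R.mean()
    print(f"areal rainfall bias (estimated/true): {bias:.2f}")  # close to 1
    ```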

  3. Improved Motor-Timing: Effects of Synchronized Metronome Training on Golf Shot Accuracy

    PubMed Central

    Sommer, Marius; Rönnqvist, Louise

    2009-01-01

This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers participated (mean age 27 years; mean golf handicap 12.6) in this study. Pre- and post-test investigations of golf shots made by three different clubs were conducted by use of a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing; the golfers in the Control group merely trained their golf swings during the same time period. No differences between the two groups were found from the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements for golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study’s results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control group design was used. The 4-week SMT intervention showed significant improvements in motor timing and golf shot accuracy, and led to less variability. We conclude that this study’s results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608

  4. Improved localization accuracy in magnetic source imaging using a 3-D laser scanner.

    PubMed

    Bardouille, Timothy; Krishnamurthy, Santosh V; Hajra, Sujoy Ghosh; D'Arcy, Ryan C N

    2012-12-01

    Brain source localization accuracy in magnetoencephalography (MEG) requires accuracy in both digitizing anatomical landmarks and coregistering to anatomical magnetic resonance images (MRI). We compared the source localization accuracy and MEG-MRI coregistration accuracy of two head digitization systems-a laser scanner and the current standard electromagnetic digitization system (Polhemus)-using a calibrated phantom and human data. When compared using the calibrated phantom, surface and source localization accuracy for data acquired with the laser scanner improved over the Polhemus by 141% and 132%, respectively. Laser scan digitization reduced MEG source localization error by 1.38 mm on average. In human participants, a laser scan of the face generated a 1000-fold more points per unit time than the Polhemus head digitization. An automated surface-matching algorithm improved the accuracy of MEG-MRI coregistration over the equivalent manual procedure. Simulations showed that the laser scan coverage could be reduced to an area around the eyes only while maintaining coregistration accuracy, suggesting that acquisition time can be substantially reduced. Our results show that the laser scanner can both reduce setup time and improve localization accuracy, in comparison to the Polhemus digitization system.
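
    The automated surface-matching step described above can be illustrated, under strong simplifying assumptions, by rigid point-cloud alignment with known correspondences (the Kabsch algorithm). A full surface-matching or ICP pipeline would also estimate the correspondences iteratively; all data here are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def kabsch(P, Q):
        """Least-squares rigid transform (rotation R, translation t) mapping
        points P onto Q, via SVD of the cross-covariance matrix."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    # Digitized head-surface points (e.g. from a laser scan) ...
    P = rng.normal(0, 1, (200, 3))
    # ... and the same points in MRI coordinates: rotated 30 deg about z, shifted.
    theta = np.deg2rad(30)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    Q = P @ R_true.T + np.array([10.0, -5.0, 2.0])

    R_est, t_est = kabsch(P, Q)
    err = np.abs(P @ R_est.T + t_est - Q).max()
    print(f"max alignment error: {err:.2e}")  # near machine precision
    ```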

  5. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
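
    A minimal emulation of the low-precision idea, assuming a standard Lorenz '96 configuration (40 variables, forcing F = 8) and classical RK4 time stepping: the state is rounded to half precision after each step. This is a deliberate simplification of the setup above, which restricts inexactness to the small scales and also emulates hardware bit flips.

    ```python
    import numpy as np

    def l96_tendency(x, F=8.0):
        """Lorenz '96 tendencies: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    def rk4_step(x, dt):
        """One classical fourth-order Runge-Kutta step."""
        k1 = l96_tendency(x)
        k2 = l96_tendency(x + 0.5 * dt * k1)
        k3 = l96_tendency(x + 0.5 * dt * k2)
        k4 = l96_tendency(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def integrate(x0, steps, dt=0.05, precision=np.float64):
        """Integrate, rounding the state to `precision` after every step to
        emulate storage and arithmetic at reduced precision."""
        x = x0.copy()
        for _ in range(steps):
            x = rk4_step(x, dt).astype(precision).astype(np.float64)
        return x

    rng = np.random.default_rng(5)
    x0 = 8.0 + 0.5 * rng.normal(0, 1, 40)

    x_double = integrate(x0, steps=200)
    x_half = integrate(x0, steps=200, precision=np.float16)

    # In this chaotic system rounding errors grow, so the trajectories diverge;
    # the question studied above is where such inexactness can be tolerated.
    rms = np.sqrt(np.mean((x_double - x_half) ** 2))
    print(f"RMS difference after 200 RK4 steps: {rms:.3f}")
    ```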

  6. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  8. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    NASA Astrophysics Data System (ADS)

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.
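    The fusion of drift-free optical fixes with high-rate inertial data described above can be illustrated with a minimal complementary filter. This is a generic sketch of optical/inertial blending, not the SCAAT Kalman filter JanusVF actually uses; all names and the gain value are illustrative.

    ```python
    def complementary_filter(camera_angles, gyro_rates, dt, alpha=0.98):
        """Fuse an absolute but noisy orientation fix (camera fiducials) with a
        smooth but drifting rate signal (gyroscope) into one estimate per step."""
        est = camera_angles[0]          # initialize from the absolute sensor
        fused = [est]
        for cam, rate in zip(camera_angles[1:], gyro_rates[1:]):
            # trust the integrated gyro short-term, the camera long-term
            est = alpha * (est + rate * dt) + (1 - alpha) * cam
            fused.append(est)
        return fused
    ```

    The small `(1 - alpha)` weight on the camera term is what bounds gyro drift: even with a constant gyro bias, the estimate converges to a finite offset instead of growing without limit.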

  9. Improving accuracy and precision in biological applications of fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Wei

    The quantitative understanding of cellular and molecular responses in living cells is important for many reasons, including identifying potential molecular targets for treatments of diseases like cancer. Fluorescence lifetime imaging microscopy (FLIM) can quantitatively measure these responses in living cells by producing spatially resolved images of fluorophore lifetime, and has advantages over intensity-based measurements. However, in live-cell microscopy applications using high-intensity light sources such as lasers, maintaining biological viability remains critical. Although high-speed, time-gated FLIM significantly reduces the light delivered to live cells, making measurements at low light levels remains a challenge affecting quantitative FLIM results. We can significantly improve both accuracy and precision in gated FLIM applications. We use fluorescence resonance energy transfer (FRET) with fluorescent proteins to detect molecular interactions in living cells: the use of FLIM, better fluorophores, and temperature/CO2 controls can improve live-cell FRET results with higher consistency, better statistics, and less non-specific FRET (for negative control comparisons, p-value = 0.93 (physiological) vs. 9.43E-05 (non-physiological)). Several lifetime determination methods are investigated to optimize gating schemes. We demonstrate a reduction in relative standard deviation (RSD) from 52.57% to 18.93% with optimized gating in an example under typical experimental conditions. We develop two novel total variation (TV) image denoising algorithms, FWTV (f-weighted TV) and UWTV (u-weighted TV), that can achieve significant improvements for real imaging systems. With live-cell images, they improve the precision of local lifetime determination without significantly altering the global mean lifetime values (<5% lifetime changes). Finally, by combining optimal gating and TV denoising, even low-light excitation can achieve precision better than that obtained in high-light conditions.
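    For time-gated FLIM of the kind described above, the simplest lifetime estimator is the two-gate rapid lifetime determination (RLD). The sketch below assumes a mono-exponential decay and two equal-width gates whose openings are offset by a delay Δt, in which case the gate-intensity ratio is exactly exp(Δt/τ); the function name is illustrative.

    ```python
    import math

    def rld_lifetime(i1, i2, gate_sep):
        """Two-gate rapid lifetime determination for a mono-exponential decay:
        tau = gate_sep / ln(I1 / I2), with I1 the earlier gate's intensity."""
        if i1 <= 0 or i2 <= 0 or i1 <= i2:
            raise ValueError("gate intensities must be positive with I1 > I2")
        return gate_sep / math.log(i1 / i2)
    ```

    At low light the ratio I1/I2 becomes noisy, which is why gate placement and widths must be optimized, as the abstract's RSD reduction from 52.57% to 18.93% illustrates.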

  10. Improving Student Achievement: A Study of High-Poverty Schools with Higher Student Achievement Outcomes

    ERIC Educational Resources Information Center

    Butz, Stephen D.

    2012-01-01

    This research examined the education system at high-poverty schools that had significantly higher student achievement levels as compared to similar schools with lower student achievement levels. A multischool qualitative case study was conducted of the educational systems where there was a significant difference in the scores achieved on the…

  11. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach offers an alternative method to improve geo-positioning accuracy of satellite images. The approach enables the integration of multi-source and multi-resolution satellite imagery for generating more precise and consistent 3D spatial information, which permits the comparative and synergistic use of multi-resolution satellite images from multiple sources.

  12. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot: intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm-minimizing procedures often exhibit long plateaus: sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/norm-minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods are studied: GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods.

  13. Improving the Accuracy of Early Diagnosis of Thyroid Nodule Type Based on the SCAD Method.

    PubMed

    Shahraki, Hadi Raeisi; Pourahmad, Saeedeh; Paydar, Shahram; Azad, Mohsen

    2016-01-01

    Although early diagnosis of thyroid nodule type is very important, the diagnostic accuracy of standard tests is a challenging issue. We here aimed to find an optimal combination of factors to improve diagnostic accuracy for distinguishing malignant from benign thyroid nodules before surgery. In a prospective study from 2008 to 2012, 345 patients referred for thyroidectomy were enrolled. The sample was split into a training set and a testing set in a ratio of 7:3. The former was used for estimation, variable selection, and obtaining a linear combination of factors. We utilized smoothly clipped absolute deviation (SCAD) logistic regression to achieve the sparse optimal combination of factors. To evaluate the performance of the estimated model in the testing set, a receiver operating characteristic (ROC) curve was utilized. The mean age of the examined patients (66 male and 279 female) was 40.9 ± 13.4 years (range 15-90 years). Some 54.8% of the patients (24.3% male and 75.7% female) had benign and 45.2% (14% male and 86% female) had malignant thyroid nodules. In addition to the maximum diameters of nodules and lobes, their volumes were considered as related factors for malignancy prediction (a total of 16 factors). However, the SCAD method estimated the coefficients of 8 factors to be zero and eliminated them from the model. Hence a sparse model combining the effects of 8 factors to distinguish malignant from benign thyroid nodules was generated. An optimal cut-off point of the ROC curve for the estimated model was obtained (p=0.44), and the area under the curve (AUC) was equal to 77% (95% CI: 68%-85%). Sensitivity, specificity, and positive and negative predictive values for this model were 70%, 72%, 71% and 76%, respectively. An increase of 10 percent and a greater accuracy rate in early diagnosis of thyroid nodule type by statistical methods (SCAD and ANN methods) compared with the results of FNA testing revealed that the statistical modeling
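    The SCAD penalty used above (Fan and Li, 2001) shrinks small coefficients like the lasso but leaves large ones nearly unpenalized, which is what lets it zero out 8 of the 16 candidate factors without biasing the retained ones. A minimal sketch of the penalty function itself, with the conventional tuning constant a = 3.7:

    ```python
    def scad_penalty(theta, lam, a=3.7):
        """SCAD penalty of Fan & Li (2001): linear (lasso-like) up to lam,
        quadratically tapering up to a*lam, then constant beyond a*lam."""
        t = abs(theta)
        if t <= lam:
            return lam * t
        if t <= a * lam:
            return (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1))
        return (a + 1) * lam**2 / 2
    ```

    Because the penalty is flat for |theta| > a*lam, large coefficients survive at nearly full size, while the linear region near zero drives small coefficients exactly to zero.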

  14. Accuracy, Effectiveness and Improvement of Vibration-Based Maintenance in Paper Mills: Case Studies

    NASA Astrophysics Data System (ADS)

    AL-NAJJAR, B.

    2000-01-01

    Many current vibration-based maintenance (VBM) policies for rolling element bearings do not exploit as much of the bearings' useful lives as possible. Evidence that more accurate diagnosis and prognosis can prolong bearings' mean effective lives is found in cases of faulty bearing installation, faulty machinery design, harsh environmental conditions, and replacement of a bearing as soon as its vibration level exceeds the normal level. Analysis of data from roller bearings at two paper mills suggests that longer bearing lives can be safely achieved by increasing the accuracy of the vibration data. This paper relates bearing failure modes to the observed vibration spectra and their development patterns over the bearings' lives. A systematic approach, which describes the objectives and performance of studies in two Swedish paper mills, is presented. Explanations of the mechanisms behind some frequent modes of early failure, and ways to avoid them, are suggested. It is shown theoretically, and partly confirmed by the analysis of (unfortunately incomplete) data from two paper mills over many years, that accurate prediction of remaining bearing life requires: (a) enough vibration measurements, (b) numerical records of operating conditions, (c) better discrimination between frequencies in the spectrum and (d) correlation of (b) and (c). This is because life prediction depends on precise knowledge of primary, harmonic and side-band frequency amplitudes and their development over time. Further, the available data, which are collected from relevant plant activities, can be utilized to perform cyclic improvements in diagnosis, prognosis, experience and economy.
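    The primary, harmonic, and side-band amplitudes that the abstract ties to life prediction are anchored at a bearing's characteristic defect frequencies, which follow from its geometry. A standard textbook sketch of the outer- and inner-race frequencies (the geometry values in the test are illustrative, not taken from the paper's mills):

    ```python
    import math

    def bearing_defect_frequencies(shaft_hz, n_balls, ball_d, pitch_d,
                                   contact_deg=0.0):
        """Ball-pass frequencies of a rolling element bearing with a
        stationary outer race: BPFO (outer race) and BPFI (inner race)."""
        ratio = (ball_d / pitch_d) * math.cos(math.radians(contact_deg))
        bpfo = 0.5 * n_balls * shaft_hz * (1.0 - ratio)   # outer-race defect
        bpfi = 0.5 * n_balls * shaft_hz * (1.0 + ratio)   # inner-race defect
        return bpfo, bpfi
    ```

    A useful sanity check is that BPFO + BPFI always equals n_balls x shaft speed; spectral peaks at these frequencies, their harmonics, and their side-bands are what the diagnosis step looks for.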

  15. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-07-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  16. Improving protein–protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model

    PubMed Central

    An, Ji‐Yong; Meng, Fan‐Rong; Chen, Xing; Yan, Gui‐Ying; Hu, Ji‐Pu

    2016-01-01

    Predicting protein‐protein interactions (PPIs) is a challenging task and essential for constructing protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high‐throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM‐BiGP that combines the relevance vector machine (RVM) model and Bi‐gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvement includes (1) Protein sequences are represented using the Bi‐gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) For reducing the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) The powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five‐fold cross‐validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57% and 90.57%, respectively. These results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state‐of‐the‐art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM‐BiGP method is significantly better than the SVM‐based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic

  17. Improving protein-protein interactions prediction accuracy using protein evolutionary information and relevance vector machine model.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying; Hu, Ji-Pu

    2016-10-01

    Predicting protein-protein interactions (PPIs) is a challenging task and essential for constructing protein interaction networks, which is important for facilitating our understanding of the mechanisms of biological systems. Although a number of high-throughput technologies have been proposed to predict PPIs, there are unavoidable shortcomings, including high cost, time intensity, and inherently high false positive rates. For these reasons, many computational methods have been proposed for predicting PPIs. However, the problem is still far from being solved. In this article, we propose a novel computational method called RVM-BiGP that combines the relevance vector machine (RVM) model and Bi-gram Probabilities (BiGP) for PPIs detection from protein sequences. The major improvement includes (1) Protein sequences are represented using the Bi-gram probabilities (BiGP) feature representation on a Position Specific Scoring Matrix (PSSM), in which the protein evolutionary information is contained; (2) For reducing the influence of noise, the Principal Component Analysis (PCA) method is used to reduce the dimension of the BiGP vector; (3) The powerful and robust Relevance Vector Machine (RVM) algorithm is used for classification. Five-fold cross-validation experiments executed on yeast and Helicobacter pylori datasets achieved very high accuracies of 94.57% and 90.57%, respectively. These results are significantly better than those of previous methods. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the yeast dataset. The experimental results demonstrate that our RVM-BiGP method is significantly better than the SVM-based method. In addition, we achieved 97.15% accuracy on the imbalanced yeast dataset, which is higher than that on the balanced yeast dataset. The promising experimental results show the efficiency and robustness of the proposed method, which can be an automatic decision support tool for future
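    A common form of the bi-gram PSSM descriptor referenced above sums products of consecutive PSSM rows, giving one feature per ordered pair of residue types. This is a sketch of that generic construction under the assumption that the paper's BiGP variant follows it; the exact normalization used by RVM-BiGP may differ.

    ```python
    import numpy as np

    def bigram_pssm_feature(pssm):
        """Bi-gram feature from an L x K PSSM (K = 20 for amino acids):
        B[i, j] = sum over positions k of P[k, i] * P[k+1, j],
        flattened to a K*K-dimensional vector (400-dim for K = 20)."""
        P = np.asarray(pssm, dtype=float)
        B = P[:-1].T @ P[1:]      # K x K matrix of adjacent-row products
        return B.ravel()          # fixed-length vector, independent of L
    ```

    Because the output length depends only on the alphabet size K, sequences of any length map to the same feature space, which is what makes a downstream PCA + RVM pipeline possible.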

  19. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers

    PubMed Central

    Thompson, Clarissa A.; Opfer, John E.

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688
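    The logarithmic-to-linear shift above is typically quantified by fitting both a linear and a logarithmic model to each child's number-line estimates and comparing goodness of fit. A minimal self-contained sketch (the function name and toy data are illustrative, not from the study):

    ```python
    import math

    def r_squared(xs, ys, transform):
        """R^2 of a least-squares line predicting ys from transform(xs)."""
        tx = [transform(x) for x in xs]
        n = len(xs)
        mx, my = sum(tx) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(tx, ys))
        sxx = sum((a - mx) ** 2 for a in tx)
        slope = sxy / sxx
        intercept = my - slope * mx
        ss_res = sum((b - (slope * a + intercept)) ** 2
                     for a, b in zip(tx, ys))
        ss_tot = sum((b - my) ** 2 for b in ys)
        return 1.0 - ss_res / ss_tot
    ```

    A child whose estimates compress large numbers fits the log model (transform = math.log) better; after feedback, the identity transform should win, which is the "linear representation" the study links to better recall.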

  20. Alaska Case Study: Scientists Venturing Into Field with Journalists Improves Accuracy

    NASA Astrophysics Data System (ADS)

    Ekwurzel, B.; Detjen, J.; Hayes, R.; Nurnberger, L.; Pavangadkar, A.; Poulson, D.

    2008-12-01

    Issues such as climate change, stem cell research, public health vaccination, etc., can be fraught with public misunderstanding, myths, as well as deliberate distortions of the fundamental science. Journalists are adept at creating print, radio, and video content that can be both compelling and informative to the public. Yet most scientists have little time or training to devote to developing media content for the public and spend little time with journalists who cover science stories. We conducted a case study to examine whether the time and funding invested in exposing journalists to scientists in the field over several days would improve accuracy of media stories about complex scientific topics. Twelve journalists were selected from the 70 who applied for a four-day environmental journalism fellowship in Alaska. The final group achieved the goal of a broad geographic spectrum of the media outlets (small regional to large national organizations), medium (print, radio, online), and experience (early career to senior producers). Reporters met with a diverse group of scientists. The lessons learned and successful techniques will be presented. Initial results demonstrate that stories were highly accurate and rich with audio or visual content for lay audiences. The journalists have also maintained contact with the scientists, asking for leads on emerging stories and seeking new experts that can assist in their reporting. Science-based institutions should devote more funding to foster direct journalist-scientist interactions in the lab and field. 
These positive goals can be achieved: (1) more accurate dissemination of science information to the public; (2) a broader portion of the scientific community will become a resource to journalists instead of the same eloquent few in the community; (3) scientists will appreciate the skill and pressures of those who survive the media downsizing and provide media savvy content; and (4) the public may incorporate science evidence

  1. Improving the accuracy of brain tumor surgery via Raman-based technology.

    PubMed

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W; Xie, X Sunney; Orringer, Daniel A

    2016-03-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors. PMID:26926067

  2. Improving the accuracy of brain tumor surgery via Raman-based technology

    PubMed Central

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W.; Xie, X. Sunney; Orringer, Daniel A.

    2016-01-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors. PMID:26926067

  3. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  4. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction The angular dependence of the response of TLD cards may cause the results of environmental dosimetry to deviate from their true values, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method Three personal TLD cards were placed rectangularly in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result The results show that the accuracy of measurement improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729
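    The 3D calibration idea above amounts to deriving a single factor from readings taken over many incidence angles rather than from normal incidence alone, so that the factor already averages out the cards' angular dependence. A schematic sketch under that assumption (the numbers in the test are illustrative):

    ```python
    def calibration_factor(reference_dose, readings):
        """Isotropic ('3D') calibration: delivered reference dose divided by
        the mean TLD reading over all irradiation angles."""
        mean_reading = sum(readings) / len(readings)
        return reference_dose / mean_reading

    def measured_dose(factor, reading):
        """Field measurement corrected with the calibration factor."""
        return factor * reading
    ```

    A 1D calibration would instead divide by the normal-incidence reading only, which over- or under-corrects whenever field radiation arrives obliquely.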

  5. Improving Reading Achievement through the Implementation of Reading Strategies.

    ERIC Educational Resources Information Center

    Cramer, Cynthia; Fate, Joan; Lueders, Kristin

    This study describes a program designed to increase student achievement in reading. The targeted population consisted of first and fourth grade elementary students in a Midwest community. Evidence for the existence of the problem included standardized tests and alternative assessments to measure reading achievement, and teacher observations with…

  6. Evidence that Smaller Schools Do Not Improve Student Achievement

    ERIC Educational Resources Information Center

    Wainer, Howard; Zwerling, Harris L.

    2006-01-01

    If more small schools than "expected" are among the high achievers, then creating more small schools would raise achievement across the board, many proponents of small schools have argued. In this article, the authors challenge the faulty logic of such inferences. Many claims have been made about the advantages of smaller schools. One is that,…

  7. Improving Primary Student Motivation and Achievement in Mathematics.

    ERIC Educational Resources Information Center

    Adami-Bunyard, Eppy; Gummow, Mary; Milazzo-Licklider, Nicole

    This report describes a program for increasing student readiness for and achievement in mathematics. The targeted population consists of third grade students in an expanding suburban community and kindergarten students in a culturally diverse urban community, both located in Northern Illinois. The problems of achievement in and attitudes towards…

  8. Design of a platinum resistance thermometer temperature measuring transducer and improved accuracy of linearizing the output voltage

    SciTech Connect

    Malygin, V.M.

    1995-06-01

    An improved method is presented for designing a temperature measuring transducer, the electrical circuit of which comprises an unbalanced bridge, in one arm of which is a platinum resistance thermometer, and containing a differential amplifier with feedback. Values are given for the coefficients, the minimum linearization error is determined, and an example is also given of the practical design of the transducer, using the given coefficients. A determination is made of the limiting achievable accuracy in linearizing the output voltage of the measuring transducer, as a function of the range of measured temperature.
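    The linearization problem described above exists because a platinum RTD's resistance is itself slightly nonlinear in temperature. For T >= 0 degC the standard Callendar-Van Dusen relation with IEC 60751 Pt100 coefficients captures this; the code below is a textbook relation, not taken from this paper's transducer design.

    ```python
    def pt100_resistance(t_c, r0=100.0):
        """Callendar-Van Dusen resistance of a Pt100 for T >= 0 degC,
        using IEC 60751 coefficients: R(T) = R0 * (1 + A*T + B*T^2)."""
        A = 3.9083e-3   # 1/degC
        B = -5.775e-7   # 1/degC^2
        return r0 * (1.0 + A * t_c + B * t_c**2)
    ```

    The negative B term is the curvature that the bridge-plus-amplifier circuit must compensate: over 0-100 degC it bends R(T) about 0.6 ohm below a straight line through the endpoints, which is what sets the residual linearization error the article analyzes.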

  9. On the use of Numerical Weather Models for improving SAR geolocation accuracy

    NASA Astrophysics Data System (ADS)

    Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.

    2013-12-01

    Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performances of operational Numerical Weather Prediction (NWP) Models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well-suited for this experiment, since recent studies have confirmed the few centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.:VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired in Daunia in Southern Italy has hence been selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly directed towards the TerraSAR-X satellite platform. The geolocation of CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of

  10. Improving the accuracy and reliability of remote system-calibration-free eye-gaze tracking.

    PubMed

    Hennessey, Craig A; Lawrence, Peter D

    2009-07-01

    Remote eye-gaze tracking provides a means for nonintrusive tracking of the point-of-gaze (POG) of a user. For application as a user interface for the disabled, a remote system that is noncontact, reliable, and permits head motion is very desirable. The system-calibration-free pupil-corneal reflection (P-CR) vector technique for POG estimation is a popular method due to its simplicity, however, accuracy has been shown to be degraded with head displacement. Model-based POG-estimation methods were developed, which improve system accuracy during head displacement, however, these methods require complex system calibration in addition to user calibration. In this paper, the use of multiple corneal reflections and point-pattern matching allows for a scaling correction of the P-CR vector for head displacements as well as an improvement in system robustness to corneal reflection distortion, leading to improved POG-estimation accuracy. To demonstrate the improvement in performance, the enhanced multiple corneal reflection P-CR method is compared to the monocular and binocular accuracy of the traditional single corneal reflection P-CR method, and a model-based method of POG estimation for various head displacements. PMID:19272975

  11. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    ERIC Educational Resources Information Center

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  12. Improving the accuracy of volumetric segmentation using pre-processing boundary detection and image reconstruction.

    PubMed

    Archibald, Rick; Hu, Jiuxiang; Gelb, Anne; Farin, Gerald

    2004-04-01

    The concentration edge-detection and Gegenbauer image-reconstruction methods were previously shown to improve the quality of segmentation in magnetic resonance imaging. In this study, these methods are utilized as a pre-processing step to the Weibull E-SD field segmentation. It is demonstrated that the combination of the concentration edge-detection and Gegenbauer reconstruction methods improves the accuracy of segmentation for the simulated test data and real magnetic resonance images used in this study. PMID:15376580

  13. Incorporating the effect of DEM resolution and accuracy for improved flood inundation mapping

    NASA Astrophysics Data System (ADS)

    Saksena, Siddharth; Merwade, Venkatesh

    2015-11-01

    Topography plays a major role in determining the accuracy of flood inundation areas. However, many areas in the United States and around the world do not have access to high quality topographic data in the form of Digital Elevation Models (DEM). For such areas, an improved understanding of the effects of DEM properties such as horizontal resolution and vertical accuracy on flood inundation maps may eventually lead to improved flood inundation modeling and mapping. This study attempts to relate the errors arising from DEM properties such as spatial resolution and vertical accuracy to flood inundation maps, and then use this relationship to create improved flood inundation maps from coarser resolution DEMs with low accuracy. The results from the five stream reaches used in this study show that water surface elevations (WSE) along the stream and the flood inundation area have a linear relationship with both DEM resolution and accuracy. This linear relationship is then used to extrapolate the water surface elevations from coarser resolution DEMs to get water surface elevations corresponding to a finer resolution DEM. Application of this approach shows that improved results can be obtained from flood modeling by using coarser and less accurate DEMs, including public domain datasets such as the National Elevation Dataset and Shuttle Radar Topography Mission (SRTM) DEMs. The improvement in the WSE and its application to obtain better flood inundation maps is dependent on the study reach characteristics such as land use, valley shape, reach length and width. Application of the approach presented in this study on more reaches may lead to development of guidelines for flood inundation mapping using coarser resolution and less accurate topographic datasets.
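    The extrapolation idea can be sketched in a few lines. The numbers below are invented for illustration (they are not the study's data): WSE values modeled from three coarse DEMs are fitted linearly against resolution and the fit is extrapolated to a finer target resolution.

```python
# Hypothetical WSE values (m) modeled at three coarse DEM resolutions (m);
# the linear WSE-vs-resolution relationship reported in the study is then
# extrapolated to a finer target resolution.
resolutions = [90.0, 60.0, 30.0]
wse = [102.8, 102.3, 101.9]

n = len(resolutions)
mx = sum(resolutions) / n
my = sum(wse) / n
slope = sum((x - mx) * (y - my) for x, y in zip(resolutions, wse)) / \
        sum((x - mx) ** 2 for x in resolutions)
intercept = my - slope * mx

target = 10.0  # finer target resolution (m)
wse_est = intercept + slope * target
print(f"extrapolated WSE at {target:.0f} m DEM: {wse_est:.2f} m")
```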

  14. The improvement of OPC accuracy and stability by the model parameters' analysis and optimization

    NASA Astrophysics Data System (ADS)

    Chung, No-Young; Choi, Woon-Hyuk; Lee, Sung-Ho; Kim, Sung-Il; Lee, Sun-Yong

    2007-10-01

    The OPC model is critical for sub-45 nm devices because the Critical Dimension Uniformity (CDU) requirement is very tight to meet device performance and process window latitude at the production level. The OPC model is generally composed of an optical model and a resist model. Each has physical terms that can be calculated without any wafer data and empirical terms that are fitted to real wafer data. The empirical terms largely determine OPC accuracy, but they are liable to be overestimated from the wafer data, and terms overestimated under a small cost function can degrade OPC stability. Several physical terms used to be assigned ideal values in the optical model, or were not considered at all because they did not critically affect OPC accuracy; at low-k1 processes, however, these parameters must be included in the OPC modeling. Accordingly, measured optical parameters such as the laser bandwidth, source map, and pupil polarization (including phase and intensity differences) are now starting to be used in place of ideal values in OPC modeling. These measured values can improve model accuracy and stability; on the other hand, without careful handling they can cause the OPC model to overcorrect process proximity errors. The laser bandwidth, source map, pupil polarization, and focus centering are analyzed for the optical model, and the sample-data weighting scheme and resist-model terms are investigated as well. The image blurring caused by the actual laser bandwidth in the exposure system is modeled, and the modeling results show that extraction of the 2D patterns is necessary to obtain a reasonable result, owing to the 2D patterns' measurement noise in the SEM. The source map data from the exposure machine shows large horizontal and vertical intensity differences, a phenomenon that must come from measurement noise

  15. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.
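    The outlier-screening stage can be illustrated with a minimal sketch. Since the abstract does not give the exact form of the multistage Z-test, the example below substitutes a median/MAD-based modified z-score (a robust variant) applied to hypothetical reflectance values for a single tissue class.

```python
# Hedged stand-in for the multistage Z-test: a median/MAD "modified z-score"
# flags samples far from the class centre; threshold 3.5 is a common choice.
import statistics

def remove_outliers(values, thresh=3.5):
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return list(values)
    return [v for v in values if 0.6745 * abs(v - med) / mad <= thresh]

# Hypothetical single-band reflectances for one tissue class, with two
# mislabeled spikes that inflate the training variance.
band = [0.41, 0.42, 0.43, 0.42, 0.41, 0.43, 0.42, 0.42, 0.98, 0.02]
clean = remove_outliers(band)
print(len(band) - len(clean), "outliers removed")
print("variance:", round(statistics.variance(band), 5),
      "->", round(statistics.variance(clean), 5))
```

    As in the study, pruning the mislabeled samples sharply reduces the within-class variance of the training set.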

  16. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    PubMed

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm’s burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm’s accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.

  17. Cognitive Processing Profiles of School-Age Children Who Meet Low-Achievement, IQ-Discrepancy, or Dual Criteria for Underachievement in Oral Reading Accuracy

    ERIC Educational Resources Information Center

    Van Santen, Frank W.

    2012-01-01

    The purpose of this study was to compare the cognitive processing profiles of school-age children (ages 7 to 17) who met criteria for underachievement in oral reading accuracy based on three different methods: 1) use of a regression-based IQ-achievement discrepancy only (REGonly), 2) use of a low-achievement cutoff only (LAonly), and 3) use of a…

  18. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    PubMed Central

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. 
Furthermore, the service coverage is expanded by using direct measurements of reference stations for user
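    The positioning step underlying both cases is a range-based least-squares solve. The sketch below uses invented geometry, noise-free ranges, and omits the receiver clock bias: it runs Gauss-Newton multilateration against four airborne relays plus one ground reference station, the combination the enhanced algorithm exploits.

```python
import numpy as np

# Transmitter positions (m): four airborne relays overhead plus one ground
# reference station; all coordinates are invented for illustration.
anchors = np.array([[0.0, 0.0, 10000.0],
                    [8000.0, 2000.0, 10500.0],
                    [2000.0, 9000.0, 9800.0],
                    [-5000.0, 4000.0, 10200.0],
                    [1000.0, 1000.0, 0.0]])       # ground reference station
truth = np.array([1500.0, 2500.0, 300.0])         # true user position
ranges = np.linalg.norm(anchors - truth, axis=1)  # noise-free range measurements

def solve(anchors, ranges, iters=10):
    """Gauss-Newton least squares for the three position unknowns."""
    x = np.zeros(3)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)
        J = (x - anchors) / d[:, None]            # Jacobian of predicted ranges
        x = x + np.linalg.lstsq(J, ranges - d, rcond=None)[0]
    return x

est = solve(anchors, ranges)
print("estimated position:", np.round(est, 2))
```

    With only the four overhead relays, the vertical component is poorly constrained; the ground station below the user improves the geometry, consistent with the reduced vertical error reported above.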

  19. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage.

    PubMed

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. 
Furthermore, the service coverage is expanded by using direct measurements of reference stations for user

  20. HEAT: High accuracy extrapolated ab initio thermochemistry. III. Additional improvements and overview.

    SciTech Connect

    Harding, M. E.; Vazquez, J.; Ruscic, B.; Wilson, A. K.; Gauss, J.; Stanton, J. F.; Chemical Sciences and Engineering Division; Univ. t Mainz; The Univ. of Texas; Univ. of North Texas

    2008-01-01

    Effects of increased basis-set size as well as a correlated treatment of the diagonal Born-Oppenheimer approximation are studied within the context of the high-accuracy extrapolated ab initio thermochemistry (HEAT) theoretical model chemistry. It is found that the addition of these ostensible improvements does little to increase the overall accuracy of HEAT for the determination of molecular atomization energies. Fortuitous cancellation of high-level effects is shown to give the overall HEAT strategy an accuracy that is, in fact, higher than most of its individual components. In addition, the issue of core-valence electron correlation separation is explored; it is found that approximate additive treatments of the two effects have limitations that are significant in the realm of <1 kJ mol{sup -1} theoretical thermochemistry.

  1. Research on how to improve the accuracy of the SLM metallic parts

    NASA Astrophysics Data System (ADS)

    Pacurar, Razvan; Balc, Nicolae; Prem, Florica

    2011-05-01

    Selective laser melting (SLM) is one of the most important technologies used when complex metallic parts need to be rapidly manufactured. There are some requirements related to the quality of the manufactured part and the accuracy of the process control that must be met in order to turn the SLM process into a production technique. This paper presents a case study undertaken at the Technical University of Cluj-Napoca (TUCN) in cooperation with an industrial company from Romania, focusing on the accuracy issues. Finite element analysis (FEA) and Design Expert software were jointly used in order to determine the optimum process parameters required to improve the accuracy of the SLM metallic parts. Experimental results are also presented in the paper.

  2. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    PubMed

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
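    The PCA-regularized LDA pipeline can be sketched on synthetic data. The example below uses invented Gaussian "volumes", not the fMRI dataset: center the data, project onto the leading principal components, then classify with a pooled-covariance linear discriminant.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_per = 50, 40                     # 50 "voxels", 40 volumes per class
shift = np.concatenate([[3.0, -3.0], np.zeros(d - 2)])
X = np.vstack([rng.normal(size=(n_per, d)),
               shift + rng.normal(size=(n_per, d))])
y = np.repeat([0, 1], n_per)

# PCA: center and keep the k leading components, which regularizes the
# covariance estimate used by LDA.
k = 5
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T

# LDA: class means plus pooled within-class covariance in the reduced space.
mu = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
Sw = (np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)) / 2
w = np.linalg.solve(Sw, mu[1] - mu[0])    # discriminant direction
thresh = w @ (mu[0] + mu[1]) / 2
pred = (Z @ w > thresh).astype(int)
acc = (pred == y).mean()
print("training accuracy:", acc)
```

    Without the PCA step, the 50-dimensional covariance estimated from 80 samples would be near-singular; projecting to a few components is what makes the discriminant stable, mirroring the regularization used in the study.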

  3. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    PubMed

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these

  4. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these

  5. Improving Student Achievement in Today's High Schools: What Works.

    ERIC Educational Resources Information Center

    Shields, Marie S.

    This paper is based on a study of two high schools in Maine that achieved outstanding and consistent gains in English, math, and science over a 5-year period. Three strands of inquiry were used for the study: surveys, interviews, and observations. A multiple-perspective approach was used to integrate the information so as to evaluate the…

  6. Helping Students Improve Academic Achievement and School Success Behavior

    ERIC Educational Resources Information Center

    Brigman, Greg; Campbell, Chari

    2003-01-01

    This article describes a study evaluating the impact of school-counselor-led interventions on student academic achievement and school success behavior. A group counseling and classroom guidance model called student success skills (SSS) was the primary intervention. The focus of the SSS model was on three sets of skills identified in several…

  7. An Effective Way to Improve Mathematics Achievement in Urban Schools

    ERIC Educational Resources Information Center

    Kim, Taik

    2010-01-01

    The local Gaining Early Awareness and Readiness for Undergraduate Programs (GEARUP) partnership serves 11 K-8 schools with the lowest achievement scores and the highest poverty rates in a large Midwestern urban district. Recently, GEARUP launched a specially designed teaching program, Mathematics Enhancement Group (MEG), for underachievers in…

  8. Improving Secondary School Students' Achievement using Intrinsic Motivation

    ERIC Educational Resources Information Center

    Albrecht, Erik; Haapanen, Rebecca; Hall, Erin; Mantonya, Michelle

    2009-01-01

    This report describes a program for increasing students' intrinsic motivation in an effort to increase academic achievement. The targeted population consisted of secondary level students in a middle to upper-middle class suburban area. The students of the targeted secondary level classes appeared to be disengaged from learning due to a lack of…

  9. Does Video-Autotutorial Instruction Improve College Student Achievement?

    ERIC Educational Resources Information Center

    Fisher, K. M.; And Others

    1977-01-01

    Compares student achievement in an upper-division college introductory course taught by the video-autotutorial method with that in two comparable courses taught by the lecture-discussion method. Pre-post tests of 623 students reveal that video-autotutorial students outperform lecture/discussion participants at all ability levels and that in…

  10. Improving Student Achievement by Measuring Ability, Not Content

    ERIC Educational Resources Information Center

    Smith, Malbert S., III

    2005-01-01

    Although significant federal funds are tied to student assessment, public education in this country is the responsibility of individual states. Subsequently, it seems as if there are as many approaches to measuring student ability and achievement as there are states. We now live in an increasingly mobile society in which today's families move from…

  11. Analysis and improvement of accuracy, sensitivity, and resolution of the coherent gradient sensing method.

    PubMed

    Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin

    2016-06-10

    The coherent gradient sensing (CGS) method, one kind of shear interferometry sensitive to surface slope, has been applied to full-field curvature measuring for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution for the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035
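    The stated trade-off can be made concrete with the textbook reflective-CGS relation, in which the surface slope per fringe is p/(2Δ) for grating pitch p and grating separation Δ (constants assumed from the standard formulation; the paper's exact derivation may differ):

```python
# Surface slope per interference fringe in reflective CGS: p / (2 * delta),
# so sensitivity improves (slope per fringe shrinks) as delta / p grows.
def slope_per_fringe(pitch_um, distance_mm):
    p = pitch_um * 1e-6          # grating pitch (m)
    delta = distance_mm * 1e-3   # grating separation (m)
    return p / (2.0 * delta)     # radians of surface slope per fringe

for dist_mm in (20.0, 40.0, 80.0):
    s = slope_per_fringe(25.0, dist_mm) * 1e6
    print(f"pitch 25 um, separation {dist_mm:4.0f} mm -> {s:.1f} urad/fringe")
```

    Doubling the grating separation halves the slope increment per fringe, i.e. doubles the sensitivity, which is the proportionality to the distance-to-pitch ratio noted in the abstract.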

  12. Partnering through Training and Practice to Achieve Performance Improvement

    ERIC Educational Resources Information Center

    Lyons, Paul R.

    2010-01-01

    This article presents a partnership effort among managers, trainers, and employees to spring to life performance improvement using the performance templates (P-T) approach. P-T represents a process model as well as a method of training leading to performance improvement. Not only does it add to our repertoire of training and performance management…

  13. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  14. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  15. A study of neural network parameters for improvement in classification accuracy

    NASA Astrophysics Data System (ADS)

    Pathak, Avijit; Tiwari, K. C.

    2016-05-01

    Hyperspectral data, owing to their large number of spectral bands, facilitate discrimination among many classes; however, this advantage often tends to get lost in the limitations of conventional classifier techniques. Artificial Neural Networks (ANN) have been shown in several studies to outperform conventional classifiers; however, there are several issues regarding the selection of parameters for achieving the best possible classification accuracy. The objectives of this study were accordingly formulated to investigate the effect of various neural network parameters on the accuracy of hyperspectral image classification. The AVIRIS hyperspectral Indian Pines Test Site 3 dataset, acquired in 220 bands on June 12, 1992, has been used in the study. The maximal feature extraction technique of Principal Component Analysis (PCA) is then used to reduce the dataset to 10 bands preserving 99.96% of the variance. The data contain 16 major classes, of which 4 have been considered for ANN-based classification. The parameters selected for the study are the number of hidden layers, hidden nodes, training sample size, learning rate, and learning momentum. The backpropagation method of learning is adopted. The overall accuracy of the trained network has been assessed using a test sample size of 300 pixels. Although the study throws up certain distinct ranges within which higher classification accuracies can be expected, no definite relationship could be identified between the various ANN parameters under study.
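The kind of parameter study described in this abstract (hidden nodes, learning rate, momentum, backpropagation) can be sketched with a minimal one-hidden-layer network. The toy two-feature, two-class data below stands in for the PCA-reduced hyperspectral bands; the network size, learning settings, and data are illustrative assumptions, not the study's configuration.

```python
import numpy as np

# Minimal one-hidden-layer backpropagation classifier with momentum,
# sketching the ANN parameter study described above. All settings and
# data are illustrative assumptions.

rng = np.random.default_rng(0)

def make_toy_data(n=200):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)  # two separable classes
    return X, y

def train_mlp(X, y, hidden=8, lr=0.3, momentum=0.8, epochs=300):
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    v1, v2 = np.zeros_like(W1), np.zeros_like(W2)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1)                   # hidden-layer activations
        out = 1.0 / (1.0 + np.exp(-h @ W2))   # sigmoid output
        err = out - t                         # cross-entropy output gradient
        g2 = h.T @ err / len(X)
        g1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
        v2 = momentum * v2 - lr * g2; W2 += v2  # momentum updates
        v1 = momentum * v1 - lr * g1; W1 += v1
    return W1, W2

def accuracy(W1, W2, X, y):
    out = 1.0 / (1.0 + np.exp(-np.tanh(X @ W1) @ W2))
    return float(((out[:, 0] > 0.5) == (y > 0.5)).mean())

X, y = make_toy_data()
W1, W2 = train_mlp(X, y)
acc = accuracy(W1, W2, X, y)
```

Sweeping `hidden`, `lr`, and `momentum` over grids and recording `acc` for each combination is the shape of the investigation the study performs on the AVIRIS data.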

  16. Improve the ZY-3 Height Accuracy Using Icesat/glas Laser Altimeter Data

    NASA Astrophysics Data System (ADS)

    Li, Guoyuan; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-06-01

    ZY-3, the first civilian high-resolution stereo mapping satellite, was launched on 9 January 2012. The aim of the ZY-3 satellite is to obtain high-resolution stereo images and support 1:50000-scale national surveying and mapping. Although ZY-3 achieves high accuracy for direct geo-location without GCPs (Ground Control Points), some GCPs are still indispensable for high-precision stereo mapping. GLAS (Geoscience Laser Altimeter System), carried on ICESat (Ice, Cloud and land Elevation Satellite), was the first satellite laser altimeter for Earth observation. After its launch in 2003, GLAS played an important role in monitoring polar ice sheets and measuring land topography and vegetation canopy heights. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by suitable criteria. In this paper, ICESat/GLAS laser altimeter data are used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high-precision GLAS elevation data. Two strategies to improve the ZY-3 height accuracy are introduced. One is conventional bundle adjustment based on the RFM and a bias-compensation model, in which the GLAS footprint data serve as height control. The second corrects the DSM (Digital Surface Model) directly by a simple block adjustment, where the DSM is derived from the ZY-3 stereo images after free-network adjustment and dense image matching. The experimental results demonstrate that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 m after adding GLAS elevation data. Furthermore, the two strategies are compared in terms of accuracy and efficiency with a view to application.
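The second strategy in this abstract, correcting a DSM directly with laser-altimeter heights, can be sketched in its simplest form: estimate the vertical bias at the GLAS footprint cells and shift the whole DSM by it. This constant-shift "block adjustment" is a hypothetical, simplified stand-in for the paper's method; grid and heights are made up.

```python
import numpy as np

# Simplified sketch of DSM block adjustment against altimeter heights:
# estimate the mean vertical bias at footprint cells, subtract it
# everywhere. A hypothetical stand-in for the paper's adjustment.

def block_adjust(dsm, footprint_rc, ref_heights):
    """Shift the DSM by the mean bias observed at footprint cells."""
    rows, cols = zip(*footprint_rc)
    bias = np.mean(dsm[list(rows), list(cols)] - np.asarray(ref_heights))
    return dsm - bias

dsm = np.full((4, 4), 103.0)           # DSM sitting ~3 m too high
footprints = [(0, 0), (1, 2), (3, 3)]  # cells containing GLAS footprints
glas_heights = [100.0, 100.0, 100.0]
adjusted = block_adjust(dsm, footprints, glas_heights)
print(adjusted[0, 0])  # 100.0
```

A real adjustment would also weight footprints by quality and may fit a low-order surface rather than a constant, but the principle of tying the DSM to the altimeter heights is the same.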

  17. Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?

    NASA Astrophysics Data System (ADS)

    Van Dyk, J.; Battista, J.

    2014-03-01

    Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.

  18. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-04-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases arising in the rainfall estimation process into reflectivity (Z) measurement bias and QPE model bias, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study used a bias correction algorithm for reflectivity, in which the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar whose hardware and software biases have been corrected. The study then applied two post-processing methods to correct the rainfall bias: the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method. The Z bias and rainfall bias correction methods were applied to the RAR system, and its accuracy improved after correcting the Z bias. By rainfall type, the accuracy for Changma front and localized torrential rain cases improved slightly even without the Z bias correction, whereas typhoon cases performed worse than the existing results. As a result of the rainfall bias correction, the RAR system with Z bias correction and LGC was clearly superior to the MFBC method, because the LGC method applies a different rainfall bias to each grid rainfall amount. By rainfall type, the Z bias correction with LGC gave more accurate rainfall estimates for all types than the Z bias correction alone and, in particular, outcomes in typhoon cases were vastly superior to the others.
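The mean field bias correction named in this abstract can be sketched compactly: a single multiplicative factor, the ratio of gauge totals to collocated radar totals, applied uniformly to every radar grid cell (LGC differs by estimating such a factor locally around each gauge). The accumulation values below are illustrative.

```python
import numpy as np

# Sketch of mean field bias correction (MFBC): one gauge/radar ratio
# applied to the whole radar field. Values are illustrative.

def mfbc_factor(gauge_mm, radar_mm):
    """Single field-wide bias factor = total gauge rain / total radar rain."""
    return float(np.sum(gauge_mm) / np.sum(radar_mm))

gauge = np.array([10.0, 8.0, 12.0])  # rain-gauge accumulations (mm)
radar = np.array([5.0, 4.0, 6.0])    # collocated radar estimates (mm)
f = mfbc_factor(gauge, radar)
corrected = radar * f
print(f, corrected)  # factor 2.0 restores the gauge total
```

Because MFBC uses one factor for the whole domain, it cannot represent spatially varying bias; that is the shortcoming the abstract attributes to MFBC relative to LGC.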

  19. The prediction of the hepatic clearance of tanshinone IIA in rat liver subcellular fractions: accuracy improvement.

    PubMed

    Li, Peng; Wang, Guang-Ji; Li, Jing; Zhang, Qian; Liu, Xin; Khlentzos, Alexander; Roberts, Michael S

    2008-01-01

    The in vivo hepatic clearance of tanshinone IIA in the rat was predicted using microsome, cytosol and S9 fractions combined with two different cofactor systems, the NADPH-regenerating and UDPGA systems. Two different models, the well-stirred model and the parallel-tube model, were used to predict the in vivo clearance in the rat. The in vivo clearance of tanshinone IIA was obtained from a pharmacokinetic study in the rat. The results show that the prediction accuracy obtained from microsomes combined with NADPH is poor: the in vivo clearance in the rat is almost 32-fold higher than the clearance predicted from microsomes. The clearance predicted from the S9 model combined with both the NADPH and UDPGA systems is about 4-fold lower than the in vivo clearance, and that predicted from cytosol combined with the two cofactor systems is about 7-fold lower. Although the prediction accuracy obtained from the S9 and cytosol systems is not perfect, it is improved in these two incubation systems. Using S9 fractions that support both phase I and phase II metabolism can improve the prediction accuracy. PMID:18220570
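The two scaling models named in this abstract have standard textbook forms, sketched below. The parameter values are illustrative placeholders, not the paper's data: Q is hepatic blood flow, fu the unbound fraction, and CLint the scaled intrinsic clearance.

```python
import math

# Standard hepatic clearance models (textbook forms):
#   well-stirred:  CLh = Q * fu * CLint / (Q + fu * CLint)
#   parallel-tube: CLh = Q * (1 - exp(-fu * CLint / Q))
# Parameter values below are illustrative, not the paper's data.

def well_stirred(Q, fu, CLint):
    return Q * fu * CLint / (Q + fu * CLint)

def parallel_tube(Q, fu, CLint):
    return Q * (1.0 - math.exp(-fu * CLint / Q))

Q = 55.2       # hepatic blood flow (mL/min/kg), illustrative
fu = 0.05      # unbound fraction, illustrative
CLint = 400.0  # scaled intrinsic clearance (mL/min/kg), illustrative
ws = well_stirred(Q, fu, CLint)
pt = parallel_tube(Q, fu, CLint)
```

Both models are bounded above by hepatic blood flow Q, and the parallel-tube model predicts somewhat higher clearance than the well-stirred model for the same inputs; neither choice of model can rescue a prediction when the in vitro system misses a whole metabolic pathway, which is the abstract's point about combining phase I and phase II cofactors.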

  20. The Divisional Approach to Achieving Hospital Improvement Goals

    ERIC Educational Resources Information Center

    Cortazzo, Arnold D.; Allen, Robert M.

    1971-01-01

    The divisional or horizontal approach, rather than the traditional or vertical model, was employed in improving the major organizational, programmatic, and service structure of a residential facility for retardates, Sunland Training Center (Miami, Florida). (KW)

  1. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.; Jiang, N.; Dunn, A. K.; Yeh, H.-C.

    2015-10-01

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
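The MLE idea in this abstract can be sketched generically: pick the axial position z that maximizes the Poisson likelihood of photon counts observed in channels whose expected counts vary with z. The two Gaussian-shaped channel responses below are hypothetical, not the TSUNAMI point-spread functions.

```python
import math

# Generic sketch of MLE axial localization from channel photon counts.
# The two axially offset Gaussian channel responses are hypothetical.

def expected_counts(z):
    """Expected photons in two detection channels at axial position z."""
    return [100 * math.exp(-(z - 0.3) ** 2 / 0.5),
            100 * math.exp(-(z + 0.3) ** 2 / 0.5)]

def log_likelihood(z, observed):
    # Poisson log-likelihood up to a constant: sum(k * ln(mu) - mu)
    return sum(k * math.log(mu) - mu
               for k, mu in zip(observed, expected_counts(z)))

def mle_z(observed, lo=-1.0, hi=1.0, steps=2001):
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return max(grid, key=lambda z: log_likelihood(z, observed))

# Equal counts in both channels should place the particle midway (z ~ 0).
z_hat = mle_z([80, 80])
print(z_hat)
```

A grid search stands in for the paper's estimator; the essential point is that the MLE weighs the full count distribution rather than, say, a simple channel-ratio lookup.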

  2. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    SciTech Connect

    Liu, C.; Liu, Y.-L.; Perillo, E. P.; Jiang, N.; Dunn, A. K. E-mail: tim.yeh@austin.utexas.edu; Yeh, H.-C. E-mail: tim.yeh@austin.utexas.edu

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  3. Natural language processing with dynamic classification improves P300 speller accuracy and bit rate

    NASA Astrophysics Data System (ADS)

    Speier, William; Arnold, Corey; Lu, Jessica; Taira, Ricky K.; Pouratian, Nader

    2012-02-01

    The P300 speller is an example of a brain-computer interface that can restore functionality to victims of neuromuscular disorders. Although the most common application of this system has been communicating language, the properties and constraints of the linguistic domain have not to date been exploited when decoding brain signals that pertain to language. We hypothesized that combining the standard stepwise linear discriminant analysis with a Naive Bayes classifier and a trigram language model would increase the speed and accuracy of typing with the P300 speller. With integration of natural language processing, we observed significant improvements in accuracy and 40-60% increases in bit rate for all six subjects in a pilot study. This study suggests that integrating information about the linguistic domain can significantly improve signal classification.
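The fusion this abstract describes, combining classifier evidence with a language-model prior, can be sketched with Bayes' rule: the posterior for each candidate letter is proportional to the EEG classifier's likelihood times the n-gram probability given the typed history. The tiny bigram table and scores below are hypothetical, not the study's models.

```python
# Sketch of fusing EEG classifier evidence with a language-model prior:
#   P(letter | EEG, history) ∝ P(EEG | letter) * P(letter | history)
# The scores and bigram probabilities are hypothetical.

def fuse(classifier_scores, lm_prior):
    """Normalized posterior over candidate letters."""
    post = {c: classifier_scores[c] * lm_prior.get(c, 1e-6)
            for c in classifier_scores}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# EEG evidence alone slightly favours 'Q', but after typing "T" the
# language model makes 'H' far more probable (as in "TH").
eeg = {"H": 0.4, "Q": 0.6}
bigram_after_T = {"H": 0.30, "Q": 0.001}
posterior = fuse(eeg, bigram_after_T)
best = max(posterior, key=posterior.get)
print(best)  # 'H'
```

Because the prior concentrates probability mass on linguistically plausible letters, fewer stimulus repetitions are needed for a confident decision, which is how the language model buys both accuracy and bit rate.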

  4. Preimplantation genetic diagnosis: technological advances to improve accuracy and range of applications.

    PubMed

    Kuliev, Anver; Verlinsky, Yury

    2008-04-01

    Preimplantation genetic diagnosis (PGD) is an option that enables at-risk couples to have unaffected progeny without facing the risk of pregnancy termination after prenatal diagnosis as currently practiced. It is also one of the practical tools used in assisted reproduction technology to improve the chance of conception in infertility cases with poor prognosis. Because PGD is performed using a single biopsied cell, technological advances are important to improving PGD accuracy. This has contributed to the avoidance of misdiagnosis in PGD for single gene disorders, and extensive experience in PGD for chromosomal disorders suggests strategies for more reliable evaluation of the chromosomal status of the preimplantation embryo. This paper describes the present status of PGD for genetic and chromosomal disorders, its accuracy and range, and how PGD is an integral part of IVF and genetic practices.

  5. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in the classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we then developed a multi-stage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data from wavelength space, and test accuracy was improved from 63% to 76%. Establishing this simple method of conditioning for the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.
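The Z-test-style outlier screening in this abstract can be sketched with the simplest version of the idea: drop training samples whose value lies more than k standard deviations from the class mean. A single feature stands in here for the multispectral wavelength channels; the values and threshold are illustrative.

```python
import numpy as np

# Sketch of z-score-based outlier removal for training data: drop
# samples more than k standard deviations from the mean. One feature
# stands in for the multispectral channels; values are illustrative.

def remove_outliers(values, k=3.0):
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return v[np.abs(z) <= k]

samples = [10.0, 10.5, 9.8, 10.2, 10.1, 25.0]  # 25.0: mislabeled sample
kept = remove_outliers(samples, k=2.0)
print(kept)
```

Removing such samples shrinks the within-class variance of the training set, which is the mechanism the abstract credits for the jump from 63% to 76% test accuracy (the paper's actual method is multi-stage and univariate per wavelength).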

  6. Chemical modification of proteins to improve the accuracy of their relative molecular mass determination by electrophoresis.

    PubMed

    Dolnik, Vladislav; Gurske, William A

    2011-10-01

    We studied the electrophoretic behavior of basic proteins (cytochrome c and histone III) and developed a carbamylation method that normalizes their electrophoretic size separation and improves the accuracy of their relative molecular mass determined electrophoretically. In capillary zone electrophoresis with cationic hitchhiking, native cytochrome c does not sufficiently bind cationic surfactants due to electrostatic repulsion between the basic protein and cationic surfactant. Carbamylation suppresses the strong positive charge of the basic proteins and results in more accurate relative molecular masses.

  7. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  8. Microalloying Boron Carbide with Silicon to Achieve Dramatically Improved Ductility.

    PubMed

    An, Qi; Goddard, William A

    2014-12-01

    Boron carbide (B4C) is a hard material whose value for extended engineering applications such as body armor is limited by its brittleness under impact. To improve the ductility while retaining hardness, we used density functional theory to examine modifying B4C ductility through microalloying. We found that replacing the CBC chain in B4C with Si-Si, denoted as (B11Cp)-Si2, dramatically improves the ductility, allowing continuous shear to a large strain of 0.802 (about twice the failure strain of B4C) without brittle failure. Moreover, (B11Cp)-Si2 retains low density and high hardness. This ductility improvement arises because the Si-Si linkages enable the icosahedra to accommodate additional shear by rotating instead of breaking bonds.

  9. Organizational management practices for achieving software process improvement

    NASA Technical Reports Server (NTRS)

    Kandt, Ronald Kirk

    2004-01-01

    The crisis in developing software has been known for over thirty years. Problems that existed in developing software in the early days of computing still exist today. These problems include the delivery of low-quality products, actual development costs that exceed expected development costs, and actual development time that exceeds expected development time. Several solutions have been offered to overcome our inability to deliver high-quality software on time and within budget. One of these solutions involves software process improvement. However, such efforts often fail because of organizational management issues. This paper discusses business practices that organizations should follow to improve their chances of initiating and sustaining successful software process improvement efforts.

  10. Improved spatial accuracy of functional maps in the rat olfactory bulb using supervised machine learning approach.

    PubMed

    Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro

    2016-08-15

    Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. PMID:27236085

  11. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of an artillery projectile, such as a course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and one devoted to drift correction (a canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile's aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction; deploying the drag brake in an early stage of the trajectory results in a large range correction, and the time of correction can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  12. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    PubMed

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of an artillery projectile, such as a course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and one devoted to drift correction (a canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile's aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction; deploying the drag brake in an early stage of the trajectory results in a large range correction, and the time of correction can be predefined depending on the required range correction. The canard-based correction fuze, on the other hand, is found to have a greater effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion.

  14. Improving Ocean Color Data Products using a Purely Empirical Approach: Reducing the Requirement for Radiometric Calibration Accuracy

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2008-01-01

    Radiometric calibration is the foundation upon which ocean color remote sensing is built. The quality of derived geophysical products, such as chlorophyll, is assumed to depend critically on the quality of the radiometric calibration. Unfortunately, the goals of radiometric calibration are not typically met in global and large-scale regional analyses, and are especially deficient in coastal regions. The consequences of calibration uncertainty are very large for global and regional ocean chlorophyll estimates. In fact, stability in global chlorophyll requires calibration performance well beyond the stated goals, and outside of modern capabilities. Using a purely empirical approach, we show that stable and consistent global chlorophyll values can be achieved over very wide ranges of calibration uncertainty. Furthermore, the approach yields statistically improved comparisons with in situ data, suggesting improved quality. The results suggest that accuracy requirements for radiometric calibration can be reduced if alternative empirical approaches are used.
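One reason a purely empirical retrieval can tolerate calibration uncertainty is sketched below: if chlorophyll is computed from a polynomial in the log of a blue-green band ratio, a multiplicative gain error common to both bands divides out. The coefficients here are hypothetical, not an operational algorithm.

```python
import math

# Sketch of an empirical band-ratio chlorophyll retrieval. A gain error
# common to both bands cancels in the ratio. Coefficients are hypothetical.

def chl_from_ratio(r_blue, r_green, coeffs=(0.3, -2.9)):
    """Chlorophyll (mg/m^3) from log10 of a blue/green reflectance ratio."""
    a0, a1 = coeffs
    x = math.log10(r_blue / r_green)
    return 10.0 ** (a0 + a1 * x)

# A 10% gain error applied to BOTH bands leaves the estimate unchanged.
chl = chl_from_ratio(0.008, 0.004)
chl_biased = chl_from_ratio(0.008 * 1.1, 0.004 * 1.1)
print(chl, chl_biased)
```

Only band-dependent (spectrally varying) calibration errors survive the ratio, which is why such approaches relax, but do not eliminate, the calibration requirements.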

  15. Improving International Research with Clinical Specimens: 5 Achievable Objectives

    PubMed Central

    LaBaer, Joshua

    2012-01-01

    Our increased interest in translational research has created a large demand for blood, tissue and other clinical samples, which find use in a broad variety of research including genomics, proteomics, and metabolomics. Hundreds of millions of dollars have been invested internationally on the collection, storage and distribution of samples. Nevertheless, many researchers complain in frustration about their inability to obtain relevant and/or useful samples for their research. Lack of access to samples, poor condition of samples, and unavailability of appropriate control samples have slowed our progress in the study of diseases and biomarkers. In this editorial, I focus on five major challenges that thwart clinical sample use for translational research and propose near term objectives to address them. They include: (1) defining our biobanking needs; (2) increasing the use of and access to standard operating procedures; (3) mapping inter-observer differences for use in normalizing diagnoses; (4) identifying natural internal protein controls; and (5) redefining the clinical sample paradigm by building partnerships with the public. In each case, I believe that we have the tools at hand required to achieve the objective within 5 years. Potential paths to achieve these objectives are explored. However we solve these problems, the future of proteomics depends on access to high quality clinical samples, collected under standardized conditions, accurately annotated and shared under conditions that promote the research we need to do. PMID:22998582

  16. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
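The transformation at the heart of this comparison can be sketched as follows. This is the generic textbook derivation with standard high-Reynolds-number coefficients and diffusion terms abbreviated, not necessarily the paper's exact formulation. Defining the turbulence time scale τ and applying the chain rule:

```latex
\tau = \frac{k}{\varepsilon}, \qquad
\frac{D\tau}{Dt} \;=\; \frac{1}{\varepsilon}\frac{Dk}{Dt}
\;-\; \frac{k}{\varepsilon^{2}}\frac{D\varepsilon}{Dt}.
```

Substituting the source terms of the standard model, Dk/Dt = P − ε + (diffusion) and Dε/Dt = (C_{ε1} P − C_{ε2} ε) ε/k + (diffusion), and using P/ε = τP/k:

```latex
\frac{D\tau}{Dt} \;=\; \left(1 - C_{\varepsilon 1}\right)\frac{\tau}{k}\,P
\;+\; \left(C_{\varepsilon 2} - 1\right) \;+\; \text{diffusion terms},
```

so both models share the same physics while differing numerically. In an unstratified boundary layer τ grows linearly with wall distance, which is the property the abstract invokes to explain the k - τ model's better behaviour on coarse grids.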

  17. Improving Student Achievement in Solving Mathematical Word Problems.

    ERIC Educational Resources Information Center

    Roti, Joan; Trahey, Carol; Zerafa, Susan

    This report describes a program for improving students' comprehension of the language of mathematical problems. The targeted population consists of 5th and 6th grade multi-age students and multi-age learners with special needs at a middle school located outside a major city in a Midwestern community. Evidence for the existence of this problem…

  18. Using Curriculum-Based Measurement to Improve Achievement

    ERIC Educational Resources Information Center

    Clarke, Suzanne

    2009-01-01

    Response to intervention (RTI) is on the radar screen of most principals these days--finding out what it is, how it can improve teaching and learning, and what needs to be done to implement it effectively. One critical component of RTI that will require particular attention from principals is student progress monitoring, which is required in every…

  19. Community Schools Seek to Improve High School Achievement, College Readiness

    ERIC Educational Resources Information Center

    Gilroy, Marilyn

    2011-01-01

    The Coalition for Community Schools, an alliance of more than 150 national, state, and local organizations, is bringing public schools in partnership with community resources to improve student success. While that might seem like an abstract idea, it has very concrete goals, such as boosting high school graduation rates and college readiness.…

  20. Enhancing Student Achievement through the Improvement of Listening Skills.

    ERIC Educational Resources Information Center

    Barr, Lori; Dittmar, Maureen; Roberts, Emily; Sheraden, Marie

    This report describes a program for the improvement of listening skills in order to increase academic performance. The targeted population consisted of elementary students in a middle class community located in western Illinois. The problem of ineffective listening skills was documented through data revealing the number of students whose lowered…

  1. Data as a Lever for Improving Instruction and Student Achievement

    ERIC Educational Resources Information Center

    Simmons, Warren

    2012-01-01

    This commentary draws on the articles in this issue to underscore the importance of community engagement and districtwide capacity building as central to efforts to use data to inform accountability and choice, along with school and instructional improvement. The author cautions against treating data as an all-purpose tool absent adequate…

  2. Achieving Continuous Improvement: Theories that Support a System Change.

    ERIC Educational Resources Information Center

    Armel, Donald

    Focusing on improvement is different than focusing on quality, quantity, customer satisfaction, and productivity. This paper discusses Open System Theory, and suggests ways to change large systems. Changing a system (meaning the way all the parts are connected) requires a considerable amount of data gathering and analysis. Choosing the proper…

  3. Reducing the influence of spatial resolution to improve quantitative accuracy in emission tomography: A comparison of potential strategies

    NASA Astrophysics Data System (ADS)

    Hutton, B. F.; Olsson, A.; Som, S.; Erlandsson, K.; Braun, M.

    2006-12-01

    The goal of this paper is to compare strategies for reducing partial volume effects by either minimizing the cause (i.e. improving resolution) or correcting the effect. Correction for resolution loss can be achieved either by modelling the resolution for use in iterative reconstruction or by imposing constraints based on knowledge of the underlying anatomy. Approaches to partial volume correction largely rely on knowledge of the underlying anatomy, based on well-registered high-resolution anatomical imaging modalities (CT or MRI). Corrections can be applied by considering the signal loss that results from smoothing the high-resolution modality to the same resolution as obtained in emission tomography. A physical phantom representing the central brain structures was used to evaluate the quantitative accuracy of the various strategies for either improving resolution or correcting for partial volume effects. Inclusion of resolution in the reconstruction model improved the measured contrast for the central brain structures but still underestimated the true object contrast (~0.70). Use of information on the boundaries of the structures in conjunction with a smoothing prior using maximum entropy reconstruction achieved some degree of contrast enhancement and improved the noise properties of the resulting images. Partial volume correction based on segmentation of registered anatomical images and knowledge of the reconstructed resolution permitted more accurate quantification of the target to background ratio for individual brain structures.
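
    The anatomy-based correction described can be sketched in 1D: smooth the registered anatomical mask to the emission resolution to obtain a recovery coefficient, then divide the measured value by it. All values below are hypothetical:

```python
import math

def smooth(signal, sigma):
    # discrete Gaussian convolution (zero-padded at the edges)
    radius = int(3 * sigma)
    kern = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(kern)
    kern = [k / total for k in kern]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, kv in enumerate(kern):
            idx = i + j - radius
            if 0 <= idx < len(signal):
                acc += signal[idx] * kv
        out.append(acc)
    return out

# anatomical mask of a small structure (e.g. from registered MRI), 1 px bins
mask = [1.0 if 40 <= i < 60 else 0.0 for i in range(100)]
true_activity = 5.0
psf_sigma = 8.0   # emission-tomograph resolution in pixels (assumed known)

# simulate the measured (resolution-blurred) emission image
measured = smooth([true_activity * m for m in mask], psf_sigma)

# recovery coefficient: how much of a unit structure survives the blur
rc = smooth(mask, psf_sigma)[50]
corrected = measured[50] / rc

assert measured[50] < true_activity            # partial volume underestimation
assert abs(corrected - true_activity) < 1e-6   # recovered after correction
```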

  4. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-11-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases arising in that process into reflectivity measurement bias and rainfall estimation bias from the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias caused by reflectivity measurement errors, this study used a bias correction algorithm in which the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar whose hardware and software biases have been corrected. The study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system, whose accuracy improved after the Z bias correction. For the rainfall types, although the accuracy for the Changma front and local torrential cases improved slightly, the accuracy for the typhoon cases in particular was worse than the results without the Z bias correction. As a result of the rainfall estimation bias correction, the Z bias_LGC was superior to the MFBC method because the LGC method applies a different rainfall bias to each grid rainfall amount. For the rainfall types, the Z bias_LGC rainfall estimates were more accurate than the Z bias alone for all types, and the outcomes in the typhoon cases in particular were vastly superior to the others.
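
    The two post-processing corrections differ mainly in how locally the gauge/radar bias is applied. A minimal sketch with hypothetical gauge and radar accumulations; the operational LGC interpolates local biases onto the whole grid, which is omitted here:

```python
# Gauge and radar rainfall accumulations at four AWS sites (hypothetical, mm)
gauge = [12.0, 8.0, 20.0, 4.0]
radar = [8.0, 6.0, 13.0, 3.5]

# Mean Field Bias Correction: one multiplicative factor for the whole domain
mfb = sum(gauge) / sum(radar)
radar_mfbc = [r * mfb for r in radar]

# Local Gauge Correction, simplified: a per-site bias factor, so each grid
# cell near a gauge gets its own correction instead of the domain-wide mean
local_bias = [g / r for g, r in zip(gauge, radar)]
radar_lgc = [r * b for r, b in zip(radar, local_bias)]

assert 1.0 < mfb < 2.0
# at the gauge sites the local correction reproduces the gauges exactly
assert all(abs(a - g) < 1e-9 for a, g in zip(radar_lgc, gauge))
```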

  5. Improvement in the accuracy of respiratory-gated radiation therapy using a respiratory guiding system

    NASA Astrophysics Data System (ADS)

    Kang, Seong-Hee; Kim, Dong-Su; Kim, Tae-Ho; Suh, Tae-Suk; Yoon, Jai-Woong

    2013-01-01

    The accuracy of respiratory-gated radiation therapy (RGRT) depends on the respiratory regularity because external respiratory signals are used for gating the radiation beam at particular phases. Many studies have applied a respiratory guiding system to improve the respiratory regularity. This study aims to evaluate the effect of an in-house-developed respiratory guiding system to improve the respiratory regularity for RGRT. To verify the effectiveness of this system, we acquired respiratory signals from five volunteers. The improvement in respiratory regularity was analyzed by comparing the standard deviations of the amplitudes and the periods between free and guided breathing. The reduction in residual motion at each phase was analyzed by comparing the standard deviations of sorted data within each corresponding phase bin as obtained from free and guided breathing. The results indicate that the respiratory guiding system improves the respiratory regularity, and that most of the volunteers showed significantly less average residual motion at each phase. The average residual motion measured at phases of 40, 50, and 60%, which showed lower variation than the other phases, was reduced by 41, 45, and 44%, respectively, during guided breathing. The results show that the accuracy of RGRT can be improved by using the in-house-developed respiratory guiding system. Furthermore, this system should reduce artifacts caused by respiratory motion in 4D CT imaging.
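
    The regularity metric described, the standard deviation of breath periods, can be sketched on synthetic free vs. guided traces. The breath periods below are hypothetical, and real signals would need more robust cycle detection than zero crossings:

```python
import math

def trace(periods, fs=50):
    # synthetic breathing signal: one sine cycle per breath, sampled at fs Hz
    sig = []
    for p in periods:
        n = int(p * fs)
        sig += [math.sin(2 * math.pi * i / n) for i in range(n)]
    return sig

def period_sd(sig, fs=50):
    # positive-going zero crossings mark breath starts
    idx = [i for i in range(1, len(sig)) if sig[i - 1] < 0 <= sig[i]]
    per = [(b - a) / fs for a, b in zip(idx, idx[1:])]
    m = sum(per) / len(per)
    return math.sqrt(sum((p - m) ** 2 for p in per) / (len(per) - 1))

free = trace([3.5, 4.4, 3.1, 4.9, 3.8])    # irregular free breathing (s)
guided = trace([4.0] * 5)                  # steadier guided breathing (s)

# guiding reduces the spread of breath-to-breath periods
assert period_sd(guided) < period_sd(free)
```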

  6. Dynamic sea surface topography, gravity and improved orbit accuracies from the direct evaluation of SEASAT altimeter data

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Lerch, F.; Koblinsky, C. J.; Klosko, S. M.; Robbins, J. W.; Williamson, R. G.; Patel, G. B.

    1989-01-01

    A method for the simultaneous solution of dynamic ocean topography, gravity and orbits using satellite altimeter data is described. A GEM-T1 based gravitational model called PGS-3337 that incorporates Seasat altimetry, surface gravimetry and satellite tracking data has been determined complete to degree and order 50. The altimeter data is utilized as a dynamic observation of the satellite's height above the sea surface with a degree 10 model of dynamic topography being recovered simultaneously with the orbit parameters, gravity and tidal terms in this model. PGS-3337 has a geoid uncertainty of 60 cm root-mean-square (RMS) globally, with the uncertainty over the altimeter tracked ocean being in the 25 cm range. Doppler-determined orbits for Seasat show large improvements, with sub-30 cm radial accuracies being achieved. When altimeter data is used in orbit determination, radial orbital accuracies of 20 cm are achieved. The RMS of fit to the altimeter data directly gives 30 cm fits for Seasat when using PGS-3337 and its geoid and dynamic topography model. This performance level is two to three times better than that achieved with earlier Goddard earth models (GEM) using the dynamic topography from long-term oceanographic averages. The recovered dynamic topography reveals the global long wavelength circulation of the oceans with a resolution of 1500 km. The power in the dynamic topography recovery is now found to be closer to that of oceanographic studies than for previous satellite solutions. This is attributed primarily to the improved modeling of the geoid which has occurred. Study of the altimeter residuals reveals regions where tidal models are poor and sea state effects are major limitations.

  7. Evaluation of an improved orthognathic articulator system. 2. Accuracy of occlusal wafers.

    PubMed

    Paul, P E; Barbenel, J C; Walker, F S; Khambay, B S; Moos, K F; Ayoub, A F

    2012-02-01

    The errors produced by occlusal wafers constructed on casts of the teeth mounted on a standard articulator and an improved orthognathic articulator were investigated by carrying out simulated orthognathic surgery on plastic skulls. The wafers were used to relocate the position of the maxillae of the skulls. The vertical and horizontal displacements of the maxillae were determined from measurements of the positions of markers on the skull and teeth. Comparison of the magnitudes of the actual and intended movements showed that wafers constructed on the standard articulator had systematic prediction errors of up to 5 mm, but the improved orthognathic articulator showed much smaller random errors. There was a statistically significant improvement in overall accuracy in predicting maxillary Le Fort I position with the use of the improved orthognathic articulator, which the authors recommend for clinical use.

  8. Early-Onset Neonatal Sepsis: Still Room for Improvement in Procalcitonin Diagnostic Accuracy Studies.

    PubMed

    Chiesa, Claudio; Pacifico, Lucia; Osborn, John F; Bonci, Enea; Hofer, Nora; Resch, Bernhard

    2015-07-01

    To perform a systematic review assessing accuracy and completeness of diagnostic studies of procalcitonin (PCT) for early-onset neonatal sepsis (EONS) using the Standards for Reporting of Diagnostic Accuracy (STARD) initiative. EONS, diagnosed during the first 3 days of life, remains a common and serious problem. Increased PCT is a potentially useful diagnostic marker of EONS, but reports in the literature are contradictory. There are several possible explanations for the divergent results including the quality of studies reporting the clinical usefulness of PCT in ruling in or ruling out EONS. We systematically reviewed PubMed, Scopus, and the Cochrane Library databases up to October 1, 2014. Studies were eligible for inclusion in our review if they provided measures of PCT accuracy for diagnosing EONS. A data extraction form based on the STARD checklist and adapted for neonates with EONS was used to appraise the quality of the reporting of included studies. We found 18 articles (1998-2014) fulfilling our eligibility criteria which were included in the final analysis. Overall, the results of our analysis showed that the quality of studies reporting diagnostic accuracy of PCT for EONS was suboptimal leaving ample room for improvement. Information on key elements of design, analysis, and interpretation of test accuracy were frequently missing. Authors should be aware of the STARD criteria before starting a study in this field. We welcome stricter adherence to this guideline. Well-reported studies with appropriate designs will provide more reliable information to guide decisions on the use and interpretations of PCT test results in the management of neonates with EONS.

  9. Using geocoded survey data to improve the accuracy of multilevel small area synthetic estimates.

    PubMed

    Taylor, Joanna; Moon, Graham; Twigg, Liz

    2016-03-01

    This paper examines the secondary data requirements for multilevel small area synthetic estimation (ML-SASE). This research method uses secondary survey data sets as source data for statistical models. The parameters of these models are used to generate data for small areas. The paper assesses the impact of knowing the geographical location of survey respondents on the accuracy of estimates, moving beyond debating the generic merits of geocoded social survey datasets to examine quantitatively the hypothesis that knowing the approximate location of respondents can improve the accuracy of the resultant estimates. Four sets of synthetic estimates are generated to predict expected levels of limiting long-term illness (LLTI) using different levels of knowledge about respondent location. The estimates were compared to comprehensive census data on LLTI. Estimates based on fully geocoded data were more accurate than estimates based on data that did not include geocodes. PMID:26857175

  10. Pairwise adaptive thermostats for improved accuracy and stability in dissipative particle dynamics

    NASA Astrophysics Data System (ADS)

    Leimkuhler, Benedict; Shang, Xiaocheng

    2016-11-01

    We examine the formulation and numerical treatment of dissipative particle dynamics (DPD) and momentum-conserving molecular dynamics. We show that it is possible to improve both the accuracy and the stability of DPD by employing a pairwise adaptive Langevin thermostat that precisely matches the dynamical characteristics of DPD simulations (e.g., autocorrelation functions) while automatically correcting thermodynamic averages using a negative feedback loop. In the low friction regime, it is possible to replace DPD by a simpler momentum-conserving variant of the Nosé-Hoover-Langevin method based on thermostatting only pairwise interactions; we show that this method has an extra order of accuracy for an important class of observables (a superconvergence result), while also allowing larger timesteps than alternatives. All the methods mentioned in the article are easily implemented. Numerical experiments are performed in both equilibrium and nonequilibrium settings, using Lees-Edwards boundary conditions to induce shear flow.

  11. Improving the accuracy of convexity splitting methods for gradient flow equations

    NASA Astrophysics Data System (ADS)

    Glasner, Karl; Orizaga, Saulo

    2016-06-01

    This paper introduces numerical time discretization methods which significantly improve the accuracy of the convexity-splitting approach of Eyre (1998) [7], while retaining the same numerical cost and stability properties. A first order method is constructed by iteration of a semi-implicit method based upon decomposing the energy into convex and concave parts. A second order method is also presented based on backwards differentiation formulas. Several extrapolation procedures for iteration initialization are proposed. We show that, under broad circumstances, these methods have an energy decreasing property, leading to good numerical stability. The new schemes are tested using two evolution equations commonly used in materials science: the Cahn-Hilliard equation and the phase field crystal equation. We find that our methods can increase accuracy by many orders of magnitude in comparison to the original convexity-splitting algorithm. In addition, the optimal methods require little or no iteration, making their computation cost similar to the original algorithm.
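
    Eyre-style convexity splitting treats the convex (stabilizing) part of the energy implicitly and the concave part explicitly. A minimal 1D Fourier-spectral sketch for the Cahn-Hilliard equation using a linearly stabilized variant of this idea; the parameters are hypothetical and this is not one of the paper's improved schemes:

```python
import numpy as np

N, L = 128, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
k2, k4 = k**2, k**4

eps = 0.18            # interface-width parameter (assumed)
dt, S = 0.01, 2.0     # time step and stabilization constant

u = 0.05 * np.cos(3 * x)   # small perturbation about u = 0
for _ in range(2000):
    # explicit treatment of the nonlinear/concave part f(u) = u^3 - u,
    # implicit treatment of the biharmonic and stabilization terms
    fhat = np.fft.fft(u**3 - u)
    uhat = np.fft.fft(u)
    uhat = (uhat - dt * k2 * fhat + dt * S * k2 * uhat) / (
        1 + dt * eps**2 * k4 + dt * S * k2)
    u = np.real(np.fft.ifft(uhat))

# mass is conserved (k = 0 mode untouched) and the solution stays bounded
# while phase-separating toward u = +/- 1
assert abs(u.mean()) < 1e-10
assert np.max(np.abs(u)) <= 1.5
```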

  12. Lens distortion elimination for improving measurement accuracy of fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Li, Kai; Bu, Jingjie; Zhang, Dongsheng

    2016-10-01

    Fringe projection profilometry (FPP) is a powerful method for three-dimensional (3D) shape measurement. However, the measurement accuracy of the existing FPP is often hindered by the distortion of the lens used in FPP. In this paper, a simple and efficient method is presented to overcome this problem. First, the FPP system is calibrated as a stereovision system. Then, the camera lens distortion is eliminated by correcting the captured images. For the projector lens distortion, distorted fringe patterns are generated according to the lens distortion model. With these distorted fringe patterns, the projector can project undistorted fringe patterns, which means that the projector lens distortion is eliminated. Experimental results show that the proposed method can successfully eliminate the lens distortions of FPP and therefore improves its measurement accuracy.
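
    Generating a "distorted" fringe pattern amounts to inverting the projector's lens distortion model so that the distortion cancels on projection. A minimal sketch with a single radial coefficient k1 (hypothetical; real calibrations also include tangential and higher-order terms):

```python
# Radial distortion model: x_d = x_u * (1 + k1 * r^2), in normalized coords.
def distort(x, y, k1):
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

def predistort(x_ideal, y_ideal, k1, iters=50):
    # fixed-point iteration for the inverse of the distortion map
    x, y = x_ideal, y_ideal
    for _ in range(iters):
        r2 = x * x + y * y
        x = x_ideal / (1 + k1 * r2)
        y = y_ideal / (1 + k1 * r2)
    return x, y

k1 = -0.12                     # hypothetical radial coefficient
xi, yi = 0.6, -0.4             # ideal (undistorted) fringe-point location
xp, yp = predistort(xi, yi, k1)
xo, yo = distort(xp, yp, k1)   # what the lens actually projects

# the pre-distorted input lands on the ideal location after the lens
assert abs(xo - xi) < 1e-9 and abs(yo - yi) < 1e-9
```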

  13. Multiple ping sonar accuracy improvement using robust motion estimation and ping fusion.

    PubMed

    Yu, Lian; Neretti, Nicola; Intrator, Nathan

    2006-04-01

    Noise degrades the accuracy of sonar systems. We demonstrate a practical method for increasing the effective signal-to-noise ratio (SNR) by fusing time delay information from a burst of multiple sonar pings. This approach can be useful when there is no relative motion between the sonar and the target during the burst of sonar pinging. Otherwise, the relative motion degrades the fusion and therefore has to be addressed before fusion can be used. In this paper, we present a robust motion estimation algorithm which uses information from multiple receivers to estimate the relative motion between pings in the burst. We then compensate for motion, and show that the fusion of information from the burst of motion compensated pings improves both the resilience to noise and sonar accuracy, consequently increasing the operating range of the sonar system.

  14. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The
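
    A rule of thumb of the kind described estimates the 95th percentile of a fitted log-normal from the log-scale mean and standard deviation, X95 = GM * GSD^1.645. A sketch on hypothetical sampling data (the abstract does not spell out the training's exact rules):

```python
import math

# small air-sampling data set (hypothetical exposure measurements, mg/m^3)
samples = [0.32, 0.15, 0.58, 0.21, 0.44]

logs = [math.log(s) for s in samples]
n = len(logs)
mu = sum(logs) / n
sd = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))

gm, gsd = math.exp(mu), math.exp(sd)   # geometric mean and geometric SD
x95 = gm * gsd ** 1.645                # 95th percentile of fitted log-normal
                                       # (1.645 = z-score of the 95th percentile)

# for this skewed data set the estimate sits above every observed value
assert x95 > max(samples)
```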

  16. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have long been used in remote sensing, but both have drawbacks: the resolution of satellite imagery is often insufficient, traditional aerial photography is relatively expensive, and manned flights carry human safety risks. These drawbacks limit the application of such images. In recent years, control technology for unmanned aerial vehicles (UAVs) has developed rapidly, making UAVs widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, photos obtained using a UAV offer higher resolution at lower cost, and because no crew is aboard, they can still be taken under unstable weather conditions. The images must first be orthorectified and corrected for distortion. Then, with the help of image matching techniques and control points, they can be stitched or used to establish a DEM of the ground surface, which in turn can be used to monitor landslides or estimate landslide volume. For image matching, methods such as the Harris corner detector, SIFT, or SURF can extract and match feature points, but their matching accuracy is at the pixel or sub-pixel level, whereas the digital image correlation (DIC) method can reach about 0.01 pixel. This study therefore applies DIC to match the extracted feature points and inspects the stitched images to judge the improvement. Aerial photos of a reservoir area were stitched with and without the help of DIC. The results show that misplacement in the stitched image was significantly reduced when DIC was used to match feature points, demonstrating that DIC matching can actually improve the accuracy of image stitching.
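
    The sub-pixel accuracy of DIC-style matching can be illustrated in 1D: locate the integer peak of a zero-normalized cross-correlation, then refine it by fitting a parabola through the peak and its two neighbours. The signal and displacement below are synthetic:

```python
import math

def profile(x, shift=0.0):
    # smooth synthetic intensity profile (stand-in for image texture)
    return math.exp(-0.5 * ((x - 12.0 - shift) / 3.0) ** 2)

true_shift = 0.4                                   # sub-pixel displacement
ref = [profile(i) for i in range(25)]              # reference image row
cur = [profile(i, true_shift) for i in range(25)]  # deformed image row
template = ref[8:17]                               # subset from the reference

def zncc(offset):
    # zero-normalized cross-correlation at an integer offset
    window = cur[offset:offset + len(template)]
    ma = sum(template) / len(template)
    mb = sum(window) / len(window)
    num = sum((a - ma) * (b - mb) for a, b in zip(template, window))
    da = math.sqrt(sum((a - ma) ** 2 for a in template))
    db = math.sqrt(sum((b - mb) ** 2 for b in window))
    return num / (da * db)

offsets = range(4, 13)
scores = [zncc(o) for o in offsets]
best = max(range(len(scores)), key=scores.__getitem__)

# parabolic sub-pixel refinement through the peak and its two neighbours
ym, y0, yp = scores[best - 1], scores[best], scores[best + 1]
sub = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)
shift = (offsets[best] - 8) + sub      # offset 8 corresponds to zero shift

assert abs(shift - true_shift) < 0.1
```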

  17. Improving Neural Network Prediction Accuracy for PM10 Individual Air Quality Index Pollution Levels.

    PubMed

    Feng, Qi; Wu, Shengjun; Du, Yun; Xue, Huaiping; Xiao, Fei; Ban, Xuan; Li, Xiaodong

    2013-12-01

    Fugitive dust deriving from construction sites is a serious local source of particulate matter (PM) that leads to air pollution in cities undergoing rapid urbanization in China. In spite of this fact, no study has yet been published relating to prediction of high levels of PM with diameters <10 μm (PM10) as adjudicated by the Individual Air Quality Index (IAQI) on fugitive dust from nearby construction sites. To combat this problem, the Construction Influence Index (Ci) is introduced in this article to improve forecasting models based on three neural network models (multilayer perceptron, Elman, and support vector machine) in predicting daily PM10 IAQI one day in advance. To obtain acceptable forecasting accuracy, measured time series data were decomposed into wavelet representations and wavelet coefficients were predicted. The effectiveness of these forecasters was tested using a time series recorded between January 1, 2005, and December 31, 2011, at six monitoring stations situated within the urban area of the city of Wuhan, China. Experimental trials showed that the improved models provided low root mean square error values and mean absolute error values in comparison to the original models. In addition, these improved models resulted in higher values of coefficients of determination and AHPC (the accuracy rate of high PM10 IAQI caused by nearby construction activity) compared to the original models when predicting high PM10 IAQI levels attributable to fugitive dust from nearby construction sites. PMID:24381481

  18. Accounting for filter bandwidth improves the quantitative accuracy of bioluminescence tomography.

    PubMed

    Taylor, Shelley L; Mason, Suzannah K G; Glinton, Sophie L; Cobbold, Mark; Dehghani, Hamid

    2015-09-01

    Bioluminescence imaging is a noninvasive technique whereby surface weighted images of luminescent probes within animals are used to characterize cell count and function. Traditionally, data are collected over the entire emission spectrum of the source using no filters and are used to evaluate cell count/function over the entire spectrum. Alternatively, multispectral data over several wavelengths can be incorporated to perform tomographic reconstruction of source location and intensity. However, bandpass filters used for multispectral data acquisition have a specific bandwidth, which is ignored in the reconstruction. In this work, ignoring the bandwidth is shown to introduce a dependence of the recovered source intensity on the bandwidth of the filters. A method of accounting for the bandwidth of filters used during multispectral data acquisition is presented and its efficacy in increasing the quantitative accuracy of bioluminescence tomography is demonstrated through simulation and experiment. It is demonstrated that while using filters with a large bandwidth can dramatically decrease the data acquisition time, if not accounted for, errors of up to 200% in quantitative accuracy are introduced in two-dimensional planar imaging, even after normalization. For tomographic imaging, the use of this method to account for filter bandwidth dramatically improves the quantitative accuracy. PMID:26325264
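
    Accounting for bandwidth means integrating the source emission spectrum across each filter's passband instead of sampling it at the centre wavelength only. A toy sketch with a Gaussian stand-in spectrum; the paper's actual spectra and filter responses differ:

```python
import math

def emission(wl):
    # toy stand-in for a bioluminescent emission spectrum peaked at 600 nm
    return math.exp(-0.5 * ((wl - 600.0) / 40.0) ** 2)

def band_weight(centre, bandwidth, steps=200):
    # integrate the spectrum across the filter passband (midpoint rule)
    lo = centre - bandwidth / 2.0
    dx = bandwidth / steps
    return sum(emission(lo + (i + 0.5) * dx) for i in range(steps)) * dx

centre, bandwidth = 660.0, 30.0
point_sample = emission(centre) * bandwidth   # naive: centre value x bandwidth
integrated = band_weight(centre, bandwidth)   # accounts for spectral curvature

# off-peak the spectrum curves across the band, so the two weights differ;
# ignoring this makes the recovered intensity depend on the filter bandwidth
assert integrated > point_sample
assert (integrated - point_sample) / point_sample > 0.01
```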

  19. Improved single particle localization accuracy with dual objective multifocal plane microscopy.

    PubMed

    Ram, Sripad; Prabhat, Prashant; Ward, E Sally; Ober, Raimund J

    2009-04-13

    In single particle imaging applications, the number of photons detected from the fluorescent label plays a crucial role in the quantitative analysis of the acquired data. For example, in tracking experiments the localization accuracy of the labeled entity can be improved by collecting more photons from the labeled entity. Here, we report the development of dual objective multifocal plane microscopy (dMUM) for single particle studies. The new microscope configuration uses two opposing objective lenses, where one of the objectives is in an inverted position and the other objective is in an upright position. We show that dMUM has a higher photon collection efficiency when compared to standard microscopes. We demonstrate that fluorescent labels can be localized with better accuracy in 2D and 3D when imaged through dMUM than when imaged through a standard microscope. Analytical tools are introduced to estimate the nanoprobe location from dMUM images and to characterize the accuracy with which they can be determined. PMID:19365515

  20. Accounting for filter bandwidth improves the quantitative accuracy of bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie L.; Cobbold, Mark; Dehghani, Hamid

    2015-09-01

    Bioluminescence imaging is a noninvasive technique whereby surface weighted images of luminescent probes within animals are used to characterize cell count and function. Traditionally, data are collected over the entire emission spectrum of the source using no filters and are used to evaluate cell count/function over the entire spectrum. Alternatively, multispectral data over several wavelengths can be incorporated to perform tomographic reconstruction of source location and intensity. However, bandpass filters used for multispectral data acquisition have a specific bandwidth, which is ignored in the reconstruction. In this work, ignoring the bandwidth is shown to introduce a dependence of the recovered source intensity on the bandwidth of the filters. A method of accounting for the bandwidth of filters used during multispectral data acquisition is presented and its efficacy in increasing the quantitative accuracy of bioluminescence tomography is demonstrated through simulation and experiment. It is demonstrated that while using filters with a large bandwidth can dramatically decrease the data acquisition time, if not accounted for, errors of up to 200% in quantitative accuracy are introduced in two-dimensional planar imaging, even after normalization. For tomographic imaging, the use of this method to account for filter bandwidth dramatically improves the quantitative accuracy.

  1. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. The result is better, faster, and cheaper characterization of multiphase systems. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed superior to present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.

  2. Individual variation in exploratory behaviour improves speed and accuracy of collective nest selection by Argentine ants

    PubMed Central

    Hui, Ashley; Pinter-Wollman, Noa

    2014-01-01

    Collective behaviours are influenced by the behavioural composition of the group. For example, a collective behaviour may emerge from the average behaviour of the group's constituents, or be driven by a few key individuals that catalyse the behaviour of others in the group. When ant colonies collectively relocate to a new nest site, there is an inherent trade-off between the speed and accuracy of their decision of where to move due to the time it takes to gather information. Thus, variation among workers in exploratory behaviour, which allows gathering information about potential new nest sites, may impact the ability of a colony to move quickly into a suitable new nest. The invasive Argentine ant, Linepithema humile, expands its range locally through the dispersal and establishment of propagules: groups of ants and queens. We examine whether the success of these groups in rapidly finding a suitable nest site is affected by their behavioural composition. We compared nest choice speed and accuracy among groups of all-exploratory, all-nonexploratory and half-exploratory–half-nonexploratory individuals. We show that exploratory individuals improve both the speed and accuracy of collective nest choice, and that exploratory individuals have additive, not synergistic, effects on nest site selection. By integrating an examination of behaviour into the study of invasive species we shed light on the mechanisms that impact the progression of invasion. PMID:25018558

  3. A fiber-optic cure monitoring technique with accuracy improvement of distorted embedded sensors

    NASA Astrophysics Data System (ADS)

    Sampath, Umesh; Kim, Hyunjin; Kim, Dae-gil; Song, Minho

    2015-07-01

    A fiber-optic epoxy cure monitoring technique for efficient wind turbine blade manufacturing and monitoring is presented. To optimize the manufacturing cycle, fiber-optic sensors are embedded in the composite materials of wind turbine blades. The reflection spectra of the sensors indicate the onset of gelification and the completion of epoxy curing. After the manufacturing process, the same sensors are utilized for in-field condition monitoring. Because of residual stresses and strain gradients from the curing process, the embedded sensors may experience distortions in their reflection spectra, resulting in measurement errors. We applied a Gaussian curve-fitting algorithm to the distorted spectra, which substantially improved the measurement accuracy.

  4. Processing data, for improved accuracy, from device for measuring speed of sound in a gas

    DOEpatents

    Owen, Thomas E.

    2006-09-19

    A method, used in connection with a pulse-echo type sensor for determining the speed of sound in a gas, for improving the accuracy of speed of sound measurements. The sensor operates on the principle that the speed of sound can be derived from the difference between the two-way travel times of signals reflected from two different target faces of the sensor. This time difference is derived by computing the cross correlation between the two reflections. The cross correlation function may be fitted to a parabola whose vertex represents the optimum time coordinate of the coherence peak, thereby providing an accurate measure of the two-way time difference.
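The parabola-vertex refinement described in the abstract can be sketched in a few lines (an illustrative implementation, not the patented one): the lag of the cross-correlation maximum is refined by fitting a parabola to the three samples around the discrete peak.

```python
import numpy as np

def two_way_lag(a, b):
    """Cross-correlate two reflection signals and refine the lag of the
    coherence peak by fitting a parabola to the three samples around the
    discrete maximum; the vertex gives the sub-sample time coordinate."""
    xc = np.correlate(a, b, mode="full")
    k = int(np.argmax(xc))
    y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabola vertex offset
    return (k - (len(b) - 1)) + delta               # lag in samples

# a pulse and a copy delayed by 4.5 samples (a sub-sample shift)
t = np.arange(200, dtype=float)
pulse = np.exp(-0.5 * ((t - 40.0) / 6.0) ** 2)
echo = np.exp(-0.5 * ((t - 44.5) / 6.0) ** 2)
print(two_way_lag(echo, pulse))  # ≈ 4.5
```

The discrete argmax alone can only resolve the delay to the nearest sample; the vertex of the fitted parabola recovers the fractional part.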

  5. Improved time efficiency and accuracy in diffusion tensor microimaging with multiple-echo acquisition

    NASA Astrophysics Data System (ADS)

    Gulani, Vikas; Weber, Thomas; Neuberger, Thomas; Webb, Andrew G.

    2005-12-01

    In high-field NMR microscopy rapid single-shot imaging methods, for example, echo planar imaging, cannot be used for determination of the apparent diffusion tensor (ADT) due to large magnetic susceptibility effects. We propose a pulse sequence in which a diffusion-weighted spin-echo is followed by multiple gradient-echoes with additional diffusion weighting. These additional echoes can be used to calculate the ADT and T2∗ maps. We show here that this results in modest but consistent improvements in the accuracy of ADT determination within a given total data acquisition time. The method is tested on excised, chemically fixed rat spinal cords.

  6. A simple method for improving the time-stepping accuracy in atmosphere and ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-12-01

    In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
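The filtered-leapfrog scheme described above can be sketched for a single oscillatory mode dx/dt = iωx. This is an illustrative implementation following the published RA/RAW formulas, with the parameter values (ν = 0.2, α = 0.53) chosen for demonstration; α = 1 recovers the plain RA filter.

```python
import numpy as np

def leapfrog(omega=1.0, dt=0.2, nsteps=200, nu=0.2, alpha=0.53):
    """Leapfrog integration of dx/dt = i*omega*x with the
    Robert-Asselin-Williams (RAW) filter. alpha = 1 gives the classic
    RA filter; alpha ≈ 0.53 is the value suggested by Williams (2009)."""
    x_prev = 1.0 + 0.0j                    # x(0)
    x_curr = np.exp(1j * omega * dt)       # exact first step
    for _ in range(nsteps):
        x_next = x_prev + 2j * omega * dt * x_curr       # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_prev = x_curr + alpha * d          # filter the middle time level
        x_curr = x_next + (alpha - 1.0) * d  # RAW also adjusts the new level
    return abs(x_curr)                       # exact amplitude is 1

print(leapfrog(alpha=1.0))   # RA filter: amplitude damped below 1
print(leapfrog(alpha=0.53))  # RAW filter: amplitude much closer to 1
```

The single added update of the new time level (the `(alpha - 1.0) * d` line) is the "single line of code" change mentioned in the abstract: it removes most of the non-physical amplitude damping of the RA filter.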

  7. OrthoFinder: solving fundamental biases in whole genome comparisons dramatically improves orthogroup inference accuracy.

    PubMed

    Emms, David M; Kelly, Steven

    2015-08-06

    Identifying homology relationships between sequences is fundamental to biological research. Here we provide a novel orthogroup inference algorithm called OrthoFinder that solves a previously undetected gene length bias in orthogroup inference, resulting in significant improvements in accuracy. Using real benchmark datasets we demonstrate that OrthoFinder is more accurate than other orthogroup inference methods by between 8% and 33%. Furthermore, we demonstrate the utility of OrthoFinder by providing a complete classification of transcription factor gene families in plants revealing 6.9 million previously unobserved relationships.

  8. Improving the accuracy of multiple integral evaluation by applying Romberg's method

    NASA Astrophysics Data System (ADS)

    Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.

    2009-02-01

    Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals if they are evaluated using the product of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with other methods.
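An illustrative Python sketch of the approach (not the authors' code): one-dimensional Romberg extrapolation built on the composite trapezoidal rule, reused unchanged along each axis of a double integral via the product rule.

```python
import math

def romberg(f, a, b, levels=5):
    """Romberg's method: composite trapezoidal estimates with successive
    interval halving, then Richardson extrapolation of the table R[i][j]
    to cancel the error terms in h^2, h^4, ..."""
    R = [[0.0] * levels for _ in range(levels)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h *= 0.5
        # trapezoid refinement: only the new midpoints need evaluating
        new = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        R[i][0] = 0.5 * R[i - 1][0] + h * new
        for j in range(1, i + 1):
            R[i][j] = R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1)
    return R[levels - 1][levels - 1]

def romberg2d(f, ax, bx, ay, by, levels=5):
    """Product-rule extension: the same 1-D Romberg evaluation applied
    along each axis of a double integral."""
    return romberg(lambda x: romberg(lambda y: f(x, y), ay, by, levels), ax, bx, levels)

print(romberg(math.exp, 0.0, 1.0))                          # ≈ e - 1
print(romberg2d(lambda x, y: math.exp(x + y), 0, 1, 0, 1))  # ≈ (e - 1)**2
```

Because the extrapolation coefficients do not depend on which axis is being integrated, the same one-dimensional routine serves for every factor of the product formula, which is the simplification the abstract highlights.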

  9. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  10. Accuracy improvement in peak positioning of spectrally distorted fiber Bragg grating sensors by Gaussian curve fitting

    SciTech Connect

    Lee, Hyun-Wook; Park, Hyoung-Jun; Lee, June-Ho; Song, Minho

    2007-04-20

    To improve the measurement accuracy of spectrally distorted fiber Bragg grating temperature sensors, reflection profiles were curve fitted to Gaussian shapes, whose center positions were transformed into temperature information. By applying the Gaussian curve-fitting algorithm in a tunable bandpass filter demodulation scheme, ~0.3 °C temperature resolution was obtained with a severely distorted grating sensor, which was much better than that obtained using the highest-peak search algorithm. A binary search was also used to retrieve the optimal fitting curves with the least amount of processing time.
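A minimal sketch of Gaussian peak positioning (illustrative; the wavelengths and distortion below are invented for the example, and this is not the authors' demodulation code): since the logarithm of a Gaussian is a parabola, a least-squares parabola fit of the log-spectrum near the peak gives the center wavelength directly.

```python
import numpy as np

def gaussian_center(wavelength, power, frac=0.5):
    """Fit a Gaussian to a reflection profile: restrict to samples above
    `frac` of the peak, fit a parabola to log(power) by least squares,
    and return the vertex, i.e. the Gaussian center wavelength."""
    mask = power > frac * power.max()
    c2, c1, _ = np.polyfit(wavelength[mask], np.log(power[mask]), 2)
    return -c1 / (2.0 * c2)

# a grating spectrum centered at 1550.30 nm with a spurious side ripple
wl = np.linspace(1549.5, 1551.0, 301)
spectrum = np.exp(-0.5 * ((wl - 1550.30) / 0.08) ** 2)
spectrum += 0.25 * np.exp(-0.5 * ((wl - 1550.55) / 0.03) ** 2)  # distortion
print(gaussian_center(wl, spectrum))  # close to 1550.30
```

Restricting the fit to the upper part of the profile is what makes the estimate robust: the side ripple falls below the threshold and does not pull the fitted center.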

  11. Improving the performance of E-beam 2nd writing in mask alignment accuracy and pattern faultless for CPL technology

    NASA Astrophysics Data System (ADS)

    Lee, Booky; Hung, Richard; Lin, Orson; Wu, Yuan-Hsun; Kozuma, Makoto; Shih, Chiang-Lin; Hsu, Michael; Hsu, Stephen D.

    2005-01-01

    Chromeless phase lithography (CPL) is a promising technology for low-k1 optical imaging. In CPL, the local transmission rate can be controlled to optimize through-pitch imaging performance. CPL uses a zebra pattern to set the local transmission of the mask as a tri-tone structure, and a second-level E-beam writing step is needed to create this pattern. The zebra pattern must be small enough not to print, and the second-writing overlay accuracy must be kept within 40 nm, a demanding requirement for E-beam second writing. This paper focuses on improving the overlay accuracy and obtaining precise patterns that yield accurate pattern transmission. Several steps were taken to this end. To check for possible contamination of the E-beam chamber by the conductive-layer coating, we monitored the particle count in the chamber before and after loading and unloading coated blanks. The conductivity of the conductive layer was verified, and its film thickness optimized, to eliminate charging effects. The dimensions of the alignment mark were optimized through experimentation. Finally, we checked the remaining photoresist to ensure a sufficient process window in the etching step. To verify the performance of the process, we examined 3D SEM images and used AIMS to demonstrate the resolution improvement of CPL over the traditional binary and halftone masks. The overlay accuracy and process achieved provide a promising approach for NGL reticle manufacturing with CPL technology.

  12. Indexing Large Visual Vocabulary by Randomized Dimensions Hashing for High Quantization Accuracy: Improving the Object Retrieval Quality

    NASA Astrophysics Data System (ADS)

    Yang, Heng; Wang, Qing; He, Zhoucan

    The bag-of-visual-words approach, inspired by text retrieval methods, has proven successful in achieving high performance in object retrieval on large-scale databases. A key step of these methods is the quantization stage, which maps the high-dimensional image feature vectors to discriminatory visual words. In this paper, we consider the quantization step as a nearest neighbor search in a large visual vocabulary, and thus propose a randomized dimensions hashing (RDH) algorithm to efficiently index and search the large visual vocabulary. The experimental results demonstrate that the proposed algorithm can effectively increase the quantization accuracy compared to vocabulary-tree-based methods, which represent the state of the art. Consequently, the object retrieval performance can be significantly improved by our method on large-scale databases.

  13. SU-E-J-133: Autosegmentation of Linac CBCT: Improved Accuracy Via Penalized Likelihood Reconstruction

    SciTech Connect

    Chen, Y

    2015-06-15

    Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction and auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs(pCT) deformable-registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that
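The Dice similarity coefficient used as the accuracy metric above is straightforward to compute; here is a sketch on synthetic binary masks (the toy contours below are invented for illustration, not taken from the study):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy organ contours on a 2D grid: an auto-segmentation vs. a slightly
# shifted reference contour
yy, xx = np.mgrid[0:64, 0:64]
auto = (yy - 32) ** 2 + (xx - 32) ** 2 < 14 ** 2
manual = (yy - 33) ** 2 + (xx - 31) ** 2 < 14 ** 2
print(round(dice(auto, manual), 3))
```

Small contour shifts already cost a few percent of Dice, which is why the 1-9% changes reported above are meaningful differences in delineation quality.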

  14. Improvement of accuracy of planetary ephemerides by the example of EPM

    NASA Astrophysics Data System (ADS)

    Vladimirovna Pitjeva, Elena; Petrovich Pitjev, Nikolay

    2015-08-01

    Fundamental ephemerides of the planets and the Moon (EPM) have been created at IAA RAS since the 1970s. Some of their major versions, EPM2004, EPM2008, and EPM2011/m, are available at ftp://quasar.ipa.nw.ru/incoming/EPM/. The EPM2014 version has been created with the updated and refined software package ERA-8. The accuracy of the planetary ephemerides depends on several factors: the adequacy of the dynamic model of planetary motion to the real motion, the quantity and quality of observational data, and the reduction of the observations. The reductions of the observations are well known; however, more precise data require more accurate calculation. For example, processing the Cassini spacecraft data required an improved calculation of proper time. Dynamic models of planetary motion from EPM2004 to EPM2014 have been refined and made more elaborate: perturbations from small asteroids were modeled first by a one-dimensional ring and then by a two-dimensional ring, the 30 largest Trans-Neptunian Objects (TNO) were included in the integration, and the perturbations from the remaining TNO were modeled by a ring of smaller TNO. The number of observations has increased by an order of magnitude since EPM2000 and amounts to more than 800,000 data points of different types; most of the new observations are precision radio measurements, mainly ranging to different spacecraft. VLBI observations of spacecraft near planets against background quasars enable orientation of the planetary ephemerides in the international ICRF system with an accuracy of fractions of a mas. Improvement of all these factors helped to increase the accuracy of the planetary ephemerides, manifested in a decrease of the formal standard deviations of the orbital elements of the planets by several times, and, for planets with high-precision spacecraft data (Venus, Mars, Saturn), by an order of magnitude.

  15. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    NASA Astrophysics Data System (ADS)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  16. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  17. Improving the accuracy of a Shack-Hartmann wavefront sensor on extended scenes

    NASA Astrophysics Data System (ADS)

    Rais, M.; Morel, J.-M.; Thiebaut, C.; Delvit, J.-M.; Facciolo, G.

    2016-10-01

    In order to achieve higher resolutions, current earth-observation satellites use larger lightweight main mirrors which are usually deformed over time, impacting image quality. In the context of active optics, we studied the problem of correcting this main mirror by performing wavefront estimation in a closed-loop environment. To this end, a Shack-Hartmann wavefront sensor (SHWFS) used on extended scenes could measure the incoming wavefront. The performance of the SHWFS on extended scenes depends entirely on the accuracy of the shift estimation algorithm employed, which should be fast enough to be executed on-board. In this paper we specifically deal with the problem of fast accurate shift estimation in this context. We propose a new algorithm, based on the global optical flow method, that estimates the shifts in linear time. In our experiments, our method proved to be more accurate and stable, as well as less sensitive to noise than all current state-of-the-art methods.
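For context, a common baseline for shift estimation between subaperture images is phase correlation (this is a standard reference technique, not the optical-flow method the paper proposes): the normalized cross-power spectrum transforms back to a sharp peak at the relative shift.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation between two images by
    phase correlation: the inverse FFT of the normalized cross-power
    spectrum peaks at the relative shift."""
    F = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    F /= np.abs(F) + 1e-12                   # keep only the phase
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # map wrapped (circular) indices to signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -2), axis=(0, 1))
print(phase_correlation_shift(shifted, scene))  # (3, -2)
```

Phase correlation is fast (FFT-based) but resolves only integer shifts without further peak interpolation, which motivates the sub-pixel accurate methods studied in the paper.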

  18. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  20. Optimizing stepwise rotation of dodecahedron sound source to improve the accuracy of room acoustic measures.

    PubMed

    Martellotta, Francesco

    2013-09-01

    Dodecahedron sound sources are widely used for acoustical measurement purposes as they produce a good approximation of omnidirectional radiation. Evidence shows that such an assumption is acceptable only in the low-frequency range (namely below 1 kHz), while at higher frequencies sound radiation is far from being uniform. In order to improve the accuracy of acoustical measurements obtained from dodecahedron sources, international standard ISO 3382 suggests an averaging of results after a source rotation. This paper investigates the effects of such rotations, both in terms of variations in acoustical parameters and spatial distribution of sound reflections. Taking advantage of a spherical microphone array, the different reflection patterns were mapped as a function of source rotation, showing that some reflections may be considerably attenuated for different aiming directions. This paper investigates the concept of averaging results while changing rotation angles and the minimum number of rotations required to improve the accuracy of the average value. Results show that averages of three measurements carried out at 30° angular steps are closer to actual values and show much less fluctuation. In addition, an averaging of the directional intensity components of the selected responses stabilizes the spatial distribution of the reflections.

  1. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations

    PubMed Central

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-01-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364

  2. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error, resulting in more accurate Laplacian estimates compared to the respective constant inter-ring distances configurations. For the currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration a more than six-fold decrease is expected. PMID:27294933

  3. Improving the accuracy of mirror measurements by removing noise and lens distortion

    NASA Astrophysics Data System (ADS)

    Wang, Zhenzhou

    2016-11-01

    Telescope mirrors determine the imaging quality and observation ability of telescopes. Unfortunately, manufacturing highly accurate mirrors remains a bottleneck problem in space optics. One main factor is the lack of a technique for measuring the 3D shapes of mirrors accurately for inverse engineering. Researchers have studied and developed techniques for testing the quality of telescope mirrors and methods for measuring the 3D shapes of mirrors for centuries. Among these, interferometers have become popular in evaluating the surface errors of manufactured mirrors. However, interferometers are unable to measure some important mirror parameters directly and accurately, e.g. the paraxial radius, geometry dimension and eccentric errors, and these parameters are essential for mirror manufacturing. In this paper, we aim to remove the noise and lens distortion inherent in the system to improve the accuracy of a previously proposed one-shot projection mirror measurement method. To this end, we propose a ray modeling and a pattern modeling method. The experimental results show that the proposed ray modeling and pattern modeling method can improve the accuracy of the one-shot projection method significantly, making it feasible as a commercial device to measure the shapes of mirrors quantitatively and accurately.

  4. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations.

    PubMed

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-05-26

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices.
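The selection step described above (fit a linear map from sensor outputs to joint angles by least squares, keep the placement with the smallest residual) can be sketched on synthetic data. The mixing matrices and noise level below are invented for illustration; the study fit scanned 3-D hand-surface data:

```python
import numpy as np

rng = np.random.default_rng(1)
angles = rng.uniform(0, 60, size=(50, 2))        # true (flexion, abduction) per posture

# hypothetical sensor models: each row mixes the two DOFs; the second
# candidate placement suffers heavy crosstalk between its two sensors
placements = {
    "low_crosstalk":  np.array([[1.0, 0.1], [0.1, 1.0]]),
    "high_crosstalk": np.array([[1.0, 0.9], [0.9, 1.0]]),
}

def placement_error(mix):
    """RMS joint-angle error after least-squares calibration of a placement."""
    sensors = angles @ mix.T + rng.normal(0, 0.5, size=(50, 2))  # noisy readings
    A = np.hstack([sensors, np.ones((50, 1))])                   # affine calibration
    coef, *_ = np.linalg.lstsq(A, angles, rcond=None)
    resid = angles - A @ coef
    return np.sqrt((resid ** 2).mean())

errors = {name: placement_error(m) for name, m in placements.items()}
best = min(errors, key=errors.get)
```

The ill-conditioned high-crosstalk placement amplifies sensor noise through the calibration, so the optimizer prefers the low-crosstalk placement.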

  5. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines

    NASA Astrophysics Data System (ADS)

    Sahadevan, Anand S.; Routray, Aurobinda; Das, Bhabani S.; Ahmad, Saquib

    2016-04-01

    Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS-Salinas, ROSIS-Pavia University, and AVIRIS-Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.
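The two-stage pipeline (edge-preserving bilateral smoothing, then SVM classification) can be sketched on a toy cube. The range weight below uses Euclidean distance between full pixel spectra, so smoothing stops at the class boundary; sizes and parameters are illustrative, not the paper's:

```python
import numpy as np
from sklearn.svm import SVC

def bilateral_smooth(cube, radius=2, sigma_s=1.0, sigma_r=0.5):
    """Edge-preserving smoothing of an HSI cube of shape (H, W, B):
    spatial weight from pixel distance, range weight from the Euclidean
    distance between full pixel spectra."""
    H, W, B = cube.shape
    out = np.zeros_like(cube)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            patch = cube[i0:i1, j0:j1, :]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            diff = patch - cube[i, j, :]
            w_r = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_r ** 2))
            w = w_s * w_r
            out[i, j, :] = np.tensordot(w, patch, axes=([0, 1], [0, 1])) / w.sum()
    return out

# toy cube: two homogeneous spectral regions plus noise
rng = np.random.default_rng(0)
cube = np.zeros((10, 10, 5))
cube[:, 5:, :] = 1.0
noisy = cube + rng.normal(0, 0.1, cube.shape)
smooth = bilateral_smooth(noisy)
X = smooth.reshape(-1, 5)
y = (np.arange(100) % 10 >= 5).astype(int)       # label = right half of the image
clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)
```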

  6. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  7. COMPASS server for homology detection: improved statistical accuracy, speed and functionality

    PubMed Central

    Sadreyev, Ruslan I.; Tang, Ming; Kim, Bong-Hyun; Grishin, Nick V.

    2009-01-01

    COMPASS is a profile-based method for the detection of remote sequence similarity and the prediction of protein structure. Here we describe a recently improved public web server of COMPASS, http://prodata.swmed.edu/compass. The server features three major developments: (i) improved statistical accuracy; (ii) increased speed from parallel implementation; and (iii) new functional features facilitating structure prediction. These features include visualization tools that allow the user to quickly and effectively analyze specific local structural region predictions suggested by COMPASS alignments. As an application example, we describe the structural, evolutionary and functional analysis of a protein with unknown function that served as a target in the recent CASP8 (Critical Assessment of Techniques for Protein Structure Prediction round 8). URL: http://prodata.swmed.edu/compass PMID:19435884

  8. COMPASS server for homology detection: improved statistical accuracy, speed and functionality.

    PubMed

    Sadreyev, Ruslan I; Tang, Ming; Kim, Bong-Hyun; Grishin, Nick V

    2009-07-01

    COMPASS is a profile-based method for the detection of remote sequence similarity and the prediction of protein structure. Here we describe a recently improved public web server of COMPASS, http://prodata.swmed.edu/compass. The server features three major developments: (i) improved statistical accuracy; (ii) increased speed from parallel implementation; and (iii) new functional features facilitating structure prediction. These features include visualization tools that allow the user to quickly and effectively analyze specific local structural region predictions suggested by COMPASS alignments. As an application example, we describe the structural, evolutionary and functional analysis of a protein with unknown function that served as a target in the recent CASP8 (Critical Assessment of Techniques for Protein Structure Prediction round 8). URL: http://prodata.swmed.edu/compass. PMID:19435884

  9. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    PubMed

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.
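A minimal scalar two-model IMM cycle (mixing, per-model Kalman update, likelihood-weighted mode probabilities) can be sketched as follows. This is a textbook toy, not the paper's GBAS/ADS-B/MLAT/WAM fusion; all noise values are invented:

```python
import numpy as np

def imm_step(x, v, mu, z, q=(0.01, 1.0), R=0.25,
             P=np.array([[0.95, 0.05], [0.05, 0.95]])):
    """One cycle of a two-model scalar IMM filter. The models are random
    walks that differ only in process noise q; x, v, mu hold per-model
    means, variances, and mode probabilities; z is the position measurement."""
    # 1) mixing: blend the model-conditioned estimates
    c = P.T @ mu                                   # predicted mode probabilities
    w = (P * mu[:, None]) / c[None, :]             # mixing weights w[i, j]
    x0 = w.T @ x
    v0 = np.array([np.sum(w[:, j] * (v + (x - x0[j]) ** 2)) for j in range(2)])
    # 2) per-model Kalman predict and update
    vp = v0 + np.asarray(q)
    S = vp + R
    K = vp / S
    xn = x0 + K * (z - x0)
    vn = (1.0 - K) * vp
    # 3) update mode probabilities from the measurement likelihoods
    L = np.exp(-0.5 * (z - x0) ** 2 / S) / np.sqrt(2 * np.pi * S)
    mu_n = c * L
    mu_n /= mu_n.sum()
    return xn, vn, mu_n, float(mu_n @ xn)          # fused position estimate

x = np.zeros(2); v = np.ones(2); mu = np.array([0.5, 0.5])
for z in [0.1, -0.1, 0.0, 5.0, 5.1, 4.9, 5.0, 5.0]:
    x, v, mu, fused = imm_step(x, v, mu, z)
```

After the measurements jump to 5, the high-process-noise model briefly dominates and the fused estimate follows the maneuver.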

  10. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics.

    PubMed

    Yin, Jian; Fenley, Andrew T; Henriksen, Niel M; Gilson, Michael K

    2015-08-13

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by nonoptimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery.

  12. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHz) and harmonic (1.7 and 3.5 MHz) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader.

  13. Leveraging Improvements in Precipitation Measuring from GPM Mission to Achieve Prediction Improvements in Climate, Weather and Hydrometeorology

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2002-01-01

    the way for what ultimately is expected to become an internationally-organized operational global precipitation observing system. Notably, the broad societal applications of GPM are reflected in the United Nations' identification of this mission as a foremost candidate for its Peaceful Uses of Space Program. In this presentation, an overview of the GPM mission design will be presented, followed by an explanation of its scientific agenda as an outgrowth of making improvements in rain retrieval accuracy, microphysics dexterity, sampling frequency, and global coverage. All of these improvements offer new means to observe variability in precipitation and water cycle fluxes and to achieve improved predictability of weather, climate, and hydrometeorology. Specifically, the scientific agenda of GPM has been designed to leverage the measurement improvements to improve prognostic model performance, particularly quantitative precipitation forecasting and its linked phenomena at short, intermediate, and extended time scales. The talk will address how GPM measurements will enable better detection of accelerations and decelerations in regional and global water cycle processes and their relationship to climate variability, better impacts of precipitation data assimilation on numerical weather prediction and global climate reanalysis, and better performance from basin scale hydrometeorological models for short and long term flood-drought forecasting and seasonal fresh water resource assessment. Improved hydrometeorological forecasting will be possible by using continuous global precipitation observations to obtain better closure in water budgets and to generate more realistic forcing of the models themselves to achieve more accurate estimates of interception, infiltration, evaporation/transpiration fluxes, storage, and runoff.

  14. Does probabilistic modelling of linkage disequilibrium evolution improve the accuracy of QTL location in animal pedigree?

    PubMed Central

    2010-01-01

    Background Since 2001, the use of more and more dense maps has made researchers aware that combining linkage and linkage disequilibrium enhances the feasibility of fine-mapping genes of interest. So, various method types have been derived to include concepts of population genetics in the analyses. One major drawback of many of these methods is their computational cost, which is very significant when many markers are considered. Recent advances in technology, such as SNP genotyping, have made it possible to deal with huge amount of data. Thus the challenge that remains is to find accurate and efficient methods that are not too time consuming. The study reported here specifically focuses on the half-sib family animal design. Our objective was to determine whether modelling of linkage disequilibrium evolution improved the mapping accuracy of a quantitative trait locus of agricultural interest in these populations. We compared two methods of fine-mapping. The first one was an association analysis. In this method, we did not model linkage disequilibrium evolution. Therefore, the modelling of the evolution of linkage disequilibrium was a deterministic process; it was complete at time 0 and remained complete during the following generations. In the second method, the modelling of the evolution of population allele frequencies was derived from a Wright-Fisher model. We simulated a wide range of scenarios adapted to animal populations and compared these two methods for each scenario. Results Our results indicated that the improvement produced by probabilistic modelling of linkage disequilibrium evolution was not significant. Both methods led to similar results concerning the location accuracy of quantitative trait loci which appeared to be mainly improved by using four flanking markers instead of two. Conclusions Therefore, in animal half-sib designs, modelling linkage disequilibrium evolution using a Wright-Fisher model does not significantly improve the accuracy of the

  15. Viewing the hand prior to movement improves accuracy of pointing performed toward the unseen contralateral hand.

    PubMed

    Desmurget, M; Rossetti, Y; Jordan, M; Meckler, C; Prablanc, C

    1997-06-01

    It is now well established that the accuracy of pointing movements to visual targets is worse in the full open loop condition (FOL; the hand is never visible) than in the static closed loop condition (SCL; the hand is only visible in static position prior to movement onset). In order to account for this result, it is generally admitted that viewing the hand in static position (SCL) improves the movement planning process by allowing a better encoding of the initial state of the motor apparatus. Interestingly, this wide-spread interpretation has recently been challenged by several studies suggesting that the effect of viewing the upper limb at rest might be explained in terms of the simultaneous vision of the hand and target. This result is supported by recent studies showing that goal-directed movements involve different types of planning (egocentric versus allocentric) depending on whether the hand and target are seen simultaneously or not before movement onset. The main aim of the present study was to test whether or not the accuracy improvement observed when the hand is visible before movement onset is related, at least partially, to a better encoding of the initial state of the upper limb. To address this question, we studied experimental conditions in which subjects were instructed to point with their right index finger toward their unseen left index finger. In that situation (proprioceptive pointing), the hand and target are never visible simultaneously and an improvement of movement accuracy in SCL, with respect to FOL, may only be explained by a better encoding of the initial state of the moving limb when vision is present. The results of this experiment showed that both the systematic and the variable errors were significantly lower in the SCL than in the FOL condition. 
This suggests: (1) that the effect of viewing the static hand prior to motion does not only depend on the simultaneous vision of the goal and the effector during movement planning; (2) that

  16. Effects of Improvements in Interval Timing on the Mathematics Achievement of Elementary School Students

    ERIC Educational Resources Information Center

    Taub, Gordon E.; McGrew, Kevin S.; Keith, Timothy Z.

    2015-01-01

    This article examines the effect of improvements in timing/rhythmicity on mathematics achievement. A total of 86 participants attending 1st through 4th grades completed pre- and posttest measures of mathematics achievement from the Woodcock-Johnson III Tests of Achievement. Students in the experimental group participated in a 4-week intervention…

  17. On improving the accuracy of the M2 barotropic tides embedded in a high-resolution global ocean circulation model

    NASA Astrophysics Data System (ADS)

    Ngodock, Hans E.; Souopgui, Innocent; Wallcraft, Alan J.; Richman, James G.; Shriver, Jay F.; Arbic, Brian K.

    2016-01-01

    The ocean tidal velocity and elevation can be estimated concurrently with the ocean circulation by adding the astronomical tidal forcing, parameterized topographic internal wave drag, and self-attraction and loading to the general circulation physics. However, the accuracy of these tidal estimates does not yet match accuracies in the best data-assimilative barotropic tidal models. This paper investigates the application of an augmented state ensemble Kalman Filter (ASEnKF) to improve the accuracy of M2 barotropic tides embedded in a 1/12.5° three-dimensional ocean general circulation model. The ASEnKF is an alternative to the techniques typically used with linearized tide-only models; such techniques cannot be applied to the embedded tides in a nonlinear eddying circulation. An extra term, meant to correct for errors in the tide model due to imperfectly known topography and damping terms, is introduced into the tidal forcing. Ensembles of the model are created with stochastically generated forcing correction terms. The discrepancies for each ensemble member with TPXO, an existing data-assimilative tide model, are computed. The ASEnKF method yields an optimal estimate of the model forcing correction terms, minimizing the resultant root mean square (RMS) tidal sea surface elevation error with respect to TPXO, as well as an estimate of the tidal elevation. The deep-water, global area-averaged RMS sea surface elevation error of the principal lunar semidiurnal tide M2 is reduced from 4.4 cm in a best-case non-assimilative solution to 2.6 cm. The largest elevation errors in both the non-assimilative and ASEnKF solutions are in the North Atlantic, a highly resonant basin. Possible pathways for achieving further reductions in the RMS error are discussed.
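The augmented-state idea (append the unknown forcing correction to the state and let ensemble cross-covariances update it from observations) can be sketched with a 1-D toy: a scalar AR(1) model with an unknown constant forcing. All numbers are illustrative; nothing here reproduces the 1/12.5° model:

```python
import numpy as np

rng = np.random.default_rng(2)
a, R, true_b = 0.9, 0.1, 2.0           # dynamics, obs variance, true forcing
N = 200
ens = np.zeros((N, 2))                  # columns: state x, forcing correction b
ens[:, 1] = rng.normal(0.0, 2.0, N)     # broad prior on the unknown forcing

x_true = 0.0
for _ in range(60):
    x_true = a * x_true + true_b + rng.normal(0, 0.05)
    y = x_true + rng.normal(0, np.sqrt(R))
    # forecast: each member propagates with its own forcing estimate
    ens[:, 0] = a * ens[:, 0] + ens[:, 1] + rng.normal(0, 0.05, N)
    # stochastic EnKF analysis on the augmented state [x, b]
    Hx = ens[:, 0]
    A = ens - ens.mean(axis=0)
    cov_xy = (A.T @ (Hx - Hx.mean())) / (N - 1)   # cross-cov with observed part
    K = cov_xy / (Hx.var(ddof=1) + R)             # gain for [x, b]
    y_pert = y + rng.normal(0, np.sqrt(R), N)     # perturbed observations
    ens += np.outer(y_pert - Hx, K)

b_est = ens[:, 1].mean()
```

Because b feeds the observed state every step, the ensemble develops the cross-covariance needed to pull the forcing estimate toward its true value.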

  18. On improving the accuracy of the M2 barotropic tides embedded in a high-resolution global ocean circulation model

    NASA Astrophysics Data System (ADS)

    Ngodock, Hans; Wallcraft, Alan; Souopgui, Innocent; Richman, James; Shriver, Jay; Arbic, Brian

    2016-04-01

    The ocean tidal velocity and elevation can be estimated concurrently with the ocean circulation by adding the astronomical tidal forcing, parameterized topographic internal wave drag, and self-attraction and loading to the general circulation physics. However, the accuracy of these tidal estimates does not yet match accuracies in the best data-assimilative barotropic tidal models. This paper investigates the application of an Augmented State Ensemble Kalman Filter (ASEnKF) to improve the accuracy of M2 barotropic tides embedded in a 1/12.5° three-dimensional ocean general circulation model. The ASEnKF is an alternative to the techniques typically used with linearized tide-only models; such techniques cannot be applied to the embedded tides in a nonlinear eddying circulation. An extra term, meant to correct for errors in the tide model due to imperfectly known topography and damping terms, is introduced into the tidal forcing. Ensembles of the model are created with stochastically generated forcing correction terms. The discrepancies for each ensemble member with TPXO, an existing data-assimilative tide model, are computed. The ASEnKF method yields an optimal estimate of the model forcing correction terms, minimizing the resultant root mean square (RMS) tidal sea surface elevation error with respect to TPXO, as well as an estimate of the tidal elevation. The deep-water, global area-averaged RMS sea surface elevation error of the principal lunar semidiurnal tide M2 is reduced from 4.4 cm in a best-case non-assimilative solution to 2.6 cm. The largest elevation errors in both the non-assimilative and ASEnKF solutions are in the North Atlantic, a highly resonant basin. Possible pathways for achieving further reductions in the RMS error are discussed.

  19. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  20. SU-E-J-101: Improved CT to CBCT Deformable Registration Accuracy by Incorporating Multiple CBCTs

    SciTech Connect

    Godley, A; Stephans, K; Olsen, L Sheplan

    2015-06-15

    Purpose: Combining prior day CBCT contours with STAPLE was previously shown to improve automated prostate contouring. These accurate STAPLE contours are now used to guide the planning CT to pre-treatment CBCT deformable registration. Methods: Six IGRT prostate patients with daily kilovoltage CBCT had their original planning CT and 9 CBCTs contoured by the same physician. These physician contours for the planning CT and each prior CBCT are deformed to match the current CBCT anatomy, producing multiple contour sets. These sets are then combined using STAPLE into one optimal set (e.g. for day 3 CBCT, combine contours produced using the plan plus day 1 and 2 CBCTs). STAPLE computes a probabilistic estimate of the true contour from this collection of contours by maximizing sensitivity and specificity. The deformation field from planning CT to CBCT registration is then refined by matching its deformed contours to the STAPLE contours. ADMIRE (Elekta Inc.) was used for this. The refinement does not force perfect agreement of the contours, typically Dice’s Coefficient (DC) of > 0.9 is obtained, and the image difference metric remains in the optimization of the deformable registration. Results: The average DC between physician delineated CBCT contours and deformed planning CT contours for the bladder, rectum and prostate was 0.80, 0.79 and 0.75, respectively. The accuracy significantly improved to 0.89, 0.84 and 0.84 (P<0.001 for all) when using the refined deformation field. The average time to run STAPLE with five scans and refine the planning CT deformation was 66 seconds on a Tesla K20c GPU. Conclusion: Accurate contours generated from multiple CBCTs provided guidance for CT to CBCT deformable registration, significantly improving registration accuracy as measured by contour DC. A more accurate deformation field is now available for transferring dose or electron density to the CBCT for adaptive planning. Research grant from Elekta.
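The STAPLE step (an EM estimate of the true contour together with each input's sensitivity and specificity) can be sketched for binary masks. This is a toy 1-D version; the clinical workflow used ADMIRE on 3-D CBCT contours:

```python
import numpy as np

def staple(D, iters=30):
    """Binary STAPLE: D is a (raters, voxels) array of 0/1 segmentations.
    EM alternates the voxel-wise posterior of the true label (E-step)
    with per-rater sensitivity p and specificity q (M-step); the global
    prior g is kept fixed at the mean vote."""
    R, N = D.shape
    g = D.mean()
    p = np.full(R, 0.9)                               # sensitivities
    q = np.full(R, 0.9)                               # specificities
    for _ in range(iters):
        # E-step: P(true = 1 | votes) at each voxel
        a = g * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - g) * np.prod(np.where(D == 1, 1 - q[:, None], q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate rater performance against the soft truth
        p = (D * W).sum(axis=1) / W.sum()
        q = ((1 - D) * (1 - W)).sum(axis=1) / (1 - W).sum()
    return W, p, q

# three rasterized "contours": two good raters, one that misses half the object
truth = np.zeros(200); truth[:80] = 1
rng = np.random.default_rng(3)
good1 = np.where(rng.random(200) < 0.95, truth, 1 - truth)
good2 = np.where(rng.random(200) < 0.95, truth, 1 - truth)
bad = truth.copy(); bad[40:80] = 0
W, p, q = staple(np.stack([good1, good2, bad]))
consensus = (W > 0.5).astype(int)
acc = (consensus == truth).mean()
```

The EM loop learns the unreliable rater's low sensitivity and down-weights it, so the consensus tracks the truth.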

  1. Increasing cutaneous afferent feedback improves proprioceptive accuracy at the knee in patients with sensory ataxia.

    PubMed

    Macefield, Vaughan G; Norcliffe-Kaufmann, Lucy; Goulding, Niamh; Palma, Jose-Alberto; Fuente Mora, Cristina; Kaufmann, Horacio

    2016-02-01

    Hereditary sensory and autonomic neuropathy type III (HSAN III) features disturbed proprioception and a marked ataxic gait. We recently showed that joint angle matching error at the knee is positively correlated with the degree of ataxia. Using intraneural microelectrodes, we also documented that these patients lack functional muscle spindle afferents but have preserved large-diameter cutaneous afferents, suggesting that patients with better proprioception may be relying more on proprioceptive cues provided by tactile afferents. We tested the hypothesis that enhancing cutaneous sensory feedback by stretching the skin at the knee joint using unidirectional elasticity tape could improve proprioceptive accuracy in patients with a congenital absence of functional muscle spindles. Passive joint angle matching at the knee was used to assess proprioceptive accuracy in 25 patients with HSAN III and 9 age-matched control subjects, with and without taping. Angles of the reference and indicator knees were recorded with digital inclinometers and the absolute error, gradient, and correlation coefficient between the two sides calculated. Patients with HSAN III performed poorly on the joint angle matching test [mean matching error 8.0 ± 0.8° (±SE); controls 3.0 ± 0.3°]. Following application of tape bilaterally to the knee in an X-shaped pattern, proprioceptive performance improved significantly in the patients (mean error 5.4 ± 0.7°) but not in the controls (3.0 ± 0.2°). Across patients, but not controls, significant increases in gradient and correlation coefficient were also apparent following taping. We conclude that taping improves proprioception at the knee in HSAN III, presumably via enhanced sensory feedback from the skin.

  2. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well established switched-capacitor circuit technique. High speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  3. Improving Cartosat-1 DEM accuracy using synthetic stereo pair and triplet

    NASA Astrophysics Data System (ADS)

    Giribabu, D.; Srinivasa Rao, S.; Krishna Murthy, Y. V. N.

    2013-03-01

    Cartosat-1 is the first Indian Remote Sensing Satellite capable of providing along-track stereo images. Cartosat-1 provides forward stereo images with look angles +26° and -5° with respect to nadir for generating Digital Elevation Models (DEMs), Orthoimages and value added products for various applications. A pitch bias of -21° to the satellite resulted in giving reverse tilt mode stereo pair with look angles of +5° and -26° with respect to nadir. This paper compares DEMs generated using forward, reverse and other possible synthetic stereo pairs for two different types of topographies. Stereo triplet was used to generate DEM for Himalayan mountain topography to overcome the problem of occlusions. For flat to undulating topography it was shown that using Cartosat-1 synthetic stereo pair with look angles of -26° and +26° will produce improved version of DEM. Planimetric and height accuracy (Root Mean Square Error (RMSE)) of less than 2.5 m and 2.95 m respectively were obtained and qualitative analysis shows finer details in comparison with other DEMs. For rugged terrain and steep slopes of Himalayan mountain topography simple stereo pairs may not provide reliable accuracies in DEMs due to occlusions and shadows. Stereo triplet from Cartosat-1 was used to generate DEM for mountainous topography. This DEM shows better reconstruction of elevation model even at occluded region when compared with simple stereo pair based DEM. Planimetric and height accuracy (RMSE) of nearly 3 m were obtained and qualitative analysis shows reduction of outliers at occluded region.

  4. METACOGNITIVE SCAFFOLDS IMPROVE SELF-JUDGMENTS OF ACCURACY IN A MEDICAL INTELLIGENT TUTORING SYSTEM

    PubMed Central

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S.

    2013-01-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were selected from the domain of Nodular and Diffuse Dermatitides. Both groups worked with a version of SlideTutor that provided immediate feedback on their actions for two hours before proceeding to solve cases in either the Considering Alternatives or Playback condition. No immediate feedback was provided on actions performed by participants in the scaffolding mode. Measurements included learning gains (pre-test and post-test), as well as metacognitive performance, including Goodman-Kruskal Gamma correlation, bias, and discrimination. Results showed that participants in both conditions improved significantly in terms of their diagnostic scores from pre-test to post-test. More importantly, participants in the Considering Alternatives condition outperformed those in the Playback condition in the accuracy of their confidence judgments and the discrimination of the correctness of their assertions while solving cases. The results suggested that presenting participants with their diagnostic decision paths and highlighting correct and incorrect paths helps them to become more metacognitively accurate in their confidence judgments. PMID:24532850
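
    The Goodman-Kruskal Gamma named among the metacognitive measures is a rank correlation over concordant and discordant pairs of (confidence, correctness) observations. A minimal sketch, with hypothetical data (higher confidence mostly paired with correct answers):

```python
def goodman_kruskal_gamma(x, y):
    """Gamma = (concordant - discordant) / (concordant + discordant),
    ignoring tied pairs. Here x = confidence ratings, y = correctness."""
    concordant = discordant = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx * dy > 0:
                concordant += 1
            elif dx * dy < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical ratings: 1-5 confidence versus correct (1) / incorrect (0).
confidence = [5, 4, 2, 1, 3, 5]
correct    = [1, 1, 0, 0, 1, 0]
print(round(goodman_kruskal_gamma(confidence, correct), 2))  # -> 0.5
```

    A Gamma near +1 indicates well-calibrated judgments; values near 0 indicate confidence unrelated to accuracy.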

  5. A metrological approach to improve accuracy and reliability of ammonia measurements in ambient air

    NASA Astrophysics Data System (ADS)

    Pogány, Andrea; Balslev-Harder, David; Braban, Christine F.; Cassidy, Nathan; Ebert, Volker; Ferracci, Valerio; Hieta, Tuomas; Leuenberger, Daiana; Martin, Nicholas A.; Pascale, Céline; Peltola, Jari; Persijn, Stefan; Tiebe, Carlo; Twigg, Marsailidh M.; Vaittinen, Olavi; van Wijk, Janneke; Wirtz, Klaus; Niederhauser, Bernhard

    2016-11-01

    The environmental impacts of ammonia (NH3) in ambient air have become more evident in recent decades, leading to intensifying research in this field. A number of novel analytical techniques and monitoring instruments have been developed, and the quality and availability of reference gas mixtures used for the calibration of measuring instruments have also increased significantly. However, recent inter-comparison measurements show significant discrepancies, indicating that the majority of the newly developed devices and reference materials require further thorough validation. There is a clear need for more intensive metrological research focusing on quality assurance, intercomparability and validation. MetNH3 (Metrology for ammonia in ambient air) is a three-year project within the framework of the European Metrology Research Programme (EMRP), which aims to bring metrological traceability to ambient ammonia measurements in the 0.5–500 nmol mol‑1 amount fraction range. This is addressed by working in three areas: (1) improving accuracy and stability of static and dynamic reference gas mixtures, (2) developing an optical transfer standard and (3) establishing the link between high-accuracy metrological standards and field measurements. In this article we describe the concept, aims and first results of the project.
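
    Dynamic reference gas mixtures of the kind named in area (1) are commonly produced by staged flow dilution of a higher-concentration parent mixture. A toy sketch under that assumption (the flows and amount fractions below are invented for illustration, not MetNH3 values):

```python
def diluted_fraction(parent_nmol_mol, flow_parent, flow_diluent):
    """Amount fraction after one ideal dynamic-dilution stage; the two
    flows may be in any common unit (e.g. mL/min)."""
    return parent_nmol_mol * flow_parent / (flow_parent + flow_diluent)

# Hypothetical two-stage dilution of a 10 umol/mol NH3 parent mixture
# (10 000 nmol/mol) down into the ambient range targeted by the project.
stage1 = diluted_fraction(10_000.0, 10.0, 990.0)  # -> 100.0 nmol/mol
stage2 = diluted_fraction(stage1, 50.0, 950.0)    # -> 5.0 nmol/mol
print(stage1, stage2)
```

    In practice, adsorption of NH3 on wetted surfaces makes the real output fraction deviate from this ideal mass-balance value, which is part of why traceable validation is needed.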

  6. Does routine repeat testing of critical laboratory values improve their accuracy?

    PubMed Central

    Baradaran Motie, Pooya; Zare-Mirzaie, Ali; Shayanfar, Nasrin; Kadivar, Maryam

    2015-01-01

    Background: Routine repeat testing of critical laboratory values is common practice, intended to increase their accuracy and to avoid reporting false or infeasible results. We sought to determine whether repeat testing of critical laboratory values provides any benefit. Methods: We examined 2233 repeated critical laboratory values in 13 different hematology and chemistry tests including: hemoglobin, white blood cell, platelet, international normalized ratio, partial thromboplastin time, glucose, potassium, sodium, phosphorus, magnesium, calcium, total bilirubin and direct bilirubin. The absolute difference and the percentage of change between the two tests for each critical value were calculated and then compared with the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error. Results: Repeat testing yielded results that were within the allowable error on 2213 of 2233 specimens (99.1%). There was only one outlier (0.2%) in the white blood cell test category, 9 (2.9%) in the platelet test category, 5 (4%) in the partial thromboplastin time test category, 5 (4.8%) in the international normalized ratio test category and none in other test categories. Conclusion: Routine repeat testing of critical hemoglobin, white blood cell, platelet, international normalized ratio, partial thromboplastin time, glucose, potassium, sodium, phosphorus, magnesium, calcium, total bilirubin and direct bilirubin results does not improve their accuracy. PMID:26034729
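
    The comparison described in the Methods amounts to checking each repeat result against an allowable-error band around the first result. A minimal sketch (the analyte, values, and the percentage limit shown are illustrative only, not the CAP/CLIA limit for any specific test):

```python
def within_allowable_error(first, repeat, allowable_pct):
    """True if the repeat result agrees with the first within an
    allowable error expressed as a percentage of the first value."""
    return abs(repeat - first) <= abs(first) * allowable_pct / 100.0

# Hypothetical critical potassium results (mmol/L) checked against an
# illustrative 10% limit; real limits are analyte-specific.
print(within_allowable_error(6.8, 6.9, 10))  # -> True  (concordant)
print(within_allowable_error(6.8, 8.1, 10))  # -> False (outlier)
```

    Counting the rare False cases over all repeated specimens yields outlier rates like the 0.2%-4.8% figures reported above.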

  7. Does an Adolescent's Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    PubMed

    Kerr, Deborah A; Wright, Janine L; Dhaliwal, Satvinder S; Boushey, Carol J

    2015-05-13

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents' accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years who lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05 quick list; p < 0.01 forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents' accuracy in recalling food items declined with a second 24-h recall when repeated over two non-consecutive days.

  8. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    Computed tomography (CT), with its powerful nondestructive characteristics, is attracting more and more research for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper mainly focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable.
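
    The paper's exact correction model is not reproduced here; the sketch below only illustrates the general idea of an exponential correction whose parameter is chosen by minimizing gray-value entropy plus a contrast-preserving penalty. All functions, values, and the grid search are illustrative assumptions:

```python
import math

def gray_entropy(values, bins=16):
    """Shannon entropy of the gray-value histogram (lower = more peaked)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(values)
    return -sum(c / n * math.log(c / n) for c in counts if c)

def correct(p, c):
    """Exponential correction of one attenuation value; c = 0 is identity."""
    return (math.exp(c * p) - 1.0) / c if c else p

def best_parameter(values, candidates, weight=0.1):
    """Grid search: entropy of the corrected values plus a penalty that
    keeps the overall value range (contrast) close to the original."""
    span = max(values) - min(values)
    def cost(c):
        corrected = [correct(v, c) for v in values]
        penalty = abs((max(corrected) - min(corrected)) - span)
        return gray_entropy(corrected) + weight * penalty
    return min(candidates, key=cost)

# Hypothetical attenuation values for a two-material volume.
values = [0.20, 0.24, 0.27, 0.78, 0.82, 0.85]
print(best_parameter(values, [-1.0, -0.5, 0.0, 0.5, 1.0]))
```

    With a sharply bimodal corrected histogram, a simple global threshold then separates material from background for the dimensional measurement.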

  9. Improving the accuracy of skin elasticity measurement by using Q-parameters in Cutometer.

    PubMed

    Qu, Di; Seehra, G Paul

    2016-01-01

    The skin elasticity parameters (Ue, Uv, Uf, Ur, Ua, and R0 through R9) in the Cutometer are widely used for in vivo measurement of skin elasticity. Their accuracy, however, is impaired by the inadequacy of the definition of a key parameter, the time point of 0.1 s, which separates the elastic and viscoelastic responses of human skin. This study shows why an inflection point (t(IP)) should be calculated from each individual response curve to define skin elasticity, and how the Q-parameters are defined in the Cutometer. By analyzing the strain-versus-time curves of pure elastic standards and of a population of 746 human volunteers, a method of determining the t(IP) from each mode 1 response curve was established. The results showed a wide distribution of this parameter, ranging from 0.11 to 0.19 s, demonstrating that the current single-valued empirical parameter of 0.1 s is not adequate to represent this property of skin. A set of area-based skin viscoelastic parameters was also defined. The biological elasticity thus obtained correlated well with the study volunteers' chronological age, and the correlation was statistically significant. We conclude that the Q-parameters are more accurate than the U and R parameters and should be used to improve measurement accuracy of human skin elasticity. PMID:27319059
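
    The paper's t(IP) determination method is not specified in the abstract; one common way to locate an inflection point on a sampled strain-time curve is to find where the discrete second derivative changes sign. A minimal sketch under that assumption (the sampling interval and strain values are hypothetical):

```python
def inflection_time(times, strain):
    """Time of the first sample where the discrete second derivative of
    strain changes sign (concave elastic rise -> convex creep)."""
    d2 = [strain[i - 1] - 2 * strain[i] + strain[i + 1]
          for i in range(1, len(strain) - 1)]
    for i in range(1, len(d2)):
        if d2[i - 1] * d2[i] < 0:
            return times[i + 1]  # d2[i] is centred on sample i + 1
    return None

# Hypothetical mode-1 suction response sampled every 0.02 s: a fast
# elastic rise flattening into viscoelastic creep.
times = [0.02 * k for k in range(10)]
strain = [0.00, 0.50, 0.80, 0.95, 1.00, 1.02, 1.03, 1.06, 1.14, 1.30]
print(round(inflection_time(times, strain), 2))  # -> 0.12
```

    Real Cutometer traces are noisy, so smoothing before differentiation would be needed in practice.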

  11. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  12. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in the same manner. Generally, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  13. Method for improving terahertz band absorption spectrum measurement accuracy using noncontact sample thickness measurement.

    PubMed

    Li, Zhi; Zhang, Zhaohui; Zhao, Xiaoyan; Su, Haixia; Yan, Fang; Zhang, Han

    2012-07-10

    The terahertz absorption spectrum has a complex nonlinear relationship with sample thickness, which is normally measured mechanically with limited accuracy. As a result, the terahertz absorption spectrum is usually determined incorrectly. In this paper, an iterative algorithm is proposed to accurately determine sample thickness. The algorithm is independent of the initial value used and results in convergent calculations. Precision in sample thickness can be improved to 0.1 μm. A more precise absorption spectrum can then be extracted. By comparing the proposed method with the traditional method based on mechanical thickness measurements, quantitative analysis experiments on a three-component amino acid mixture show that the global error decreased from 0.0338 to 0.0301.

  14. Improving the Accuracy of CT Colonography Interpretation: Computer-Aided Diagnosis

    PubMed Central

    Summers, Ronald M.

    2010-01-01

    Synopsis Computer-aided polyp detection aims to improve the accuracy of CT colonography interpretation. The computer searches the colonic wall for polyp-like protrusions and presents a list of suspicious areas to a physician for further analysis. Computer-aided polyp detection has developed rapidly over the past decade and, in the laboratory setting, has sensitivities comparable to those of experts. It tends to help inexperienced readers more than experienced ones and may also lead to small reductions in specificity. In its currently proposed use as an adjunct to standard image interpretation, computer-aided polyp detection serves as a spellchecker rather than an efficiency enhancer. PMID:20451814

  15. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  16. Improving the Accuracy of the Boundary Integral Method Based on the Helmholtz Integral

    NASA Technical Reports Server (NTRS)

    Koopmann, G. H.; Brod, K.

    1985-01-01

    Several recent papers in the literature have been based on various forms of the Helmholtz integral to compute the radiation fields of vibrating bodies. The surface integral form is given. The symbols P, R̄, ω, ρ, G, R, V, and S̄ denote acoustic pressure, source coordinate, angular frequency, fluid density, Green's function, field coordinate, surface velocity, and body surface, respectively. A discretized form of the surface integral is also given. Solutions to the surface integral are complicated by the singularity of the Green's function at R = R̄ and by the uniqueness problem at interior eigenfrequencies of the enclosed space. The use of the interior integral circumvents the singularity problem, since the field points are chosen in the interior space of the vibrating body where a zero-pressure condition exists. The interior integral form is given. The method to improve the accuracy is detailed. Examples of the method are presented for a variety of radiators.
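
    The "micron" fragments in the symbol list are almost certainly mis-rendered overbars on the source coordinate and surface. A standard form of the surface Helmholtz (Kirchhoff-Helmholtz) integral, reconstructed here for illustration only (sign and normal conventions vary between papers, and the paper's exact form is not recoverable from the abstract):

```latex
P(R) = \int_{S} \left[\, i\omega\rho\, V(\bar{R})\, G(R,\bar{R})
       \;-\; P(\bar{R})\, \frac{\partial G(R,\bar{R})}{\partial \bar{n}} \,\right]
       \mathrm{d}\bar{S}
```

    Discretizing this over surface patches yields the linear system the paper solves; placing field points R inside the body, where P(R) = 0, gives the interior-integral variant described above.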

  17. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    NASA Astrophysics Data System (ADS)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions has increased the capability to issue flood warnings. However, the space scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models have difficulties with displacement errors. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for real-time updating and accuracy improvement of flood forecasts that considers the separation of orographic rainfall and the correction of misplaced rain distributions using additional ensemble information obtained through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. The non-orographic rainfall fields are then examined by the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposed non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. The additional ensemble information is then applied to a hydrologic model for post-processing flood forecasting at a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean value of ensemble flood forecasts. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located on the Kii peninsula, Japan.
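
    The core transposition-and-recombination step can be pictured as shifting the non-orographic rain field on its grid and adding the fixed orographic component back. A toy sketch of that idea (grid size, shift, and rain values are invented; the paper's actual scheme selects shifts from displacement statistics):

```python
def transpose_field(rain, di, dj, fill=0):
    """Shift a 2D rain field by (di, dj) grid cells, padding with `fill`."""
    n, m = len(rain), len(rain[0])
    out = [[fill] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            si, sj = i - di, j - dj
            if 0 <= si < n and 0 <= sj < m:
                out[i][j] = rain[si][sj]
    return out

def recombine(non_orographic, orographic):
    """Add the terrain-locked orographic component back onto a shifted field."""
    return [[a + b for a, b in zip(ra, rb)]
            for ra, rb in zip(non_orographic, orographic)]

# Hypothetical 3x3 fields (mm/h): shift the non-orographic band one cell east.
non_oro = [[0, 4, 0], [0, 8, 0], [0, 2, 0]]
oro     = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
member = recombine(transpose_field(non_oro, 0, 1), oro)
print(member)  # -> [[1, 1, 5], [1, 1, 9], [1, 1, 3]]
```

    Each shift produces one additional ensemble member, which is then routed through the hydrologic model.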

  18. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    NASA Astrophysics Data System (ADS)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18 % were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. 
This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous experimentally and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a

  19. Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.

    PubMed

    Pearce, John A

    2015-12-01

    The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. Additionally, evidence is presented
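
    The proposed modification keeps the constant-rate Arrhenius kinetics but starts the kill term only after a temperature-dependent delay, which reproduces the shoulder. A minimal constant-temperature sketch (the kinetic coefficients and delay below are invented for illustration, not fitted values from the paper):

```python
import math

R_GAS = 8.314  # J / (mol K)

def surviving_fraction(t, T, A, Ea, delay=0.0):
    """Arrhenius surviving fraction at constant temperature T (kelvin),
    with cell killing starting only after a shoulder delay (seconds)."""
    k = A * math.exp(-Ea / (R_GAS * T))  # first-order rate constant, 1/s
    effective = max(0.0, t - delay)
    return math.exp(-k * effective)

# Hypothetical coefficients for illustration only:
A, Ea = 1.0e40, 2.6e5   # frequency factor (1/s), activation energy (J/mol)
T = 318.15              # 45 degC
delay = 600.0           # assumed 10-min shoulder at this temperature
plain   = surviving_fraction(1200.0, T, A, Ea)
delayed = surviving_fraction(1200.0, T, A, Ea, delay)
print(delayed > plain)  # -> True: the delay removes the early overestimated kill
```

    The classical model (delay = 0) always predicts more death at a given time, which is exactly the overestimation the paper corrects.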

  1. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage.

    PubMed

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications, e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy, and some novel design solutions, e.g., a three-layer and two-stage architecture, are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is demonstrated by a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axis, respectively. PMID:26761014
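
    An error budget of this kind combines the enumerated contributions, commonly in root-sum-square fashion when the sources are independent. A toy sketch (the listed sources and magnitudes are hypothetical, not the NanoPla values):

```python
import math

def error_budget(contributions):
    """Root-sum-square combination of independent error sources (nm)."""
    return math.sqrt(sum(e * e for e in contributions))

# Hypothetical per-axis contributions (nm): thermal drift, mirror
# flatness, interferometer electronics, Abbe offset.
x_sources = [25.0, 20.0, 10.0, 22.0]
print(round(error_budget(x_sources), 1))  # -> 40.1
```

    Because the combination is quadratic, the largest contributors (here the thermal term) dominate the budget, which is why the ±0.1 °C temperature control is decisive.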

  3. Evaluating an educational intervention to improve the accuracy of death certification among trainees from various specialties

    PubMed Central

    Villar, Jesús; Pérez-Méndez, Lina

    2007-01-01

    Background The inaccuracy of death certification can lead to the misallocation of resources in health care programs and research. We evaluated the rate of errors in the completion of death certificates among medical residents from various specialties, before and after an educational intervention designed to improve the accuracy of certification of the cause of death. Methods A 90-min seminar was delivered to seven mixed groups of medical trainees (n = 166) from several health care institutions in Spain. Physicians were asked to read and anonymously complete the same case scenario of death certification before and after the seminar. We compared the rates of errors and the impact of the educational intervention before and after the seminar. Results A total of 332 death certificates (166 completed before and 166 completed after the intervention) were audited. Death certificates were completed with errors by 71.1% of the physicians before the educational intervention. Following the seminar, the proportion of death certificates with errors decreased to 9% (p < 0.0001). The most common error in the completion of death certificates was the listing of the mechanism of death instead of the cause of death. Before the seminar, 56.8% listed respiratory or cardiac arrest as the immediate cause of death. None of the participants listed any mechanism of death after the educational intervention (p < 0.0001). Conclusion Major errors in the completion of the correct cause of death on death certificates are common among medical residents. A simple educational intervention can dramatically improve the accuracy of death certificates completed by physicians. PMID:18005414

  4. Accuracy of Suture Passage During Arthroscopic Remplissage—What Anatomic Landmarks Can Improve It?

    PubMed Central

    Garcia, Grant H.; Degen, Ryan M.; Liu, Joseph N.; Kahlenberg, Cynthia A.; Dines, Joshua S.

    2016-01-01

    Background: Recent data suggest that inaccurate suture passage during remplissage may contribute to a loss of external rotation, with the potential to cause posterior shoulder pain because of the proximity to the musculotendinous junction. Purpose: To evaluate the accuracy of suture passage during remplissage and identify surface landmarks to improve accuracy. Study Design: Descriptive laboratory study. Methods: Arthroscopic remplissage was performed on 6 cadaveric shoulder specimens. Two single-loaded suture anchors were used for each remplissage. After suture passage, position was recorded in reference to the posterolateral acromion (PLA), with entry perpendicular to the humeral surface. After these measurements, the location of posterior cuff penetration was identified by careful surgical dissection. Results: Twenty-four sutures were passed in 6 specimens: 6 sutures (25.0%) were correctly passed through the infraspinatus tendon, 12 (50%) were through the infraspinatus muscle or musculotendinous junction (MTJ), and 6 (25%) were through the teres minor. Sutures passing through the infraspinatus were on average 25 ± 5.4 mm inferior to the PLA, while sutures passing through the teres minor were on average 35.8 ± 5.7 mm inferior to the PLA. There was an odds ratio of 25 (95% CI, 2.1-298.3; P < .001) that the suture would be through the infraspinatus if the pass was less than 3 cm inferior to the PLA. Sutures passing through muscle and the MTJ were significantly more medial than those passing through tendon, measuring on average 8.1 ± 5.1 mm lateral to the PLA compared with 14.5 ± 5.5 mm (P < .02). If a suture pass was greater than 1 cm lateral to the PLA, it was significantly more likely to be in tendon (P = .013). Conclusion: We found remplissage suture passage was inaccurate, with only 25% of sutures penetrating the infraspinatus tendon. Passing sutures 1 cm lateral and within 3 cm inferior of the PLA improves the odds of successful infraspinatus tenodesis.
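
    An odds ratio with confidence interval of the kind reported in the Results can be computed from a 2x2 table with the standard Wald method on the log-odds scale. The counts below are a hypothetical table chosen for illustration (they happen to reproduce the reported OR of 25, 95% CI 2.1-298.3, but the study's actual cell counts are not given in the abstract):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald
    confidence interval computed on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: rows = pass within 3 cm of the PLA (yes/no),
# columns = suture in infraspinatus (yes/no).
or_, lo, hi = odds_ratio_ci(15, 3, 1, 5)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # -> 25.0 2.1 298.3
```

    The very wide interval reflects the small cell counts typical of cadaveric studies.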

  5. Accuracy of Suture Passage During Arthroscopic Remplissage—What Anatomic Landmarks Can Improve It?

    PubMed Central

    Garcia, Grant H.; Degen, Ryan M.; Liu, Joseph N.; Kahlenberg, Cynthia A.; Dines, Joshua S.

    2016-01-01

    Background: Recent data suggest that inaccurate suture passage during remplissage may contribute to a loss of external rotation, with the potential to cause posterior shoulder pain because of the proximity to the musculotendinous junction. Purpose: To evaluate the accuracy of suture passage during remplissage and identify surface landmarks to improve accuracy. Study Design: Descriptive laboratory study. Methods: Arthroscopic remplissage was performed on 6 cadaveric shoulder specimens. Two single-loaded suture anchors were used for each remplissage. After suture passage, position was recorded in reference to the posterolateral acromion (PLA), with entry perpendicular to the humeral surface. After these measurements, the location of posterior cuff penetration was identified by careful surgical dissection. Results: Twenty-four sutures were passed in 6 specimens: 6 sutures (25.0%) were correctly passed through the infraspinatus tendon, 12 (50%) were through the infraspinatus muscle or musculotendinous junction (MTJ), and 6 (25%) were through the teres minor. Suture passes through the infraspinatus were on average 25 ± 5.4 mm inferior to the PLA, while sutures passing through the teres minor were on average 35.8 ± 5.7 mm inferior to the PLA. There was an odds ratio of 25 (95% CI, 2.1-298.3; P < .001) that the suture would be through the infraspinatus if the passes were less than 3 cm inferior to the PLA. Sutures passing through muscle and the MTJ were significantly more medial than those passing through tendon, measuring on average 8.1 ± 5.1 mm lateral to the PLA compared with 14.5 ± 5.5 mm (P < .02). If suture passes were greater than 1 cm lateral to the PLA, they were significantly more likely to be in tendon (P = .013). Conclusion: We found that remplissage suture passage was inaccurate, with only 25% of sutures penetrating the infraspinatus tendon. Passing sutures 1 cm lateral to and within 3 cm inferior to the PLA improves the odds of successful infraspinatus tenodesis.
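
The odds ratio reported above can be reproduced from a 2×2 table of suture passes with the standard Wald method. A minimal sketch; the counts below are hypothetical (the abstract does not report the underlying table) and are chosen only so that the point estimate matches the reported OR of 25:

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% Wald CI for a 2x2 table.
    a = passes < 3 cm inferior to the PLA that hit tendon
    b = passes < 3 cm that missed tendon
    c = passes >= 3 cm that hit tendon
    d = passes >= 3 cm that missed tendon
    (Labels are illustrative; the study's raw counts are not published
    in the abstract.)"""
    or_ = (a * d) / (b * c)
    log_se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * log_se)
    hi = math.exp(math.log(or_) + z * log_se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_wald_ci(5, 1, 1, 5)  # hypothetical counts
print(f"OR = {or_:.1f}, 95% CI ({lo:.2f}, {hi:.1f})")
```

With so few sutures the Wald interval is very wide, which is consistent with the broad CI quoted in the abstract.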

  6. Improvement in Interobserver Accuracy in Delineation of the Lumpectomy Cavity Using Fiducial Markers

    SciTech Connect

    Shaikh, Talha; Chen Ting; Khan, Atif; Yue, Ning J.

    2010-11-15

    Purpose: To determine whether the presence of gold fiducial markers would improve the inter- and intraphysician accuracy in the delineation of the surgical cavity compared with a matched group of patients who did not receive gold fiducial markers in the setting of accelerated partial-breast irradiation (APBI). Methods and Materials: Planning CT images of 22 lumpectomy cavities were reviewed in a cohort of 22 patients; 11 patients received four to six gold fiducial markers placed at the time of surgery. Three physicians categorized the seroma cavity according to cavity visualization score criteria and delineated each of the 22 seroma cavities and the clinical target volume. Distance between centers of mass, percentage overlap, and average surface distance for all patients were assessed. Results: The mean seroma volume was 36.9 cm³ for fiducial patients and 34.2 cm³ for non-fiducial patients (p = ns). Fiducial markers improved the mean cavity visualization score from 2.5 ± 1.3 to 3.6 ± 1.0 (p < 0.05). The mean distance between centers of mass, average surface distance, and percentage overlap for the seroma and clinical target volume were significantly improved in the fiducial marker patients as compared with the non-fiducial marker patients (p < 0.001). Conclusions: The placement of gold fiducial markers at the time of lumpectomy improves interphysician identification and delineation of the seroma cavity and clinical target volume. This has implications in radiotherapy treatment planning for accelerated partial-breast irradiation and for boost after whole-breast irradiation.
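
The three agreement metrics used in this study can be computed directly from voxelized contours. A minimal sketch in pure Python, representing each contour as a set of integer voxel coordinates (the study's actual grids and planning software are not specified in the abstract):

```python
import math

def center_of_mass(voxels):
    """Unweighted centroid of a set of (x, y, z) voxel coordinates."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))

def com_distance(vox_a, vox_b):
    """Distance between the centers of mass of two contours."""
    return math.dist(center_of_mass(vox_a), center_of_mass(vox_b))

def percent_overlap(vox_a, vox_b):
    """Overlap of two contours as 2|A∩B| / (|A| + |B|) (Dice), in percent."""
    inter = len(vox_a & vox_b)
    return 200.0 * inter / (len(vox_a) + len(vox_b))

# Two toy 'cavities': identical 4x4x4 blocks shifted by one voxel along x.
a = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
b = {(x + 1, y, z) for x in range(4) for y in range(4) for z in range(4)}
print(com_distance(a, b))     # 1.0
print(percent_overlap(a, b))  # 75.0
```

Average surface distance would additionally require extracting each contour's boundary voxels, which is omitted here for brevity.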

  7. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument-response-function (IRF) - which cause temporal broadening of the measured temporal-point-spread-function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved to recover the distribution of time-of-flights (DTOFs) of the detected photons. The microscopic Beer-Lambert law was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduced its sensitivity to absorption changes occurring in deep layers. We also demonstrated that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
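
The microscopic Beer-Lambert law step can be sketched as follows: for a time window centred at time-of-flight t, the photon pathlength is approximately v·t with v = c/n, and the absorption change is the log-ratio of baseline to perturbed counts divided by that pathlength. A simplified illustration (function and variable names are mine, not the paper's):

```python
import math

C_MM_PER_PS = 0.299792458  # speed of light in vacuum, mm/ps

def mbll_delta_mua(n0, n1, t_ps, refractive_index=1.4):
    """Pathlength-resolved absorption change (per mm) for one time window
    via the microscopic Beer-Lambert law:
        delta_mua = ln(N0 / N1) / (v * t)
    n0, n1: baseline and perturbed photon counts in the window
    t_ps:   window time-of-flight in picoseconds."""
    v = C_MM_PER_PS / refractive_index  # speed of light in tissue, mm/ps
    pathlength = v * t_ps               # mm travelled by photons in this window
    return math.log(n0 / n1) / pathlength

# A late window (long time-of-flight, hence long pathlength and, on average,
# deeper sampling): a 10% count drop maps to a small delta_mua per mm.
print(mbll_delta_mua(n0=1000.0, n1=900.0, t_ps=2000.0))
```

Applying this to the deconvolved DTOF rather than the raw TPSF is exactly where the paper reports the gain in depth sensitivity, since the raw late windows are contaminated by IRF-smeared early photons.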

  8. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a first-principles method that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  9. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    PubMed Central

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as they might, given that the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  10. Improving the assessment of ICESat water altimetry accuracy accounting for autocorrelation

    NASA Astrophysics Data System (ADS)

    Abdallah, Hani; Bailly, Jean-Stéphane; Baghdadi, Nicolas; Lemarquand, Nicolas

    2011-11-01

    Given that water resources are scarce and are strained by competing demands, it has become crucial to develop and improve techniques to observe the temporal and spatial variations in the inland water volume. Due to the lack of data and the heterogeneity of water level stations, remote sensing, and especially altimetry from space, appear as complementary techniques for water level monitoring. In addition to spatial resolution and sampling rates in space or time, one of the most relevant criteria for satellite altimetry on inland water is the accuracy of the elevation data. Here, the accuracy of the ICESat LIDAR altimetry product is assessed over the Great Lakes in North America. The accuracy assessment method used in this paper emphasizes autocorrelation in high temporal frequency ICESat measurements. It also considers uncertainties in the in situ lake level reference data. A probabilistic upscaling process was developed. This process is based on several successive ICESat shots averaged in a spatial transect, accounting for autocorrelation between successive shots. The method also applies pre-processing of the ICESat data, with saturation correction of ICESat waveforms, spatial filtering to avoid measurement disturbance from the land-water transition effects on waveform saturation, and data selection to avoid trends in water elevations across space. Initially this paper analyzes 237 collected ICESat transects, consistent with the available hydrometric ground stations for four of the Great Lakes. By adapting a geostatistical framework, a high frequency autocorrelation between successive shot elevation values was observed and then modeled for 45% of the 237 transects. The modeled autocorrelation was then used to estimate water elevations at the transect scale, and the resulting uncertainty, for the 117 transects without trend. This uncertainty was 8 times greater than the uncertainty usually computed when no temporal correlation is taken into account.
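
The inflation of the transect-mean uncertainty follows from the variance of the mean of n correlated observations. A sketch assuming a simple AR(1)-type correlation ρ(k) = ρ^k between successive shots (the paper fits an empirical geostatistical model instead):

```python
def mean_std_inflation(n, rho):
    """Factor by which the standard error of the mean of n equally spaced
    measurements grows when shot k and shot k+j correlate as rho**j,
    relative to the independent-shots value sigma / sqrt(n):
        Var(mean) = (sigma**2 / n) * [1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho**k]
    Returns the square root of the bracketed inflation term."""
    s = sum((1 - k / n) * rho ** k for k in range(1, n))
    return (1 + 2 * s) ** 0.5

print(mean_std_inflation(100, 0.0))   # 1.0: independent shots, no inflation
print(mean_std_inflation(100, 0.95))  # strongly inflated standard error
```

With strong shot-to-shot correlation the effective number of independent shots collapses, which is why ignoring autocorrelation makes the transect-level uncertainty look several times smaller than it really is.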

  11. Study on Improvement of Accuracy in Inertial Photogrammetry by Combining Images with Inertial Measurement Unit

    NASA Astrophysics Data System (ADS)

    Kawasaki, Hideaki; Anzai, Shojiro; Koizumi, Toshio

    2016-06-01

    Inertial photogrammetry is defined as photogrammetry that involves using a camera on which an inertial measurement unit (IMU) is mounted. In inertial photogrammetry, the position and inclination of a shooting camera are calculated using the IMU. An IMU is characterized by error growth over time because acceleration is integrated with respect to time. This study examines a procedure for accurately estimating the position of the camera while shooting, using the IMU and structure from motion (SfM) technology, which is applied in many fields such as computer vision. When neither the coordinates of the camera position nor those of the feature points are known, SfM recovers the positional relationship between the camera and the feature points only up to a similarity transform; therefore, the actual lengths of the positional coordinates are not determined. If the actual length of the camera position is unknown, the camera acceleration can still be obtained, in the arbitrary SfM units, by calculating the second-order derivative of the camera position with respect to the shooting time. The authors had previously determined the actual length by assigning the IMU-derived position to the SfM-calculated position; hence, accuracy decreased because of the error growth characteristic of the IMU. In order to solve this problem, a new calculation method was proposed. Using this method, the difference between the IMU-calculated acceleration and the camera-calculated acceleration can be minimized using the method of least squares, and the magnification required for calculating the actual dimensions from the camera position can be obtained. The actual length can be calculated by multiplying all the SfM point groups by the obtained magnification factor. This calculation method suppresses the error growth due to time accumulation in the IMU and improves the accuracy of inertial photogrammetry.
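
The least-squares step can be sketched as: differentiate the SfM camera track twice to obtain accelerations in the arbitrary SfM units, fit the single magnification s that best maps them onto the IMU accelerations, and rescale the whole reconstruction by s. A minimal 1-D sketch (the paper works in 3-D; all names are mine):

```python
def second_derivative(p, dt):
    """Central second difference of a uniformly sampled trajectory."""
    return [(p[i - 1] - 2 * p[i] + p[i + 1]) / dt ** 2
            for i in range(1, len(p) - 1)]

def fit_scale(a_sfm, a_imu):
    """Least-squares magnification s minimizing sum((s*a_sfm - a_imu)**2):
    s = <a_sfm, a_imu> / <a_sfm, a_sfm>."""
    num = sum(x * y for x, y in zip(a_sfm, a_imu))
    den = sum(x * x for x in a_sfm)
    return num / den

dt = 0.1
true_pos = [0.5 * 2.0 * (k * dt) ** 2 for k in range(20)]  # constant 2 m/s^2
sfm_pos = [x / 4.0 for x in true_pos]       # SfM track with unknown scale 1/4
a_imu = second_derivative(true_pos, dt)     # what an ideal IMU would report
a_sfm = second_derivative(sfm_pos, dt)      # acceleration in SfM units
s = fit_scale(a_sfm, a_imu)
print(s)                                    # 4.0: the recovered magnification
metric_track = [x * s for x in sfm_pos]     # SfM positions in metric units
```

Because the scale is fit over the whole track at once, rather than integrated from the IMU, the IMU's accumulated drift does not enter the recovered magnification.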

  12. Improving the accuracy of estimates of animal path and travel distance using GPS drift-corrected dead reckoning.

    PubMed

    Dewhirst, Oliver P; Evans, Hannah K; Roskilly, Kyle; Harvey, Richard J; Hubel, Tatjana Y; Wilson, Alan M

    2016-09-01

    Route taken and distance travelled are important parameters for studies of animal locomotion. They are often measured using a collar equipped with GPS. Collar weight restrictions limit battery size, which leads to a compromise between collar operating life and GPS fix rate. In studies that rely on linear interpolation between intermittent GPS fixes, path tortuosity will often lead to inaccurate path and distance travelled estimates. Here, we investigate whether GPS-corrected dead reckoning can improve the accuracy of localization and distance travelled estimates while maximizing collar operating life. Custom-built tracking collars were deployed on nine freely exercising domestic dogs to collect high fix rate GPS data. Simulations were carried out to measure the extent to which combining accelerometer-based speed and magnetometer heading estimates (dead reckoning) with low fix rate GPS drift correction could improve the accuracy of path and distance travelled estimates. In our study, median 2-dimensional root-mean-squared (2D-RMS) position error was between 158 and 463 m (median path length 16.43 km) and distance travelled was underestimated by between 30% and 64% when a GPS position fix was taken every 5 min. Dead reckoning with GPS drift correction (1 GPS fix every 5 min) reduced 2D-RMS position error to between 15 and 38 m and distance travelled to between an underestimation of 2% and an overestimation of 5%. Achieving this accuracy from GPS alone would require approximately 12 fixes every minute and result in a battery life of approximately 11 days; dead reckoning reduces the number of fixes required, enabling a collar life of approximately 10 months. Our results are generally applicable to GPS-based tracking studies of quadrupedal animals and could be applied to studies of energetics, behavioral ecology, and locomotion. This low-cost approach overcomes the limitation of low fix rate GPS and enables the long-term deployment of lightweight GPS collars.
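
The drift-correction idea can be sketched as: integrate speed and heading into a dead-reckoned track, then distribute the position error observed at each GPS fix linearly along the track between consecutive fixes. A simplified 2-D sketch (the collars' actual processing is more involved; all names are mine):

```python
import math

def dead_reckon(speeds, headings, dt, start=(0.0, 0.0)):
    """Integrate speed (m/s) and heading (radians, clockwise from north)
    into an (x=east, y=north) track of len(speeds)+1 points."""
    x, y = start
    track = [start]
    for v, h in zip(speeds, headings):
        x += v * dt * math.sin(h)
        y += v * dt * math.cos(h)
        track.append((x, y))
    return track

def drift_correct(track, fixes, fix_idx):
    """Linearly blend the GPS-observed drift between consecutive fixes."""
    corrected = list(track)
    for (i0, g0), (i1, g1) in zip(zip(fix_idx, fixes),
                                  zip(fix_idx[1:], fixes[1:])):
        e0 = (g0[0] - track[i0][0], g0[1] - track[i0][1])
        e1 = (g1[0] - track[i1][0], g1[1] - track[i1][1])
        for k in range(i0, i1 + 1):
            w = (k - i0) / (i1 - i0)
            corrected[k] = (track[k][0] + (1 - w) * e0[0] + w * e1[0],
                            track[k][1] + (1 - w) * e0[1] + w * e1[1])
    return corrected

# Constant speed east, but a biased heading drags the DR track off course;
# GPS fixes at the start and end pull it back onto the true path.
speeds = [1.0] * 10
headings = [math.pi / 2 + 0.1] * 10     # due east plus a 0.1 rad bias
track = dead_reckon(speeds, headings, dt=1.0)
fixes = [(0.0, 0.0), (10.0, 0.0)]       # true positions at the two fixes
corrected = drift_correct(track, fixes, fix_idx=[0, 10])
print(corrected[10])                    # lands on the second GPS fix
```

Because the high-rate shape of the path comes from the sensors and only the slow drift comes from GPS, the fix rate (and hence battery drain) can be kept very low.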

  14. Strategies for Achieving High Sequencing Accuracy for Low Diversity Samples and Avoiding Sample Bleeding Using Illumina Platform

    PubMed Central

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy. PMID:25860802

  15. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    NASA Astrophysics Data System (ADS)

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small field applications, however, is limited due to the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for the purpose of dosimetry in small megavoltage photon beams with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (μCT) was investigated using four nominally identical small-volume (0.56 cm³) cylindrical ion chambers. Agreement between their measured relative volume and ionization measurements (within 2%) demonstrated the feasibility of volume determination through μCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise for accurate ion chamber calibration based partially on μCT. The accuracy of relative dose factor measurements in 6 MV stereotactic x-ray fields (5 to 40 mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20 mm). Chamber and field size specific correction factors (CSFQ) that account for perturbation of the secondary electron fluence were calculated using Monte Carlo simulation methods (BEAM/EGSnrc simulations). These correction factors (e.g., CSFQ = 1.76 for a 2 mm chamber in a 5 mm field) allow for accurate relative dose factor (RDF) measurement when applied to ionization readings, under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate.
Characterization was based on an adaptation of the
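
One plausible way such chamber- and field-size-specific factors enter a relative dose factor measurement is as multiplicative corrections to the raw ionization readings; the abstract does not give the exact convention, so the arrangement below is an assumption (it mirrors common small-field correction formalisms):

```python
def relative_dose_factor(m_field, c_field, m_ref, c_ref=1.0):
    """Relative dose factor of a small field against a reference field,
    with chamber- and field-size-specific correction factors applied to
    the raw ionization readings:
        RDF = (M_field * C_field) / (M_ref * C_ref)
    The formula arrangement is assumed, not taken from the dissertation."""
    return (m_field * c_field) / (m_ref * c_ref)

# Hypothetical readings: a 2 mm chamber in a 5 mm field under-responds
# because of volume averaging and electronic disequilibrium; the abstract's
# example factor C = 1.76 restores the dose ratio.
print(relative_dose_factor(m_field=0.25, c_field=1.76, m_ref=1.0))  # 0.44
```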

  16. Improving Reading Achievement Through Increased Motivation, Specific Skill Enhancement, and Practice Time for Elementary Students

    ERIC Educational Resources Information Center

    Ecklund, Britt K.; Lamon, Kathryn M.

    2008-01-01

    The action research project report began when the teacher researchers determined that students at Sites A and B struggled with reading achievement. The purpose of the project was to improve students' reading achievement through increased motivation, specific skill instruction, and additional practice time. The project involved 26 students: 17…

  17. Effective Strategies Urban Superintendents Utilize That Improve the Academic Achievement for African American Males

    ERIC Educational Resources Information Center

    Prioleau, Lushandra

    2013-01-01

    This study examined the effective strategies, resources, and programs urban superintendents utilize to improve the academic achievement for African-American males. This study employed a mixed-methods approach to answer the following research questions regarding urban superintendents and the academic achievement for African-American males: What…

  18. Improving Academic Achievement of At-Risk Students in English Education and Keyboarding I.

    ERIC Educational Resources Information Center

    Klinger, Barbara; Nelson, Denise

    This report describes a program for improving the on-task behavior of at-risk students to increase their academic achievement. The targeted population consisted of high-school students in a growing middle-class community located in a rural area of a midwestern state. The problems of academic under-achievement were documented through data including…

  19. Improving Education Achievement and Attainment in Luxembourg. OECD Economics Department Working Papers, No. 508

    ERIC Educational Resources Information Center

    Carey, David; Ernst, Ekkehard

    2006-01-01

    Improving education achievement in Luxembourg is a priority for strengthening productivity growth and enhancing residents' employment prospects in the private sector, where employers mainly hire cross-border workers. Student achievement in Luxembourg is below the OECD average according to the 2003 OECD PISA study, with the performance gap between…

  20. Can Practice Calibrating by Test Topic Improve Public School Students' Calibration Accuracy and Performance on Tests?

    ERIC Educational Resources Information Center

    Riggs, Rose M.

    2012-01-01

    The effect of a calibration strategy requiring students to predict their scores for each topic on a high stakes test was investigated. The utility of self-efficacy towards predicting achievement and calibration accuracy was also explored. One hundred and ten sixth grade math students enrolled in an urban middle school participated. Students were…

  1. Breaking through barriers: using technology to address executive function weaknesses and improve student achievement.

    PubMed

    Schwartz, David M

    2014-01-01

    Assistive technologies provide significant capabilities for improving student achievement. Improved accessibility, cost, and diversity of applications make integration of technology a powerful tool to compensate for executive function weaknesses and deficits and their impact on student performance, learning, and achievement. These tools can be used to compensate for decreased working memory, poor time management, poor planning and organization, poor initiation, and decreased memory. Assistive technology provides mechanisms to assist students with diverse strengths and weaknesses in mastering core curricular concepts. PMID:25010083

  2. Efforts to improve the diagnostic accuracy of endoscopic ultrasound-guided fine-needle aspiration for pancreatic tumors

    PubMed Central

    Yamabe, Akane; Irisawa, Atsushi; Bhutani, Manoop S.; Shibukawa, Goro; Fujisawa, Mariko; Sato, Ai; Yoshida, Yoshitsugu; Arakawa, Noriyuki; Ikeda, Tsunehiko; Igarashi, Ryo; Maki, Takumi; Yamamoto, Shogo

    2016-01-01

    Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) is widely used to obtain a definitive diagnosis of pancreatic tumors. Good results have been reported for its diagnostic accuracy, with high sensitivity and specificity of around 90%; however, technological developments and adaptations to improve it still further are currently underway. The endosonographic technique can be improved by applying several tips and tricks that help overcome the challenges of EUS-FNA. This review presents various techniques and equipment for improving the diagnostic accuracy of EUS-FNA. PMID:27503153

  3. Teachers' Perception of Their Principal's Leadership Style and the Effects on Student Achievement in Improving and Non-Improving Schools

    ERIC Educational Resources Information Center

    Hardman, Brenda Kay

    2011-01-01

    Teachers' perceptions of their school leaders influence student achievement in their schools. The extent of this influence is examined in this study. This quantitative study examined teachers' perceptions of the leadership style of their principals as transformational, transactional or passive-avoidant in improving and non-improving schools in…

  4. Improving the Academic Achievement of Third and Fourth Grade Underachievers as a Result of Improved Self-Esteem.

    ERIC Educational Resources Information Center

    Coakley, Barbara Fairfax

    This study was designed to improve the academic achievement of 35 third- and fourth-grade underachievers through improved self-esteem. Specific goals included focusing on self-concept and learning skills reinforcement, with the ultimate goal of increasing academic performance and motivation. Large group sessions with students focused on…

  5. Phase-Accuracy Comparisons and Improved Far-Field Estimates for 3-D Edge Elements on Tetrahedral Meshes

    NASA Astrophysics Data System (ADS)

    Monk, Peter; Parrott, Kevin

    2001-07-01

    Edge-element methods have proved very effective for 3-D electromagnetic computations and are widely used on unstructured meshes. However, the accuracy of standard edge elements can be criticised because of their low order. This paper analyses discrete dispersion relations together with numerical propagation accuracy to determine the effect of tetrahedral shape on the phase accuracy of standard 3-D edge-element approximations in comparison to other methods. Scattering computations for the sphere obtained with edge elements are compared with results obtained with vertex elements, and a new formulation of the far-field integral approximations for use with edge elements is shown to give improved cross sections over conventional formulations.

  6. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes

    PubMed Central

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-01-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. PMID:26178332

  8. Improvement of olfactometric measurement accuracy and repeatability by optimization of panel selection procedures.

    PubMed

    Capelli, L; Sironi, S; Del Rosso, R; Céntola, P; Bonati, S

    2010-01-01

    The EN 13725:2003 standard, which standardizes the determination of odour concentration by dynamic olfactometry, fixes the limits for panel selection in terms of individual threshold towards a reference gas (n-butanol in nitrogen) and of the standard deviation of the responses. Nonetheless, laboratories have some degrees of freedom in developing their own procedures for panel selection and evaluation. Most Italian olfactometric laboratories use a similar procedure for panel selection, based on the repeated analysis of samples of n-butanol at a concentration of 60 ppm. The first part of this study demonstrates that this procedure may give rise to a sort of "smartening" of the assessors, meaning that they become able to guess the right answers in order to maintain their qualification as panel members, independently of their real olfactory perception. For this reason, the panel selection procedure was revised with the aim of making it less repetitive, thereby preventing panel members from guessing the best answers in order to comply with the selection criteria. The selection of new panel members and the screening of the active ones according to this revised procedure showed it to be more selective than the "standard" one. Finally, the results of the tests with n-butanol conducted after the introduction of the revised procedure for panel selection and regular verification showed an effective improvement of the laboratory measurement performances in terms of accuracy and precision.
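
The screening criteria described above can be checked mechanically: average the candidate's repeated n-butanol threshold estimates in log10 space, then require the geometric mean to fall inside a fixed band around the reference threshold and the spread of the log responses to stay small. A sketch; the 20-80 ppb band and the 2.3 antilog-of-standard-deviation limit are my reading of EN 13725:2003, so consult the standard for the normative values:

```python
import math

def qualifies(thresholds_ppb, band=(20.0, 80.0), max_antilog_sd=2.3):
    """Screen a panel candidate from repeated n-butanol threshold
    estimates (ppb). The geometric mean must fall inside `band` and the
    antilog of the standard deviation of the log10 responses must stay
    below `max_antilog_sd` (limits assumed from EN 13725:2003)."""
    logs = [math.log10(t) for t in thresholds_ppb]
    mean = sum(logs) / len(logs)
    var = sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)
    geo_mean = 10 ** mean
    antilog_sd = 10 ** math.sqrt(var)
    return band[0] <= geo_mean <= band[1] and antilog_sd < max_antilog_sd

print(qualifies([35, 45, 40, 50, 30]))   # True: consistent, near 40 ppb
print(qualifies([5, 400, 40, 2, 900]))   # False: far too scattered
```

The study's point is that a candidate can learn to pass such a check without genuinely perceiving the odourant, which is what the randomized, less repetitive presentation is meant to prevent.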

  9. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  10. Improving the accuracy of feature extraction for flexible endoscope calibration by spatial super resolution.

    PubMed

    Rupp, Stephan; Elter, Matthias; Winter, Christian

    2007-01-01

    Many applications in medical as well as industrial image processing make considerable use of flexible endoscopes - so-called fiberscopes - to gain visual access to holes, hollows, antrums and cavities that are difficult to enter and examine. For a complete exploration and understanding of an antrum, 3d depth information may be desirable or even necessary. This often requires the mapping of 3d world coordinates to 2d image coordinates, which is estimated by camera calibration. In order to retrieve useful results, the precise extraction of the imaged calibration pattern's markers plays a decisive role in the camera calibration process. Unfortunately, when utilizing fiberscopes, the image conductor introduces a disturbing comb structure to the images that hampers precise marker extraction. Since the calibration quality crucially depends on subpixel-precise calibration marker positions, we apply static comb-structure removal algorithms along with a dynamic spatial resolution enhancement method in order to improve the feature extraction accuracy. In our experiments, we demonstrate that our approach results in a more accurate calibration of flexible endoscopes and thus allows for a more precise reconstruction of 3d information from fiberoptic images. PMID:18003530

  11. Improvement of olfactometric measurement accuracy and repeatability by optimization of panel selection procedures.

    PubMed

    Capelli, L; Sironi, S; Del Rosso, R; Céntola, P; Bonati, S

    2010-01-01

    The EN 13725:2003 standard, which governs the determination of odour concentration by dynamic olfactometry, fixes the limits for panel selection in terms of individual threshold towards a reference gas (n-butanol in nitrogen) and of the standard deviation of the responses. Nonetheless, laboratories have some degrees of freedom in developing their own procedures for panel selection and evaluation. Most Italian olfactometric laboratories use a similar procedure for panel selection, based on the repeated analysis of samples of n-butanol at a concentration of 60 ppm. The first part of this study demonstrates that this procedure may induce a sort of "smartening" of the assessors: they become able to guess the right answers in order to maintain their qualification as panel members, independently of their real olfactory perception. For this reason, the panel selection procedure has been revised with the aim of making it less repetitive, thereby preventing panel members from guessing the best answers in order to comply with the selection criteria. The selection of new panel members and the screening of active ones according to the revised procedure proved it to be more selective than the "standard" one. Finally, the results of the tests with n-butanol conducted after the introduction of the revised procedure for panel selection and regular verification showed an effective improvement of the laboratory's measurement performance in terms of accuracy and precision. PMID:20220249

  12. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    NASA Astrophysics Data System (ADS)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the calibration accuracy of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty of the real acceleration evaluation at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, which shows satisfactory behavior once the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to tailor the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  13. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study.

    PubMed

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-01-01

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, objective endoscopic criteria are still lacking. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. We therefore investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for the red (R), green (G) and blue (B) channels. In the endoscopic images of lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with pathological diagnosis (P = 0.000) and that color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G + B) = 0.646 the area under the curve was 0.646, with sensitivity 0.514 and specificity 0.773. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit targeted biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images. PMID:27641243
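    The composite marker is straightforward to compute from pixel values. A minimal sketch on toy data (the 0.646 cutoff is the value reported in the abstract; the decision direction and the per-pixel averaging are assumptions, not the authors' Matlab code):

```python
import numpy as np

def lci_marker(image_rgb):
    """Mean R/(G+B) over an RGB image array of shape (H, W, 3)."""
    img = np.asarray(image_rgb, dtype=float)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return float(np.mean(r / (g + b + 1e-9)))  # epsilon avoids division by zero

def classify(image_rgb, cutoff=0.646):
    """Binary call at the reported ROC cutoff (decision direction assumed)."""
    return lci_marker(image_rgb) >= cutoff

# toy 2x2 "image" of uniformly reddish pixels
reddish = np.full((2, 2, 3), [180.0, 90.0, 60.0])
print(round(lci_marker(reddish), 3))  # 1.2
print(classify(reddish))              # True
```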

  14. Multimodal nonlinear optical microscopy improves the accuracy of early diagnosis of squamous intraepithelial neoplasia

    NASA Astrophysics Data System (ADS)

    Teh, Seng Khoon; Zheng, Wei; Li, Shuxia; Li, Dong; Zeng, Yan; Yang, Yanqi; Qu, Jianan Y.

    2013-03-01

    We explore the diagnostic utility of multicolor-excitation multimodal nonlinear optical (NLO) microscopy for noninvasive detection of squamous epithelial precancer in vivo. The 7,12-dimethylbenz(a)anthracene-treated hamster cheek pouch was used as an animal model of carcinogenesis. The NLO microscope system employed was equipped to collect multiple tissue endogenous NLO signals, such as two-photon excited fluorescence of keratin, nicotinamide adenine dinucleotide, collagen, and tryptophan, and second harmonic generation of collagen, in the spectral and time domains simultaneously. A total of 34 (11 control and 23 treated) Golden Syrian hamsters with 62 in vivo spatially distinct measurement sites were assessed in this study. High-resolution label-free NLO images were acquired from the stratum corneum, the stratum granulosum-stratum basale, and the stroma for all tissue measurement sites. A total of nine and eight features from the 745 and 600 nm excitation wavelengths, respectively, involving tissue structural and intrinsic biochemical properties, were found to contain significant diagnostic information for precancer detection (p < 0.05). In particular, the 600 nm excited tryptophan fluorescence signal emanating from the stratum corneum was revealed to provide remarkable diagnostic utility. Multivariate statistical techniques confirmed that integrating diagnostically significant features from the multicolor excitation wavelengths yielded improved diagnostic accuracy compared with using either wavelength alone.

  15. Linked color imaging application for improving the endoscopic diagnosis accuracy: a pilot study

    PubMed Central

    Sun, Xiaotian; Dong, Tenghui; Bi, Yiliang; Min, Min; Shen, Wei; Xu, Yang; Liu, Yan

    2016-01-01

    Endoscopy has been widely used in diagnosing gastrointestinal mucosal lesions. However, objective endoscopic criteria are still lacking. Linked color imaging (LCI) is a newly developed endoscopic technique that enhances color contrast. We therefore investigated the clinical application of LCI and further analyzed pixel brightness in the RGB color model. All lesions were observed by white light endoscopy (WLE), LCI and blue laser imaging (BLI). Matlab software was used to calculate pixel brightness for the red (R), green (G) and blue (B) channels. In the endoscopic images of lesions, LCI had significantly higher R compared with BLI but higher G compared with WLE (all P < 0.05). R/(G + B) was significantly different among the 3 techniques and qualified as a composite LCI marker. Our correlation analysis of endoscopic diagnosis with pathology revealed that LCI was quite consistent with pathological diagnosis (P = 0.000) and that color could predict certain kinds of lesions. The ROC curve demonstrated that at the cutoff of R/(G + B) = 0.646 the area under the curve was 0.646, with sensitivity 0.514 and specificity 0.773. Taken together, LCI could improve the efficiency and accuracy of diagnosing gastrointestinal mucosal lesions and benefit targeted biopsy. R/(G + B) based on pixel brightness may be introduced as an objective criterion for evaluating endoscopic images. PMID:27641243

  16. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
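    The apparent diameter in such a calibration comes from the half-intensity crossing of a smooth edge profile. A minimal sketch: the tanh edge below is a hypothetical stand-in for the paper's inverse-hyperbolic-tangent fit, used here only to show how an apparent radius is read off a radial intensity profile:

```python
import numpy as np

def dot_profile(r, i0, radius, width):
    """Idealized radial intensity of a backlit dot: a smooth tanh edge."""
    return 0.5 * i0 * (1.0 - np.tanh((r - radius) / width))

def half_intensity_radius(r, intensity):
    """Apparent radius: linear interpolation of the half-maximum crossing."""
    half = 0.5 * intensity[0]
    k = int(np.argmax(intensity < half))      # first sample below half-max
    r0, r1 = r[k - 1], r[k]
    i0_, i1_ = intensity[k - 1], intensity[k]
    return float(r0 + (half - i0_) * (r1 - r0) / (i1_ - i0_))

r = np.linspace(0.0, 2.0, 2001)
profile = dot_profile(r, i0=255.0, radius=1.0, width=0.05)
print(round(half_intensity_radius(r, profile), 3))  # 1.0
```

    In the paper's setting, defocus changes both the edge width and the recovered apparent radius, which is what the calibration relationship corrects for.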

  17. Statistical Potentials for Hairpin and Internal Loops Improve the Accuracy of the Predicted RNA Structure

    PubMed Central

    Gardner, David P.; Ren, Pengyu; Ozer, Stuart; Gutell, Robin R.

    2011-01-01

    RNA is directly associated with a growing number of functions within the cell. Accurate prediction of an RNA's higher-order structure from its nucleotide sequence will provide insight into its function and molecular mechanics. We have been determining statistical potentials for a collection of structural elements that is larger than the set of structural elements with experimentally determined energy values. The experimentally derived free energies and the statistical potentials for canonical base-pair stacks are analogous, demonstrating that statistical potentials derived from comparative data can be used as an alternative energetic parameter. A new computational infrastructure - the RNA Comparative Analysis Database (rCAD) - that utilizes a relational database was developed to manipulate and analyze very large sequence alignments and secondary-structure datasets. Using rCAD, a richer set of energetic parameters for fundamental RNA structural elements, including hairpin and internal loops, was determined. A new version of RNAfold was developed to utilize these statistical potentials. Overall, these new statistical potentials for hairpin and internal loops, integrated into the new version of RNAfold, demonstrated significant improvements in the prediction accuracy of RNA secondary structure. PMID:21889515

  18. Design, evaluation, and dissemination of a plastic syringe clip to improve dosing accuracy of liquid medications.

    PubMed

    Spiegel, Garrett J; Dinh, Cindy; Gutierrez, Amanda; Lukomnik, Julia; Lu, Benjamin; Shah, Kamal; Slough, Tara; Yeh, Ping Teresa; Mirabal, Yvette; Gray, Lauren Vestewig; Marton, Stephanie; Adler, Michelle; Schutze, Gordon E; Wickham, Hadley; Oden, Maria; Richards-Kortum, Rebecca

    2013-09-01

    Pediatricians in Africa requested a tool to improve caregiver dosing of liquid antiretroviral medication. We developed, evaluated and disseminated a clip to control the amount of medication drawn into an oral syringe. In a laboratory, a user tested clips of different lengths, corresponding to different volumes, by drawing water into a syringe with a clip. In Texas and Malawi, 149 adults attempted to measure Pepto-Bismol™ using a syringe with a clip, a syringe without a clip, and a dosing cup, in a randomly assigned order. In the laboratory, the volume of liquid, ranging from 1 to 4.5 mL, drawn into the syringe was always within at least 5 μL of the intended dose. In Texas, 84% of doses were accurate within ±10%, vs. 63% using the syringe alone, and 21% with the dosing cup. In Malawi, 98% of doses were accurate to within ±10%, vs. 90% using the syringe alone, and 27% with the dosing cup. For target accuracy values within ±45% (±21%), a significantly higher fraction of Houston (Kamangira) participants delivered an accurate dose using the syringe with the clip than with the syringe alone (p < 0.05). The clip enables a greater proportion of users to accurately measure liquid medication.

  19. Improvement in the radial accuracy of altimeter-satellite orbits due to the geopotential

    NASA Astrophysics Data System (ADS)

    Klokočník, J.; Kostelecký, J.; Wagner, C. A.

    2008-12-01

    The application of satellite altimetry in the geosciences needs precise computation of the orbit positions of the satellites carrying altimeters. In particular, knowledge of the radial orbit error is of high interest in this context. Rosborough's theory [Rosborough, G.W., 1986. Satellite Orbit Perturbations due to the Geopotential, CSR-86-1 rep., Center for Space Research, Univ. of Texas, Austin.], amended by our newer works, describes the radial orbit error induced by the Earth's static gravity field as a function of latitude, longitude and pass direction. Using this theory applied to precise long-term measurements of crossover altimetry, we demonstrate the improvement in the accuracy of the orbit radius due to Earth gravity models, from the early 1980s (order of tens of meters) to the present (order of centimeters and less). The early models, with higher correlations between potential coefficients, show strong variations of the error in longitude as well as latitude, compared to the more recent fields. Currently the static gravity errors in the best of the Earth models are believed to be below the systematic environmental errors in long-term altimetry.

  20. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. We also discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞.

  1. Free Form Deformation–Based Image Registration Improves Accuracy of Traction Force Microscopy

    PubMed Central

    Jorge-Peñas, Alvaro; Izquierdo-Alvarez, Alicia; Aguilar-Cuenca, Rocio; Vicente-Manzanares, Miguel; Garcia-Aznar, José Manuel; Van Oosterwyck, Hans; de-Juan-Pardo, Elena M.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate

    2015-01-01

    Traction Force Microscopy (TFM) is a widespread method used to recover cellular tractions from the deformation that they cause in their surrounding substrate. Particle Image Velocimetry (PIV) is commonly used to quantify the substrate’s deformations, due to its simplicity and efficiency. However, PIV relies on a block-matching scheme that easily underestimates the deformations. This is especially relevant in the case of large, locally non-uniform deformations such as those usually found in the vicinity of a cell’s adhesions to the substrate. To overcome these limitations, we formulate the calculation of the deformation of the substrate in TFM as a non-rigid image registration process that warps the image of the unstressed material to match the image of the stressed one. In particular, we propose to use a B-spline-based Free Form Deformation (FFD) algorithm that uses a connected deformable mesh to model a wide range of flexible deformations caused by cellular tractions. Our FFD approach is validated in 3D fields using synthetic (simulated) data as well as with experimental data obtained using isolated endothelial cells lying on a deformable, polyacrylamide substrate. Our results show that FFD outperforms PIV, providing a deformation field that allows a better recovery of the magnitude and orientation of tractions. Together, these results demonstrate the added value of the FFD algorithm for improving the accuracy of traction recovery. PMID:26641883

  2. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique.

    PubMed

    Velazquez, L; Castro-Palacio, J C

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multihistogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. We also discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L → +∞. PMID:25871247
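    The core reweighting idea can be shown in a few lines. This is the single-histogram special case, an illustrative simplification of the multihistogram method cited in the abstract: samples of the energy drawn at inverse temperature beta0 are re-weighted by exp(-(beta - beta0)E) to estimate observables at a nearby beta. The toy two-level system is an assumption for testability, not the Potts model of the paper:

```python
import numpy as np

def reweight_mean_energy(energies, beta0, beta):
    """Estimate <E> at beta from Boltzmann samples drawn at beta0."""
    e = np.asarray(energies, dtype=float)
    logw = -(beta - beta0) * e
    logw -= logw.max()                 # stabilize the exponentials
    w = np.exp(logw)
    return float(np.sum(w * e) / np.sum(w))

# toy two-level system (E in {0, 1}); direct Boltzmann sampling at beta0 = 1
rng = np.random.default_rng(1)
beta0 = 1.0
p1 = np.exp(-beta0) / (1.0 + np.exp(-beta0))
samples = (rng.random(200_000) < p1).astype(float)

est = reweight_mean_energy(samples, beta0=1.0, beta=2.0)
exact = np.exp(-2.0) / (1.0 + np.exp(-2.0))
print(round(est, 3), round(exact, 3))  # both near 0.119
```

    Reweighting is reliable only for beta close to beta0, which is why the multihistogram method combines runs at several temperatures.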

  3. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols released into the atmosphere scatter and absorb incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are therefore important in climate change research. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of AOD contributed by particles with diameters smaller than 1 μm relative to all particles, can be calculated by an AOD spectral deconvolution algorithm, and anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method improves the accuracy of the AMFs compared with the constant-truncation-radius method. We find good agreement using the parameterization method, with a squared correlation coefficient of 0.96 and a mean AMF deviation of 0.028. The parameterization method also effectively corrects the AMF underestimation in winter. It is suggested that variations of the coarse-mode Angstrom index have significant impacts on AMF inversions.
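    The spectral behavior underlying such a deconvolution follows the Angstrom law, tau(lambda) = beta * lambda^(-alpha), where a larger exponent alpha indicates a larger fine-mode (accumulation-mode) contribution. A minimal fit sketch on a synthetic spectrum (not AERONET data, and not the authors' deconvolution algorithm):

```python
import numpy as np

def angstrom_exponent(wavelengths_um, aod):
    """Least-squares Angstrom exponent from spectral AOD via log-log regression."""
    x = np.log(np.asarray(wavelengths_um, dtype=float))
    y = np.log(np.asarray(aod, dtype=float))
    slope, _ = np.polyfit(x, y, 1)
    return -float(slope)

# synthetic spectrum generated with alpha = 1.3, beta = 0.2
lam = np.array([0.44, 0.50, 0.675, 0.87, 1.02])   # typical sun-photometer bands, microns
tau = 0.2 * lam ** -1.3
print(round(angstrom_exponent(lam, tau), 3))  # 1.3
```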

  4. Improving Estimation Accuracy of Quasars’ Photometric Redshifts by Integration of KNN and SVM

    NASA Astrophysics Data System (ADS)

    Han, Bo; Ding, Hongpeng; Zhang, Yanxia; Zhao, Yongheng

    2015-08-01

    The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring the distances of many celestial objects by photometric redshifts zphot over a wide coverage of the sky. However, catastrophic failure, a long-unsolved problem, affects current photometric redshift estimation approaches such as k-nearest-neighbor. In this paper, we propose a novel two-stage approach that integrates the k-nearest-neighbor (KNN) and support vector machine (SVM) methods. In the first stage, we apply the KNN algorithm to photometric data and estimate the corresponding zphot. By analysis, we observe two dense regions with catastrophic failure, one in the range zphot ∈ [0.1, 1.1], the other in the range zphot ∈ [1.5, 2.5]. In the second stage, we map the multiband photometric input patterns of points falling into these two ranges from the original attribute space into a high-dimensional feature space using the Gaussian kernel function of the SVM. In the high-dimensional feature space, many points badly estimated by the simple Euclidean-distance computation in KNN can be identified by the SVM classification hyperplane and subsequently corrected. Experimental results based on SDSS data for quasars showed that the two-stage fusion approach can significantly mitigate catastrophic failure and improve the estimation accuracy of photometric redshifts.
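    Only the first (KNN) stage is sketched below, on a synthetic one-dimensional "color" feature; the SVM correction stage and the real multiband SDSS attributes are omitted, and the linear color-redshift relation is purely a testing assumption:

```python
import numpy as np

def knn_photoz(train_X, train_z, query, k=5):
    """Plain Euclidean KNN photo-z: average the redshifts of the
    k nearest photometric neighbors (the abstract's first stage)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    return float(np.mean(train_z[nearest]))

# toy training set: one "color" feature with z = 2 * color
rng = np.random.default_rng(2)
colors = rng.uniform(0.0, 1.0, (500, 1))
z = 2.0 * colors[:, 0]
print(round(knn_photoz(colors, z, np.array([0.5]), k=5), 2))  # close to 1.0
```

    Catastrophic failure corresponds to query points whose photometric neighbors lie at very different true redshifts, which is exactly what the second-stage SVM is trained to detect.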

  5. Free Form Deformation-Based Image Registration Improves Accuracy of Traction Force Microscopy.

    PubMed

    Jorge-Peñas, Alvaro; Izquierdo-Alvarez, Alicia; Aguilar-Cuenca, Rocio; Vicente-Manzanares, Miguel; Garcia-Aznar, José Manuel; Van Oosterwyck, Hans; de-Juan-Pardo, Elena M; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate

    2015-01-01

    Traction Force Microscopy (TFM) is a widespread method used to recover cellular tractions from the deformation that they cause in their surrounding substrate. Particle Image Velocimetry (PIV) is commonly used to quantify the substrate's deformations, due to its simplicity and efficiency. However, PIV relies on a block-matching scheme that easily underestimates the deformations. This is especially relevant in the case of large, locally non-uniform deformations such as those usually found in the vicinity of a cell's adhesions to the substrate. To overcome these limitations, we formulate the calculation of the deformation of the substrate in TFM as a non-rigid image registration process that warps the image of the unstressed material to match the image of the stressed one. In particular, we propose to use a B-spline-based Free Form Deformation (FFD) algorithm that uses a connected deformable mesh to model a wide range of flexible deformations caused by cellular tractions. Our FFD approach is validated in 3D fields using synthetic (simulated) data as well as with experimental data obtained using isolated endothelial cells lying on a deformable, polyacrylamide substrate. Our results show that FFD outperforms PIV, providing a deformation field that allows a better recovery of the magnitude and orientation of tractions. Together, these results demonstrate the added value of the FFD algorithm for improving the accuracy of traction recovery. PMID:26641883

  6. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    SciTech Connect

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-11-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

  7. Signal processing of MEMS gyroscope arrays to improve accuracy using a 1st order Markov for rate signal modeling.

    PubMed

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement was described, and a Kalman filter (KF) was designed to obtain optimal rate estimates. In particular, the rate signal was modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and its affecting factors were analyzed using a steady-state covariance analysis. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving gyroscope accuracy. The experimental results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random-walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. This revealed that both models improve the angular rate accuracy and perform similarly under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model reduced the dynamic errors about 20% more than the random-walk model.
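    A scalar sketch of the fusion idea: a Kalman filter with the true rate modeled as a first-order Markov (Gauss-Markov) process, fusing the outputs of six gyroscopes. All noise parameters, the time constant, and the constant-rate test signal are illustrative assumptions, not the paper's values:

```python
import numpy as np

def fuse_gyro_array(measurements, dt=0.01, tau=1.0, sig_rate=1.0, sig_meas=0.5):
    """Kalman filter over a scalar rate state x_k = a*x_{k-1} + w_k,
    observed by n identical gyros with i.i.d. noise of std sig_meas."""
    a = np.exp(-dt / tau)                   # first-order Markov transition
    q = sig_rate**2 * (1.0 - a**2)          # process noise keeping Var(x) = sig_rate^2
    n = measurements.shape[1]
    x, p = 0.0, sig_rate**2
    out = []
    for z in measurements:                  # z: one sample from each of the n gyros
        x, p = a * x, a * a * p + q         # predict
        k_gain = p / (p + sig_meas**2 / n)  # update with the averaged measurement
        x = x + k_gain * (z.mean() - x)
        p = (1.0 - k_gain) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
true_rate = 0.7
z = true_rate + rng.normal(0.0, 0.5, (2000, 6))   # six noisy gyros, constant input rate
est = fuse_gyro_array(z)
print(round(float(np.mean(est[-500:])), 2))  # close to the true 0.7 rate
```

    The zero-mean Markov prior slightly shrinks the estimate toward zero, which is the trade-off the paper tunes against the random-walk model.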

  8. Improving the accuracy of vehicle emissions profiles for urban transportation greenhouse gas and air pollution inventories.

    PubMed

    Reyna, Janet L; Chester, Mikhail V; Ahn, Soyoung; Fraser, Andrew M

    2015-01-01

    Metropolitan greenhouse gas and air emissions inventories can better account for the variability in vehicle movement, fleet composition, and infrastructure that exists within and between regions to develop more accurate information for environmental goals. With emerging access to high-quality data, new methods are needed to inform transportation emissions assessment practitioners of the vehicle and infrastructure characteristics that should be prioritized in modeling to improve the accuracy of inventories. The sensitivity of light- and heavy-duty vehicle greenhouse gas (GHG) and conventional air pollutant (CAP) emissions to speed, weight, age, and roadway gradient is examined with second-by-second velocity profiles on freeway and arterial roads under free-flow and congestion scenarios. By creating upper and lower bounds for each factor, the potential variability in transportation emissions assessments is estimated. When comparing the effects of changes in these characteristics across U.S. cities against average characteristics of the U.S. fleet and infrastructure, significant variability in emissions is found. GHGs from light-duty vehicles could vary by −2% to +11% and CAPs by −47% to +228% relative to the baseline. For heavy-duty vehicles, the variability is −21% to +55% and −32% to +174%, respectively. The results show that cities should more aggressively pursue the integration of emerging big data into regional transportation emissions modeling; the integration of these data is likely to affect GHG and CAP inventories and how aggressively policies must be implemented to meet reductions. A web tool is developed to aid cities in reducing emissions uncertainty. PMID:25438089

  10. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Shen, Liangbo; Todorich, Bozho; Shieh, Christine; Kuo, Anthony; Toth, Cynthia; Izatt, Joseph A.

    2016-03-01

    Ophthalmic surgeons manipulate micron-scale tissues using stereopsis through an operating microscope and instrument shadowing for depth perception. While ophthalmic microsurgery has benefitted from rapid advances in instrumentation and techniques, the basic principles of the stereo operating microscope have not changed since the 1930s. Optical Coherence Tomography (OCT) has revolutionized ophthalmic imaging and is now the gold standard for preoperative and postoperative evaluation of most retinal and many corneal procedures. We and others have developed initial microscope-integrated OCT (MIOCT) systems for concurrent OCT and operating microscope imaging, but these are limited to 2D real-time imaging and require offline post-processing for 3D rendering and visualization. Our previously presented 4D MIOCT system can record and display the 3D surgical field stereoscopically through the microscope oculars using a dual-channel heads-up display (HUD) at up to 10 micron-scale volumes per second. In this work, we show that 4D MIOCT guidance improves the accuracy of depth-based microsurgical maneuvers (with statistical significance) in mock surgery trials in a wet lab environment. Additionally, 4D MIOCT was successfully performed in 38/45 (84%) posterior and 14/14 (100%) anterior eye human surgeries, and revealed previously unrecognized lesions that were invisible through the operating microscope. These lesions, such as residual and potentially damaging retinal deformation during pathologic membrane peeling, were visualized in real-time by the surgeon. Our integrated system provides an enhanced 4D surgical visualization platform that can improve current ophthalmic surgical practice and may help develop and refine future microsurgical techniques.

  11. Improving Accuracy in Locating Magnetic Sources: Combining Homogeneous Locator and Extreme Values for Inversion

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-12-01

    In this study, we present an algorithm to improve the accuracy in locating magnetic sources, especially dipole magnetic sources, using a combination of a homogeneous locator and the positions of extreme values. The homogeneous locator is a homogeneous-function- and point-based algorithm that uses the magnetic field intensity and its first- and second-order tensors. If all magnetic data points are used for the inversion of the locations of magnetic dipole sources, more solutions than actual targets are produced. Interference from neighboring magnetic sources and noise worsens the situation and makes the locating task even more difficult. To improve on this, we investigated and compared various cases with different levels of interference and noise, using a combination of the homogeneous locator and the positions of extreme values for inversion. Results show that (1) if the interference and noise levels are low, using the magnetic field and its first- and second-order tensor values at the positions where the first vertical derivative of the magnetic field has extreme values (either maxima or minima) as inputs to the homogeneous locator gave the best results - accurate horizontal and vertical locations and structure indices; (2) if the interference and noise levels are high, using the values at the positions where the magnetic field intensity has extreme values as inputs gave the best horizontal locations, but using the values at the positions where the first vertical derivative has extreme values still produced the best vertical locations and structure indices. We applied the verified scheme to field data for UXO detection and to aeromagnetic data from Minnesota for geological structure study, and the results are compared to those of Euler deconvolution and the sole homogeneous locator method.
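
The extreme-value selection step described above - taking field and tensor values only at positions where a profile (e.g. the first vertical derivative of the magnetic field) attains extreme values - can be sketched with a hypothetical helper for a sampled 1D profile; the function name and interface are illustrative, not from the paper.

```python
# Hypothetical helper for the selection step: find the sample positions where
# a 1D profile (e.g. the first vertical derivative of the magnetic field along
# a survey line) attains interior local extrema, so that field/tensor values
# at those positions can be fed to the locator.

def local_extrema_indices(profile):
    """Indices of interior local maxima and minima of a 1D sequence."""
    idx = []
    for i in range(1, len(profile) - 1):
        left, mid, right = profile[i - 1], profile[i], profile[i + 1]
        if (mid > left and mid > right) or (mid < left and mid < right):
            idx.append(i)
    return idx
```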

  12. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits are discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  13. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    ERIC Educational Resources Information Center

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skill development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  14. Achieving plane wave accuracy in linear-scaling density functional theory applied to periodic systems: A case study on crystalline silicon

    NASA Astrophysics Data System (ADS)

    Skylaris, Chris-Kriton; Haynes, Peter D.

    2007-10-01

    Linear-scaling methods for density functional theory promise to revolutionize the scope and scale of first-principles quantum mechanical calculations. Crystalline silicon has been the system of choice for exploratory tests of such methods in the literature, yet attempts at quantitative comparisons under linear-scaling conditions with traditional methods or experimental results have not been forthcoming. A detailed study using the ONETEP code is reported here, demonstrating for the first time that plane wave accuracy can be achieved in linear-scaling calculations on periodic systems.

  15. School Improvement Plans and Student Achievement: Preliminary Evidence from the Quality and Merit Project in Italy

    ERIC Educational Resources Information Center

    Caputo, Andrea; Rastelli, Valentina

    2014-01-01

    This study provides preliminary evidence from an Italian in-service training program addressed to lower secondary school teachers which supports school improvement plans (SIPs). It aims at exploring the association between characteristics/contents of SIPs and student improvement in math achievement. Pre-post standardized tests and text analysis of…

  16. [Physiological Basis of the Improvement of Movement Accuracy with the Use of Stabilographic Training with Biological Feedback].

    PubMed

    Kapilevich, L V; Koshelskaya, E V; Krivoschekov, S G

    2015-01-01

    We studied the physiological parameters of ball hitting by volleyball players in unsupported position and opportunities for their improvement by training with biological feedback. Physiological and biomechanical parameters of a direct attack hit from supported position correlate with biomechanical features of jump shots. At the same time, the physiological basis of accuracy of shots consists of the improvement of trunk and arm movement coordination in the flight phase, the factors of intramuscular and intermuscular coordination of the hitting arm and the change in the displacement of the center of pressure. The use of computer stabilography training with biological feedback helps to optimize physiological and biomechanical parameters of physical actions in unsupported position, which ultimately causes an increase in the accuracy of jump hitting of the ball. The obtained results open the prospects for applying the method of computer stabilography to improve the performance of accuracy-targeted actions in unsupported position in various sports. PMID:26485791

  17. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
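
The selection-and-blending logic described above can be sketched as follows; the range endpoints match the SiO2 example in the abstract, but `blend_submodels` and its inputs are a hypothetical illustration, not ChemCam's actual code.

```python
# Sketch of "submodel PLS" selection/blending: given the full-model (0-100 wt.%)
# prediction, pick the submodel whose range contains it, and linearly blend
# predictions where the submodel ranges overlap. Range endpoints follow the
# SiO2 example above (low 0-50, mid 30-70, high 60-100); model objects are
# replaced by precomputed predictions for simplicity.

def blend_submodels(full_pred, low_pred, mid_pred, high_pred,
                    low=(0, 50), mid=(30, 70), high=(60, 100)):
    """Choose or blend submodel predictions based on the full-model estimate."""
    x = full_pred
    if x < mid[0]:                      # clearly in the "low" range
        return low_pred
    if x <= low[1]:                     # low/mid overlap: linear weighted sum
        w = (x - mid[0]) / (low[1] - mid[0])
        return (1 - w) * low_pred + w * mid_pred
    if x < high[0]:                     # clearly "mid"
        return mid_pred
    if x <= mid[1]:                     # mid/high overlap
        w = (x - high[0]) / (mid[1] - high[0])
        return (1 - w) * mid_pred + w * high_pred
    return high_pred                    # clearly "high"
```

For a full-model prediction of 40 wt.% (inside the 30-50 overlap), the result is an even blend of the low and mid submodel predictions.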

  18. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket, and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while keeping cost and speed of measurement close to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with the height-adjustable, rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane; and d) improved measurement of bedding dip and direction, necessary to orient the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field, allowing determination of surfaces that cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  19. Improvements of Visual Based Shoreline Accuracy From Satellite Imagery and Simultaneous Differential GPS Surveys

    NASA Astrophysics Data System (ADS)

    MacKenzie, R. A.; Jaeger, J. M.; Adams, P. N.; Plant, N. G.; Schaub, R.; Kline, S. W.; Lovering, J. L.

    2011-12-01

    Various tools with different sampling densities, intervals, and uncertainties are now used in decadal-scale coastal change studies. Lidar has rapidly become the preferred tool for coastal studies with its high data density and moderate accuracy (15-30 cm), but the typical sampling interval between surveys is ~5 years. GPS measurements have higher accuracy (10 cm) and can be conducted more frequently than Lidar, but data density is lower and surveyed coastline length is limited. Imagery is used because of its historical significance, but it is the least accurate. High degrees of uncertainty in shoreline position from remotely sensed imagery result from difficulty in choosing an appropriate visual shoreline proxy. This is due, in part, to different visual based shoreline (VBS) proxies (e.g. vegetation line, debris line, high water mark, wet/dry line, low water mark) not being visible on all beaches at all times. Geomorphic clues from remotely sensed imagery, combined with knowledge of coastal morphology and DBS surveys, can minimize uncertainties in using VBS proxies. We established a relationship between VBS and DBS by conducting 32 kinematic differential GPS surveys over a two-year period at Cape Canaveral, FL. Two GeoEye-1 (0.5 m res.) satellite images were collected: one on June 9, 2009, three days after a DBS survey, and one during a DBS survey on July 28, 2010. June and July beach states were selected for image collection because they have the lowest average monthly significant wave height (HS) and the least variability in HS, providing the highest likelihood of stable beach morphology. The 10-km study area exhibits various beach morphologic states and stability histories. During the July survey, the visual high-water wet/dry line traced with GPS did not correspond to the wet/dry line on the GeoEye image. Further analysis shows that a wet/dry line related to grain size, composition, and groundwater seepage face, visible in all images, provides the best VBS proxy.

  20. Effects of Simulated Interventions to Improve School Entry Academic Skills on Socioeconomic Inequalities in Educational Achievement

    PubMed Central

    Chittleborough, Catherine R; Mittinty, Murthy N; Lawlor, Debbie A; Lynch, John W

    2014-01-01

    Randomized controlled trial evidence shows that interventions before age 5 can improve skills necessary for educational success; the effect of these interventions on socioeconomic inequalities is unknown. Using trial effect estimates, and marginal structural models with data from the Avon Longitudinal Study of Parents and Children (n = 11,764, imputed), simulated effects of plausible interventions to improve school entry academic skills on socioeconomic inequality in educational achievement at age 16 were examined. Progressive universal interventions (i.e., more intense intervention for those with greater need) to improve school entry academic skills could raise population levels of educational achievement by 5% and reduce absolute socioeconomic inequality in poor educational achievement by 15%. PMID:25327718

  1. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy

    PubMed Central

    Monaghan, Kieran A.

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body-size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices applying the example of the BMWP, which has the best supporting data. Differences in body size between taxa from respective tolerance classes is a common feature of indicator systems; in some it represents a trend ranging from comparatively small pollution tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias however, positive bias may occur when equipment (e.g. mesh-size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an
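
As a concrete example of the weighted-average index family analyzed above, a BMWP-style score sums the tolerance scores of the indicator families present, and the average score per taxon (ASPT) divides that sum by the number of scoring taxa. The sketch below is illustrative; the family scores are taken from the published BMWP table to the best of my knowledge, but should be checked against the official listing before real use.

```python
# Minimal sketch of a weighted-average biotic index in the spirit of BMWP/ASPT.
# Family tolerance scores below are believed to match the BMWP table but are
# included here only as an illustration.

def bmwp_style_index(taxa_scores):
    """Sum the tolerance scores of the scoring families present (BMWP-style)."""
    return sum(taxa_scores.values())

def aspt(taxa_scores):
    """Average Score Per Taxon: index value divided by number of scoring taxa."""
    return bmwp_style_index(taxa_scores) / len(taxa_scores)

sample = {"Heptageniidae": 10, "Baetidae": 4, "Chironomidae": 2}
# BMWP-style score for this sample is 16; ASPT is 16/3.
```

Because the index is a sum over taxa while ASPT is an average, the two respond differently to the richness skews and sampling biases discussed in the abstract.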

  2. Improved accuracy of acute graft-versus-host disease staging among multiple centers.

    PubMed

    Levine, John E; Hogan, William J; Harris, Andrew C; Litzow, Mark R; Efebera, Yvonne A; Devine, Steven M; Reshef, Ran; Ferrara, James L M

    2014-01-01

    The clinical staging of acute graft-versus-host disease (GVHD) varies significantly among bone marrow transplant (BMT) centers, but adherence to long-standing practices poses formidable barriers to standardization among centers. We have analyzed the sources of variability and developed a web-based remote data entry system that can be used by multiple centers simultaneously and that standardizes data collection in key areas. This user-friendly, intuitive interface resembles an online shopping site and eliminates error-prone entry of free text with drop-down menus and pop-up detailed guidance available at the point of data entry. Standardized documentation of symptoms and therapeutic response reduces errors in grade assignment and allows creation of confidence levels regarding the diagnosis. Early review and adjudication of borderline cases improves consistency of grading and further enhances consistency among centers. If this system achieves widespread use it may enhance the quality of data in multicenter trials to prevent and treat acute GVHD.

  4. Improvement in absolute calibration accuracy of Landsat-5 TM with Landsat-7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Chander, Gyanesh; Markham, Brian L.; Micijevic, Esad; Teillet, Philippe M.; Helder, Dennis L.

    2005-08-01

    The ability to detect and quantify changes in the Earth's environment depends on satellite sensors that can provide calibrated, consistent measurements of Earth's surface features through time. A critical step in this process is to put image data from subsequent generations of sensors onto a common radiometric scale. To evaluate Landsat-5 (L5) Thematic Mapper's (TM) utility in this role, image pairs from the L5 TM and Landsat-7 (L7) Enhanced Thematic Mapper Plus (ETM+) sensors were compared. This approach involves comparison of surface observations based on image statistics from large common areas observed eight days apart by the two sensors. The results indicate a significant improvement in the consistency of L5 TM data with respect to L7 ETM+ data, achieved using a revised Look-Up-Table (LUT) procedure as opposed to the historical Internal Calibrator (IC) procedure previously used in the L5 TM product generation system. Reflectance estimates obtained from the L5 TM agree with those from the L7 ETM+ to within four percent in the Visible and Near Infrared (VNIR) bands and to within six percent in the Short Wave Infrared (SWIR) bands.

  6. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  7. The Consequences of "School Improvement": Examining the Association between Two Standardized Assessments Measuring School Improvement and Student Science Achievement

    ERIC Educational Resources Information Center

    Maltese, Adam V.; Hochbein, Craig D.

    2012-01-01

    For more than half a century concerns about the ability of American students to compete in a global workplace focused policymakers' attention on improving school performance generally, and student achievement in science, technology, engineering, and mathematics (STEM) specifically. In its most recent form--No Child Left Behind--there is evidence…

  8. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    ERIC Educational Resources Information Center

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  9. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem and achieve improvements to a certain extent. An attitude-correlated frames (ACF) approach, which exploits the attitude transforms between adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
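
The square-root scaling claimed above is the standard averaging effect for zero-mean random noise. The toy simulation below illustrates only that statistical effect under the simplifying assumption that the gyro-measured attitude transforms align the frames exactly; it is not the ACF algorithm itself, and all numbers are synthetic.

```python
# Toy illustration of the sqrt(N) noise-averaging effect behind the ACF idea:
# averaging N frames (assumed perfectly aligned to a common attitude here)
# reduces zero-mean Gaussian noise by roughly sqrt(N).

import random
import statistics

def averaged_noise_std(n_frames, sigma=1.0, trials=2000, seed=1):
    """Empirical std of the mean of n_frames noisy samples, over many trials."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0, sigma) for _ in range(n_frames))
             for _ in range(trials)]
    return statistics.pstdev(means)

# With 16 frames, the residual noise std should be close to sigma / 4.
```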

  10. Incorporation of Inter-Subject Information to Improve the Accuracy of Subject-Specific P300 Classifiers.

    PubMed

    Xu, Minpeng; Liu, Jing; Chen, Long; Qi, Hongzhi; He, Feng; Zhou, Peng; Wan, Baikun; Ming, Dong

    2016-05-01

    Although inter-subject information has been demonstrated to be effective for rapid calibration of the P300-based brain-computer interface (BCI), it has never been comprehensively tested to determine whether the incorporation of heterogeneous data could enhance accuracy. This study aims to improve the subject-specific P300 classifier by adding other subjects' data. A classifier calibration strategy, weighted ensemble learning generic information (WELGI), was developed, in which elementary classifiers were constructed using both intra- and inter-subject information and then integrated into a strong classifier with a weight assessment. 55 subjects were recruited to spell 20 characters offline using the conventional P300-based BCI, i.e. the P300-speller. Four different metrics, the P300 accuracy and precision, the round accuracy, and the character accuracy, were used for a comprehensive investigation. The results revealed that the classifier constructed on the training dataset in combination with other subjects' data was significantly superior to that without the inter-subject information. The WELGI is therefore an effective classifier calibration strategy which uses inter-subject information to improve the accuracy of subject-specific P300 classifiers, and it could also be applied to other BCI paradigms.
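
The combination step of a weighted ensemble like WELGI can be sketched as a weighted sum of elementary classifier outputs. The function below is a generic illustration; the weights and scores are stand-ins, not the paper's actual weight-assessment rule.

```python
# Illustrative weighted-ensemble combination in the spirit of WELGI: elementary
# classifiers (some trained on the target subject's data, some on other
# subjects' data) contribute scores weighted by their assessed reliability.
# The weight values here are hypothetical.

def weighted_ensemble_score(classifier_scores, weights):
    """Combine per-classifier scores into one normalized decision score."""
    total_w = sum(weights)
    return sum(w * s for w, s in zip(weights, classifier_scores)) / total_w

# e.g. three classifiers scoring an epoch as "P300 present":
# weighted_ensemble_score([0.9, 0.4, 0.7], [2.0, 1.0, 1.0]) -> 0.725
```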

  11. [A method for improving measuring accuracy in multi-channel impedance spectroscopy (MIS)].

    PubMed

    Thiel, F; Hartung, C

    2004-08-01

    The use of impedance spectroscopy as a diagnostic tool for the investigation of biological objects involves numerous parameters that affect measuring accuracy. This paper describes a calibration method for multichannel instruments that reduces the considerable influence of frequency response variations between the channels, thus significantly increasing measuring accuracy. The method is tested in a recently developed, high-resolution, multi-channel bio-impedance analyser. Reduction of the measuring error is demonstrated, and the magnitude and phase resolution are quantified. The advantage of this method lies in its applicability to existing systems; furthermore, no additional calibration impedance is needed. PMID:15481406

  12. Signal Processing of MEMS Gyroscope Arrays to Improve Accuracy Using a 1st Order Markov for Rate Signal Modeling

    PubMed Central

    Jiang, Chengyu; Xue, Liang; Chang, Honglong; Yuan, Guangmin; Yuan, Weizheng

    2012-01-01

    This paper presents a signal processing technique to improve the angular rate accuracy of a gyroscope by combining the outputs of an array of MEMS gyroscopes. A mathematical model for the accuracy improvement is described, and a Kalman filter (KF) is designed to obtain optimal rate estimates. In particular, the rate signal is modeled by a first-order Markov process instead of a random walk to improve overall performance. The accuracy of the combined rate signal and the affecting factors are analyzed using the steady-state covariance. A system comprising a six-gyroscope array was developed to test the presented KF. Experimental tests proved that the presented model was effective at improving gyroscope accuracy. The results indicated that six identical gyroscopes with an ARW noise of 6.2 °/√h and a bias drift of 54.14 °/h could be combined into a rate signal with an ARW noise of 1.8 °/√h and a bias drift of 16.3 °/h, while the rate signal estimated with the random walk model had an ARW noise of 2.4 °/√h and a bias drift of 20.6 °/h. Both models improve angular rate accuracy and perform similarly under static conditions. Under dynamic conditions, the test results showed that the first-order Markov process model reduced dynamic errors by about 20% more than the random walk model. PMID:22438734
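
A minimal scalar sketch of the fusion idea is given below, assuming a discretized first-order Gauss-Markov rate model and treating the simultaneous gyro outputs as independent measurements (so averaging m gyros divides the measurement variance by m). The correlation time, noise levels, and initial conditions are illustrative, not the paper's parameters.

```python
# Scalar Kalman filter sketch for fusing a MEMS gyroscope array, with the rate
# modeled as a first-order Gauss-Markov process (rather than a random walk).
# tau: Markov correlation time [s]; dt: sample period [s]; q: process noise
# variance; r: per-gyro measurement noise variance. All values illustrative.

def fuse_gyro_array(readings_per_step, tau=10.0, dt=0.01,
                    q=0.05, r=1.0, x0=0.0, p0=1.0):
    """Return filtered rate estimates; readings_per_step is a list of lists,
    one list of simultaneous gyro outputs per time step."""
    phi = 1.0 - dt / tau          # discretized first-order Markov transition
    x, p = x0, p0
    out = []
    for readings in readings_per_step:
        # predict with the Gauss-Markov model
        x = phi * x
        p = phi * p * phi + q
        # averaging m simultaneous gyros divides measurement variance by m
        m = len(readings)
        z = sum(readings) / m
        r_eff = r / m
        # measurement update
        k = p / (p + r_eff)       # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

Fed a constant true rate through a six-gyro array, the estimate converges close to the true rate (slightly biased toward zero by the Markov model's decay, which is the intended behavior for a correlated rate signal).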

  13. Audit-based education: a potentially effective program for improving guideline achievement in CKD patients.

    PubMed

    de Goeij, Moniek C M; Rotmans, Joris I

    2013-09-01

    The achievement of treatment guidelines in patients with chronic kidney disease is poor, and more effort is needed to improve this. Audit-based education is a program that may contribute to this improvement. de Lusignan et al. investigated whether audit-based education is effective in lowering systolic blood pressure in a primary-care setting. Although the program is inventive and promising, several adjustments are needed before it can be applied as an effective strategy.

  14. Conjugate Fabry-Perot cavity pair for improved astro-comb accuracy.

    PubMed

    Li, Chih-Hao; Chang, Guoqing; Glenday, Alexander G; Langellier, Nicholas; Zibrov, Alexander; Phillips, David F; Kärtner, Franz X; Szentgyorgyi, Andrew; Walsworth, Ronald L

    2012-08-01

    We propose a new astro-comb mode-filtering scheme composed of two Fabry-Perot cavities (coined "conjugate Fabry-Perot cavity pair"). Simulations indicate that this new filtering scheme makes the accuracy of astro-comb spectral lines more robust against systematic errors induced by nonlinear processes associated with power-amplifying and spectral-broadening optical fibers.

  15. Metacognitive Scaffolds Improve Self-Judgments of Accuracy in a Medical Intelligent Tutoring System

    ERIC Educational Resources Information Center

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S.

    2014-01-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were…

  16. Improving the Accuracy of Teacher Self-Evaluation through Staff Development.

    ERIC Educational Resources Information Center

    Smylie, Mark

    This study examined the impact of a staff development program on the accuracy of teachers' self-evaluation of classroom performance. Teachers were provided with specific formal feedback through a program called The Effective Use of Time Program (EUOT). The program had four components: (1) observation in the classroom and feedback about…

  17. Bureau of Indian Affairs Schools: New Facilities Management Information System Promising, but Improved Data Accuracy Needed.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    A General Accounting Office (GAO) study evaluated the Bureau of Indian Affairs' (BIA) new facilities management information system (FMIS). Specifically, the study examined whether the new FMIS addresses the old system's weaknesses and meets BIA's management needs, whether BIA has finished validating the accuracy of data transferred from the old…

  18. Improving Students' Creative Thinking and Achievement through the Implementation of Multiple Intelligence Approach with Mind Mapping

    ERIC Educational Resources Information Center

    Widiana, I. Wayan; Jampel, I. Nyoman

    2016-01-01

    This classroom action research aimed to improve the students' creative thinking and achievement in learning science. It was conducted through the implementation of a multiple intelligences approach with mind mapping, and by describing the students' responses. The subjects of this research were the fifth grade students of SD 8 Tianyar Barat, Kubu, and…

  19. The Effectiveness of the SSHA in Improving Prediction of Academic Achievement.

    ERIC Educational Resources Information Center

    Wikoff, Richard L.; Kafka, Gene F.

    1981-01-01

    Investigated the effectiveness of the Survey of Study Habits and Attitudes (SSHA) in improving the prediction of achievement. The American College Testing Program English and mathematics subtests were good predictors of grade-point average. The SSHA subtests accounted for an additional 3 percent of the variance. Sex differences were noted. (Author)

  20. Improving Achievement in Low-Performing Schools: Key Results for School Leaders

    ERIC Educational Resources Information Center

    Ward, Randolph E.; Burke, Mary Ann

    2004-01-01

    As accountability in schools becomes more crucial, educators are looking for comprehensive and innovative management practices that respond to challenges and realities of student academic achievement. In order to improve academic performance and the quality of instruction, the entire school community needs to be involved. This book provides six…

  1. Improving Teaching Capacity to Increase Student Achievement: The Key Role of Data Interpretation by School Leaders

    ERIC Educational Resources Information Center

    Lynch, David; Smith, Richard; Provost, Steven; Madden, Jake

    2016-01-01

    Purpose: This paper argues that in a well-organised school with strong leadership and vision coupled with a concerted effort to improve the teaching performance of each teacher, student achievement can be enhanced. The purpose of this paper is to demonstrate that while macro-effect sizes such as "whole of school" metrics are useful for…

  2. Analyzing Academic Achievement of Junior High School Students by an Improved Rough Set Model

    ERIC Educational Resources Information Center

    Pai, Ping-Feng; Lyu, Yi-Jia; Wang, Yu-Min

    2010-01-01

    Rough set theory (RST) is an emerging technique used to deal with problems in data mining and knowledge acquisition. However, the RST approach has not been widely explored in the field of academic achievement. This investigation developed an improved RST (IMRST) model, which employs linear discriminant analysis to determine a reduct of RST, and…

  3. What Matters for Elementary Literacy Coaching? Guiding Principles for Instructional Improvement and Student Achievement

    ERIC Educational Resources Information Center

    L'Allier, Susan; Elish-Piper, Laurie; Bean, Rita M.

    2010-01-01

    Literacy coaches provide job-embedded professional development for teachers, and the number of literacy coaches in elementary schools is increasing. Although literacy coaching offers promise in terms of improving teacher practice and student achievement, guidance is needed regarding the qualifications, activities, and roles of literacy coaches.…

  4. Investing in Educator Data Literacy Improves Student Achievement. Evidence of Impact: The Oregon Data Project

    ERIC Educational Resources Information Center

    Data Quality Campaign, 2012

    2012-01-01

    Since 2007 the Oregon DATA Project has been investing resources to provide educators on-the-job training around effective data use to improve student achievement. New evidence shows that their efforts are paying off. A 2011 Oregon DATA Project report detailed the impact of their investment in the state's educators, finding the following: (1)…

  5. Closing Schools to Improve Student Achievement: What the Research and Researchers Say. Research Summary

    ERIC Educational Resources Information Center

    American Federation of Teachers (NJ), 2012

    2012-01-01

    School districts close schools for many appropriate reasons. School closure has now evolved into a school improvement strategy. Sometimes the strategy is to close the lowest-performing schools rather than low-enrollment schools and move the students into higher-achieving neighborhood schools. School closure also has become a common strategy to…

  6. Dynamic Geometry Software Improves Mathematical Achievement: Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Chan, Kan Kan; Leung, Siu Wai

    2014-01-01

    Dynamic geometry software (DGS) aims to enhance mathematics education. This systematic review and meta-analysis evaluated the quasi-experimental studies on the effectiveness of DGS-based instruction in improving students' mathematical achievement. Research articles published between 1990 and 2013 were identified from major databases according to a…

  7. Power outage estimation for tropical cyclones: improved accuracy with simpler models.

    PubMed

    Nateghi, Roshanak; Guikema, Seth; Quiring, Steven M

    2014-06-01

    In this article, we discuss an outage-forecasting model that we have developed. This model uses very few input variables to estimate hurricane-induced outages prior to landfall with great predictive accuracy. We also show the results for a series of simpler models that use only publicly available data and can still estimate outages with reasonable accuracy. The intended users of these models are emergency response planners within power utilities and related government agencies. We developed our models based on the random forest method, using data from a power distribution system serving two states in the Gulf Coast region of the United States. We also show that estimates of system reliability based on wind speed alone are not sufficient for adequately capturing the reliability of system components. We demonstrate that a multivariate approach can produce more accurate power outage predictions.
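As an illustration of the modeling approach this record names — a random forest regressor over a handful of pre-storm predictors — the sketch below fits one to synthetic grid-cell data. The feature names, their ranges, and the outage-generating formula are invented for the example and are not the paper's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
# Hypothetical pre-landfall predictors for grid cells (illustrative only)
wind = rng.uniform(20, 70, n)           # forecast max gust, m/s
rain = rng.uniform(0, 300, n)           # forecast rainfall, mm
soil = rng.uniform(0, 1, n)             # soil moisture fraction
customers = rng.uniform(1e3, 1e5, n)    # customers served in the cell

# Synthetic outage counts: nonlinear in wind, interacting with soil moisture
outages = 0.002 * customers * (wind / 70) ** 3 * (1 + soil) + rng.normal(0, 5, n)

X = np.column_stack([wind, rain, soil, customers])
X_train, X_test, y_train, y_test = train_test_split(X, outages, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

A random forest picks up the nonlinear wind response and the wind-soil interaction without either being specified by hand, which is why such models can stay accurate with very few inputs.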

  8. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Astrophysics Data System (ADS)

    Justh, H. L.; Justus, C. G.; Badger, A. M.

    2009-12-01

    at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai (2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai’s reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  9. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2009-01-01

    at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai (2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  10. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications such as verifying compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they bear on the decision to trigger a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2, and its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depths, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
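The correction scheme described here boils down to averaging travel-time residuals of well-located calibration events per (station, source region) pair and subtracting that average from future picks. A minimal sketch, with hypothetical station names and residual values:

```python
import numpy as np

# Observed-minus-predicted travel times (s) under IASPEI91 for
# well-located calibration events, keyed by (station, source region).
# Station names and residual values are hypothetical.
residuals = {
    ("STA1", "region_A"): [1.2, 0.9, 1.1, 1.3],
    ("STA2", "region_A"): [-0.6, -0.4, -0.5],
}

# Source-specific station correction = mean residual for the pair;
# subtracting it from future arrivals removes the systematic path bias
# that the global velocity model cannot capture.
corrections = {pair: float(np.mean(r)) for pair, r in residuals.items()}

def corrected_arrival(t_observed, station, region):
    """Apply the source-specific station correction to a picked arrival."""
    return t_observed - corrections[(station, region)]

print(corrections)
```

In practice each corrected arrival then feeds a standard relocation, so the improvement shows up directly as reduced epicentral and depth mislocation.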

  11. Improving Mars-GRAM: Increasing the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    Extensively utilized for numerous mission applications, the Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model. In a Monte-Carlo mode, Mars-GRAM's perturbation modeling capability is used to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM has been found to be inexact when used during the Mars Science Laboratory (MSL) site selection process for sensitivity studies for MapYear=0 and large optical depth values such as tau=3. Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM) from the surface to 80 km altitude. Mars-GRAM with the MapYear parameter set to 0 utilizes results from a MGCM run with a fixed value of tau=3 at all locations for the entire year. Imprecise atmospheric density and pressure at all altitudes is a consequence of this use of MGCM with tau=3. Density factor values have been determined for tau=0.3, 1 and 3 as a preliminary fix to this pressure-density problem. These factors adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. These density factors are fixed values for all latitudes and Ls and are included in Mars-GRAM Release 1.3. Work currently being done, to derive better multipliers by including variations with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data, will be highlighted in the presentation. The TES limb data utilized in this process has been validated by a comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS). This comparison study was undertaken for locations on Mars of varying latitudes, Ls, and LTST. The more precise density factors will be included in Mars-GRAM 2005 Release 1.4 and thus improve the results of future sensitivity studies done for large

  12. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    PubMed

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited the medical notes and coding to assess accuracy and documentation by the junior doctors, and to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August-October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit but with 5% documentation errors. Physician involvement needs to be enhanced to improve effectiveness and to ensure clinical safety. PMID:21677911

  13. Development of an Automated Bone Mineral Density Software Application: Facilitating Radiologic Reporting and Improving Accuracy.

    PubMed

    Tsai, I-Ta; Tsai, Meng-Yuan; Wu, Ming-Ting; Chen, Clement Kuen-Huang

    2016-06-01

    The conventional method of bone mineral density (BMD) report production by dictation and transcription is time-consuming and prone to error. We developed an automated BMD reporting system, based on the raw data from a dual-energy X-ray absorptiometry (DXA) scanner, to facilitate report generation. The automated BMD reporting system, a web application, digests the DXA raw data and automatically generates preliminary reports. In Jan. 2014, 500 examinations were randomized into an automatic group (AG) and a manual group (MG), and the speed of report generation was compared. For evaluation of accuracy and analysis of errors, 5120 examinations from Jan. 2013 to Dec. 2013 were enrolled retrospectively, and the content of the automatically generated reports (AR) was compared with the formal manual reports (MR). The average time spent on report generation in AG and in MG was 264 and 1452 s, respectively (p < 0.001). The accuracy of the calculated T and Z scores in AR is 100%. The overall accuracy of AR and MR is 98.8% and 93.7%, respectively (p < 0.001). The mis-categorization rate in AR and MR is 0.039% and 0.273%, respectively (p = 0.0013). Errors that occurred in AR can be grouped into key-in errors by technicians and cases requiring additional judgement. We constructed an efficient and reliable automated BMD reporting system. It facilitates current clinical service and potentially prevents human errors by technicians, transcriptionists, and radiologists.
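The T and Z scores whose automated calculation the record reports are simple standardizations of the measured BMD against reference populations. The sketch below shows the standard formulas together with the WHO densitometric categories; the reference mean and SD used in the example are illustrative, not values from any real DXA database:

```python
def t_score(bmd, ya_mean, ya_sd):
    """WHO T-score: SDs from the young-adult reference mean."""
    return (bmd - ya_mean) / ya_sd

def z_score(bmd, age_mean, age_sd):
    """Z-score: SDs from the age/sex-matched reference mean."""
    return (bmd - age_mean) / age_sd

def who_category(t):
    """WHO densitometric category from a T-score."""
    if t >= -1.0:
        return "normal"
    if t > -2.5:
        return "osteopenia"   # low bone mass
    return "osteoporosis"

# Example with illustrative reference values (not from a real database)
t = t_score(bmd=0.85, ya_mean=1.00, ya_sd=0.11)
print(round(t, 2), who_category(t))
```

Because these are deterministic arithmetic on the scanner's raw numbers, an automated pipeline can reach the 100% score accuracy the record reports; the residual report errors come from manual key-in and interpretive steps, not from the calculation itself.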

  14. Space Age Geodesy: Global Earth Observations of Ever Improving resolution and Accuracy

    NASA Astrophysics Data System (ADS)

    Carter, W. E.

    2007-12-01

    The launch of Sputnik-I by the USSR in 1957, and the resulting competitive US-USSR space exploration and weapons programs, led to the need for global geodetic measurements of unprecedented accuracy, and the means to develop new observing techniques to meet those needs. By the 1970s the geodetic community had developed very long baseline interferometry (VLBI), lunar laser ranging (LLR), and satellite laser ranging (SLR), and launched international tests that led to the establishment of the International Earth Rotation Service (IERS). Today the IERS provides a stable International Celestial Reference Frame (ICRF) and accurate earth orientation parameter (EOP) values, using a combination of VLBI, LLR, SLR, and the Global Positioning System (GPS). There are hundreds of continuously operating GPS stations around the world, providing centimeter-level station locations and millimeter-per-year station velocities in the International Terrestrial Reference Frame (ITRF). The location of any point on earth can be determined relative to the ITRF to within a few centimeters from a few days of GPS observations, and using kinematic GPS, the positions of moving objects can be tracked to a few centimeters at distances of tens of kilometers from the nearest GPS ground stations. This geodetic infrastructure and space-age technology have led to the development of new airborne topographic mapping techniques, most significantly airborne laser swath mapping (ALSM). With ALSM, it is now possible to map thousands of square kilometers of terrain with sub-decimeter vertical accuracy in hours. For example, the entire length of the San Andreas fault, in California, was mapped in a few hundred hours of flying time. Within the next few decades, global ALSM observations will make it possible for scientists to immediately access (via the internet) databases containing the locations (cm accuracy) and rates of motion (mm per year accuracy) of points on the surface of the earth, with sub-meter spatial resolution.

  15. Evaluation of an improved orthognathic articulator system: 1. Accuracy of cast orientation.

    PubMed

    Paul, P E; Barbenel, J C; Walker, F S; Khambay, B S; Moos, K F; Ayoub, A F

    2012-02-01

    A systematic study was carried out using plastic model skulls to quantify the accuracy of the transfer of face bow registration to the articulator. A standard Dentatus semi-adjustable articulator system was compared with a purpose-built orthognathic articulator system by measuring the maxillary occlusal plane angles of plastic model skulls and of dental casts mounted on the two different types of articulators. There was a statistically significant difference between the two systems; the orthognathic system showed small random errors, but the standard system showed systematic errors of up to 28°.

  16. Significant Improvement of Puncture Accuracy and Fluoroscopy Reduction in Percutaneous Transforaminal Endoscopic Discectomy With Novel Lumbar Location System

    PubMed Central

    Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng

    2015-01-01

    A prospective nonrandomized controlled study. The study aimed to investigate the implication of the HE's Lumbar LOcation (HELLO) system in improving puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries that heavily depend on repeated fluoroscopy. Increased fluoroscopy will induce higher radiation exposure to surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and a definite trajectory. The HELLO system mainly consists of a self-made surface locator and a puncture-assisted device. The surface locator was used to identify the exact puncture target, and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included in the study. Patients receiving the HELLO system were assigned to Group A, and those undergoing the conventional method were assigned to Group B. The primary endpoints were puncture times and fluoroscopy time; the secondary endpoints were location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative treatment time, or surgical segment between the 2 groups (P > 0.05). The puncture times were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B (P < 0.001). The fluoroscopic times were 14.03 ± 2.54 in Group A and 25.19 ± 4.28 in Group B (P < 0.001). The preoperative location time was 4.67 ± 1.41 minutes in Group A and 6.98 ± 0.94 minutes in Group B (P < 0.001). The operation time was 79.42 ± 10.15 minutes in Group A and 89.65 ± 14.06 minutes in Group B (P

  17. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, the authors describe three types of grid generation methods used in the front tracking method. The first is an unstructured surface grid. The second is a structured, grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  18. Quantification of terrestrial laser scanner (TLS) elevation accuracy in oil palm plantation for IFSAR improvement

    NASA Astrophysics Data System (ADS)

    Muhadi, N. A.; Abdullah, A. F.; Kassim, M. S. M.

    2016-06-01

    In order to ensure high oil palm productivity, the plantation site should be chosen wisely. Slope is one of the essential factors that needs to be taken into consideration during site selection. A high-quality map of the plantation area with elevation information is needed for decision-making, especially when dealing with hilly and steep terrain. Therefore, accurate digital elevation models (DEMs) are required. This research aims to increase the accuracy of Interferometric Synthetic Aperture Radar (IFSAR) DEMs by integrating Terrestrial Laser Scanner (TLS) data. However, the focus of this paper is to evaluate the z-value accuracy of TLS data against Real-Time Kinematic GPS (RTK-GPS) as a reference. In addition, this paper studied the importance of the filtering process in developing accurate DEMs. From this study, it is concluded that the differences in z-values between TLS and IFSAR were small when the points were located on the route and when the TLS data had been filtered. This paper also concludes that the TLS should be set up on the route to reduce elevation error.

  19. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    NASA Astrophysics Data System (ADS)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity due to the likelihood of human error; the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we have studied and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process; specifically to defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and to actual wafer prints.

  20. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  1.  Noninvasive markers of fibrosis: key concepts for improving accuracy in daily clinical practice.

    PubMed

    Duarte-Rojo, Andrés; Altamirano, José Trinidad; Feld, Jordan J

    2012-01-01

    Noninvasive markers of fibrosis have emerged as an alternative to the staging of fibrosis by means of liver biopsy. Apart from being noninvasive and thus lacking the adverse effects of liver biopsy, they offer advantages such as reduced risk of sampling error, objectivity in the interpretation of the result, suitability for repeated measurements and lower cost. Many studies have validated different panels of blood markers and imaging/transient elastography for the estimation of fibrosis with acceptable accuracy. Clinical scenarios leading to inaccurate or failed estimation must be acknowledged, as well as the fact that the performance of blood markers and transient elastography, and their diagnostic cut-off values, vary among specific liver diseases. The combination of two blood markers, or of a blood marker and transient elastography, has been shown to increase the accuracy of the estimation. Further, unlike liver biopsy, noninvasive markers of fibrosis are not associated with a ceiling effect once cirrhosis is identified, but can discriminate early from advanced stages of cirrhosis. Longitudinal studies have shown their utility as predictors of complications from portal hypertension and mortality, outperforming liver biopsy. In conclusion, noninvasive markers of fibrosis provide major advantages over liver biopsy. The reported performance of some of the available tests, particularly when used in combination, makes them a reliable tool that is very attractive for daily clinical practice.

  2. Iterative restoration algorithms for improving the range accuracy in imaging laser radar

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Yan, Huimin; Zhang, Xiuda; Shangguan, Wangpin; Su, Heng

    2010-11-01

    Scannerless imaging laser radar has been a focus of research in recent years for its fast imaging speed and high resolution. We introduce a three-dimensional imaging laser radar using an intensified CCD as the receiver, operated with constant gain and linearly modulated gain. The distance map of a scene is obtained from two intensity images. According to the transmission characteristics of the imaging system, a model of the degradation of the gray images is established, and the range accuracy of the imaging laser radar based on this model is analyzed. The results show that in fast-distance-varying regions the range accuracy is related to the reflectivity, the actual distance, and other factors, while in flat areas it is mainly limited by shot noise. Based on the causes of measurement error and the distribution characteristics of the noise, a method that applies iterative restoration algorithms to the obtained intensity images is presented. Simulations were carried out, and the results show that the root mean square error of the distance map obtained with this method is decreased by 50% compared with the directly measured distance map. Finally, restoration results for radar images are demonstrated to verify the effectiveness of this method.

  3. Recipe for Success: An Updated Parents' Guide to Improving Colorado Schools and Student Achievement. Second Edition.

    ERIC Educational Resources Information Center

    Taher, Bonnie; Durr, Pamela

    This guide describes ways that parents can help improve student achievement and school quality. It answers such questions as how to choose the right early-education opportunity for a preschooler, how to make sure a 5-year-old is ready for school, how to help a daughter do well in school, how to work with a daughter's or son's teachers, how to help…

  4. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection is an attractive technology to generate rapid genetic gains in switchgrass and ...

  5. Reconciling multiple data sources to improve accuracy of large-scale prediction of forest disease incidence

    USGS Publications Warehouse

    Hanks, E.M.; Hooten, M.B.; Baker, F.A.

    2011-01-01

    Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. © 2011 by the Ecological Society of America.
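    The borrowing-of-strength idea can be illustrated with a toy grid-approximation posterior (the counts and model below are assumed for illustration, not the study's data or its full hierarchical model): a large operational survey with imperfect detection probability d and a small intensive survey treated as accurate jointly constrain the true prevalence p.

```python
import numpy as np

# Grid posterior for true prevalence p with uniform priors on p and on
# the operational survey's detection probability d. Model (assumed):
#   k_op  ~ Binomial(n_op,  p * d)   # operational survey under-detects
#   k_int ~ Binomial(n_int, p)       # intensive survey assumed accurate
p = np.linspace(0.001, 0.999, 400)[:, None]
d = np.linspace(0.001, 0.999, 400)[None, :]

n_op, k_op = 1000, 110     # operational survey observed ~11% (toy counts)
n_int, k_int = 50, 28      # intensive survey observed ~56% (toy counts)

like = ((p * d) ** k_op * (1 - p * d) ** (n_op - k_op)
        * p ** k_int * (1 - p) ** (n_int - k_int))
post_p = like.sum(axis=1)          # marginalize out detection probability
post_p /= post_p.sum()
mean_p = float((p[:, 0] * post_p).sum())
print(f"posterior mean prevalence: {mean_p:.2f}")
```

Note how the small, accurate survey dominates the prevalence estimate while the large survey mainly informs the detection probability.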

  6. Method for improving accuracy of virus titration: standardization of plaque assay for Junin virus.

    PubMed

    Bushar, G; Sagripanti, J L

    1990-10-01

    Titrating infective virus is one of the most important and common techniques in virology. However, after many years of widespread use, the parameters governing the accuracy of titration values are still not well understood. It was found that under conditions currently used for virus titration, only a small percentage of virus in the inoculum is adsorbed onto the cells and thereby detected in the titration assay. The objective of our work was to establish the conditions for a plaque assay which could estimate more accurately the titer of Junin virus. Two different stain methods were compared and several parameters governing plaque formation were studied. The volume of the inoculum appeared as the most important factor affecting observed titer. A linear relationship between the volume of inoculum and the reciprocal apparent titer allowed us to estimate an absolute titer by extrapolation. The approach described here is likely to be applicable to the more accurate estimation of the titer of a wide range of viruses.
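    The extrapolation step can be sketched as follows (the data points are hypothetical; the linear form 1/titer = a + b·volume is the relationship reported in the abstract): fit the reciprocal apparent titer against inoculum volume, then extrapolate to zero volume.

```python
import numpy as np

# Hypothetical plaque-assay readings: apparent titer (PFU/mL) observed
# with increasing inoculum volumes (mL). Larger volumes depress the
# apparent titer because a smaller fraction of the virus adsorbs.
volumes = np.array([0.1, 0.2, 0.4, 0.8])
apparent_titer = np.array([2.0e6, 1.6e6, 1.2e6, 0.8e6])

# Fit 1/titer = a + b * volume and extrapolate to volume -> 0;
# the intercept a is the reciprocal of the estimated absolute titer.
b, a = np.polyfit(volumes, 1.0 / apparent_titer, 1)
absolute_titer = 1.0 / a
print(f"extrapolated absolute titer: {absolute_titer:.3g} PFU/mL")
```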

  7. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  8. Accuracy improvement of protrusion angle of carbon nanotube tips by precision multiaxis nanomanipulator

    SciTech Connect

    Young Song, Won; Young Jung, Ki; O, Beom-Hoan; Park, Byong Chon

    2005-02-01

    In order to manufacture a carbon nanotube (CNT) tip in which the attachment angle and position of the CNT are precisely adjusted, a nanomanipulator was installed inside a scanning electron microscope (SEM). A CNT tip, an atomic force microscopy (AFM) probe to which a nanotube is attached, is known to be the most appropriate probe for measuring high-aspect-ratio features. The developed nanomanipulator has two sets of modules with three translational degrees of freedom and one rotational degree of freedom, positioned to an accuracy of tens of nanometers, enabling the manufacture of more accurate CNT tips. The present study produced a CNT tip with an attachment-angle error of less than 10° through three-dimensional manipulation of a multiwalled carbon nanotube and an AFM probe inside the SEM.

  9. The Application of Digital Pathology to Improve Accuracy in Glomerular Enumeration in Renal Biopsies

    PubMed Central

    Troost, Jonathan P.; Gasim, Adil; Bagnasco, Serena; Avila-Casado, Carmen; Johnstone, Duncan; Hodgin, Jeffrey B.; Conway, Catherine; Gillespie, Brenda W.; Nast, Cynthia C.; Barisoni, Laura; Hewitt, Stephen M.

    2016-01-01

    Background In renal biopsy reporting, quantitative measurements, such as glomerular number and percentage of globally sclerotic glomeruli, are central to diagnostic accuracy and prognosis. The aim of this study is to determine the number of glomeruli and the percentage of globally sclerotic glomeruli in renal biopsies by means of registration of serial tissue sections and manual enumeration, compared to the numbers in pathology reports from routine light microscopic assessment. Design We reviewed 277 biopsies from the Nephrotic Syndrome Study Network (NEPTUNE) digital pathology repository, enumerating 9,379 glomeruli by means of whole slide imaging (WSI). Glomerular number and the percentage of globally sclerotic glomeruli are values routinely recorded in the official renal biopsy pathology report from the 25 participating centers. Two general trends in reporting were noted: total number per biopsy or average number per level/section. Both of these approaches were assessed for their accuracy in comparison to the analogous numbers of annotated glomeruli on WSI. Results The number of glomeruli annotated was consistently higher than the number reported (p<0.001); this difference was proportional to the number of glomeruli. In contrast, the percentages of globally sclerotic glomeruli were similar when calculated on total glomeruli, but greater in FSGS when calculated on the average number of glomeruli (p<0.01). The difference in percent globally sclerotic between annotated glomeruli and those recorded in pathology reports was significant when global sclerosis was greater than 40%. Conclusions Although glass slides were not available for direct comparison to whole slide image annotation, this study indicates that routine manual light microscopy assessment of the number of glomeruli is inaccurate, and the magnitude of this error is proportional to the total number of glomeruli. PMID:27310011

  10. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change of wound surface area over time is a useful parameter in assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when it is properly applied, but a common problem is the camera lens orientation when taking a picture. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that using 2 rulers placed in parallel below and above the wound for calibration increases the precision of area measurement on average 3.8-fold in comparison to measurement with one ruler used for calibration. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry with 2 rulers than with the Visitrak device, the Silhouette Mobile device or the AreaMe software-based method. Conclusion Calibration with 2 rulers in digital planimetry remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
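    A minimal sketch of the two-ruler calibration (geometry and numbers assumed for illustration): each ruler yields a pixel-per-millimetre scale in its own plane, and averaging the scales from a ruler below and a ruler above the wound approximates the scale at the wound plane, compensating for camera tilt.

```python
# Sketch of two-ruler calibration (assumed geometry): one ruler placed
# below and one above the wound. Each ruler yields a pixel-per-mm scale;
# their average approximates the scale at the wound plane, which lies
# between the two calibration planes.
def wound_area_mm2(area_px: float, ruler1_px: float, ruler2_px: float,
                   ruler_len_mm: float = 100.0) -> float:
    scale1 = ruler1_px / ruler_len_mm   # px/mm at the first ruler
    scale2 = ruler2_px / ruler_len_mm   # px/mm at the second ruler
    scale = (scale1 + scale2) / 2.0     # mean scale at the wound plane
    return area_px / scale**2           # px^2 -> mm^2

# Illustrative numbers: a 100 mm ruler spans 410 px below and 390 px above.
print(wound_area_mm2(48_000, 410.0, 390.0))  # area in mm^2
```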

  11. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    ScienceCinema

    Olivi, Alessandro, M.D.

    2016-07-12

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  12. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    SciTech Connect

    Olivi, Alessandro, M.D.

    2010-08-28

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  13. A localized orbital analysis of the thermochemical errors in hybrid density functional theory: achieving chemical accuracy via a simple empirical correction scheme.

    PubMed

    Friesner, Richard A; Knoll, Eric H; Cao, Yixiang

    2006-09-28

    This paper describes an empirical localized orbital correction model which improves the accuracy of density functional theory (DFT) methods for the prediction of thermochemical properties for molecules of first and second row elements. The B3LYP localized orbital correction version of the model improves B3LYP DFT atomization energy calculations on the G3 data set of 222 molecules from a mean absolute deviation (MAD) from experiment of 4.8 to 0.8 kcal/mol. The almost complete elimination of large outliers and the substantial reduction in MAD yield overall results comparable to the G3 wave-function-based method; furthermore, the new model has zero additional computational cost beyond standard DFT calculations. The following four classes of correction parameters are applied to a molecule based on standard valence bond assignments: corrections to atoms, corrections to individual bonds, corrections for neighboring bonds of a given bond, and radical environmental corrections. Although the model is heuristic and is based on a 22 parameter multiple linear regression to experimental errors, each of the parameters is justified on physical grounds, and each provides insight into the fundamental limitations of DFT, most importantly the failure of current DFT methods to accurately account for nondynamical electron correlation.
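    The form of the correction scheme, a multiple linear regression of DFT-versus-experiment errors on counts of localized correction features, can be sketched with made-up numbers (the feature counts and errors below are illustrative, not the paper's 22-parameter fit):

```python
import numpy as np

# Sketch of the correction scheme's structure: each molecule is described
# by counts of correction features (atom types, bond types, neighboring
# bonds, radical environments); parameters are fitted by multiple linear
# regression to the DFT-vs-experiment errors. All values are hypothetical.
counts = np.array([[2, 1, 0],
                   [1, 0, 2],
                   [0, 2, 1],
                   [3, 1, 1]], dtype=float)   # feature counts per molecule
dft_error = np.array([1.9, 2.1, 1.0, 3.1])    # kcal/mol, hypothetical

params, *_ = np.linalg.lstsq(counts, dft_error, rcond=None)
corrected_error = dft_error - counts @ params  # residual after correction
print(np.abs(corrected_error).mean())          # MAD after correction
```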

  14. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker.

    PubMed

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse of the root sum square of the Cramér-Rao bound and the focal-length drift error of each tracked star as its weight, to enhance the pointing accuracy of a star tracker. In this technique, stars that are brighter, at a low angular rate, or located towards the center of the star field are given a higher weight in the attitude determination process, and thus the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, particularly for sky zones with a non-uniform distribution of stars. Moreover, by using the iteratively weighted center-of-gravity algorithm as the new centroiding method for the QUEST algorithm, the attitude uncertainty can be further reduced to 44% of its original value with negligible additional computing load.
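    The weighting idea can be illustrated with a sketch of the weighted Davenport q-method, the eigenvalue problem that QUEST approximates (the weight values and star vectors here are assumptions for illustration): each tracked star contributes a body-frame unit vector, a catalog reference vector, and a weight such as the inverse root-sum-square of its Cramér-Rao bound and focal-length drift errors.

```python
import numpy as np

# Weighted Davenport q-method sketch: the attitude quaternion is the
# eigenvector of the 4x4 K matrix belonging to its largest eigenvalue.
# Weights w_i are per-star (e.g., inverse RSS of error bounds, assumed).
def weighted_attitude(body, ref, w):
    w = np.asarray(w, float) / np.sum(w)            # normalize weights
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, body, ref))
    sigma = np.trace(B)
    z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, -1]                              # quaternion [qx,qy,qz,qw]
```

With perfect, noise-free observations the recovered quaternion equals the true attitude regardless of the weights; the weights matter once each star's vector carries a different error level.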

  15. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse of the root sum square of the Cramér-Rao bound and the focal-length drift error of each tracked star as its weight, to enhance the pointing accuracy of a star tracker. In this technique, stars that are brighter, at a low angular rate, or located towards the center of the star field are given a higher weight in the attitude determination process, and thus the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, particularly for sky zones with a non-uniform distribution of stars. Moreover, by using the iteratively weighted center-of-gravity algorithm as the new centroiding method for the QUEST algorithm, the attitude uncertainty can be further reduced to 44% of its original value with negligible additional computing load.

  16. Pseudo-inverse linear discriminants for the improvement of overall classification accuracies.

    PubMed

    Daqi, Gao; Ahmed, Dastagir; Lili, Guo; Zejian, Wang; Zhe, Wang

    2016-09-01

    This paper studies the learning and generalization performance of pseudo-inverse linear discriminants (PILDs) based on the minimum sum-of-squared-error (MS²E) processing criterion and the overall classification accuracy (OCA) targeting criterion. There is little practical significance in proving the equivalence between a PILD whose desired outputs are in inverse proportion to the number of class samples and a Fisher linear discriminant (FLD) with totally projected mean thresholds. When the desired outputs of each class are assigned a fixed value, a PILD is partly equivalent to an FLD. With the customary desired outputs {1, -1}, a practicable threshold is obtained that is related only to the sample sizes. If the desired outputs of each sample are changeable, a PILD has nothing in common with an FLD. The optimal threshold may thus be singled out from multiple empirical thresholds related to sample sizes and distribution regions. Based on the MS²E criteria and the actual algebraic distances, an iterative learning strategy for the PILD is proposed, whose outstanding advantages are a limited number of epochs, no learning rate, and no risk of divergence. Extensive experimental results on benchmark datasets have verified that iterative PILDs with optimal thresholds have good learning and generalization performance, and even reach the top OCAs for some datasets among existing classifiers.
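    The core construction, solving the minimum sum-of-squared-error problem through the pseudo-inverse with desired outputs {1, -1}, can be sketched as follows (toy data; the paper's iterative threshold-selection strategy is not reproduced):

```python
import numpy as np

def pild_fit(X, y):
    # Append a bias column and solve the MS^2E problem in closed form:
    # w = pinv(X~) t, with desired outputs t in {+1, -1}.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.linalg.pinv(Xb) @ y

def pild_predict(X, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xb @ w)

# Toy two-class problem (illustrative points only).
X = np.array([[2.0, 2], [3, 2], [2, 3], [-2, -2], [-3, -2], [-2, -3]])
y = np.array([1.0, 1, 1, -1, -1, -1])
w = pild_fit(X, y)
print(pild_predict(X, w))
```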

  17. Improving the predictive accuracy of hurricane power outage forecasts using generalized additive models.

    PubMed

    Han, Seung-Ryong; Guikema, Seth D; Quiring, Steven M

    2009-10-01

    Electric power is a critical infrastructure service after hurricanes, and rapid restoration of electric power is important in order to minimize losses in the impacted areas. However, rapid restoration of electric power after a hurricane depends on obtaining the necessary resources, primarily repair crews and materials, before the hurricane makes landfall and then appropriately deploying these resources as soon as possible after the hurricane. This, in turn, depends on having sound estimates of both the overall severity of the storm and the relative risk of power outages in different areas. Past studies have developed statistical, regression-based approaches for estimating the number of power outages in advance of an approaching hurricane. However, these approaches have either not been applicable for future events or have had lower predictive accuracy than desired. This article shows that a different type of regression model, a generalized additive model (GAM), can outperform the types of models used previously. This is done by developing and validating a GAM based on power outage data during past hurricanes in the Gulf Coast region and comparing the results from this model to the previously used generalized linear models.

  18. Improving the accuracy of femtosecond optical field measurements and two-dimensional spectra

    NASA Astrophysics Data System (ADS)

    Yetzbacher, Michael Keith

    Femtosecond two-dimensional spectroscopy is an experimental technique for investigating coupling between optical and infrared excitations in molecules, bulk solids, and nanomaterials. In experimental schemes to date, the measured signal is unavoidably distorted by optical density and directional filtering effects. A three-dimensional Fourier transform (3DFT) algorithm capable of modeling distorted signals for arbitrary nonlinear response and arbitrary optical density within the limit of negligible nonlinear distortion is presented. The signal is calculated with Bloch, Kubo and Brownian oscillator models for two-level systems and a 4-level Bloch model; differences between ideal and distorted spectra are discussed. Analytic limits are derived and used to check the 3DFT algorithm for the Bloch model. Equations for modeling experimental distortions are given, as well as guidelines for minimizing experimental distortions. Finally, the instrumental distortions for signals detected with Fourier transform spectral interferometry are discussed. The effects of asymmetric spectrometer resolution and pixellated detectors are examined. Algorithms are presented to measure and correct for the effective linespread of the spectrograph and thereby enable more accurate signal phase recovery. Instrumental limitations on the accuracy of spectral interferometry are discussed.

  19. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    In 1977 and 1978, general-purpose centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since their introduction, overall computational efficiency has been improved through a number of algorithmic changes: the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, various ways of reducing inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained that is competitive with specialized codes.

  20. Combination of pulse volume recording (PVR) parameters and ankle-brachial index (ABI) improves diagnostic accuracy for peripheral arterial disease compared with ABI alone.

    PubMed

    Hashimoto, Tomoko; Ichihashi, Shigeo; Iwakoshi, Shinichi; Kichikawa, Kimihiko

    2016-06-01

    The ankle-brachial index (ABI) measurement is widely used as a screening tool to detect peripheral arterial disease (PAD). With the advent of the oscillometric ABI device incorporating a system for the measurement of pulse volume recording (PVR), not only the ABI but also other parameters, such as the percentage of mean arterial pressure (%MAP) and the upstroke time (UT), can be obtained automatically. The purpose of the present study was to compare the diagnostic accuracy for PAD of ABI alone with that of the combination of ABI, %MAP and UT. This study included 108 consecutive patients, on whom 216 limb measurements were performed. The sensitivity, specificity and positive and negative predictive values of ABI, %MAP, UT and their combination were evaluated against CT angiography, which served as the gold standard for the detection of PAD. The diagnostic accuracy as well as the optimal cutoff values of %MAP and UT were evaluated using receiver operating characteristic (ROC) curve analysis. The combination of ABI, %MAP and UT achieved higher sensitivity, negative predictive value and accuracy than ABI alone, particularly for mild stenosis. The areas under the ROC curve for the detection of 50% stenosis with UT and %MAP were 0.798 and 0.916, respectively. The optimal cutoff values of UT and %MAP for detecting arteries with ≥50% stenosis were 183 ms and 45%, respectively. The combination of ABI, %MAP and UT improved the diagnostic accuracy for PAD. Considering the values of %MAP and UT in addition to the ABI may have a significant impact on the detection of early PAD lesions.
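    As a toy illustration, a screening rule combining the three parameters might flag a limb when any one of them crosses its cutoff (the UT and %MAP cutoffs are those reported above; the ABI cutoff of 0.9 is the conventional threshold, and the specific OR-combination is an assumption, as the abstract does not state how the parameters were combined):

```python
def pad_screen(abi: float, map_pct: float, ut_ms: float) -> bool:
    """Flag a limb as suspicious for >=50% stenosis if any parameter
    crosses its cutoff: ABI < 0.9 (conventional threshold),
    %MAP > 45, or UT > 183 ms (cutoffs reported in the study)."""
    return abi < 0.9 or map_pct > 45.0 or ut_ms > 183.0

print(pad_screen(1.05, 30.0, 150.0))   # all parameters normal -> False
print(pad_screen(1.05, 52.0, 150.0))   # %MAP abnormal -> True
```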

  1. Improving the Quality of Nursing Home Care and Medical-Record Accuracy with Direct Observational Technologies

    ERIC Educational Resources Information Center

    Schnelle, John F.; Osterweil, Dan; Simmons, Sandra F.

    2005-01-01

    Nursing home medical-record documentation of daily-care occurrence may be inaccurate, and information is not documented about important quality-of-life domains. The inadequacy of medical record data creates a barrier to improving care quality, because it supports an illusion of care consistent with regulations, which reduces the motivation and…

  2. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), which consists of fixing the last observed data point on the LS extrapolation curve; the curve customarily comprises a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed: first, the amplitude of the more stable of the annual wobble (AW) and the Chandler wobble (CW) is estimated using data of a longer time span, and then the remaining parameters are estimated using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of the AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to an apparent improvement.
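    The key modification, forcing the LS extrapolation curve through the last observed data point, can be sketched as an equality-constrained least-squares fit solved via a KKT system (the model and data are illustrative: a linear trend plus one annual sinusoid, not an actual EOP series):

```python
import numpy as np

def fit_fixed_endpoint(t, y, period=1.0):
    # Design matrix: degree-1 polynomial plus one sinusoid of the given period.
    A = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    a, yN = A[-1], y[-1]            # equality constraint: a @ c == yN
    n = A.shape[1]
    # KKT system for: minimize ||A c - y||^2  subject to  a @ c = yN.
    KKT = np.zeros((n + 1, n + 1))
    KKT[:n, :n] = 2 * A.T @ A
    KKT[:n, n] = a
    KKT[n, :n] = a
    rhs = np.concatenate([2 * A.T @ y, [yN]])
    c = np.linalg.solve(KKT, rhs)[:n]
    return A, c

t = np.linspace(0.0, 4.0, 200)                     # time in years
y = 0.1 + 0.02 * t + 0.05 * np.sin(2 * np.pi * t)  # synthetic series
A, c = fit_fixed_endpoint(t, y)
print(abs(A[-1] @ c - y[-1]))                      # ~0: endpoint is fixed
```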

  3. Electron Microprobe Analysis of Hf in Zircon: Suggestions for Improved Accuracy of a Difficult Measurement

    NASA Astrophysics Data System (ADS)

    Fournelle, J.; Hanchar, J. M.

    2013-12-01

    It is not commonly recognized as such, but the accurate measurement of Hf in zircon is not a trivial analytical issue. This is important to assess because Hf is often used as an internal standard for trace element analyses of zircon by LA-ICPMS. The issues pertaining to accuracy revolve around: (1) whether the Hf Mα or Lα line is used; (2) what accelerating voltage is applied if Zr Lα is also measured; and (3) what Hf standard is used. Weidenbach et al.'s (2004) study of the 91500 zircon demonstrated the spread (in accuracy) of possible EPMA values for six EPMA labs, 2 of which used Hf Mα, 3 used Hf Lα, and one used Hf Lβ; standards ranged from HfO2, a ZrO2-HfO2 compound, and Hf metal to hafnon. Weidenbach et al. took the ID-TIMS value (0.695 wt.% Hf) as correct, and not one of the EPMA labs came close to that value (3 were low and 3 were high). Those data suggest: (1) that there is a systematic underestimation of the 0.695 wt.% Hf (ID-TIMS) value if Hf Mα is used, most likely an issue with the matrix correction, as the analytical lines and absorption edges of Zr Lα, Si Kα and Hf Mα are rather tightly packed in the electromagnetic spectrum; mass absorption coefficients are easily in error (e.g., Donovan's determination of the MAC of Hf by Si Kα of 5061 differs from the typically used Henke value of 5449; Donovan et al., 2002); and (2) for the Hf Lα line, the second-order Zr Kα line interferes with Hf Lα if the accelerating voltage is greater than 17.99 keV. If this higher voltage is used and differential-mode PHA is applied, only a portion of the interference is removed (e.g., removal of escape peaks), causing an overestimation of the Hf content. Unfortunately, it is virtually impossible to apply an interference correction in this case, as an Hf-free Zr probe standard cannot be obtained. We have examined many of the combinations used by those six EPMA labs and concluded that the optimal EPMA is done with Hf

  4. Improved solution accuracy for TDRSS-based TOPEX/Poseidon orbit determination

    NASA Technical Reports Server (NTRS)

    Doll, C. E.; Mistretta, G. D.; Hart, R. C.; Oza, D. H.; Bolvin, D. T.; Cox, C. M.; Nemesure, M.; Niklewski, D. J.; Samii, M. V.

    1994-01-01

    Orbit determination results are obtained by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) using a batch-least-squares estimator available in the Goddard Trajectory Determination System (GTDS) and an extended Kalman filter estimation system to process Tracking and Data Relay Satellite (TDRS) System (TDRSS) measurements. GTDS is the operational orbit determination system used by the FDD in support of the Ocean Topography Experiment (TOPEX)/Poseidon spacecraft navigation and health and safety operations. The extended Kalman filter was implemented in an orbit determination analysis prototype system, closely related to the Real-Time Orbit Determination System/Enhanced (RTOD/E) system. In addition, the Precision Orbit Determination (POD) team within the GSFC Space Geodesy Branch generated an independent set of high-accuracy trajectories to support the TOPEX/Poseidon scientific data. These latter solutions use the geodynamics (GEODYN) orbit determination system with laser ranging and Doppler Orbitography and Radiopositioning integrated by satellite (DORIS) tracking measurements. The TOPEX/Poseidon trajectories were estimated for November 7 through November 11, 1992, the timeframe under study. Independent assessments were made of the consistencies of solutions produced by the batch and sequential methods. The batch-least-squares solutions were assessed based on the solution residuals, while the sequential solutions were assessed based on primarily the estimated covariances. The batch-least-squares and sequential orbit solutions were compared with the definitive POD orbit solutions. The solution differences were generally less than 2 meters for the batch-least-squares and less than 13 meters for the sequential estimation solutions. After the sequential estimation solutions were processed with a smoother algorithm, position differences with POD orbit solutions of less than 7 meters were obtained. The differences among the POD, GTDS, and filter

  5. A case control study to improve accuracy of an electronic fall prevention toolkit.

    PubMed

    Dykes, Patricia C; I-Ching, Evita Hou; Soukup, Jane R; Chang, Frank; Lipsitz, Stuart

    2012-01-01

    Patient falls are a serious and commonly reported adverse event in hospitals. In 2009, our team conducted the first randomized controlled trial of a health information technology-based intervention that significantly reduced falls in acute care hospitals. However, some patients on intervention units with access to the electronic toolkit still fell. The purpose of this case-control study was to use data mining and modeling techniques to identify the factors associated with falls in hospitalized patients while the toolkit was in place. Our ultimate aim was to apply our findings to improve the toolkit logic and to generate practice recommendations. The results of our evaluation suggest that the fall prevention toolkit logic is accurate, but that strategies are needed to improve adherence to the fall prevention intervention recommendations generated by the electronic toolkit.

  6. Improving Accuracy and Precision in Gross Alpha and Beta Counting with Proportional Detectors

    SciTech Connect

    Abrantes, J.; Pinhao, N. R.; Melo, J.; Madruga, M. J.

    2008-08-14

    Following ambiguous results of an intercomparison exercise in 2004, we have reevaluated the sample preparation, measurement procedures and data treatment for gross alpha and beta counting. It was found that the Portuguese standards NP 4332:1996 and NP 4330:1996 do not provide enough background for the correct treatment of data. This paper describes the improvements made in measuring gross alpha and beta activities with proportional detectors in simultaneous mode. These improvements include a crosstalk correction, applied both to the counting and to the minimum detectable activity; the use of adequate statistical tools for data analysis; the use of QC charts to monitor the stability of the detectors and of the background; and attention to the traceability of results. The calibration curves were computed with an appropriate number of replicates and with a thorough evaluation of the uncertainty budget. Based on this model, we established new criteria for reporting results.
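    The crosstalk correction can be sketched as inverting a 2×2 mixing matrix (the crosstalk fractions and count rates below are assumed for illustration): a fraction of true alpha events registers in the beta channel and vice versa, so the observed channel rates are a linear mixture of the true rates.

```python
import numpy as np

# Crosstalk model (illustrative): observed = M @ true, where M mixes the
# true alpha/beta rates into the two counting channels. Inverting M
# recovers the crosstalk-corrected rates.
tau_ab = 0.15    # fraction of alpha events counted in the beta channel (assumed)
tau_ba = 0.002   # fraction of beta events counted in the alpha channel (assumed)
M = np.array([[1 - tau_ab, tau_ba],
              [tau_ab, 1 - tau_ba]])

observed = np.array([42.0, 137.0])   # counts/s in [alpha, beta] channels
true_alpha, true_beta = np.linalg.solve(M, observed)
print(true_alpha, true_beta)
```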

  7. An Approach to Improve Accuracy of Optical Tracking Systems in Cranial Radiation Therapy

    PubMed Central

    Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris

    2015-01-01

    This work presents a new method for the accurate estimation of soft tissue thickness based on near infrared (NIR) laser measurements. By using this estimation, our goal is to develop an improved non-invasive marker-less optical tracking system for cranial radiation therapy. Results are presented for three subjects and reveal an RMS error of less than 0.34 mm. PMID:26180663

  8. Integrating machine learning and physician knowledge to improve the accuracy of breast biopsy.

    PubMed

    Dutra, I; Nassif, H; Page, D; Shavlik, J; Strigel, R M; Wu, Y; Elezaby, M E; Burnside, E

    2011-01-01

    In this work we show that combining physician rules and machine-learned rules may improve the performance of a classifier that predicts whether a breast cancer is missed on percutaneous, image-guided breast core needle biopsy (subsequently referred to as "breast core biopsy"). Specifically, we show how advice in the form of logical rules, derived by sub-specialists, i.e., fellowship-trained breast radiologists (subsequently referred to as "our physicians"), can guide the search in an inductive logic programming system and improve the performance of a learned classifier. Our dataset of 890 consecutive benign breast core biopsy results, along with corresponding mammographic findings, contains 94 cases that were deemed non-definitive by a multidisciplinary panel of physicians, of which 15 were upgraded to malignant disease at surgery. Our goal is to predict upgrade prospectively and avoid surgery in women who do not have breast cancer. Our results, some of which trended toward significance, show evidence that inductive logic programming may produce better results for this task than traditional propositional algorithms with default parameters. Moreover, we show that adding knowledge from our physicians into the learning process may improve the performance of a classifier trained only on data. PMID:22195087

  9. Assimilating aircraft-based measurements to improve forecast accuracy of volcanic ash transport

    NASA Astrophysics Data System (ADS)

    Fu, G.; Lin, H. X.; Heemink, A. W.; Segers, A. J.; Lu, S.; Palsson, T.

    2015-08-01

    The 2010 Eyjafjallajökull volcano eruption had serious consequences for civil aviation and has prompted considerable research on volcanic ash transport forecasting in recent years. To forecast volcanic ash transport after an eruption onset, a volcanic ash transport and diffusion model (VATDM) must be run with Eruption Source Parameters (ESPs) such as plume height and mass eruption rate as input, and with data assimilation techniques to continuously improve the initial conditions of the forecast. Reliable and accurate ash measurements are crucial for providing successful ash cloud advice. In this paper, simulated aircraft-based measurements, one type of volcanic ash measurement, are assimilated into a transport model to identify the potential benefit of such observations in an assimilation system. The results show that assimilating aircraft-based measurements can significantly improve the estimated state of the ash clouds, thereby providing an improved forecast for aviation advice. We also show that, to advise on a given flight level, aircraft-based measurements should preferably be taken at that level to obtain the best performance there. Furthermore, it is shown that accurate knowledge of the uncertainties of the ESPs and the measurements is of great importance for producing advice acceptable to aviation decision makers.
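
    The measurement update at the heart of such an assimilation system can be illustrated with a scalar Kalman-style analysis step for a single grid cell; all values are hypothetical and the paper's actual assimilation scheme is not reproduced here.

```python
# Scalar analysis step: blend a forecast with an aircraft-based observation,
# weighting by the respective error variances. All numbers are illustrative.
x_f = 2.0e-3      # forecast ash concentration (kg/m^3)
var_f = 1.0e-6    # forecast error variance
y = 1.2e-3        # aircraft-based measurement at the same location
var_o = 2.5e-7    # measurement error variance

K = var_f / (var_f + var_o)    # gain: how much to trust the observation
x_a = x_f + K * (y - x_f)      # analysis (updated) state
var_a = (1.0 - K) * var_f      # analysis variance, reduced by the update
```

    Because the observation is assumed four times more precise than the forecast, the analysis moves most of the way toward the measurement, mirroring how accurate aircraft data pull the modelled ash state toward reality.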

  10. Progress in Improving the Accuracy of Hugoniot Equation-of-State Measurements at the AWE Helen Laser.

    NASA Astrophysics Data System (ADS)

    Rothman, Stephen; Evans, Andrew; Graham, Peter; Horsfield, Colin

    1998-11-01

    For several years we have been conducting a series of equation-of-state (EOS) experiments using the Helen laser at AWE, with the aim of 1% accuracy in shock velocity measurements(A.M. Evans, N.J. Freeman, P. Graham, C.J. Horsfield, S.D. Rothman, B.R. Thomas and A.J. Tyrrell, Laser and Particle Beams, vol. 14, no. 2, pp. 113-123, 1996.). Our best results to date are 1.2% in velocity on copper and aluminium double-step targets, which leads to 4% in copper principal Hugoniot pressures. The accuracy in pressure depends not only on the two measured shock velocities but also on the target density and the EOS of Al, which is used here as a standard. In order to quantify sources of error and to improve accuracy we have measured the preheat-induced expansion of target surfaces using a Michelson interferometer. Analysis of streaks from this has also given reflectivity measurements. We are also investigating the use of a shaped laser pulse designed to give constant pressure for 2.5 ns, which will reduce the fractional errors in both step transit time and height by allowing the use of a thicker step.

  11. New measures improve the accuracy of the directed-lie test when detecting deception using a mock crime.

    PubMed

    Bell, Brian G; Kircher, John C; Bernhardt, Paul C

    2008-06-01

    The present study tested the accuracy of probable-lie and directed-lie polygraph tests. One hundred and twenty men and women were recruited from the general community and paid $30 to participate in a mock crime experiment. Equal numbers of males and females were assigned to either the guilty or innocent condition with equal numbers in each group receiving either a probable-lie or a directed-lie polygraph test resulting in a 2 x 2 design with two experimental factors (test type and deceptive condition). Half of the participants were guilty and half were innocent of committing a mock theft of $20 from a purse. All participants were paid a $50 bonus if they could convince the polygraph examiner that they were innocent. There were no significant differences in decision accuracy between probable-lie and directed-lie tests, but respiration measures were more diagnostic for the probable-lie test. New physiological measures, skin potential excursion and a new respiratory measure improved the accuracy of the directed-lie test such that 86% of the innocent participants and 93% of the guilty participants were correctly classified.

  12. An evaluation of the effectiveness of PROMPT therapy in improving speech production accuracy in six children with cerebral palsy.

    PubMed

    Ward, Roslyn; Leitão, Suze; Strauss, Geoff

    2014-08-01

    This study evaluates perceptual changes in speech production accuracy in six children (3-11 years) with moderate-to-severe speech impairment associated with cerebral palsy before, during, and after participation in a motor-speech intervention program (Prompts for Restructuring Oral Muscular Phonetic Targets). An A1BCA2 single-subject research design was implemented. Subsequent to the baseline phase (phase A1), phase B targeted each participant's first intervention priority on the PROMPT motor-speech hierarchy. Phase C then targeted one level higher. Weekly speech probes were administered, containing trained and untrained words at the two levels of intervention, plus an additional level that served as a control goal. The speech probes were analysed for motor-speech movement parameters and perceptual accuracy. Analysis of the speech probe data showed that all participants recorded a statistically significant change. Between phases A1-B and B-C, 6/6 and 4/6 participants, respectively, recorded a statistically significant increase in performance level on the motor-speech movement patterns targeted during that intervention phase. The preliminary data presented in this study contribute to the evidence supporting the use of a treatment approach aligned with dynamic systems theory to improve the motor-speech movement patterns and speech production accuracy in children with cerebral palsy.

  13. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of i) quantifying the ability of backscatter to resolve grain size distributions; ii) understanding complex patterns influenced by factors other than grain size variations; iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal inlet of the Jade Bay (German North Sea), allowed 100 km2 of surficial sediment to be mapped with a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting, and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies
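
    As a point of reference for the accuracy-assessment problem the study addresses, the conventional approach is an error (confusion) matrix built from map classes versus ground-truth samples; the sketch below uses hypothetical counts and is not the paper's multivariate procedure.

```python
import numpy as np

# Hypothetical error matrix: rows = class on the acoustic map,
# columns = class from ground-truth grab samples.
# Classes: 0 = silt, 1 = fine sand, 2 = mixed sediments.
cm = np.array([[30,  5,  2],
               [ 6, 40,  4],
               [ 3,  6, 14]])

overall_accuracy = np.trace(cm) / cm.sum()          # fraction correctly mapped
users_accuracy = np.diag(cm) / cm.sum(axis=1)       # reliability of each mapped class
producers_accuracy = np.diag(cm) / cm.sum(axis=0)   # how well each true class is captured
```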

  14. Development of an algorithm to improve the accuracy of dose delivery in Gamma Knife radiosurgery

    NASA Astrophysics Data System (ADS)

    Cernica, George Dumitru

    2007-12-01

    Gamma Knife stereotactic radiosurgery has demonstrated decades of successful treatments. Despite its high spatial accuracy, the Gamma Knife's planning software, GammaPlan, uses a simple exponential as the TPR curve for all four collimator sizes, and a skull scaling device to acquire ruler measurements from which a three-dimensional spline modelling the patient's skull is interpolated. The consequences of these approximations have not been previously investigated. The true TPR curves of the four collimators were measured by blocking 200 of the 201 sources with steel plugs. Additional attenuation was provided by a 16 cm tungsten sphere, designed to enable beamlet measurements along one axis. TPR, PDD, and beamlet profiles were obtained using both an ion chamber and GafChromic EBT film for all collimators. Additionally, an in-house planning algorithm was developed that can calculate the contour of the skull directly from an image set and apply the measured beamlet data in shot time calculations. Clinical and theoretical Gamma Knife cases were imported into our algorithm. The TPR curves showed small deviations from a simple exponential curve, with average discrepancies under 1%, but with a maximum discrepancy of 2% found for the 18 mm collimator beamlet at shallow depths. The consequences for the PDD of the beamlets were slight, with a maximum difference of 1.6% found for the 18 mm collimator beamlet. Beamlet profiles of the 4 mm, 8 mm, and 14 mm collimators showed some underestimates of the off-axis ratio near the shoulders (up to 10%). The toes of the profiles were underestimated for all collimators, with differences of up to 7%. Shot times were affected by up to 1.6% due to TPR differences, but clinical cases showed deviations of no more than 0.5%. The beamlet profiles affected the dose calculations more significantly, with shot time calculations differing by as much as 0.8%.
The skull scaling affected the shot time calculations the most significantly, with differences of up to 5

  15. Modified surface loading process for achieving improved performance of the quantum dot-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Jin, Zhongxiu; Zhu, Jun; Xu, Yafeng; Zhou, Li; Dai, Songyuan

    2016-06-01

    Achieving high surface coverage of colloidal quantum dots (QDs) on TiO2 films has been challenging for quantum dot-sensitized solar cells (QDSCs). Herein, a general surface engineering approach is proposed to increase the loading of these QDs. It was found that an S2- treatment/QD re-uptake process can significantly improve the attachment of the QDs to TiO2 films. The surface concentration of the QDs was improved by ∼60%, which in turn greatly enhanced light absorption and decreased carrier recombination in the QDSCs. The resulting QDSCs with optimized QD loading exhibit a power conversion efficiency of 3.66%, 83% higher than those fabricated with standard procedures.

  16. Evaluation of Doppler shifts to improve the accuracy of primary atomic fountain clocks.

    PubMed

    Guéna, Jocelyne; Li, Ruoxin; Gibble, Kurt; Bize, Sébastien; Clairon, André

    2011-04-01

    We demonstrate agreement between measurements and ab initio calculations of the frequency shifts caused by distributed cavity phase variations in the microwave cavity of a primary atomic fountain clock. Experimental verification of the finite element models of the cavities gives the first quantitative evaluation of this leading uncertainty and allows it to be reduced to δν/ν=±8.4×10(-17). Applying these experimental techniques to clocks with improved microwave cavities will yield negligible distributed cavity phase uncertainties, less than ±1×10(-17).

  17. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2010-01-01

    The poster provides an overview of techniques to improve Mars Global Reference Atmospheric Model (Mars-GRAM) sensitivity studies at large optical depths. It was discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM, when used for sensitivity studies for TES MapYear = 0 and large optical depth values such as tau = 3, is less than realistic. A preliminary fix has been made to Mars-GRAM by adding a density factor value that was determined for tau = 0.3, 1, and 3.

  18. Quality improvement in diabetes--successful in achieving better care with hopes for prevention.

    PubMed

    Haw, J Sonya; Narayan, K M Venkat; Ali, Mohammed K

    2015-09-01

    Diabetes affects 29 million Americans and is associated with billions of dollars in health expenditures and lost productivity. Robust evidence has shown that lifestyle interventions in people at high risk for diabetes and comprehensive management of cardiometabolic risk factors like glucose, blood pressure, and lipids can delay the onset of diabetes and its complications, respectively. However, realizing the "triple aim" of better health, better care, and lower cost in diabetes has been hampered by low adoption of lifestyle interventions to prevent diabetes and poor achievement of care goals for those with diabetes. To achieve better care, a number of quality improvement (QI) strategies targeting the health system, healthcare providers, and/or patients have been evaluated in both controlled trials and real-world programs, and have shown some successes, though barriers still impede wider adoption, effectiveness, real-world feasibility, and scalability. Here, we summarize the effectiveness and cost-effectiveness data regarding QI strategies in diabetes care and discuss the potential role of quality monitoring and QI in trying to implement primary prevention of diabetes more widely and effectively. Over time, achieving better care and better health will likely help bend the ever-growing cost curve. PMID:26495771

  19. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    PubMed

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-09

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic- and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior quality of the mass resolution in a TOF-SIMS spectrum obtained by using Ar-GCIB compared to the one obtained by a bismuth liquid metal cluster ion beam and others makes it difficult to identify unknown peaks because of the mass interference from the neighboring peaks. However, in this study, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectra analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration.
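
    External mass calibration of a TOF analyzer rests on the standard relation t = k*sqrt(m/z) + t0: two calibrant peaks of known mass fix the constants, after which unknown flight times convert to m/z. The calibrant and peak values below are hypothetical, not taken from the study.

```python
import numpy as np

# Two external calibrant peaks: known m/z and measured flight times (us).
mz_cal = np.array([500.0, 2000.0])
t_cal = np.array([22.5, 42.0])

# Fit t = k*sqrt(m/z) + t0 through the calibrants.
k, t0 = np.polyfit(np.sqrt(mz_cal), t_cal, 1)

# Convert an unknown peak's flight time to a calibrated m/z value.
t_unknown = 32.0
mz_unknown = ((t_unknown - t0) / k) ** 2
```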

  20. Improving the accuracy of computed 13C NMR shift predictions by specific environment error correction: fragment referencing.

    PubMed

    Andrews, Keith G; Spivey, Alan C

    2013-11-15

    The accuracy of both Gauge-including atomic orbital (GIAO) and continuous set of gauge transformations (CSGT) (13)C NMR spectra prediction by Density Functional Theory (DFT) at the B3LYP/6-31G** level is shown to be usefully enhanced by employing a 'fragment referencing' method for predicting chemical shifts without recourse to empirical scaling. Fragment referencing refers to a process of reducing the error in calculating a particular NMR shift by consulting a similar molecule for which the error in the calculation is easily deduced. The absolute accuracy of the chemical shifts predicted when employing fragment referencing relative to conventional techniques (e.g., using TMS or MeOH/benzene dual referencing) is demonstrated to be improved significantly for a range of substrates, which illustrates the superiority of the technique particularly for systems with similar chemical shifts arising from different chemical environments. The technique is particularly suited to molecules of relatively low molecular weight containing 'non-standard' magnetic environments, e.g., α to halogen atoms, which are poorly predicted by other methods. The simplicity and speed of the technique mean that it can be employed to resolve routine structural assignment problems that require a degree of accuracy not provided by standard incremental or hierarchically ordered spherical description of environment (HOSE) algorithms. The approach is also demonstrated to be applicable when employing the MP2 method at 6-31G**, cc-pVDZ, aug-cc-pVDZ, and cc-pVTZ levels, although none of these offer advantage in terms of accuracy of prediction over the B3LYP/6-31G** DFT method.
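
    The fragment-referencing procedure reduces to simple arithmetic: the systematic error observed on a similar reference fragment is subtracted from the computed shift of the substrate. A minimal sketch with hypothetical shift values (ppm):

```python
# Hypothetical computed/experimental 13C shifts (ppm) for a carbon in a
# "non-standard" environment, e.g. alpha to a halogen.
delta_calc_substrate = 48.5   # DFT-computed shift in the molecule of interest
delta_calc_fragment = 47.0    # DFT-computed shift in a similar reference fragment
delta_exp_fragment = 42.3     # experimental shift of that fragment carbon

# The fragment's calculation error is assumed transferable to the substrate.
systematic_error = delta_calc_fragment - delta_exp_fragment
delta_corrected = delta_calc_substrate - systematic_error
```

    Because the reference fragment shares the problematic chemical environment, its known error cancels most of the substrate's error, which is the essence of the method.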

  1. Improving GlobalLand30 Artificial Type Extraction Accuracy in Low-Density Residents

    NASA Astrophysics Data System (ADS)

    Hou, Lili; Zhu, Ling; Peng, Shu; Xie, Zhenlei; Chen, Xu

    2016-06-01

    GlobalLand30 is the first 30 m resolution land cover product in the world, covering the area between 80°N and 80°S. It has ten classes: artificial cover, water bodies, woodland, lawn, bare land, cultivated land, wetland, sea area, shrub, and snow. TM imagery from Landsat is the main data source of GlobalLand30. Within the artificial surface type, one source of omission error is low-density residential areas. In TM images, low-density residential areas typically show a scattered, fragmented distribution and are surrounded by large amounts of cultivated land, which causes them to be confused with cultivated land. To solve this problem, nighttime light remote sensing imagery is used as reference data, and, building on the NDBI, the TM6 thermal band is added to compute a surface thermal radiation index, TR-NDBI (Thermal Radiation Normalized Difference Building Index), for extracting low-density residential areas. The results show that using TR-NDBI together with nighttime light remote sensing imagery is a feasible and effective method for extracting low-density residential areas.
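
    The index construction can be sketched with array arithmetic; note that the exact TR-NDBI formula is not reproduced in this record, so the thermal weighting below is an illustrative assumption layered on the standard NDBI.

```python
import numpy as np

# Hypothetical Landsat TM pixel values: TM4 (NIR) and TM5 (SWIR) reflectances,
# and TM6 thermal radiance scaled to 0-1.
nir = np.array([0.30, 0.25, 0.40])
swir = np.array([0.35, 0.40, 0.30])
tir = np.array([0.60, 0.80, 0.40])

# Standard NDBI: built-up surfaces are brighter in SWIR than in NIR.
ndbi = (swir - nir) / (swir + nir)

# Assumed TR-NDBI variant: modulate NDBI by the normalized thermal signal,
# since built-up areas are both SWIR-bright and warm.
tr_ndbi = ndbi * (tir / tir.max())
```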

  2. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    NASA Astrophysics Data System (ADS)

    Starn, J. J.

    2013-12-01

    -flow finite-difference transport simulations (MT3DMS). Results show more accurate simulation of pumping-well BTCs for a given grid cell size when using analytical solutions. The code base is extended to transient flow, and BTCs are compared to results from MT3DMS simulations. Results show that the particle-based solutions can resolve transient behavior using coarser model grids with far less computational effort than MT3DMS. The effect of simulation accuracy on parameter estimates (porosity) is also investigated. Porosity estimates obtained using the more accurate analytical solutions are less biased than those from synthetic finite-difference transport simulations, which tend to be biased by the coarseness of the grid. Eliminating the bias by using a finer grid comes at the expense of much larger computational effort. Finally, the code base was applied to an actual groundwater-flow model of Salt Lake Valley, Utah. Particle simulations using the Python code base compare well with finite-difference simulations, but with less computational effort, and have the added advantage of delineating flow paths, thus explicitly connecting solute source areas with receptors and producing complete particle-age distributions. Knowledge of source areas and age distributions greatly enhances the analysis of dissolved solids data in Salt Lake Valley.
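
    The convolution step that gives these particle-tracking models their name can be sketched briefly: a breakthrough curve (BTC) at a receptor is the source concentration history convolved with the travel-time distribution that particle tracking delivers. The lognormal-shaped response below is an illustrative stand-in, not output from the actual code base.

```python
import numpy as np

dt = 1.0                     # time step (days), hypothetical
t = np.arange(0.0, 60.0, dt)

# Assumed travel-time distribution (unit impulse response) at a well,
# of the kind a particle-tracking run would tabulate.
mu, sigma = 2.5, 0.5
h = np.exp(-(np.log(np.maximum(t, 1e-9)) - mu) ** 2 / (2 * sigma ** 2))
h /= h.sum() * dt            # normalize to unit mass

# Source history: a constant input switched on at t = 0.
c_in = np.ones_like(t)

# BTC at the receptor = convolution of input history with the response.
btc = np.convolve(c_in, h)[:t.size] * dt
```

    For a constant source the BTC is simply the cumulative travel-time distribution, rising smoothly toward the input concentration.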

  3. Improving quality and reducing inequities: a challenge in achieving best care

    PubMed Central

    Nicewander, David A.; Qin, Huanying; Ballard, David J.

    2006-01-01

    The health care quality chasm is better described as a gulf for certain segments of the population, such as racial and ethnic minority groups, given the gap between actual care received and ideal or best care quality. The landmark Institute of Medicine report Crossing the Quality Chasm: A New Health System for the 21st Century challenges all health care organizations to pursue six major aims of health care improvement: safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness. “Equity” aims to ensure that quality care is available to all and that the quality of care provided does not differ by race, ethnicity, or other personal characteristics unrelated to a patient's reason for seeking care. Baylor Health Care System is in the unique position of being able to examine the current state of equity in a typical health care delivery system and to lead the way in health equity research. Its organizational vision, “culture of quality,” and involved leadership bode well for achieving equitable best care. However, inequities in access, use, and outcomes of health care must be scrutinized; the moral, ethical, and economic issues they raise and the critical injustice they create must be remedied if this goal is to be achieved. Eliminating any observed inequities in health care must be synergistically integrated with quality improvement. Quality performance indicators currently collected and evaluated indicate that Baylor Health Care System often performs better than the national average. However, there are significant variations in care by age, gender, race/ethnicity, and socioeconomic status that indicate the many remaining challenges in achieving “best care” for all. PMID:16609733

  4. Employee Perceptions of Progress with Implementing a Student-Centered Model of Institutional Improvement: An Achieving the Dream Case Study

    ERIC Educational Resources Information Center

    Cheek, Annesa LeShawn

    2011-01-01

    Achieving the Dream is a national initiative focused on helping more community college students succeed, particularly students of color and low-income students. Achieving the Dream's student-centered model of institutional improvement focuses on eliminating gaps and raising student achievement by helping institutions build a culture of evidence…

  5. From Guide to Practice: Improving Your After School Science Program to Increase Student Academic Achievement

    NASA Astrophysics Data System (ADS)

    Taylor, J.

    2013-12-01

    Numerous science organizations, such as NASA, offer educational outreach activities geared towards after school. For some programs, the primary goal is to grow students' love of science. For others, the programs are also intended to increase academic achievement. For programs looking to support student learning in out-of-school-time environments, aligning the program with learning during the classroom day can be a challenge. The Institute of Education Sciences' What Works Clearinghouse has put together a practice guide for maximizing learning time beyond the regular school day. These practice guides provide concrete, research-supported recommendations for educators. While this guide is not specific to any content or subject area, the recommendations provided align very well with science education. After school science is often viewed as a fun, dynamic environment for students. Indeed, one of the recommendations for ensuring time is structured according to students' needs is to provide relevant and interesting experiences. Given that our after school programs provide such creative environments for students, what other components are needed to promote increased academic achievement? The recommendations for increasing academic achievement are: 1. Align Instruction, 2. Maximize Attendance and Participation, 3. Adapt Instruction, 4. Provide Engaging Experiences, and 5. Evaluate Program. In this session we will examine these five recommendations presented in the practice guide, discuss how these strategies align with science programs, and examine what questions each program should address in order to provide experiences that maximize instruction. Roadblocks and solutions for overcoming challenges in each of the five areas will be presented. 
Jessica Taylor will present this research based on her role as an author on the Practice Guide, 'Improving Academic Achievement in Out-of-School Time' and her experience working in various informal science

  6. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
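
    The submaximal prediction approach the abstract refers to can be sketched as a linear extrapolation of heart rate versus VO2 to an age-predicted maximal heart rate; the stage values below are hypothetical and the equation is a generic textbook form, not NASA's in-flight protocol.

```python
import numpy as np

# Hypothetical submaximal cycle test for a 40-year-old: heart rate (bpm)
# and measured VO2 (L/min) at four submaximal stages.
hr = np.array([100.0, 120.0, 140.0, 155.0])
vo2 = np.array([1.2, 1.8, 2.4, 2.85])

# Fit the assumed-linear HR-VO2 relationship, then extrapolate to the
# age-predicted maximal heart rate (220 - age).
slope, intercept = np.polyfit(hr, vo2, 1)
hr_max = 220.0 - 40.0
vo2pk_pred = slope * hr_max + intercept
```

    The individual-level inaccuracy discussed above enters through both the linearity assumption and the 220 - age estimate of maximal heart rate, either of which can be far off for a given person.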

  8. Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules

    PubMed Central

    Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh

    2011-01-01

    This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge about the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
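
    The construction that distinguishes Modified Stacked Generalization from the conventional method is easy to sketch: the combiner's training matrix is the base classifiers' outputs with the input pattern appended. The toy data and threshold "classifiers" below are placeholders for the paper's neural network modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature matrix standing in for ECG beat descriptors (hypothetical).
X = rng.normal(size=(200, 4))

def base_a(X):
    # Crude stand-in base classifier: probability-like score from feature 0.
    return (X[:, 0] > 0).astype(float).reshape(-1, 1)

def base_b(X):
    # Second stand-in base classifier scoring feature 1.
    return (X[:, 1] > 0).astype(float).reshape(-1, 1)

# Conventional stacking: the combiner sees only the base outputs.
meta_conventional = np.hstack([base_a(X), base_b(X)])

# Modified Stacked Generalization: append the input pattern as well,
# giving the combiner direct knowledge of the input space.
meta_modified = np.hstack([base_a(X), base_b(X), X])
```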

  9. Improving accuracy and usability of growth charts: case study in Rwanda

    PubMed Central

    Brown, Suzana; McSharry, Patrick

    2016-01-01

    Objectives We evaluate and compare manually collected paper records against electronic records for monitoring the weights of children under the age of 5. Setting Data were collected by 24 community health workers (CHWs) in 2 Rwandan communities, 1 urban and 1 rural. Participants The same CHWs collected paper and electronic records. The paper data contain weight and age for 320 boys and 380 girls. The electronic data contain weight and age for 922 girls and 886 boys. Electronic data were collected over 9 months; most of the data are cross-sectional, with about 330 children having time-series data. Both data sets are compared with the international standard provided by the WHO growth chart. Primary and secondary outcome measures The plan was to collect 2000 individual records for the electronic data set; we finally collected 1878 records. Paper data were collected by the same CHWs, but most of the data were fragmented and hard to read. We transcribed data only from children for whom we were able to obtain the date of birth, in order to determine the exact age at the time of measurement. Results Mean absolute error (MAE) and mean absolute percentage error (MAPE) provide a way to quantify the magnitude of the error in using a given model. Comparing a model, log(weight) = a + b log(age), shows that electronic records provide considerable improvements over paper records, with a 40% reduction in both performance metrics. Electronic data improve performance over the WHO model by 10% in MAPE and 7% in MAE. Results are statistically significant using the Kolmogorov-Smirnov test at p<0.01. Conclusions This study demonstrates that using modern electronic tools for health data collection allows better tracking of health indicators. We have demonstrated that electronic records facilitate the development of a country-specific model that is more accurate than the international standard provided by the WHO growth chart. PMID:26817635
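
    The model and error metrics in the abstract can be reproduced in a few lines: fit log(weight) = a + b log(age) by least squares, then score it with MAE and MAPE. The age-weight pairs below are hypothetical stand-ins for the CHW records.

```python
import numpy as np

# Hypothetical (age in months, weight in kg) records for children under 5.
age = np.array([3.0, 6.0, 12.0, 24.0, 36.0, 48.0])
weight = np.array([6.0, 7.5, 9.5, 12.0, 14.0, 15.8])

# Fit the paper's model, log(weight) = a + b*log(age), by least squares.
b, a = np.polyfit(np.log(age), np.log(weight), 1)
pred = np.exp(a + b * np.log(age))

# The two performance metrics used to compare record-keeping methods.
mae = np.mean(np.abs(weight - pred))
mape = 100.0 * np.mean(np.abs((weight - pred) / weight))
```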

  10. Improving Delivery Accuracy of Stereotactic Body Radiotherapy to a Moving Tumor Using Simplified Volumetric Modulated Arc Therapy

    PubMed Central

    Ko, Young Eun; Cho, Byungchul; Kim, Su Ssan; Song, Si Yeol; Choi, Eun Kyung; Ahn, Seung Do; Yi, Byongyong

    2016-01-01

    Purpose To develop a simplified volumetric modulated arc therapy (VMAT) technique for more accurate dose delivery in thoracic stereotactic body radiation therapy (SBRT). Methods and Materials For each of the 22 lung SBRT cases treated with respiratory-gated VMAT, a dose rate modulated arc therapy (DrMAT) plan was retrospectively generated. A dynamic conformal arc therapy plan with 33 adjoining coplanar arcs was designed and their beam weights were optimized by an inverse planning process. All sub-arc beams were converted into a series of control points with varying MLC segment and dose rates and merged into an arc beam for a DrMAT plan. The plan quality of original VMAT and DrMAT was compared in terms of target coverage, compactness of dose distribution, and dose sparing of organs at risk. To assess the delivery accuracy, the VMAT and DrMAT plans were delivered to a motion phantom programmed with the corresponding patients’ respiratory signal; results were compared using film dosimetry with gamma analysis. Results The plan quality of DrMAT was equivalent to that of VMAT in terms of target coverage, dose compactness, and dose sparing for the normal lung. In dose sparing for other critical organs, DrMAT was less effective than VMAT for the spinal cord, heart, and esophagus while being well within the limits specified by the Radiation Therapy Oncology Group. Delivery accuracy of DrMAT to a moving target was similar to that of VMAT using a gamma criterion of 2%/2mm but was significantly better using a 2%/1mm criterion, implying the superiority of DrMAT over VMAT in SBRT for thoracic/abdominal tumors with respiratory movement. Conclusion We developed a DrMAT technique for SBRT that produces plans of a quality similar to that achieved with VMAT but with better delivery accuracy. This technique is well-suited for small tumors with motion uncertainty. PMID:27333199

  11. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring respiratory displacement of the target. This paper was presented at RSNA 2013; the work was carried out at Kanazawa University, Japan.
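    The automatic target tracking described above relies on template matching. A minimal numpy sketch of template matching by normalized cross-correlation on synthetic data (all names and values are illustrative, not the study's implementation) might look like this:

```python
import numpy as np

def ncc_match(frame, template):
    """Return (row, col) of the best normalized cross-correlation match."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tn
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Synthetic example: embed a bright blob in noise and recover its position
rng = np.random.default_rng(0)
frame = rng.normal(0, 0.1, (40, 40))
blob = np.outer(np.hanning(7), np.hanning(7))
frame[10:17, 22:29] += blob
pos = ncc_match(frame, blob)
```

    In practice the region of interest from one frame serves as the template for the next, and the tracked position traces the nodule's respiratory motion.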

  12. Improvement of orbit determination accuracy for Beidou Navigation Satellite System with Two-way Satellite Time Frequency Transfer

    NASA Astrophysics Data System (ADS)

    Tang, Chengpan; Hu, Xiaogong; Zhou, Shanshi; Guo, Rui; He, Feng; Liu, Li; Zhu, Lingfeng; Li, Xiaojie; Wu, Shan; Zhao, Gang; Yu, Yang; Cao, Yueling

    2016-10-01

    The Beidou Navigation Satellite System (BDS) simultaneously estimates the orbits and clock offsets of its navigation satellites using code and carrier phase measurements from a regional network within China. The satellite clock offsets are also directly measured with Two-way Satellite Time Frequency Transfer (TWSTFT). Satellite laser ranging (SLR) residuals and comparisons with the precise ephemeris indicate that the radial error of GEO satellites is much larger than that of IGSO and MEO satellites and that BDS orbit accuracy is worse than GPS. In order to improve the orbit determination accuracy for BDS, a new orbit determination strategy is proposed, in which the satellite clock measurements from TWSTFT are fixed as known values and only the satellite orbits are solved. However, a constant systematic error at the nanosecond level can be found in the clock measurements; it is obtained, and then corrected, by differencing the clock measurements and the clock estimates from orbit determination. The effectiveness of the new strategy is verified by a GPS regional network orbit determination experiment. With the IGS final clock products fixed, the orbit determination and prediction accuracy for GPS satellites improve by more than 50%, and the 12-h prediction User Range Error (URE) is better than 0.12 m. By processing 25 days of measurements from the BDS regional network, an optimal strategy for the satellite-clock-fixed orbit determination is identified. User Equivalent Ranging Error is reduced by 27.6% for GEO satellites, but no apparent reduction is found for IGSO/MEO satellites. The SLR residuals exhibit reductions of 59% and 32% for IGSO satellites but no reduction for GEO and MEO satellites.

  13. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    SciTech Connect

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.
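    The cancellation the "average" image exploits can be illustrated in one dimension: the sequence-dependent shift reverses sign with the read-gradient polarity, so averaging the two acquisitions removes it while leaving the polarity-independent gradient-nonlinearity term. A toy numpy sketch (all numbers are hypothetical):

```python
import numpy as np

# Sequence-dependent distortion displaces features along the read
# direction by +d or -d depending on read-gradient polarity; gradient
# nonlinearity (g) is polarity-independent and is NOT cancelled.
true_pos = np.array([10.0, 25.0, 40.0])      # mm, hypothetical targets
seq_shift = np.array([0.8, -0.5, 1.1])       # mm, field-inhomogeneity term
grad_nonlin = np.array([0.1, -0.05, 0.08])   # mm, residual term

pos_forward = true_pos + seq_shift + grad_nonlin   # normal polarity
pos_reverse = true_pos - seq_shift + grad_nonlin   # reversed polarity
pos_average = 0.5 * (pos_forward + pos_reverse)

residual = pos_average - true_pos   # only the gradient-nonlinearity term
```

    This is why the phantom study's residual uncertainty (< 0.3 mm) is attributed mainly to gradient nonlinearities: averaging cannot remove the polarity-independent part.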

  14. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT).

    PubMed

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-21

    The bone suppression technique based on advanced image processing can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential means of measuring respiratory displacement of the target.

  15. Pre-operative Thresholds for Achieving Meaningful Clinical Improvement after Arthroscopic Treatment of Femoroacetabular Impingement

    PubMed Central

    Nwachukwu, Benedict U.; Fields, Kara G.; Nawabi, Danyal H.; Kelly, Bryan T.; Ranawat, Anil S.

    2016-01-01

    sagittal CEA was the only variable maintaining significance (p = 0.032). Conclusion: We used a large prospective hip arthroscopy database to identify pre-operative patient outcome score thresholds predictive of meaningful post-operative outcome improvement after arthroscopic FAI treatment. This is the largest reported hip arthroscopy cohort to define MCID and the first to do so for iHOT-33. The HOS-ADL may have the best predictive ability for achieving MCID after hip arthroscopy. Patients with relatively high pre-operative ADL, quality-of-life and functional status appear to have a high chance of achieving MCID up to our defined thresholds. Hip dysplasia is an important outcome modifier. The findings of this study may be useful for managing preoperative expectations for patients undergoing arthroscopic FAI surgery.

  16. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve the accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of in-person assessment. To improve the accuracy and reliability of spasticity assessment, a haptic device named the Haptic Elbow Spasticity Simulator (HESS) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians: three clinicians performed both in-person and haptic assessments and had 100% agreement in MAS scores, and eight clinicians who were experienced with MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians showed substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared with their experience with real patients. PMID:22562769
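    The inter-rater reliability reported above (κ = 0.626 among eight raters) is a multi-rater kappa. Fleiss' kappa is one common choice for this setting; a sketch follows, with a hypothetical rating matrix (not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) matrix of rating counts.
    counts[i, j] = number of raters assigning subject i to category j;
    every row must sum to the same number of raters n."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts[0].sum()
    p_j = counts.sum(axis=0) / (N * n)                    # category shares
    P_i = (counts * (counts - 1)).sum(axis=1) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()             # observed, chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical ratings: 4 haptic models x 4 MAS categories, 8 raters each
ratings = np.array([
    [7, 1, 0, 0],
    [1, 6, 1, 0],
    [0, 2, 6, 0],
    [0, 0, 1, 7],
])
kappa = fleiss_kappa(ratings)
```

    Values between 0.61 and 0.80 are conventionally read as "substantial agreement", which is how the abstract characterizes its κ of 0.626.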

  17. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    SciTech Connect

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass data, where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
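    The quadratic correction can be sketched as follows: fit a parabola to Δm versus mass for the calibrants, then subtract the parabola's predicted deviation from each measured mass. The calibrant masses and deviations below are illustrative, not from the paper:

```python
import numpy as np

# Calibrant masses after the linear sqrt(m/z)-vs-TOF calibration, and
# their deviations dm = m_linear - m_theoretical (values illustrative)
m_linear = np.array([1000.0, 1500.0, 2000.0, 2500.0, 3000.0])
dm = np.array([0.05, 0.12, 0.22, 0.35, 0.52])

# Fit a parabola to the deviation as a function of mass
coef = np.polyfit(m_linear, dm, 2)

def correct(mass):
    """Subtract the parabola's predicted deviation from a measured mass."""
    return mass - np.polyval(coef, mass)

corrected = correct(m_linear)  # should sit close to m_linear - dm
```

    Because the residual deviation pattern is smooth, a low-order polynomial captures most of it, which is why the paper reports order-of-magnitude accuracy gains from this one extra fitting step.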

  18. Improved RSA accuracy with DLT and balanced calibration marker distributions with an assessment of initial-calibration.

    PubMed

    Choo, Anthony M T; Oxland, Thomas R

    2003-02-01

    Roentgen stereophotogrammetric analysis (RSA) has been used for over 25 years for accurate micromotion measurement in a wide variety of orthopaedic applications. This study investigated two possible improvements to the method. First, direct linear transformation (DLT) was compared to the traditional RSA reconstruction algorithm. The two methods were considered with respect to standard extrapolation and interpolation calibration cages. Matlab simulations showed that reconstruction accuracy was greatly improved (>60%) by combining DLT with an even distribution of enclosing calibration markers. Second, a benchtop study using phantoms translated at 0.0254-mm intervals showed initial-calibration, followed by removal of the interpolation cage for subsequent exposures, was potentially twice as accurate as self-calibration with an extrapolation cage. These results showed optimizations for the application of RSA when unobstructed space is required.
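    The DLT method compared above maps known 3-D calibration-marker positions to 2-D image coordinates through 11 linear parameters; a minimal least-squares sketch of the calibration step follows (marker positions and camera parameters are synthetic, and the full RSA pipeline additionally reconstructs 3-D points from two calibrated views):

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Estimate the 11 DLT parameters mapping 3-D marker positions to
    2-D image coordinates by linear least squares."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return L

def dlt_project(L, xyz):
    """Project 3-D points into the image with DLT parameters L."""
    X, Y, Z = np.asarray(xyz).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.stack([u, v], axis=1)

# Synthetic, non-coplanar calibration markers and a made-up camera
xyz = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
                [0.3, 0.7, 0.5], [0.6, 0.2, 0.8]], dtype=float)
L_true = np.array([100, 5, 2, 50, 3, 95, 4, 60, 0.01, 0.02, 0.03])
uv = dlt_project(L_true, xyz)

L_est = dlt_calibrate(xyz, uv)  # should recover L_true from exact data
```

    The study's finding that DLT works best with an even, enclosing distribution of calibration markers corresponds to keeping this linear system well conditioned over the measurement volume.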

  19. Improving the accuracy and efficiency of time-resolved electronic spectra calculations: Cellular dephasing representation with a prefactor

    SciTech Connect

    Zambrano, Eduardo; Šulc, Miroslav; Vaníček, Jiří

    2013-08-07

    Time-resolved electronic spectra can be obtained as the Fourier transform of a special type of time correlation function known as fidelity amplitude, which, in turn, can be evaluated approximately and efficiently with the dephasing representation. Here we improve both the accuracy of this approximation—with an amplitude correction derived from the phase-space propagator—and its efficiency—with an improved cellular scheme employing inverse Weierstrass transform and optimal scaling of the cell size. We demonstrate the advantages of the new methodology by computing dispersed time-resolved stimulated emission spectra in the harmonic potential, pyrazine, and the NCO molecule. In contrast, we show that in strongly chaotic systems such as the quartic oscillator the original dephasing representation is more appropriate than either the cellular or prefactor-corrected methods.

  20. Drift Removal for Improving the Accuracy of Gait Parameters Using Wearable Sensor Systems

    PubMed Central

    Takeda, Ryo; Lisco, Giulia; Fujisawa, Tadashi; Gastaldi, Laura; Tohyama, Harukazu; Tadano, Shigeru

    2014-01-01

    Accumulated signal noise will cause integrated values to drift from the true value when measuring the orientation angles of wearable sensors. This work proposes a novel method to reduce the effect of this drift so that human gait can be measured accurately using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th-order Butterworth filter was implemented to remove noise from the raw gyro sensor data. Secondly, the mode value of the static-state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data in standing and sitting postures. The proposed improvements removed the drift effect, yielding average differences of 2.1°, 33.3° and 15.6° for the hip, knee and ankle joint flexion/extension angles, respectively, compared with no implementation. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work show the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases. PMID:25490587
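    The offset- and drift-removal steps can be sketched roughly as below. Note two simplifications: a moving-average filter stands in for the paper's 4th-order Butterworth filter, and a linear detrend (assuming identical start and end orientation) stands in for its robust double derivative-and-integration method. All signals are synthetic:

```python
import numpy as np

def lowpass(x, k=9):
    # Moving-average stand-in for the 4th-order Butterworth low-pass filter
    return np.convolve(x, np.ones(k) / k, mode='same')

def remove_bias(gyro, static):
    # Subtract the mode of a static-posture recording (constant sensor bias)
    vals, counts = np.unique(np.round(static, 2), return_counts=True)
    return gyro - vals[np.argmax(counts)]

def integrate_without_drift(rate, dt):
    # Integrate angular rate to angle, then remove residual linear drift,
    # assuming the start and end orientations are identical
    angle = np.cumsum(rate) * dt
    return angle - np.linspace(0.0, angle[-1], angle.size)

# Synthetic gyro signal: true rate plus a constant bias and noise
rng = np.random.default_rng(1)
dt, bias = 0.01, 0.7
t = np.arange(0.0, 10.0, dt)
true_angle = np.sin(2 * np.pi * 0.2 * t)            # returns to 0 at t = 10
true_rate = 2 * np.pi * 0.2 * np.cos(2 * np.pi * 0.2 * t)
gyro = true_rate + bias + rng.normal(0, 0.05, t.size)
static = bias + rng.normal(0, 0.05, 500)            # stationary recording

est_angle = integrate_without_drift(remove_bias(lowpass(gyro), static), dt)
```

    The bias integrates to a linear ramp, so the mode subtraction plus detrend removes it almost entirely; what remains is the integrated random-walk noise the filtering stage is there to suppress.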

  1. Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values

    NASA Astrophysics Data System (ADS)

    Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru

    Extracting attribute-values related to entities from web texts is an important step in numerous web related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but, if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data -- often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which attribute-values are represented by nodes, and an edge connecting two nodes represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
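    The spanning-tree step can be sketched with Kruskal's algorithm applied in descending weight order. This plain maximum spanning tree omits the paper's additional constraint of connecting exactly one attribute-value per attribute-type; the graph and weights are toy values:

```python
class DisjointSet:
    """Union-find over n nodes, used by Kruskal's algorithm."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # would create a cycle
        self.parent[ra] = rb
        return True

def maximum_spanning_tree(n, edges):
    """Kruskal's algorithm, taking edges in descending weight order.
    edges: list of (weight, u, v); returns the chosen tree edges."""
    ds, tree = DisjointSet(n), []
    for w, u, v in sorted(edges, reverse=True):
        if ds.union(u, v):
            tree.append((w, u, v))
    return tree

# Toy relatedness graph over four candidate attribute-values
edges = [(0.9, 0, 1), (0.2, 0, 2), (0.8, 1, 2), (0.5, 2, 3), (0.1, 1, 3)]
tree = maximum_spanning_tree(4, edges)
```

    In the paper's setting the nodes would be candidate attribute-values, edge weights their pairwise relatedness, and the tree the mutually most-related selection.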

  2. Radiographic and Anatomic Basis for Prostate Contouring Errors and Methods to Improve Prostate Contouring Accuracy

    SciTech Connect

    McLaughlin, Patrick W.; Evans, Cheryl M.S.; Feng, Mary; Narayana, Vrinda

    2010-02-01

    Purpose: Use of highly conformal radiation for prostate cancer can lead to both overtreatment of surrounding normal tissues and undertreatment of the prostate itself. In this retrospective study we analyzed the radiographic and anatomic basis of common errors in computed tomography (CT) contouring and suggest methods to correct them. Methods and Materials: Three hundred patients with prostate cancer underwent CT and magnetic resonance imaging (MRI). The prostate was delineated independently on the data sets. CT and MRI contours were compared by use of deformable registration. Errors in target delineation were analyzed and methods to avoid such errors detailed. Results: Contouring errors were identified at the prostatic apex, mid gland, and base on CT. At the apex, the genitourinary diaphragm, rectum, and anterior fascia contribute to overestimation. At the mid prostate, the anterior and lateral fasciae contribute to overestimation. At the base, the bladder and anterior fascia contribute to anterior overestimation. Transition zone hypertrophy and bladder neck variability contribute to errors of overestimation and underestimation at the superior base, whereas variable prostate-to-seminal vesicle relationships with prostate hypertrophy contribute to contouring errors at the posterior base. Conclusions: Most CT contouring errors can be detected by (1) inspection of a lateral view of prostate contours to detect projection from the expected globular form and (2) recognition of anatomic structures (genitourinary diaphragm) on the CT scans that are clearly visible on MRI. This study shows that many CT prostate contouring errors can be improved without direct incorporation of MRI data.

  3. Drift removal for improving the accuracy of gait parameters using wearable sensor systems.

    PubMed

    Takeda, Ryo; Lisco, Giulia; Fujisawa, Tadashi; Gastaldi, Laura; Tohyama, Harukazu; Tadano, Shigeru

    2014-12-05

    Accumulated signal noise will cause integrated values to drift from the true value when measuring the orientation angles of wearable sensors. This work proposes a novel method to reduce the effect of this drift so that human gait can be measured accurately using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th-order Butterworth filter was implemented to remove noise from the raw gyro sensor data. Secondly, the mode value of the static-state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data in standing and sitting postures. The proposed improvements removed the drift effect, yielding average differences of 2.1°, 33.3° and 15.6° for the hip, knee and ankle joint flexion/extension angles, respectively, compared with no implementation. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work show the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases.

  4. Combining double-difference relocation with regional depth-phase modelling to improve hypocentre accuracy

    NASA Astrophysics Data System (ADS)

    Ma, Shutian; Eaton, David W.

    2011-05-01

    Precise and accurate earthquake hypocentres are critical for various fields, such as the study of tectonic processes and seismic-hazard assessment. Double-difference relocation methods are widely used and can dramatically improve the precision of event relative locations. In areas of sparse seismic network coverage, however, a significant trade-off exists between focal depth, epicentral location and the origin time. Regional depth-phase modelling (RDPM) is suitable for sparse networks and can provide focal-depth information that is relatively insensitive to uncertainties in epicentral location and independent of errors in the origin time. Here, we propose a hybrid method in which focal depth is determined using RDPM and then treated as a fixed parameter in subsequent double-difference calculations, thus reducing the size of the system of equations and increasing the precision of the hypocentral solutions. Based on examples using small earthquakes from eastern Canada and southwestern USA, we show that the application of this technique yields solutions that appear to be more robust and accurate than those obtained by the standard double-difference relocation method alone.

  5. Radiotherapy dosimetry audit: three decades of improving standards and accuracy in UK clinical practice and trials

    PubMed Central

    Aird, Edwin GA; Bolton, Steve; Miles, Elizabeth A; Nisbet, Andrew; Snaith, Julia AD; Thomas, Russell AS; Venables, Karen; Thwaites, David I

    2015-01-01

    Dosimetry audit plays an important role in the development and safety of radiotherapy. National and large scale audits are able to set, maintain and improve standards, as well as having the potential to identify issues which may cause harm to patients. They can support implementation of complex techniques and can facilitate awareness and understanding of any issues which may exist by benchmarking centres with similar equipment. This review examines the development of dosimetry audit in the UK over the past 30 years, including the involvement of the UK in international audits. A summary of audit results is given, with an overview of methodologies employed and lessons learnt. Recent and forthcoming more complex audits are considered, with a focus on future needs including the arrival of proton therapy in the UK and other advanced techniques such as four-dimensional radiotherapy delivery and verification, stereotactic radiotherapy and MR linear accelerators. The work of the main quality assurance and auditing bodies is discussed, including how they are working together to streamline audit and to ensure that all radiotherapy centres are involved. Undertaking regular external audit motivates centres to modernize and develop techniques and provides assurance, not only that radiotherapy is planned and delivered accurately but also that the patient dose delivered is as prescribed. PMID:26329469

  6. [Improving laser center wavelength detection accuracy based on multi-level combination prisms].

    PubMed

    Liu, Xiao-Dong; Zhang, Zhi-Jie

    2011-08-01

    In order to improve the spectral resolution of a birefringent prism while ensuring the quality of the interference fringe image, the system used multi-level combination prisms together with a method for splicing the interference fringes. From calculations of the interference fringe intensity, the optical path difference function and the spectral resolution of multi-level combination prisms, this paper shows that the minimum spectral resolution of a four-prism multi-level combination is 2.875 cm(-1). The interference fringe splicing method was designed to join the sectional interference fringes; in the experiment, the size of the multi-level combination prisms was 30 mm x 28 mm x 10 mm. A standard 635 nm laser was used to produce the interference fringes. Experimental data show that the detected spectral distribution of the 635.0 nm laser was distorted by direct splicing of the interference fringes, whereas the distribution obtained with the fringe splicing method was consistent with the standard spectrum. The method can therefore effectively avoid spectral distortion when splicing interference fringes from multi-level combination prisms.

  7. Gravity Probe B data analysis status and potential for improved accuracy of scientific results

    NASA Astrophysics Data System (ADS)

    Everitt, C. W. F.; Adams, M.; Bencze, W.; Buchman, S.; Clarke, B.; Conklin, J.; DeBra, D. B.; Dolphin, M.; Heifetz, M.; Hipkins, D.; Holmes, T.; Keiser, G. M.; Kolodziejczak, J.; Li, J.; Lockhart, J. M.; Muhlfelder, B.; Parkinson, B. W.; Salomon, M.; Silbergleit, A.; Solomonik, V.; Stahl, K.; Turneaure, J. P.; Worden, P. W., Jr.

    2008-06-01

    Gravity Probe B (GP-B) is a landmark physics experiment in space designed to yield precise tests of two fundamental predictions of Einstein's theory of general relativity, the geodetic and frame-dragging effects, by means of cryogenic gyroscopes in Earth orbit. Launched on 20 April 2004, data collection began on 28 August 2004 and science operations were completed on 29 September 2005 upon liquid helium depletion. During the course of the experiment, two unexpected and mutually reinforcing complications were discovered: (1) larger than expected 'misalignment' torques on the gyroscopes producing classical drifts larger than the relativity effects under study and (2) a damped polhode oscillation that complicated the calibration of the instrument's scale factor against the aberration of starlight. Steady progress through 2006 and 2007 established the methods for treating both problems; in particular, an extended effort from January 2007 on 'trapped flux mapping' led in August 2007 to a dramatic breakthrough, resulting in a factor of ~20 reduction in data scatter. This paper reports results up to November 2007. Detailed investigation of a central 85-day segment of the data has yielded robust measurements of both relativity effects. Expansion to the complete science data set, along with anticipated improvements in modeling and in the treatment of systematic errors, may be expected to yield a 3-6% determination of the frame-dragging effect.

  8. Electronic prescribing: improving the efficiency and accuracy of prescribing in the ambulatory care setting.

    PubMed

    Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto

    2014-01-01

    Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues.

  9. Improving accuracy and reliability of 186-keV measurements for unattended enrichment monitoring

    SciTech Connect

    Ianakiev, Kiril D; Boyer, Brian D; Swinhoe, Martyn T; Moss, Calvin E; Goda, Joetta M; Favalli, Andrea; Lombardi, Marcie; Paffett, Mark T; Hill, Thomas R; MacArthur, Duncan W; Smith, Morag K

    2010-04-13

    Improving the quality of safeguards measurements at Gas Centrifuge Enrichment Plants (GCEPs), whilst reducing the inspection effort, is an important objective given the number of existing and new plants that need to be safeguarded. A useful tool in many safeguards approaches is the on-line monitoring of enrichment in process pipes. One aspect of this measurement is a simple, reliable and precise passive measurement of the 186-keV line from ²³⁵U. (The other information required is the amount of gas in the pipe. This can be obtained by transmission measurements or pressure measurements). In this paper we describe our research efforts towards such a passive measurement system. The system includes redundant measurements of the 186-keV line from the gas and separately from the wall deposits. The design also includes measures to reduce the effect of the potentially important background. Such an approach would practically eliminate false alarms and can maintain the operation of the system even with a hardware malfunction in one of the channels. The work involves Monte Carlo modeling and the construction of a proof-of-principle prototype. We will carry out experimental tests with UF₆ gas in pipes with and without deposits in order to demonstrate the deposit correction.

  10. What's ahead in glucose monitoring? New techniques hold promise for improved ease and accuracy.

    PubMed

    Bode, B W; Sabbah, H; Davidson, P C

    2001-04-01

    Advances in blood glucose monitoring have made it easier, more comfortable, and more practical for patients to monitor frequently. The new meters for intermittent monitoring are smaller and less dependent on technical aptitude than older models. They require less blood, and many provide downloadable information for glucose analysis. Data systems used with new meters provide valuable information that can dramatically improve glycemic control. Continuous glucose sensing is another major breakthrough in the management of diabetes. Current systems allow only retrospective analyses, but real-time readings should be available in the near future. Such technological advances hold promise for preventing both hypoglycemia and hyperglycemia and for reducing the risk of long-term complications associated with diabetes. An artificial, mechanical islet cell may be the next big step toward bringing this disease under control. Combining continuous glucose monitoring data with continuous insulin delivery via an external or implantable insulin pump promises a much brighter outlook for patients with type 1 diabetes. PMID:11317468

  11. Using standards to improve middle school students' accuracy at evaluating the quality of their recall.

    PubMed

    Lipko, Amanda R; Dunlosky, John; Hartwig, Marissa K; Rawson, Katherine A; Swan, Karen; Cook, Dale

    2009-12-01

    When recalling key term definitions from class materials, students may recall entirely incorrect definitions, yet will often claim that these commission errors are entirely correct; that is, they are overconfident in the quality of their recall responses. We investigated whether this overconfidence could be reduced by providing various standards to middle school students as they evaluated their recall responses. Students studied key term definitions, attempted to recall each one, and then were asked to score the quality of their recall. In Experiment 1, they evaluated their recall responses by rating each response as fully correct, partially correct, or incorrect. Most important, as they evaluated a particular response, it was presented either alone (i.e., without a standard) or with the correct definition present. Providing this full-definition standard reduced overconfidence in commission errors: Students assigned full or partial credit to 73% of their commission errors when they received no standard, whereas they assigned credit to only 44% of these errors when receiving the full-definition standard. In Experiment 2, a new standard was introduced: Idea units from each definition were presented, and students indicated whether each idea unit was in their response. After making these idea-unit judgments, the students then evaluated the quality of their entire response. Idea-unit standards further reduced overconfidence. Thus, although middle school students are overconfident in evaluating the quality of their recall responses, using standards substantially reduces this overconfidence and promises to improve the efficacy of their self-regulated learning.

  12. Improving radiotherapy planning, delivery accuracy, and normal tissue sparing using cutting edge technologies.

    PubMed

    Glide-Hurst, Carri K; Chetty, Indrin J

    2014-04-01

    In the United States, non-small cell lung cancer accounts for the majority of new lung cancer diagnoses, with a significant number of these cases presenting at locally advanced stages; lung cancer remains the leading cause of cancer death. While the advent of stereotactic ablative radiation therapy (SABR, also known as stereotactic body radiotherapy, or SBRT) for early-staged patients has improved local tumor control to >90%, survival results for locally advanced stage lung cancer remain grim. Significant challenges exist in lung cancer radiation therapy including tumor motion, accurate dose calculation in low density media, limiting dose to nearby organs at risk, and changing anatomy over the treatment course. However, many recent technological advancements have been introduced that can meet these challenges, including four-dimensional computed tomography (4DCT) and volumetric cone-beam computed tomography (CBCT) to enable more accurate target definition and precise tumor localization during radiation, respectively. In addition, advances in dose calculation algorithms have allowed for more accurate dosimetry in heterogeneous media, and intensity modulated and arc delivery techniques can help spare organs at risk. New delivery approaches, such as tumor tracking and gating, offer additional potential for further reducing target margins. Image-guided adaptive radiation therapy (IGART) introduces the potential for individualized plan adaptation based on imaging feedback, including bulky residual disease, tumor progression, and physiological changes that occur during the treatment course. This review provides an overview of the current state of the art technology for lung cancer volume definition, treatment planning, localization, and treatment plan adaptation.

  13. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    PubMed

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers.
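
    The core statistical idea, scoring candidate regions with an interpolated Markov model trained on known CRMs, can be illustrated in a few lines. The sketch below is a simplified stand-in for the paper's method: the count-weighted interpolation scheme, the pseudocount, and the function names are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import defaultdict

def train_imm(seqs, max_order=3):
    """Collect successor counts for every context length 0..max_order."""
    counts = [defaultdict(lambda: defaultdict(int)) for _ in range(max_order + 1)]
    for s in seqs:
        for k in range(max_order + 1):
            for i in range(k, len(s)):
                counts[k][s[i - k:i]][s[i]] += 1
    return counts

def imm_prob(counts, context, ch, alphabet="ACGT", pseudo=1.0):
    """Blend conditional probabilities across orders; contexts with more
    observations pull the estimate toward the higher-order model."""
    prob = 1.0 / len(alphabet)                    # order -1 background
    for k in range(len(counts)):
        if k > len(context):
            break
        ctx = context[len(context) - k:]
        succ = counts[k].get(ctx, {})
        total = sum(succ.values())
        lam = total / (total + 10.0)              # data-dependent trust
        pk = (succ.get(ch, 0) + pseudo) / (total + pseudo * len(alphabet))
        prob = (1 - lam) * prob + lam * pk
    return prob

def imm_log_score(counts, seq):
    """Log-likelihood of a candidate sequence under the trained IMM."""
    max_order = len(counts) - 1
    return sum(math.log(imm_prob(counts, seq[max(0, i - max_order):i], c))
               for i, c in enumerate(seq))
```

    Candidate windows scoring above a threshold set from known CRMs would be flagged; scores from orthologous sequences of related genomes can be combined in the same way.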

  14. A statistical filtering procedure to improve the accuracy of estimating population parameters in feed composition databases.

    PubMed

    Yoder, P S; St-Pierre, N R; Weiss, W P

    2014-09-01

    Accurate estimates of mean nutrient composition of feeds, nutrient variance (i.e., standard deviation), and covariance (i.e., correlation) are needed to develop a more quantitative approach of formulating diets to reduce risk and optimize safety factors. Commercial feed-testing laboratories have large databases of composition values for many feeds, but because of potentially misidentified feeds or poorly defined feed names, these databases are possibly contaminated by incorrect results and could generate inaccurate statistics. The objectives of this research were to (1) design a procedure (also known as a mathematical filter) that generates accurate estimates of the first 2 moments [i.e., the mean and (co)variance] of the nutrient distributions for the largest subpopulation within a feed in the presence of outliers and multiple subpopulations, and (2) use the procedure to generate feed composition tables with accurate means, variances, and correlations. Feed composition data (>1,300,000 samples) were collected from 2 major US commercial laboratories. A combination of a univariate step and 2 multivariate steps (principal components analysis and cluster analysis) were used to filter the data. On average, 13.5% of the total samples of a particular feed population were removed, of which the multivariate steps removed the majority (66% of removed samples). For some feeds, inaccurate identification (e.g., corn gluten feed samples included in the corn gluten meal population) was a primary reason for outliers, whereas for other feeds, subpopulations of a broader population were identified (e.g., immature alfalfa silage within a broad population of alfalfa silage). Application of the procedure did not usually affect the mean concentration of nutrients but greatly reduced the standard deviation and often changed the correlation estimates among nutrients. More accurate estimates of the variation of feeds and how they tend to vary will improve the economic evaluation of feeds.
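
    As a sketch of the filtering idea, the snippet below combines a univariate z-score trim with a multivariate Mahalanobis-distance step. It is a simplified stand-in: the paper's multivariate stage uses principal components analysis and cluster analysis, and the cutoffs, pass count, and function name here are illustrative assumptions.

```python
import numpy as np

def filter_feed_samples(X, z_cut=3.5, md_cut=3.5, passes=2):
    """Return a boolean mask keeping samples of the main subpopulation.
    Stage 1: drop rows with any nutrient z-score beyond z_cut.
    Stage 2: drop rows whose Mahalanobis distance exceeds md_cut."""
    X = np.asarray(X, dtype=float)
    keep = np.ones(len(X), dtype=bool)
    for _ in range(passes):
        mu = X[keep].mean(axis=0)
        sd = X[keep].std(axis=0, ddof=1)
        keep &= (np.abs((X - mu) / sd) < z_cut).all(axis=1)
        mu = X[keep].mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(X[keep], rowvar=False))
        d = np.sqrt(np.einsum('ij,jk,ik->i', X - mu, inv_cov, X - mu))
        keep &= d < md_cut
    return keep
```

    Population statistics (means, standard deviations, correlations) would then be computed from `X[keep]` only.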

  15. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using normal same-year ground-truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
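
    The majority-rule reclassification step can be sketched compactly: every pixel inside a Common Land Unit polygon is reassigned the most frequent class among that polygon's pixels. This is a hedged illustration on flat lists; the study's actual raster and polygon handling is of course more involved.

```python
from collections import Counter

def majority_reclassify(pixel_class, polygon_id):
    """Reassign each pixel the majority class of its CLU polygon.
    Pixels outside any polygon (polygon_id None) keep their label."""
    votes = {}
    for cls, poly in zip(pixel_class, polygon_id):
        if poly is not None:
            votes.setdefault(poly, Counter())[cls] += 1
    majority = {poly: c.most_common(1)[0][0] for poly, c in votes.items()}
    return [majority[poly] if poly is not None else cls
            for cls, poly in zip(pixel_class, polygon_id)]
```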

  16. Identifying the procedural gap and improved methods for maintaining accuracy during total hip arthroplasty.

    PubMed

    Gross, Allan; Muir, Jeffrey M

    2016-09-01

    Osteoarthritis is a ubiquitous condition, affecting 26 million Americans each year, with up to 17% of adults over age 75 suffering from one variation of arthritis. The hip is one of the most commonly affected joints and while there are conservative options for treatment, as symptoms progress, many patients eventually turn to surgery to manage their pain and dysfunction. Early surgical options such as osteotomy or arthroscopy are reserved for younger, more active patients with less severe disease and symptoms. Total hip arthroplasty offers a viable solution for patients with severe degenerative changes; however, post-surgical discrepancies in leg length, offset and component malposition are common and cause significant complications. Such discrepancies are associated with consequences such as low back pain, neurological deficits, instability and overall patient dissatisfaction. Current methods for managing leg length and offset during hip arthroplasty are either inaccurate and susceptible to error or are cumbersome, expensive and lengthen surgical time. There is currently no viable option that provides accurate, real-time data to surgeons regarding leg length, offset and cup position in a cost-effective manner. As such, we hypothesize that a procedural gap exists in hip arthroplasty, a gap into which fall a large majority of arthroplasty patients who are at increased risk of complications following surgery. These complications and associated treatments place significant stress on the healthcare system. The costs associated with addressing leg length and offset discrepancies can be minor, requiring only heel lifts and short-term rehabilitation, but can also be substantial, with revision hip arthroplasty costs of up to $54,000 per procedure. The need for a cost-effective, simple to use and unobtrusive technology to address this procedural gap in hip arthroplasty and improve patient outcomes is of increasing importance. 
Given the aging of the population, the projected

  17. Improving accuracy in shallow-landslide susceptibility analyses at regional scale

    NASA Astrophysics Data System (ADS)

    Iovine, Giulio G. R.; Rago, Valeria; Frustaci, Francesco; Bruno, Claudia; Giordano, Stefania; Muto, Francesco; Gariano, Stefano L.; Pellegrino, Annamaria D.; Conforti, Massimo; Pascale, Stefania; Distilo, Daniela; Basile, Vincenzo; Soleri, Sergio; Terranova, Oreste G.

    2015-04-01

    Calabria (southern Italy) is particularly exposed to geo-hydrological risk. In the last decades, slope instabilities, mainly related to rainfall-induced landslides, repeatedly affected its territory. Among these, shallow landslides, characterized by abrupt onset and extremely rapid movements, are among the most destructive and dangerous phenomena for people and infrastructures. In this study, a susceptibility analysis to shallow landslides has been performed by refining a method recently applied in Costa Viola - central Calabria (Iovine et al., 2014), focusing only on landslide source activations (regardless of their possible evolution as debris flows). A multivariate approach has been applied to estimate the presence/absence of sources, based on linear statistical relationships with a set of causal variables. The different classes of numeric causal variables have been determined by means of a data clustering method, designed to determine the best arrangement. A multi-temporal inventory map of sources, mainly obtained from interpretation of air photographs taken in 1954-1955 and in 2000, has been adopted to select the training and the validation sets. Due to the wide extent of the territory, the analysis has been iteratively performed by a step-by-step decreasing cell-size approach, adopting greater spatial resolutions and thematic details (e.g. lithology, land-use, soil, morphometry, rainfall) for high-susceptibility sectors. Through a sensitivity analysis, the weight of the considered factors in predisposing shallow landslides has been evaluated. The best set of variables has been identified by iteratively including one variable at a time and comparing the results in terms of performance. Furthermore, susceptibility evaluations obtained through logistic regression have been compared to those obtained by applying neural networks. Obtained results may be useful to improve land utilization planning, and to select proper mitigation measures in shallow
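
    The "include one variable at a time and compare performance" loop described above amounts to forward selection. A minimal numpy sketch follows, using plain gradient-descent logistic regression and training-set accuracy as the score; both simplifications, and the variable names, are illustrative assumptions relative to the study's setup.

```python
import numpy as np

def fit_logistic(X, y, iters=500, lr=0.1):
    """Plain gradient-descent logistic regression with an intercept."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(np.clip(-Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    Xb = np.column_stack([np.ones(len(X)), X])
    return float(np.mean(((Xb @ w) > 0) == (y == 1)))

def forward_select(X, y, names):
    """Greedily add the causal variable that most improves the score."""
    chosen, best = [], 0.0
    while len(chosen) < X.shape[1]:
        scored = [(accuracy(fit_logistic(X[:, chosen + [j]], y),
                            X[:, chosen + [j]], y), j)
                  for j in range(X.shape[1]) if j not in chosen]
        acc, j = max(scored)
        if acc <= best:
            break
        chosen.append(j)
        best = acc
    return [names[j] for j in chosen], best
```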

  18. Charting the course for home health care quality: action steps for achieving sustainable improvement: conference proceedings.

    PubMed

    Feldman, Penny Hollander; Peterson, Laura E; Reische, Laurie; Bruno, Lori; Clark, Amy

    2004-12-01

    On June 30 and July 1, 2003, the first national meeting Charting the Course for Home Health Care Quality: Action Steps for Achieving Sustainable Improvement convened in New York City. The Center for Home Care Policy & Research of the Visiting Nurse Service of New York (VNSNY) hosted the meeting with support from the Robert Wood Johnson Foundation. Fifty-seven attendees from throughout the United States participated, including senior leaders, managers, and nurses working directly in home care today. The meeting's objectives were to: 1. foster dialogue among key constituents influencing patient safety and home care; 2. promote information-sharing across sectors and identify areas where more information is needed; and 3. develop an agenda and strategy for moving forward. This article reports the meeting's proceedings.

  19. A New-Generation Continuous Glucose Monitoring System: Improved Accuracy and Reliability Compared with a Previous-Generation System

    PubMed Central

    Bailey, Timothy; Watkins, Elaine; Liljenquist, David; Price, David; Nakamura, Katherine; Boock, Robert; Peyser, Thomas

    2013-01-01

    Background: Use of continuous glucose monitoring (CGM) systems can improve glycemic control, but widespread adoption of CGM utilization has been limited, in part because of real and perceived problems with accuracy and reliability. This study compared accuracy and performance metrics for a new-generation CGM system with those of a previous-generation device. Subjects and Methods: Subjects were enrolled in a 7-day, open-label, multicenter pivotal study. Sensor readings were compared with venous YSI measurements (blood glucose analyzer from YSI Inc., Yellow Springs, OH) every 15 min (±5 min) during in-clinic visits. The aggregate and individual sensor accuracy and reliability of a new CGM system, the Dexcom® (San Diego, CA) G4™ PLATINUM (DG4P), were compared with those of the previous CGM system, the Dexcom SEVEN® PLUS (DSP). Results: Both study design and subject characteristics were similar. The aggregate mean absolute relative difference (MARD) for DG4P was 13% compared with 16% for DSP (P<0.0001), and 82% of DG4P readings were within ±20 mg/dL (for YSI ≤80 mg/dL) or 20% of YSI values (for YSI >80 mg/dL) compared with 76% for DSP (P<0.001). Ninety percent of the DG4P sensors had an individual MARD ≤20% compared with only 76% of DSP sensors (P=0.015). Half of DG4P sensors had a MARD less than 12.5% compared with 14% for the DSP sensors (P=0.028). The mean absolute difference for biochemical hypoglycemia (YSI <70 mg/dL) for DG4P was 11 mg/dL compared with 16 mg/dL for DSP (P<0.001). Conclusions: The performance of DG4P was significantly improved compared with that of DSP, which may increase routine clinical use of CGM and improve patient outcomes. PMID:23777402
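
    The two headline metrics are easy to make concrete. The snippet below computes MARD and the ±20 mg/dL / ±20% agreement rate as stated in the abstract; it is a straightforward reading of those criteria, not the sponsor's analysis code.

```python
def mard(sensor, reference):
    """Mean absolute relative difference (%) vs. the YSI reference."""
    return 100.0 * sum(abs(s - r) / r for s, r in zip(sensor, reference)) / len(sensor)

def within_20_20(sensor, reference):
    """Fraction within ±20 mg/dL (ref <= 80) or ±20% (ref > 80)."""
    hits = sum((abs(s - r) <= 20) if r <= 80 else (abs(s - r) <= 0.2 * r)
               for s, r in zip(sensor, reference))
    return hits / len(sensor)
```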

  20. Improved beam steering accuracy of a single beam with a 1D phase-only spatial light modulator.

    PubMed

    Engström, David; Bengtsson, Jörgen; Eriksson, Emma; Goksör, Mattias

    2008-10-27

    The limited number of pixels and their quantized phase modulation values limit the positioning accuracy when a phase-only one-dimensional spatial light modulator (SLM) is used for beam steering. When the SLM pixels are set according to the straightforward recipe, which individually optimizes each pixel's field contribution in the steering position, the inaccuracy can be a significant fraction of the diffraction-limited spot size. This is especially true in the vicinity of certain steering angles where precise positioning is particularly difficult. However, by including an extra degree of freedom in the optimization of the SLM setting, we show that the steering accuracy can be drastically improved, by a factor proportional to the number of pixels in the SLM. The extra degree of freedom is a global phase offset of all the SLM pixels, which takes on a different value for each steering angle. Beam steering experiments were performed with the SLM set according to both the conventional and the new recipe, and the results were in very good agreement with the theoretical predictions.
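
    The role of the global phase offset can be sketched numerically: applying the offset before quantization changes the quantization residuals at the steering position, so scanning the offset can raise the field amplitude there. The discretization below is a toy model; the pixel count, number of phase levels, and scan resolution are illustrative assumptions.

```python
import numpy as np

def steered_amplitude(target_phase, levels, offset=0.0):
    """Normalized field amplitude at the steering position when each
    pixel's ideal phase (plus a global offset) is quantized to the
    nearest of `levels` available phase values."""
    step = 2 * np.pi / levels
    applied = np.round((target_phase + offset) / step) * step
    return float(np.abs(np.mean(np.exp(1j * (applied - target_phase)))))

def best_global_offset(target_phase, levels, n_scan=256):
    """Scan one quantization step of global offsets; return the best."""
    offsets = np.linspace(0.0, 2 * np.pi / levels, n_scan, endpoint=False)
    amps = [steered_amplitude(target_phase, levels, c) for c in offsets]
    i = int(np.argmax(amps))
    return float(offsets[i]), amps[i]
```

    Because offset 0 (the conventional per-pixel recipe) is included in the scan, the optimized amplitude can never be worse than the conventional one.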

  1. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker.

    PubMed

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse value of root sum square of Cramér-Rao bound and focal length drift errors of the tracking star as weight, to enhance the pointing accuracy of a star tracker. In this technique, the stars that are brighter, or at low angular rate, or located towards the center of star field will be given a higher weight in the attitude determination process, and thus, the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, which is confirmed particularly for the sky zones with non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the newly centroiding method for the QUEST algorithm, the current attitude uncertainty can be further reduced to 44% with a negligible additional computing load. PMID:27370431
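
    A hedged sketch of the weighting idea: each tracked star receives weight equal to the inverse root sum square of its Cramér-Rao bound and focal-length drift errors, and the attitude then follows from Davenport's q-method, the eigenvalue problem underlying QUEST. The helper names and the noise-free setup are illustrative, not the paper's code.

```python
import numpy as np

def star_weights(crb, drift):
    """Inverse root-sum-square of Cramér-Rao bound and focal-length
    drift errors, as proposed for the non-average weighting."""
    return 1.0 / np.sqrt(np.asarray(crb) ** 2 + np.asarray(drift) ** 2)

def weighted_attitude(refs, obs, w):
    """Davenport q-method: optimal quaternion (x, y, z, scalar) for
    weighted unit-vector pairs obs_i ~ A @ refs_i."""
    B = sum(wi * np.outer(b, r) for wi, b, r in zip(w, obs, refs))
    z = sum(wi * np.cross(b, r) for wi, b, r in zip(w, obs, refs))
    K = np.zeros((4, 4))
    K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = np.trace(B)
    vals, vecs = np.linalg.eigh(K)
    return vecs[:, np.argmax(vals)]          # unit eigenvector of largest eigenvalue

def quat_to_dcm(q):
    """Direction cosine matrix from a scalar-last quaternion."""
    qv, q4 = q[:3], q[3]
    cross = np.array([[0, -qv[2], qv[1]],
                      [qv[2], 0, -qv[0]],
                      [-qv[1], qv[0], 0]])
    return (q4 ** 2 - qv @ qv) * np.eye(3) + 2 * np.outer(qv, qv) - 2 * q4 * cross
```

    Brighter, slower, or more central stars get larger weights and therefore pull the eigenvector solution harder, which is the mechanism behind the reported accuracy gain.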

  2. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points - A Review.

    PubMed

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a measurement technology that makes use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of these factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. Particular attention is given to tie points, an important factor in improving the accuracy of InSAR image co-registration, including the interval of tie points, the extraction of feature points, the window size for tie-point matching and the measurement of interferogram quality. PMID:22399966
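
    A common building block of tie-point-based co-registration is matching a window around each candidate point by normalized cross-correlation, and the window size and search radius are exactly the parameters the review discusses. The sketch below does integer-pixel matching on real-valued intensity; actual InSAR co-registration works on complex SAR data and refines the offsets to sub-pixel accuracy.

```python
import numpy as np

def match_tie_point(master, slave, pt, win=8, search=5):
    """Locate the tie point `pt` (row, col) of the master image in the
    slave image by maximizing normalized cross-correlation over a
    (2*search+1)^2 grid of integer offsets."""
    r, c = pt
    ref = master[r - win:r + win + 1, c - win:c + win + 1].astype(float)
    ref = (ref - ref.mean()) / ref.std()
    best, best_off = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = slave[r + dr - win:r + dr + win + 1,
                         c + dc - win:c + dc + win + 1].astype(float)
            cand = (cand - cand.mean()) / cand.std()
            score = float((ref * cand).mean())
            if score > best:
                best, best_off = score, (dr, dc)
    return best_off, best
```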

  4. Rapid cell blocks improve accuracy of breast FNAs beyond that provided by conventional cell blocks regardless of immediate adequacy evaluation.

    PubMed

    Akalin, A; Lu, D; Woda, B; Moss, L; Fischer, A

    2008-07-01

    A new "rapid cell block" technique (RCB; the predecessor of Cellienttrade mark Automated Cell Block System) is efficient at recovering sparse material. We previously found that RCBs of breast fine-needle aspirations (FNAs) frequently allow histologic classification of problematic ductal proliferative lesions. Previous studies that did not emphasize cell blocks found that on-site evaluation (OSE) of breast FNAs improves diagnosis. The purpose of this study was to determine if RCB could replace the utility of OSE of breast FNAs. The study included 604 consecutive ultrasound-guided noncyst breast FNAs composed of three cohorts based on the presence or absence of immediate adequacy assessment, conventional (collodion bag) cell blocks (CCB), and RCB. The cohort with OSE together with CCB did not perform as well as the cohort without OSE but with RCB. In a third cohort, performance characteristics of RCBs and CCBs were compared in an independent review by two cytopathologists blinded to the final cytology and follow-up histology diagnosis. By itself, the RCB histologic section was diagnostic 97% of the time, and it provided a diagnostic accuracy superior to CCB by itself and comparable to that provided by the combination of the smears with CCB. Highest accuracy was obtained by combining smears/monolayer preparations and RCB. Replacing OSE with RCBs provided substantial cost savings and savings of time for cytopathologists, radiologists, and their assistants.

  5. Photoplethysmogram intensity ratio: A potential indicator for improving the accuracy of PTT-based cuffless blood pressure estimation.

    PubMed

    Ding, Xiao-Rong; Zhang, Yuan-Ting

    2015-01-01

    The most commonly used method for cuffless blood pressure (BP) measurement is pulse transit time (PTT), which is based on the Moens-Korteweg (M-K) equation under the assumption that arterial geometry, such as the arterial diameter, remains unchanged. However, the arterial diameter is dynamic, varying over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects to assess parasympathetic and sympathetic nervous activity, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, spectral analysis revealed that the dominant frequency component of PIR, HR and SBP shifted significantly from high frequency (HF) to low frequency (LF), whereas this shift was not obvious in PTT. These results demonstrate that PIR can potentially be used to evaluate the smooth muscle tone that modulates arterial BP in the LF range. PTT-based BP measurement that takes the PIR into account could therefore achieve improved estimation accuracy. PMID:26736283
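
    To make the quantities concrete: PIR is the ratio of peak to valley PPG intensity within one cardiac cycle, and the sketch below adds an illustrative calibration that regresses BP on 1/PTT² together with PIR. The functional form and the coefficients are hypothetical stand-ins for illustration, not the model proposed in the paper.

```python
import numpy as np

def pir_per_beat(ppg_cycle):
    """PIR for one cardiac cycle: peak over valley PPG intensity."""
    return float(np.max(ppg_cycle) / np.min(ppg_cycle))

def calibrate_bp(ptt, pir, bp_ref):
    """Least-squares fit of an illustrative model BP ~ a/PTT^2 + b*PIR + c."""
    X = np.column_stack([1.0 / np.asarray(ptt) ** 2,
                         np.asarray(pir),
                         np.ones(len(ptt))])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(bp_ref), rcond=None)
    return coeffs

def estimate_bp(coeffs, ptt, pir):
    """Apply the calibrated model to a new (PTT, PIR) pair."""
    return coeffs[0] / ptt ** 2 + coeffs[1] * pir + coeffs[2]
```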

  6. EEG activity during movement planning encodes upcoming peak speed and acceleration and improves the accuracy in predicting hand kinematics.

    PubMed

    Yang, Lingling; Leung, Howard; Plank, Markus; Snider, Joe; Poizner, Howard

    2015-01-01

    The relationship between movement kinematics and human brain activity is an important and fundamental question for the development of neural prostheses. The peak velocity and the peak acceleration best reflect feedforward-type movement; thus, they are worth investigating further. Most related studies have focused on the correlation between kinematics and brain activity during movement execution or imagery. However, human movement is the result of the motor planning phase as well as the execution phase, and researchers have demonstrated, using grand-average analysis, that statistical correlations exist between EEG activity during motor planning and the peak velocity and peak acceleration. In this paper, we examined whether these correlations survive trial-to-trial decoding despite the low signal-to-noise ratio of EEG activity. The alpha and beta powers from the movement planning phase were combined with the alpha and beta powers from the movement execution phase to predict the peak tangential speed and acceleration. The results showed that EEG activity from the motor planning phase could also predict the peak speed and the peak acceleration with reasonable accuracy. Furthermore, the decoding accuracy of both the peak speed and the peak acceleration could be improved by combining band powers from the motor planning phase with those from the movement execution phase.
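
    The decoding claim reduces to a nested-regression comparison: predicting peak speed from execution-phase band powers alone versus planning-plus-execution powers. Below is a compact sketch on synthetic stand-in data using ordinary least squares; the study's actual decoder and feature set are richer.

```python
import numpy as np

def training_r2(X, y):
    """Training R^2 of an ordinary least-squares fit with intercept."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ w
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

    For nested linear models, adding the planning-phase columns can never reduce the training fit; the interesting empirical question, as in the paper, is how much it helps.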

  7. Achievement for All: improving psychosocial outcomes for students with special educational needs and disabilities.

    PubMed

    Humphrey, Neil; Lendrum, Ann; Barlow, Alexandra; Wigelsworth, Michael; Squires, Garry

    2013-04-01

    Students with special educational needs and disabilities (SEND) are at a greatly increased risk of experiencing poor psychosocial outcomes. Developing effective interventions that address the cause of these outcomes has therefore become a major policy priority in recent years. We report on a national evaluation of the Achievement for All (AfA) programme that was designed to improve outcomes for students with SEND through: (1) academic assessment, tracking and intervention, (2) structured conversations with parents, and (3) developing provision to improve wider outcomes (e.g. positive relationships). Using a quasi-experimental, pre-test-post-test control group design, we assessed the impact of AfA on teacher ratings of the behaviour problems, positive relationships and bullying of students with SEND over an 18-month period. Participants were 4758 students with SEND drawn from 323 schools across England. Our main impact analysis demonstrated that AfA had a significant impact on all three response variables when compared to usual practice. Hierarchical linear modelling of data from the intervention group highlighted a range of school-level contextual factors and implementation activities and student-level individual differences that moderated the impact of AfA on our study outcomes. The implications of our findings are discussed, and study strengths and limitations are noted.

  8. The Stories Clinicians Tell: Achieving High Reliability and Improving Patient Safety

    PubMed Central

    Cohen, Daniel L; Stewart, Kevin O

    2016-01-01

    The patient safety movement has been deeply affected by the stories patients have shared that have identified numerous opportunities for improvements in safety. These stories have identified system and/or human inefficiencies or dysfunctions, possibly even failures, often resulting in patient harm. Although patients’ stories tell us much, less commonly heard are the stories of clinicians and how their personal observations regarding the environments they work in and the circumstances and pressures under which they work may degrade patient safety and lead to harm. If the health care industry is to function like a high-reliability industry, to improve its processes and achieve the outcomes that patients rightly deserve, then leaders and managers must seek and value input from those on the front lines—both clinicians and patients. Stories from clinicians provided in this article address themes that include incident identification, disclosure and transparency, just culture, the impact of clinical workload pressures, human factors liabilities, clinicians as secondary victims, the impact of disruptive and punitive behaviors, factors affecting professional morale, and personal failings. PMID:26580146

  9. Achievement for All: improving psychosocial outcomes for students with special educational needs and disabilities.

    PubMed

    Humphrey, Neil; Lendrum, Ann; Barlow, Alexandra; Wigelsworth, Michael; Squires, Garry

    2013-04-01

    Students with special educational needs and disabilities (SEND) are at a greatly increased risk of experiencing poor psychosocial outcomes. Developing effective interventions that address the cause of these outcomes has therefore become a major policy priority in recent years. We report on a national evaluation of the Achievement for All (AfA) programme that was designed to improve outcomes for students with SEND through: (1) academic assessment, tracking and intervention, (2) structured conversations with parents, and (3) developing provision to improve wider outcomes (e.g. positive relationships). Using a quasi-experimental, pre-test-post-test control group design, we assessed the impact of AfA on teacher ratings of the behaviour problems, positive relationships and bullying of students with SEND over an 18-month period. Participants were 4758 students with SEND drawn from 323 schools across England. Our main impact analysis demonstrated that AfA had a significant impact on all three response variables when compared to usual practice. Hierarchical linear modelling of data from the intervention group highlighted a range of school-level contextual factors and implementation activities and student-level individual differences that moderated the impact of AfA on our study outcomes. The implications of our findings are discussed, and study strengths and limitations are noted. PMID:23380579

  10. Using radar ground-truth to validate and improve the location accuracy of a lightning direction-finding network

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.

    1989-01-01

    A technique is described in which isolated radar echoes associated with clusters of lightning strikes are used to validate and improve the location accuracy of a lightning-direction-finding network. Using this technique, site errors of a magnetic direction-finding network for locating lightning strikes to ground were accurately determined. The technique offers advantages over existing techniques in that large sample sizes are readily attainable over a broad area on a regular basis; the technique can also provide additional constraints to redundant data methods such as that described by Orville (1987). Since most lightning strike networks have either partial or full weather radar coverage, the technique is practical for all but a few users.

  11. Unsupervised HLA Peptidome Deconvolution Improves Ligand Prediction Accuracy and Predicts Cooperative Effects in Peptide-HLA Interactions.

    PubMed

    Bassani-Sternberg, Michal; Gfeller, David

    2016-09-15

    Ag presentation on HLA molecules plays a central role in infectious diseases and tumor immunology. To date, large-scale identification of (neo-)Ags from DNA sequencing data has mainly relied on predictions. In parallel, mass spectrometry analysis of HLA peptidome is increasingly performed to directly detect peptides presented on HLA molecules. In this study, we use a novel unsupervised approach to assign mass spectrometry-based HLA peptidomics data to their cognate HLA molecules. We show that incorporation of deconvoluted HLA peptidomics data in ligand prediction algorithms can improve their accuracy for HLA alleles with few ligands in existing databases. The results of our computational analysis of large datasets of naturally processed HLA peptides, together with experimental validation and protein structure analysis, further reveal how HLA-binding motifs change with peptide length and predict new cooperative effects between distant residues in HLA-B07:02 ligands. PMID:27511729

  12. Improving the quantitative accuracy of cerebral oxygen saturation in monitoring the injured brain using atlas based Near Infrared Spectroscopy models.

    PubMed

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J E; Su, Zhangjie; Dehghani, Hamid

    2016-08-01

    The application of Near Infrared Spectroscopy (NIRS) for the monitoring of the cerebral oxygen saturation within the brain is well established, albeit using temporal data that can only measure relative changes of oxygenation state of the brain from a baseline. The focus of this investigation is to demonstrate that hybridisation of existing near infrared probe designs and reconstruction techniques can pave the way to produce a system and methods that can be used to monitor the absolute oxygen saturation in the injured brain. Using registered Atlas models in simulation, a novel method is outlined by which the quantitative accuracy and practicality of NIRS for specific use in monitoring the injured brain, can be improved, with cerebral saturation being recovered to within 10.1 ± 1.8% of the expected values. PMID:27003677

  14. Improving optical fiber current sensor accuracy using artificial neural networks to compensate temperature and minor non-ideal effects

    NASA Astrophysics Data System (ADS)

    Zimmermann, Antonio C.; Besen, Marcio; Encinas, Leonardo S.; Nicolodi, Rosane

    2011-05-01

    This article presents a practical signal-processing methodology, based on Artificial Neural Networks (ANN), to process the measurement signals of typical Fiber Optic Current Sensors (FOCS), achieving higher accuracy through temperature and non-linearity compensation. The proposed approach addresses the primary problems of FOCS, which arise mainly when it is difficult to determine all error sources present in the physical phenomenon, or when the measurement equation becomes too nonlinear to be applied over a wide measurement range. The great benefit of an ANN is that it yields a transfer function for the measurement system that takes into account all unknowns, even those from unwanted and unknown effects, providing a compensated output after the training session. The ANN training is treated as a black box, based on experimental data, in which the transfer function of the measurement system, its unknowns, and its non-idealities are processed and compensated at once, giving a fast and robust alternative to the theoretical FOCS method. A real FOCS system was built, and the signals acquired from the photo-detectors were processed both with the formulas of Faraday's law and with the ANN method, giving measurement results for both signal-processing strategies. The coil temperature measurements are also included in the ANN signal processing. To compare the results, a current-measuring instrument standard was used together with a metrological calibration procedure. Preliminary results from a variable-temperature experiment show the higher accuracy of the ANN methodology, with a maximum error better than 0.2%, making it a quick and robust way to handle the difficulties of FOCS non-ideality compensation.
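
    The black-box compensation idea described above can be sketched as a small neural network trained to map the raw sensor reading and the coil temperature back to the true current. Everything below is hypothetical: the distortion model, network size, and training settings are illustrative stand-ins, not the authors' FOCS system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic FOCS-like data (hypothetical numbers): the raw photo-detector
# output is the true current distorted by a mild nonlinearity plus a
# temperature-dependent drift.
current = rng.uniform(-1.0, 1.0, size=(2000, 1))
temp = rng.uniform(-1.0, 1.0, size=(2000, 1))     # normalized coil temperature
raw = current + 0.1 * current**3 + 0.05 * temp    # distorted measurement
X = np.hstack([raw, temp])                        # network inputs
y = current                                       # target: true current

# One-hidden-layer network trained by plain gradient descent on mean squared
# error; a sketch of the black-box compensation idea, not the authors' network.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.2
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # compensated current estimate
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)   # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

pred = np.tanh(X @ W1 + b1) @ W2 + b2
mae_raw = float(np.mean(np.abs(raw - y)))    # error before compensation
mae_ann = float(np.mean(np.abs(pred - y)))   # error after compensation
print(mae_ann < mae_raw)
```

    After training, the network output tracks the true current more closely than the raw distorted reading does, which is the essence of the compensation.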

  15. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal

    SciTech Connect

    Zhou Zhongxing; Gao Feng; Zhao Huijuan; Zhang Lixin

    2011-03-15

    Purpose: Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improving the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifactual signals are superimposed on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background-detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. Methods: To determine the optimal background-detrending technique for NPS estimation, four methods of artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was separately applied within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background-detrending techniques was compared according to the statistical variance of the NPS results and the suppression of the low-frequency systematic rise. Results: Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in suppressing the low-frequency systematic rise and reducing the variance of the NPS estimate for the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance above low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering
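
    Method (3), subtraction of a 2-D second-order polynomial fit, can be illustrated with a least-squares fit over the six monomials 1, x, y, x^2, xy, y^2; the ROI size and background coefficients below are arbitrary.

```python
import numpy as np

def detrend_2d_poly2(roi):
    """Subtract a least-squares 2-D second-order polynomial fit from an ROI,
    the background-detrending method the study found most effective."""
    ny, nx = roi.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # design matrix columns: 1, x, y, x^2, x*y, y^2
    A = np.column_stack([np.ones(roi.size), x.ravel(), y.ravel(),
                         (x**2).ravel(), (x * y).ravel(), (y**2).ravel()])
    coef, *_ = np.linalg.lstsq(A, roi.ravel(), rcond=None)
    trend = (A @ coef).reshape(roi.shape)
    return roi - trend

# A pure quadratic background trend is removed almost exactly
yy, xx = np.mgrid[0:32, 0:32].astype(float)
background = 5.0 + 0.1 * xx + 0.02 * yy**2
residual = detrend_2d_poly2(background)
print(float(np.abs(residual).max()) < 1e-6)
```

    On real flat-field data the residual would contain the stochastic noise whose NPS is then estimated, with the low-frequency trend removed.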

  16. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    SciTech Connect

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-02-15

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  17. Improving Science Achievement and Attitudes of Students With and Without Learning Disabilities

    NASA Astrophysics Data System (ADS)

    Sanders-White, Pamela

    The primary purpose of this study was to investigate the effect of structured note-taking compared to traditional note-taking on the acquisition of scientific knowledge for students with and without learning disabilities (LD) and students with reading difficulties (RD). An additional purpose was to examine whether the two note-taking methods affected students' attitudes toward science. The sample population consisted of 203 fifth grade students across four public schools in the southern area of the United States. A standardized instrument aligned to Florida's science standards was used to measure the acquisition of scientific knowledge and the Test of Science-Related Attitudes (TOSRA) was used to measure seven distinct science-related attitudes. For meaningful analyses, students with LD and students with RD were collapsed to form a single group due to the small numbers of participants in each of the subgroups; the collapsed group was referred to as "low achievers." A three-way repeated measures ANOVA was conducted to determine the effects of the pretest-posttest Science Interim assessment by group, type of student, and gender. The pretest-posttest Science Interim assessment scores were the within-group factor, while group, type of student, and gender were the between-groups factors. Results revealed that there was a significant interaction between the pretest-posttest Science Interim assessment and group, F(1, 191) = 9.320, p = .003, indicating that scientific knowledge scores increased for the experimental group, but decreased for the control group. Results also indicated that there was a significant three-way interaction between the pretest-posttest Science Interim assessment, group, and gender, F(1, 191) = 5.197, p = .024, showing that all participants in the experimental group improved their scores; while in the control group, female scores decreased and male scores increased. Participants in the experimental and control groups did not show improved attitudes

  18. Structuring Out-of-School Time to Improve Academic Achievement. IES Practice Guide. NCEE 2009-012

    ERIC Educational Resources Information Center

    Beckett, Megan; Borman, Geoffrey; Capizzano, Jeffrey; Parsley, Danette; Ross, Steven; Schirm, Allen; Taylor, Jessica

    2009-01-01

    Out-of-school time programs can enhance academic achievement by helping students learn outside the classroom. The purpose of this practice guide is to provide recommendations for organizing and delivering school-based out-of-school time (OST) programs to improve the academic achievement of student participants. The five recommendations in this…

  19. Evaluating Landsat 8 Satellite Sensor Data for Improved Vegetation Mapping Accuracy of the New Hampshire Coastal Watershed Area

    NASA Astrophysics Data System (ADS)

    Ledoux, Lindsay

    the previous Landsat sensor (Landsat 7). Once classification had been performed, traditional and area-based accuracy assessments were implemented. Comparison measures were also calculated (i.e. Kappa, Z test statistic). The results from this study indicate that, while using Landsat 8 imagery is useful, the additional spectral bands provided in the Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) do not provide an improvement in vegetation classification accuracy in this study.
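
    The Kappa comparison measure mentioned above can be computed directly from a classification error matrix; the counts below are hypothetical.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: reference,
    columns: classified), a standard map-accuracy comparison measure."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2    # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical two-class error matrix for a vegetation map
cm = [[45, 5],
      [10, 40]]
print(round(cohens_kappa(cm), 3))  # 0.7
```

    Kappa values from two classifications (e.g. Landsat 7 vs. Landsat 8 maps) can then be compared with a Z test on their difference.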

  20. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  1. Improved Accuracy of Continuous Glucose Monitoring Systems in Pediatric Patients with Diabetes Mellitus: Results from Two Studies

    PubMed Central

    2016-01-01

    Abstract Objective: This study was designed to evaluate accuracy, performance, and safety of the Dexcom (San Diego, CA) G4® Platinum continuous glucose monitoring (CGM) system (G4P) compared with the Dexcom G4 Platinum with Software 505 algorithm (SW505) when used as adjunctive management to blood glucose (BG) monitoring over a 7-day period in youth, 2–17 years of age, with diabetes. Research Design and Methods: Youth wore either one or two sensors placed on the abdomen or upper buttocks for 7 days, calibrating the device twice daily with a uniform BG meter. Participants had one in-clinic session on Day 1, 4, or 7, during which fingerstick BG measurements (self-monitoring of blood glucose [SMBG]) were obtained every 30 ± 5 min for comparison with CGM, and in youth 6–17 years of age, reference YSI glucose measurements were obtained from arterialized venous blood collected every 15 ± 5 min for comparison with CGM. The sensor was removed by the participant/family after 7 days. Results: In comparison of 2,922 temporally paired points of CGM with the reference YSI measurement for G4P and 2,262 paired points for SW505, the mean absolute relative difference (MARD) was 17% for G4P versus 10% for SW505 (P < 0.0001). In comparison of 16,318 temporally paired points of CGM with SMBG for G4P and 4,264 paired points for SW505, MARD was 15% for G4P versus 13% for SW505 (P < 0.0001). Similarly, error grid analyses indicated superior performance with SW505 compared with G4P in comparison of CGM with YSI and CGM with SMBG results, with greater percentages of SW505 results falling within error grid Zone A or the combined Zones A plus B. There were no serious adverse events or device-related serious adverse events for either the G4P or the SW505, and there was no sensor breakoff. Conclusions: The updated algorithm offers substantial improvements in accuracy and performance in pediatric patients with diabetes. Use of CGM with improved performance has
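
    The mean absolute relative difference (MARD) reported above is straightforward to compute from temporally paired sensor and reference readings; the paired values below are hypothetical.

```python
import numpy as np

def mard(cgm, reference):
    """Mean absolute relative difference (%) between paired CGM and
    reference glucose readings, the accuracy metric used to compare
    sensor algorithms."""
    cgm = np.asarray(cgm, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean(np.abs(cgm - reference) / reference)

# Hypothetical paired readings (mg/dL): sensor vs. YSI reference
sensor = [110, 95, 150, 200]
ysi = [100, 100, 160, 210]
print(round(mard(sensor, ysi), 1))  # 6.5
```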

  2. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

    Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is epi-illuminated, background fluorescence from out-of-focus excited molecules reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate the out-of-focus background and to improve the SNR and localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscope that combines selective-plane illumination with the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and under epi-illumination is compared. As expected, the SNR of DH-PSF microscopy based on selective-plane illumination is increased remarkably, so the 3D localization precision of the DH-PSF would be improved significantly. We demonstrate these capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.

  3. Studying the Effect of Adaptive Momentum in Improving the Accuracy of Gradient Descent Back Propagation Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Rehman, Muhammad Zubair; Nawi, Nazri Mohd.

    Despite being widely used on practical problems around the world, the Gradient Descent Back-propagation algorithm suffers from slow convergence and convergence to local minima. Previous researchers have suggested modifications to improve convergence in the Gradient Descent Back-propagation algorithm, such as careful selection of the input weights and biases, learning rate, momentum, network topology, activation function, and the value of the 'gain' in the activation function. This research proposes an algorithm for improving the working performance of the back-propagation algorithm, 'Gradient Descent with Adaptive Momentum (GDAM)', which keeps the gain value fixed during all network trials. The performance of GDAM is compared with 'Gradient Descent with fixed Momentum (GDM)' and 'Gradient Descent Method with Adaptive Gain (GDM-AG)'. The learning rate is fixed at 0.4 and the maximum number of epochs is set to 3000, while the sigmoid activation function is used for the experimentation. The results show that GDAM is a better approach than the previous methods, with an accuracy ratio of 1.0 on classification problems such as Wine Quality, Mushroom, and Thyroid disease.
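
    The GDAM idea of adapting the momentum term during training can be sketched as follows; the specific adaptation rule, test function, and constants here are illustrative assumptions, not the paper's exact update.

```python
import numpy as np

def gd_adaptive_momentum(loss, grad, w0, lr=0.4, m0=0.5, steps=2000):
    """Gradient descent whose momentum coefficient adapts each step: it is
    raised while the loss keeps falling and cut back on an overshoot. The
    adaptation rule is a hypothetical illustration, not the paper's exact
    GDAM update."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    m, prev = m0, np.inf
    for _ in range(steps):
        v = m * v - lr * grad(w)   # momentum-smoothed update
        w = w + v
        cur = loss(w)
        m = min(m * 1.05, 0.9) if cur < prev else max(m * 0.5, 0.1)
        prev = cur
    return w

# Minimize f(w) = ||w||^2 (gradient 2w) from a hypothetical starting point
w_final = gd_adaptive_momentum(lambda w: float(np.sum(w**2)),
                               lambda w: 2.0 * w, [5.0, -3.0])
print(bool(np.allclose(w_final, 0.0, atol=1e-3)))
```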

  4. Improvement of the accuracy of the aircraft center of gravity by employing optical fiber Bragg grating technology

    NASA Astrophysics Data System (ADS)

    Zhang, Hongtao; Wang, Pengfei; Fan, LingLing; Guan, Liang; Zhao, Qiming; Cui, Hong-Liang

    2010-04-01

    Safe flight of aircraft requires that the aircraft center of gravity (CG) fall within specified limits established by the manufacturer. However, the aircraft CG depends not only on the structure of the plane, but also on the passengers and their luggage. The current method of estimating the weight of passengers and luggage by an average weight may result in a violation of this requirement. To reduce the discrepancy between the actual weight and the estimated weight, we propose a method of improving the accuracy of the calculated CG of the plane by weighing the passengers and their personal luggage. This method is realized by a Weigh-In-Motion (WIM) system, based on optical fiber Bragg grating (FBG) technology, installed at boarding gates. A prototype of the WIM system was fabricated and tested in the lab. The resolution of this system is 2 kg and can be further improved by advanced manufacturing technology. With the accurate weights of passengers and luggage coming from the WIM system and the locations of passengers and luggage obtained from the boarding cards, the aircraft CG can be calculated correctly. This method can also be applied in other fields, such as escalators and boarding gates for ferries.
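
    The CG calculation itself is a weighted-moment sum over the individually weighed items; the weights and seat arms below are hypothetical.

```python
def aircraft_cg(items):
    """Longitudinal center of gravity from (weight, arm) pairs: the sum of
    moments divided by the total weight. All values are hypothetical."""
    total_weight = sum(w for w, _ in items)
    total_moment = sum(w * arm for w, arm in items)
    return total_moment / total_weight

# Empty aircraft plus individually weighed passengers/luggage (kg)
# at their seat arms (m from the datum)
load = [(30000.0, 14.0), (80.0, 10.0), (95.0, 18.0), (20.0, 20.0)]
print(round(aircraft_cg(load), 3))
```

    Replacing the average-weight estimate with measured WIM weights changes only the (weight, arm) inputs; the moment sum is unchanged.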

  5. The use of film dosimetry of the penumbra region to improve the accuracy of intensity modulated radiotherapy

    SciTech Connect

    Arnfield, Mark R.; Otto, Karl; Aroumougame, Vijayan R.; Alkins, Ryan D.

    2005-01-01

    Accurate measurements of the penumbra region are important for the proper modeling of the radiation beam for linear accelerator-based intensity modulated radiation therapy. The usual data collection technique with a standard ionization chamber artificially broadens the measured beam penumbrae due to volume effects. The larger the chamber, the greater is the spurious increase in penumbra width. This leads to inaccuracies in dose calculations of small fields, including small fields or beam segments used in IMRT. This source of error can be rectified by the use of film dosimetry for penumbra measurements because of its high spatial resolution. The accuracy of IMRT calculations with a pencil beam convolution model in a commercial treatment planning system was examined using commissioning data with and without the benefit of film dosimetry of the beam penumbrae. A set of dose-spread kernels of the pencil beam model was calculated based on commissioning data that included beam profiles gathered with a 0.6-cm-i.d. ionization chamber. A second set of dose-spread kernels was calculated using the same commissioning data with the exception of the penumbrae, which were measured with radiographic film. The average decrease in the measured width of the 80%-20% penumbrae of various square fields of size 3-40 cm, at 5 cm depth in water-equivalent plastic was 0.27 cm. Calculations using the pencil beam model after it was re-commissioned using film dosimetry of the penumbrae gave better agreement with measurements of IMRT fields, including superior reproduction of high dose gradient regions and dose extrema. These results show that accurately measuring the beam penumbrae improves the accuracy of the dose distributions predicted by the treatment planning system and thus is important when commissioning beam models used for IMRT.
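
    The 80%-20% penumbra width discussed above can be extracted from a measured edge profile by interpolating the normalized dose; a minimal sketch assuming a monotonically falling edge.

```python
import numpy as np

def penumbra_width(x, dose, lo=0.2, hi=0.8):
    """Width of the 80%-20% penumbra from a 1-D beam-edge profile.
    Dose is normalized to its maximum and the crossing points are found by
    linear interpolation; a simple illustration, not the film-dosimetry
    protocol of the paper."""
    d = np.asarray(dose, dtype=float) / np.max(dose)
    x = np.asarray(x, dtype=float)
    # this sketch assumes dose falls monotonically along x
    x_hi = np.interp(hi, d[::-1], x[::-1])
    x_lo = np.interp(lo, d[::-1], x[::-1])
    return abs(x_lo - x_hi)

# Idealized linear edge: dose falls from 1 to 0 over 1 cm
x = np.linspace(0.0, 1.0, 101)
dose = 1.0 - x
print(round(penumbra_width(x, dose), 2))  # 0.6 cm for a linear ramp
```

    A detector with a large sensitive volume smears the measured edge, widening this value; film's high spatial resolution avoids that broadening.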

  6. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    SciTech Connect

    Patel, Prashant; Rangarajan, Balaji; Mangat, Kamarjit E-mail: kamarjit.mangat@nhs.net

    2015-08-15

    PurposeVarious methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease.Materials and MethodsFifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique with the use of a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open, within the stricture the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively.ResultsWe report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication.ConclusionWe propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  7. Deriving bio-equivalents from in vitro bioassays: assessment of existing uncertainties and strategies to improve accuracy and reporting.

    PubMed

    Wagner, Martin; Vermeirssen, Etiënne L M; Buchinger, Sebastian; Behr, Maximilian; Magdeburg, Axel; Oehlmann, Jörg

    2013-08-01

    Bio-equivalents (e.g., 17β-estradiol or dioxin equivalents) are commonly employed to quantify the in vitro effects of complex human or environmental samples. However, there is no generally accepted data analysis strategy for estimating and reporting bio-equivalents. Therefore, the aims of the present study are to 1) identify common mathematical models for the derivation of bio-equivalents from the literature, 2) assess the ability of those models to correctly predict bio-equivalents, and 3) propose measures to reduce uncertainty in their calculation and reporting. We compiled a database of 234 publications that report bio-equivalents. From the database, we extracted 3 data analysis strategies commonly used to estimate bio-equivalents. These models are based on linear or nonlinear interpolation, and the comparison of effect concentrations (ECx). To assess their accuracy, we employed simulated data sets in different scenarios. The results indicate that all models lead to a considerable misestimation of bio-equivalents if certain mathematical assumptions (e.g., goodness of fit, parallelism of dose-response curves) are violated. However, nonlinear interpolation is most suitable to predict bio-equivalents from single-point estimates. Regardless of the model, subsequent linear extrapolation of bio-equivalents generates additional inaccuracy if the prerequisite of parallel dose-response curves is not met. When all these factors are taken into consideration, it becomes clear that data analysis introduces considerable uncertainty in the derived bio-equivalents. To improve accuracy and transparency of bio-equivalents, we propose a novel data analysis strategy and a checklist for reporting Minimum Information about Bio-equivalent ESTimates (MIBEST).
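
    Nonlinear interpolation of a bio-equivalent, the strategy the study found most suitable for single-point estimates, can be sketched by inverting a fitted four-parameter logistic standard curve; the curve parameters below are hypothetical.

```python
def logistic4(c, bottom, top, ec50, hill):
    """Four-parameter logistic dose-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

def bioeq_interpolate(effect, bottom, top, ec50, hill):
    """Read a sample's measured effect back through the fitted standard
    curve to an equivalent standard concentration (nonlinear
    interpolation); valid only for effects strictly between the curve's
    bottom and top."""
    f = (effect - bottom) / (top - effect)
    return ec50 * f ** (1.0 / hill)

# Hypothetical 17beta-estradiol standard curve
bottom, top, ec50, hill = 0.0, 1.0, 2.0, 1.0
# A sample producing a half-maximal effect reads back as the EC50 itself
print(bioeq_interpolate(0.5, bottom, top, ec50, hill))  # 2.0
```

    Because the inversion uses the full fitted curve rather than a linear segment, it avoids the misestimation the study reports for linear interpolation when dose-response curves are not parallel.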

  8. A breast-specific, negligible-dose scatter correction technique for dedicated cone-beam breast CT: a physics-based approach to improve Hounsfield Unit accuracy

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Burkett, George, Jr.; Boone, John M.

    2014-11-01

    The purpose of this research was to develop a method to correct the cupping artifact caused from x-ray scattering and to achieve consistent Hounsfield Unit (HU) values of breast tissues for a dedicated breast CT (bCT) system. The use of a beam passing array (BPA) composed of parallel-holes has been previously proposed for scatter correction in various imaging applications. In this study, we first verified the efficacy and accuracy using BPA to measure the scatter signal on a cone-beam bCT system. A systematic scatter correction approach was then developed by modeling the scatter-to-primary ratio (SPR) in projection images acquired with and without BPA. To quantitatively evaluate the improved accuracy of HU values, different breast tissue-equivalent phantoms were scanned and radially averaged HU profiles through reconstructed planes were evaluated. The dependency of the correction method on object size and number of projections was studied. A simplified application of the proposed method on five clinical patient scans was performed to demonstrate efficacy. For the typical 10-18 cm breast diameters seen in the bCT application, the proposed method can effectively correct for the cupping artifact and reduce the variation of HU values of breast equivalent material from 150 to 40 HU. The measured HU values of 100% glandular tissue, 50/50 glandular/adipose tissue, and 100% adipose tissue were approximately 46, -35, and -94, respectively. It was found that only six BPA projections were necessary to accurately implement this method, and the additional dose requirement is less than 1% of the exam dose. The proposed method can effectively correct for the cupping artifact caused from x-ray scattering and retain consistent HU values of breast tissues.
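
    The relationship underlying the correction is simple to state: the measured projection equals the primary signal times (1 + SPR), so an SPR estimated with the BPA lets the primary be recovered by division. A one-line sketch of that step:

```python
import numpy as np

def remove_scatter(measured, spr):
    """Estimate the primary signal from a measured projection value given
    the scatter-to-primary ratio: measured = primary * (1 + SPR), hence
    primary = measured / (1 + SPR). A sketch of the correction step only,
    not the full SPR modeling of the paper."""
    return np.asarray(measured, dtype=float) / (1.0 + np.asarray(spr, dtype=float))

# If SPR = 0.5, one third of the measured signal is scatter
print(remove_scatter(150.0, 0.5))  # 100.0
```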

  9. A proposal for a test of Weak Equivalence Principle with improved accuracy using a cryogenic differential accelerometer installed on a pendulum

    NASA Astrophysics Data System (ADS)

    Iafolla, V. A.; Fiorenza, E.; Lefevre, C.; Lucchesi, D. M.; Lucente, M.; Magnafico, C.; Nozzoli, S.; Peron, R.; Santoli, F.; Lorenzini, E. C.; Milyukov, V.; Shapiro, I. I.; Glashow, S.

    2016-01-01

    We present here the concept for a new experimental test of the Weak Equivalence Principle (WEP) carried out in the gravity field of the Sun. The WEP, which states that the gravitational acceleration a body is subject to is independent of its mass and composition, is at the basis of general relativity and, more generally, of metric theories of gravitation. It is therefore very important to test it to the precision allowed by current technology. The experiment proposed here aims at measuring the relative acceleration of two test masses in free fall, searching for a possible violation of the WEP, which would show up as a non-zero differential acceleration signal. The core of the experiment is a differential accelerometer with zero baseline, whose central elements are two test masses of different materials. This differential accelerometer is placed on a pendulum in such a way as to make the common center of mass of the test masses coincident with the center of mass of the pendulum itself. Ensuring very precise centering, such a system should provide a high degree of attenuation of the local seismic noise, which, together with an integration time of the order of tens of days, would allow a test of the WEP with an accuracy improved by at least an order of magnitude with respect to the best measurements achieved so far. One of the strengths of this proposal is the know-how acquired from a previous study and technology development (GReAT: General Relativity Accuracy Test) that involved a test of the WEP in the gravity field of the Earth, in free fall inside a co-moving capsule released from a stratospheric balloon. The concept of the experiment is introduced, with particular attention to the differential accelerometer and its accommodation on the pendulum. A preliminary estimate of the attainable precision is given, along with a critical analysis of the associated challenges.
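    A WEP violation of the kind described is conventionally quantified by the Eötvös parameter, the normalized differential acceleration of the two test masses (a standard figure of merit for such experiments, not notation taken from this abstract):

    ```latex
    \eta = \frac{2\,\lvert a_1 - a_2 \rvert}{\lvert a_1 + a_2 \rvert}
    ```

    where \(a_1\) and \(a_2\) are the free-fall accelerations of the two test masses toward the source body; \(\eta = 0\) if the WEP holds exactly.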

  10. 41 CFR 102-193.25 - What type of records management business process improvements should my agency strive to achieve?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... management business process improvements should my agency strive to achieve? 102-193.25 Section 102-193.25...-193.25 What type of records management business process improvements should my agency strive to... correspondence; (b) Design forms that are easy to fill-in, read, transmit, process, and retrieve, and...

  11. 41 CFR 102-193.25 - What type of records management business process improvements should my agency strive to achieve?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... management business process improvements should my agency strive to achieve? 102-193.25 Section 102-193.25...-193.25 What type of records management business process improvements should my agency strive to... correspondence; (b) Design forms that are easy to fill-in, read, transmit, process, and retrieve, and...

  12. 41 CFR 102-193.25 - What type of records management business process improvements should my agency strive to achieve?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... management business process improvements should my agency strive to achieve? 102-193.25 Section 102-193.25...-193.25 What type of records management business process improvements should my agency strive to... correspondence; (b) Design forms that are easy to fill-in, read, transmit, process, and retrieve, and...

  13. 41 CFR 102-193.25 - What type of records management business process improvements should my agency strive to achieve?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What type of records management business process improvements should my agency strive to achieve? 102-193.25 Section 102-193.25...-193.25 What type of records management business process improvements should my agency strive...

  14. Principal Readiness and Professional Development to Conduct Effective Teacher Evaluations That Lead to Improved Student Achievement

    ERIC Educational Resources Information Center

    Hunziker, Shawn

    2012-01-01

    Education reform is the focus of many of the political agendas today. The research is clear that the best way to increase student achievement is by having highly effective teachers in the classroom. As a result of prior research, both the state and federal governments have created mandates and legislation aimed at achieving that goal. One of the…

  15. Pieces of the Puzzle: Factors in Improving Achievement of Urban School Districts. Education Outlook. No. 4

    ERIC Educational Resources Information Center

    Casserly, Michael

    2012-01-01

    In one of the first large-scale analyses of urban trends on the National Assessment of Educational Progress (NAEP), the Council of the Great City Schools and the American Institutes for Research identified urban school systems that demonstrated high achievement or significant achievement gains on the NAEP, and examined possible factors behind…

  16. Developing and Improving Modified Achievement Level Descriptors: Rationale, Procedures, and Tools

    ERIC Educational Resources Information Center

    Quenemoen, Rachel; Albus, Debra; Rogers, Chris; Lazarus, Sheryl

    2010-01-01

    Some states are developing alternate assessments based on modified achievement standards (AA-MAS) to measure the academic achievement of some students with disabilities (Albus, Lazarus, Thurlow, & Cormier, 2009; Lazarus, Thurlow, Christensen, & Cormier, 2007). These assessments measure the same content as the general assessment for a given…

  17. Principal Leadership: Creating a Culture of Academic Optimism to Improve Achievement for All Students

    ERIC Educational Resources Information Center

    McGuigan, Leigh; Hoy, Wayne K.

    2006-01-01

    Since the Coleman Report (1966), educational researchers have tried to identify school properties that make a difference in student achievement and overcome the negative influence of low socioeconomic status. We theorized that academic optimism was a latent construct that enhanced student achievement and that enabling school structure provided a…

  18. An Empirical Prior Improves Accuracy for Bayesian Estimation of Transcription Factor Binding Site Frequencies within Gene Promoters

    PubMed Central

    Ramsey, Stephen A.

    2015-01-01

    A Bayesian method for sampling from the distribution of matches to a precompiled transcription factor binding site (TFBS) sequence pattern (conditioned on an observed nucleotide sequence and the sequence pattern) is described. The method takes as input a position frequency matrix for a set of representative binding sites for a transcription factor and two sets of noncoding, 5′ regulatory sequences for gene sets that are to be compared. An empirical prior on the frequency λ (per base pair of gene-vicinal, noncoding DNA) of TFBSs is developed using data from the ENCODE project and incorporated into the method. In addition, a probabilistic model for binding site occurrences conditioned on λ is developed analytically, taking into account the finite-width effects of binding sites. The count of TFBSs (conditioned on the observed sequence) is sampled using Metropolis–Hastings with an information entropy-based move generator. The derivation of the method is presented in a step-by-step fashion, starting from specific conditional independence assumptions. Empirical results show that the newly proposed prior on λ improves accuracy for estimating the number of TFBSs within a set of promoter sequences. PMID:27812284
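    The sampling step can be illustrated with a minimal Metropolis–Hastings chain over integer counts. This is a toy sketch only: the paper's entropy-based move generator and empirical ENCODE-derived prior are replaced here by a symmetric ±1 proposal and a plain Poisson prior, so only the accept/reject mechanics carry over.

    ```python
    import math
    import random

    def metropolis_hastings_counts(log_target, n0=0, steps=5000, seed=0):
        """Minimal Metropolis-Hastings sampler over non-negative integers.

        Proposals step +/-1 (symmetric); moves below zero are rejected,
        which is equivalent to a target of zero on negative counts.
        log_target(n) returns the unnormalised log posterior of count n.
        """
        rng = random.Random(seed)
        n = n0
        samples = []
        for _ in range(steps):
            prop = n + rng.choice((-1, 1))
            if prop >= 0:
                # Symmetric proposal: accept with prob min(1, ratio).
                if math.log(rng.random() + 1e-300) < log_target(prop) - log_target(n):
                    n = prop
            samples.append(n)  # rejected moves repeat the current state
        return samples

    # Toy target: Poisson(lam) prior on the TFBS count with no likelihood
    # term, so the chain should concentrate near lam.
    lam = 3.0
    log_poisson = lambda n: n * math.log(lam) - lam - math.lgamma(n + 1)
    draws = metropolis_hastings_counts(log_poisson, steps=20000)
    mean = sum(draws[5000:]) / len(draws[5000:])
    ```

    In the actual method the Poisson-style prior would be combined with the sequence-match likelihood inside `log_target`, leaving the chain mechanics unchanged.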

  19. Improving the accuracy of sediment-associated constituent concentrations in whole storm water samples by wet-sieving

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.; Bowman, G.

    2007-01-01

    Sand-sized particles (>63 µm) in whole storm water samples collected from urban runoff have the potential to produce data with substantial bias and/or poor precision, both during sample splitting and laboratory analysis. New techniques were evaluated in an effort to overcome some of the limitations associated with splitting and analyzing whole storm water samples containing sand-sized particles. Wet-sieving separates sand-sized particles from a whole storm water sample. Once separated, both the sieved solids and the remaining aqueous samples (a water suspension of particles less than 63 µm) were analyzed for total recoverable metals using a modification of USEPA Method 200.7. The modified version digests the entire sample, rather than an aliquot of the sample. Using a total recoverable acid digestion on the entire contents of the sieved solid and aqueous samples improved the accuracy of the derived sediment-associated constituent concentrations. Concentration values of sieved solid and aqueous samples can later be summed to determine an event mean concentration. © ASA, CSSA, SSSA.
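    Recombining the two analyzed fractions into an event mean concentration is simple arithmetic; a sketch with entirely hypothetical masses and runoff volume (the function name and numbers are invented for illustration):

    ```python
    def event_mean_concentration(mass_sieved_ug, mass_aqueous_ug, volume_L):
        """Combine the sieved-solid and aqueous fractions into a single
        event mean concentration (EMC, ug/L) for a storm event.

        Because each fraction is digested and analysed whole rather than
        subsampled, the constituent masses simply sum before dividing by
        the sampled runoff volume.
        """
        return (mass_sieved_ug + mass_aqueous_ug) / volume_L

    # Hypothetical event: 120 ug of a metal on sand-sized solids plus
    # 80 ug in the <63 um aqueous fraction, in 4 L of sampled runoff.
    emc = event_mean_concentration(120.0, 80.0, 4.0)  # -> 50.0 ug/L
    ```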

  20. The Accuracy of ADC Measurements in Liver Is Improved by a Tailored and Computationally Efficient Local-Rigid Registration Algorithm

    PubMed Central

    Ragheb, Hossein; Thacker, Neil A.; Guyader, Jean-Marie; Klein, Stefan; deSouza, Nandita M.; Jackson, Alan

    2015-01-01

    This study describes post-processing methodologies to reduce the effects of physiological motion in measurements of the apparent diffusion coefficient (ADC) in the liver. The aims of the study are to improve the accuracy of ADC measurements in liver disease to support quantitative clinical characterisation and to reduce the number of patients required for sequential studies of disease progression and therapeutic effects. Two motion correction methods are compared, one based on non-rigid registration (NRA) using freely available open source algorithms and the other a local-rigid registration (LRA) specifically designed for use with diffusion weighted magnetic resonance (DW-MR) data. Performance of these methods is evaluated using metrics computed from regional ADC histograms on abdominal image slices from healthy volunteers. While the non-rigid registration method has the advantages of being applicable to the whole volume and fully automatic, the local-rigid registration method is faster while maintaining the integrity of the biological structures essential for analysis of tissue heterogeneity. Our findings also indicate that the averaging commonly applied to DW-MR images as part of the acquisition protocol should be avoided if possible. PMID:26204105
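    Registration matters here because ADC is computed voxel-wise across images acquired at different diffusion weightings, so misaligned voxels corrupt the fit. A minimal two-point estimate under the standard mono-exponential DWI model S(b) = S0·exp(-b·ADC) (the general model, not this paper's specific pipeline; names are illustrative):

    ```python
    import numpy as np

    def adc_map(s_low, s_high, b_low, b_high):
        """Estimate the apparent diffusion coefficient (ADC) per voxel
        from signals at two b-values.

        Under S(b) = S0 * exp(-b * ADC), taking the log-ratio of the two
        signals gives ADC = ln(S_low / S_high) / (b_high - b_low).
        """
        return np.log(s_low / s_high) / (b_high - b_low)

    # Two spatially registered DW images of the same voxels (arbitrary
    # units), simulated with a known true ADC of 1.0e-3 mm^2/s:
    s0 = np.array([1000.0, 800.0])
    s1000 = s0 * np.exp(-1000 * 1.0e-3)
    adc = adc_map(s0, s1000, 0, 1000)
    ```

    If motion shifts tissue between the b=0 and b=1000 acquisitions, `s_low` and `s_high` no longer refer to the same anatomy, which is exactly the error the registration step is meant to remove.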