Science.gov

Sample records for achieve improved accuracy

  1. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  2. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  3. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  4. Achieving seventh-order amplitude accuracy in leapfrog integrations

    NASA Astrophysics Data System (ADS)

    Williams, Paul

    2015-04-01

    The leapfrog time-stepping scheme is commonly used in general circulation models of weather and climate. The Robert-Asselin filter is used in conjunction with it, to damp the computational mode. Although the leapfrog scheme makes no amplitude errors when integrating linear oscillations, the Robert-Asselin filter introduces first-order amplitude errors. The RAW filter, which was recently proposed as an improvement, eliminates the first-order amplitude errors and yields third-order amplitude accuracy. This development has been shown to significantly increase the skill of medium-range weather forecasts. However, it has not previously been shown how to further improve the accuracy by eliminating the third- and higher-order amplitude errors. This presentation will show that leapfrogging over a suitably weighted blend of the filtered and unfiltered tendencies eliminates the third-order amplitude errors and yields fifth-order amplitude accuracy. It will also show that the use of a more discriminating (1,-4,6,-4,1) filter instead of a (1,-2,1) filter eliminates the fifth-order amplitude errors and yields seventh-order amplitude accuracy. Other related schemes are obtained by varying the values of the filter parameters, and it is found that several combinations offer an appealing compromise of stability and accuracy. The proposed new schemes are shown to yield substantial forecast improvements in a medium-complexity atmospheric general circulation model. They appear to be attractive alternatives to the filtered leapfrog schemes currently used in many weather and climate models. Reference: Williams, P. D. (2013) Achieving seventh-order amplitude accuracy in leapfrog integrations. Monthly Weather Review 141(9), pp. 3037-3051. DOI: 10.1175/MWR-D-12-00303.1
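
    To make the filtering machinery concrete, the following minimal sketch (illustrative only, not the author's code; the oscillator, time step, and filter parameters nu and alpha are assumed values) integrates a linear oscillation with a leapfrog scheme and the standard Robert-Asselin-Williams (RAW) filter. The blended-tendency and (1,-4,6,-4,1) variants described above modify how the displacement d is formed and applied.

      # Minimal sketch: RAW-filtered leapfrog applied to dx/dt = i*omega*x.
      import numpy as np

      def raw_leapfrog(omega=1.0, dt=0.2, nsteps=500, nu=0.2, alpha=0.53):
          f = lambda x: 1j * omega * x                        # oscillator tendency
          x_prev = 1.0 + 0j                                   # exact values at t = 0
          x_curr = np.exp(1j * omega * dt)                    # and at t = dt
          for _ in range(nsteps):
              x_next = x_prev + 2.0 * dt * f(x_curr)          # leapfrog step
              d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next) # filter displacement
              x_curr = x_curr + alpha * d                     # RAW: split the displacement
              x_next = x_next + (alpha - 1.0) * d             # between time levels n and n+1
              x_prev, x_curr = x_curr, x_next
          return x_curr

      # The amplitude of the undamped oscillation should stay close to 1; the
      # residual deviation illustrates the scheme's amplitude error.
      print(abs(raw_leapfrog()))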

  5. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
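
    As a rough plausibility check of the quoted angular accuracies (illustrative arithmetic only, not taken from the report), the cross-track distance subtended by a small angle theta at range r is simply r * theta:

      import math

      GEO_ALT_M = 35_786_000                      # approximate geostationary altitude (m)
      ARCSEC_TO_RAD = math.pi / (180 * 3600)

      for arcsec in (0.5, 0.05, 0.03):
          print(f"{arcsec} arcsec -> {GEO_ALT_M * arcsec * ARCSEC_TO_RAD:.0f} m")
      # 0.5 arcsec -> ~87 m and 0.05 arcsec -> ~9 m, roughly consistent with the
      # ~100 m and ~10 m figures quoted at synchronous altitude above.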

  6. Achieving Seventh-Order Amplitude Accuracy in Leapfrog Integrations

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2014-12-01

    The leapfrog time-stepping scheme is commonly used in general circulation models of the atmosphere and ocean. The Robert-Asselin filter is used in conjunction with it, to damp the computational mode. Although the leapfrog scheme makes no amplitude errors when integrating linear oscillations, the Robert-Asselin filter introduces first-order amplitude errors. The RAW filter, which was recently proposed as an improvement, eliminates the first-order amplitude errors and yields third-order amplitude accuracy. This development has been shown to significantly increase the skill of medium-range weather forecasts. However, it has not previously been shown how to further improve the accuracy by eliminating the third- and higher-order amplitude errors. This presentation will show that leapfrogging over a suitably weighted blend of the filtered and unfiltered tendencies eliminates the third-order amplitude errors and yields fifth-order amplitude accuracy. It will also show that the use of a more discriminating (1, -4, 6, -4, 1) filter instead of a (1, -2, 1) filter eliminates the fifth-order amplitude errors and yields seventh-order amplitude accuracy. Other related schemes are obtained by varying the values of the filter parameters, and it is found that several combinations offer an appealing compromise of stability and accuracy. The proposed new schemes are shown to yield substantial forecast improvements in a medium-complexity atmospheric general circulation model. They appear to be attractive alternatives to the filtered leapfrog schemes currently used in many weather and climate models.

  7. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal which reaches the sensor on board the satellite is the complex aggregation of signals (in an agricultural field, for example) from soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrition content, inorganic content, water content, etc.), plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status, and atmospheric effects, and from this we want to find, say, characteristics of vegetation. If the sensor on board the satellite makes measurements in n bands (n of n×1 dimension) and the number of classes in an image is c (f of c×1 dimension), then considering linear mixture modelling the pixel classification problem could be written as n = m·f + e, where m is the transformation matrix of (n×c) dimension and e represents the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions for such a problem are many. Thus, getting back individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this puts a limit on the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint mostly practiced in solving such a situation, which suffers from the handicaps of an assumed Gaussian distribution and an assumed random nature of pixels (in fact there is high auto-correlation among the pixels of a specific class, and further high auto-correlation among the pixels in sub-classes where the homogeneity would be high among pixels). Due to this, achieving very high accuracy in the classification of remote sensing images is not a straight proposition. With the availability of GIS for the area under study, (i) a priori probabilities for different classes could be assigned to the ML classifier in more realistic terms and (ii) the purity of training sets for different thematic classes could be better ascertained. To what extent this could improve the accuracy of classification in ML classifier
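
    A small sketch of the linear mixture model described above (synthetic class signatures and fractions, not data from the paper) illustrates the forward model n = m·f + e and a simple least-squares inversion for f, with the estimated fractions clipped and renormalized afterwards:

      import numpy as np

      rng = np.random.default_rng(0)
      n_bands, n_classes = 6, 3
      m = rng.uniform(0.1, 0.9, size=(n_bands, n_classes))    # class signatures (columns)
      f_true = np.array([0.6, 0.3, 0.1])                       # true class fractions
      pixel = m @ f_true + rng.normal(0, 0.01, n_bands)        # observed band vector n with noise e

      f_hat, *_ = np.linalg.lstsq(m, pixel, rcond=None)        # unconstrained inversion
      f_hat = np.clip(f_hat, 0, None)
      f_hat = f_hat / f_hat.sum()                              # crude constraint: fractions sum to one
      print(f_true, f_hat.round(3))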

  8. Improving the Accuracy of Self-Corrected Mathematics Homework.

    ERIC Educational Resources Information Center

    Miller, Tracy L.; And Others

    1993-01-01

    A study investigated the effect of reward on sixth graders' self-correction inaccuracy and monitored the effect of improved self-correction on mathematics homework achievement. There was a low baseline rate of student self-correction inaccuracy. Offering a reward for improved accuracy caused the rate of inaccuracy to decrease significantly.…

  9. Training in timing improves accuracy in golf.

    PubMed

    Libkuman, Terry M; Otani, Hajime; Steger, Neil

    2002-01-01

    In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497

  10. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
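
    The RMSE metric used in the study is simple enough to state in a few lines; the sketch below (synthetic traces, not study data) computes the percent RMSE reduction of one predicted trace relative to another:

      import numpy as np

      def rmse(real, predicted):
          real, predicted = np.asarray(real), np.asarray(predicted)
          return np.sqrt(np.mean((real - predicted) ** 2))

      t = np.linspace(0, 60, 1800)                          # 60 s of motion sampled at 30 Hz
      real = np.sin(2 * np.pi * t / 4)                      # idealized 4 s breathing cycle
      pred_free = real + np.random.default_rng(1).normal(0, 0.15, t.size)   # noisier prediction
      pred_av = real + np.random.default_rng(2).normal(0, 0.10, t.size)     # less irregular trace

      reduction = 100 * (rmse(real, pred_free) - rmse(real, pred_av)) / rmse(real, pred_free)
      print(f"RMSE reduction: {reduction:.0f}%")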

  11. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  12. Accuracy required and achievable in radiotherapy dosimetry: have modern technology and techniques changed our views?

    NASA Astrophysics Data System (ADS)

    Thwaites, David

    2013-06-01

    In this review of the accuracy required and achievable in radiotherapy dosimetry, older approaches and evidence-based estimates for 3DCRT have been reprised, summarising and drawing together the author's earlier evaluations where still relevant. Available evidence for IMRT uncertainties has been reviewed, selecting information from tolerances, QA, verification measurements, in vivo dosimetry and dose delivery audits, to consider whether achievable uncertainties increase or decrease for current advanced treatments and practice. Overall there is some evidence that they tend to increase, but that similar levels should be achievable. Thus it is concluded that those earlier estimates of achievable dosimetric accuracy are still applicable, despite the changes and advances in technology and techniques. The one exception is where there is significant lung involvement, where it is likely that uncertainties have now improved due to widespread use of more accurate heterogeneity models. Geometric uncertainties have improved with the wide availability of IGRT.

  13. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature size shrinkage in semiconductor devices progresses, process fluctuation, especially focus, strongly affects device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors are due to wafers, exposure tools, reticles, QCs, and so on. Few studies have been performed to minimize the measurement errors of the auto focus (AF) sensors of an exposure tool, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor 1) has already been proposed; its basic principle is that the intensity of the diffraction light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at the defocus position is shifted on the wafer, and the shifted pattern can be easily measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interruptions caused by other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, where the shift of the PSG resist mark is measured by employing a critical dimension scanning electron microscope (CD-SEM) to measure the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  14. Can Judges Improve Academic Achievement?

    ERIC Educational Resources Information Center

    Greene, Jay P.; Trivitt, Julie R.

    2008-01-01

    Over the last 3 decades student achievement has remained essentially unchanged in the United States, but not for a lack of spending. Over the same period a myriad of education reforms have been suggested and per-pupil spending has more than doubled. Since the 1990s the education reform attempts have frequently included judicial decisions to revise…

  15. Improving Achievement in Gaelic. Improving Series

    ERIC Educational Resources Information Center

    Her Majesty's Inspectorate of Education, 2005

    2005-01-01

    The aim of this report is to provide an overview of the development of provision for Gaelic education in Scottish schools, 11 years on from the Inspectorate's last major review of the area. The report sets out to provide an update showing what progress has been achieved since that time, with a specific focus on pre-school, primary and secondary…

  16. Total solar irradiance record accuracy and recent improvements

    NASA Astrophysics Data System (ADS)

    Kopp, Greg

    The total solar irradiance (TSI) data record includes uninterrupted measurements from over 10 spaceborne instruments spanning the last 31 years. Continuity of on-orbit measurements allows adjustments for instrument offsets to create a TSI composite needed for estimating solar influences on Earth's climate. Because climate sensitivities to solar forcings are determined not only from direct TSI measurements over recent 11-year solar cycles but also from reconstructions of historical solar variability based on the recent measurements, the accuracy of the TSI record is critical. This climate data record currently relies on both instrument stability and measurement continuity, although improvements in absolute accuracy via better instrument calibrations and new test facilities promise to reduce this current reliance on continuity. The Total Irradiance Monitor (TIM) is striving for improved levels of absolute accuracy, and a new TSI calibration facility is now able to validate the accuracy of modern instruments and diagnose causes of offsets between different TSI instruments. The instrument offsets are due to calibration errors. As of early 2010, none of the on-orbit instruments have been calibrated end-to-end to the needed accuracy levels. The new TSI Radiometer Facility (TRF) built for NASA's Glory mission provides these new calibration capabilities. Via direct optical power comparisons to a NIST-calibrated cryogenic radiometer, this ground-based facility provides calibrations of a TSI instrument much as the instrument is operated in space: under vacuum, at full solar irradiance power levels, and with uniform incoming light for irradiance measurements. Both the PICARD/PREMOS and the upcoming Glory/TIM instruments have been tested in this new facility, helping improve the absolute accuracy of the TSI data record and diagnose the causes of existing instrument offsets. In addition to being benchmarked to this new ground-based reference, the Glory/TIM and the future TSIS

  17. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. For understanding the carrier noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
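
    The trilateration step mentioned above can be sketched as a small linear least-squares problem (the beacon layout, noise level, and two-dimensional setting here are assumed for illustration and are not taken from the paper):

      import numpy as np

      beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # transmitter positions (m)
      true_pos = np.array([4.0, 6.0])
      ranges = np.linalg.norm(beacons - true_pos, axis=1) \
               + np.random.default_rng(0).normal(0, 0.2, 3)        # noisy range estimates

      # Subtracting the first range equation from the others linearizes the problem: A x = b.
      A = 2 * (beacons[1:] - beacons[0])
      b = (ranges[0] ** 2 - ranges[1:] ** 2
           + np.sum(beacons[1:] ** 2, axis=1) - np.sum(beacons[0] ** 2))
      est, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("estimate:", est.round(2), "error (m):", np.linalg.norm(est - true_pos).round(2))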

  18. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information.

    PubMed

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. For understanding the carrier noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Then, trilateration and extended Kalman filtering methods using the radio propagation model are developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, in order to examine the reason for the positioning accuracy of pedestrian localization being much lower than that of robot localization, the influence of the human body on the radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492

  19. Objective sampling with EAGLE to improve acoustic prediction accuracy

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.

    2003-10-01

    Some Navy operations require extensive acoustic calculations. The standard computational approach is to calculate on a regular grid of points and radials. In complex environmental areas, this implies a dense grid and many radials (i.e., long run times) to achieve acceptable accuracy and detail. However, Navy tactical decision aid calculations must be timely and exhibit adequate accuracy or the results may be too old or too imprecise to be valuable. This dilemma led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces irregular acoustic grids [Rike and DelBalzo, Proc. IEEE Oceans (2002)]. Its premise is that physical environmental complexity controls the need for dense sampling in space and azimuth, and that transmission loss already computed for nearby coordinates on previous iterations can be used to predict that complexity. Recent work in this area to further increase accuracy and efficiency by using better metrics and interpolation routines has led to the Efficient Acoustic Gridder for Littoral Environments (EAGLE). On each iteration, EAGLE produces an acoustic field for the entire area of interest with ever-increasing resolution and accuracy. An example is presented where approximately an order of magnitude efficiency improvement (over regular grids) is demonstrated. [Work sponsored by ONR.]

  20. Bayesian reclassification statistics for assessing improvements in diagnostic accuracy.

    PubMed

    Huang, Zhipeng; Li, Jialiang; Cheng, Ching-Yu; Cheung, Carol; Wong, Tien-Yin

    2016-07-10

    We propose a Bayesian approach to the estimation of the net reclassification improvement (NRI) and three versions of the integrated discrimination improvement (IDI) under the logistic regression model. Both NRI and IDI were proposed as numerical characterizations of accuracy improvement for diagnostic tests and were shown to retain certain practical advantage over analysis based on ROC curves and offer complementary information to the changes in area under the curve. Our development is a new contribution towards Bayesian solution for the estimation of NRI and IDI, which eases computational burden and increases flexibility. Our simulation results indicate that Bayesian estimation enjoys satisfactory performance comparable with frequentist estimation and achieves point estimation and credible interval construction simultaneously. We adopt the methodology to analyze a real data from the Singapore Malay Eye Study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26875442
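
    For readers unfamiliar with the NRI, the sketch below computes the frequentist categorical version of the statistic on simulated risks (the risk categories, cutoffs, and data are illustrative; the paper's Bayesian approach would instead place a posterior over the model parameters and hence over this quantity):

      import numpy as np

      def nri(risk_old, risk_new, event, cutoffs=(0.1, 0.2)):
          """Categorical NRI for a binary outcome given old/new predicted risks."""
          cat = lambda r: np.digitize(np.asarray(r), cutoffs)
          move = np.sign(cat(risk_new) - cat(risk_old))        # +1 up, -1 down, 0 unchanged
          event = np.asarray(event).astype(bool)
          up_e, down_e = np.mean(move[event] > 0), np.mean(move[event] < 0)
          up_n, down_n = np.mean(move[~event] > 0), np.mean(move[~event] < 0)
          return (up_e - down_e) + (down_n - up_n)

      rng = np.random.default_rng(0)
      event = rng.random(200) < 0.2
      risk_old = np.clip(0.2 * event + rng.normal(0.15, 0.08, 200), 0, 1)
      risk_new = np.clip(0.3 * event + rng.normal(0.12, 0.08, 200), 0, 1)
      print(round(nri(risk_old, risk_new, event), 3))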

  1. Is accuracy of serum free light chain measurement achievable?

    PubMed

    Jacobs, Joannes F M; Tate, Jillian R; Merlini, Giampaolo

    2016-06-01

    The serum free light chain (FLC) assay has proven to be an important complementary test in the management of patients with monoclonal gammopathies. The serum FLC assay has value for patients with plasma cell disorders in the context of screening and diagnosis, prognostic stratification, and quantitative monitoring. Nonetheless, serum FLC measurements have analytical limitations which give rise to differences in FLC reporting depending on which FLC assay and analytical platform is used. As the FLC measurements are incorporated in the International Myeloma Working Group guidelines for the evaluation and management of plasma cell dyscrasias, this may directly affect clinical decisions. As new certified methods for serum FLC assays emerge, the need to harmonise patient FLC results becomes increasingly important. In this opinion paper we provide an overview of the current lack of accuracy and harmonisation in serum FLC measurements. The clinical consequence of non-harmonized FLC measurements is that an individual patient may or may not meet certain diagnostic, prognostic, or response criteria, depending on which FLC assay and platform is used. We further discuss whether standardisation of serum FLC measurements is feasible and provide an overview of the steps needed to be taken towards harmonisation of FLC measurements. PMID:26641970

  2. Meteor orbit determination with improved accuracy

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria

    2015-08-01

    Modern observational techniques make it possible to retrieve a meteor trajectory and its velocity with high accuracy. There has been a rapid rise in high-quality observational data accumulating yearly. This fact creates new challenges for solving the problem of meteor orbit determination. Currently, the traditional technique, based on applying corrections to the zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach relies on meteoroid trajectory correction using numerical integration of the equation of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we suggest a technique of meteor orbit determination based on strict coordinate transformation and integration of the differential equation of motion. We demonstrate the advantage of this method in comparison with the traditional technique. We provide results of calculations by different methods for real, recent fireballs, as well as for simulated cases with a priori known retrieval parameters. Simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low-velocity meteoroids, application of the traditional technique may lead to a dramatic loss of orbit precision (first of all, due to errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbations, allowing us to indicate quantitatively which factors have to be considered in orbit determination. In addition, the developed method includes an analysis of observational error propagation based on strict covariance transition, which is also presented. Acknowledgements: This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197. References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217
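
    The core idea of replacing the analytical zenith-attraction correction with numerical integration can be sketched with a highly simplified model (two-body Earth gravity only, illustrative state vector and step size; the actual method also handles perturbations and covariance propagation):

      import numpy as np

      MU_EARTH = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2

      def accel(r):
          return -MU_EARTH * r / np.linalg.norm(r) ** 3

      def rk4_step(r, v, dt):
          # classical Runge-Kutta step for the coupled (position, velocity) system
          k1r, k1v = v, accel(r)
          k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
          k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
          k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
          r = r + dt / 6 * (k1r + 2 * k2r + 2 * k3r + k4r)
          v = v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
          return r, v

      # Illustrative geocentric state near 100 km altitude; integrate backwards in time
      # to recover the pre-encounter trajectory instead of applying analytical corrections.
      r = np.array([6_471_000.0, 0.0, 0.0])
      v = np.array([-15_000.0, 20_000.0, 5_000.0])
      for _ in range(600):               # 10 minutes backwards with dt = -1 s
          r, v = rk4_step(r, v, -1.0)
      print(np.linalg.norm(r) / 1e6, "thousand km from Earth's centre")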

  3. Generalized and Heuristic-Free Feature Construction for Improved Accuracy

    PubMed Central

    Fan, Wei; Zhong, Erheng; Peng, Jing; Verscheure, Olivier; Zhang, Kun; Ren, Jiangtao; Yan, Rong; Yang, Qiang

    2010-01-01

    State-of-the-art learning algorithms accept data in feature vector format as input. Examples belonging to different classes may not always be easy to separate in the original feature space. One may ask: can transformation of existing features into a new space reveal significant discriminative information not obvious in the original space? Since there can be an infinite number of ways to extend features, it is impractical to first enumerate and then perform feature selection. Second, evaluation of discriminative power on the complete dataset is not always optimal. This is because features highly discriminative on a subset of examples may not necessarily be significant when evaluated on the entire dataset. Third, feature construction ought to be automated and general, such that it does not require domain knowledge and its improved accuracy is maintained over a large number of classification algorithms. In this paper, we propose a framework to address these problems through the following steps: (1) divide-and-conquer to avoid exhaustive enumeration; (2) local feature construction and evaluation within subspaces of examples where local error is still high and the features constructed thus far still do not predict well; (3) a weighting-rules-based search that is domain-knowledge free and has a provable performance guarantee. Empirical studies indicate that significant improvement (as much as 9% in accuracy and 28% in AUC) is achieved using the newly constructed features over a variety of inductive learners evaluated against a number of balanced, skewed and high-dimensional datasets. Software and datasets are available from the authors. PMID:21544257

  4. Improving DNA sequencing accuracy and throughput

    SciTech Connect

    Nelson, D. O.

    1996-12-31

    LLNL is beginning to explore statistical approaches to the problem of determining the DNA sequence underlying data obtained from fluorescence-based gel electrophoresis. The features of this problem that make it interesting to statisticians include: (1) the underlying mechanics of electrophoresis is quite complex and still not completely understood; (2) the yield of fragments of any given size can be quite small and variable; (3) the mobility of fragments of a given size can depend on the terminating base; (4) the data consist of samples from one or more continuous, non-stationary signals; (5) boundaries between segments generated by distinct elements of the underlying sequence are ill-defined or nonexistent in the signal; and (6) the sampling rate of the signal greatly exceeds the rate of evolution of the underlying discrete sequence. Current approaches to base calling address only some of these issues, and usually in a heuristic, ad hoc way. In this article we describe some of our initial efforts towards increasing base calling accuracy and throughput by providing a rational, statistical foundation to the process of deducing sequence from signal. 31 refs., 12 figs.

  5. Improving Student Achievement through Character Education.

    ERIC Educational Resources Information Center

    Finck, Chip; Hansen, Cynthia; Jensen, Jane

    This report describes a program for improving moral character to increase academic achievement. The targeted population consisted of middle school students in a growing middle-class community in a northern suburb of Chicago, Illinois. The problem, an absence of proper moral character, was documented through data collected from discipline referrals to…

  6. Handbook of Research on Improving Student Achievement.

    ERIC Educational Resources Information Center

    Cawelti, Gordon, Ed.

    This handbook is designed to identify classroom practices that research has shown to result in higher student achievement. The fundamental idea behind this book is that in order to succeed, efforts to improve instruction must focus on the existing knowledge base about effective teaching and learning. The chapters are: (1) "Introduction" (Gordon…

  7. Improving Student Achievement Using Expert Learning Systems

    ERIC Educational Resources Information Center

    Green, Ronny; Smith, Bob; Leech, Don

    2004-01-01

    Both educators and the public are demanding improvements in student achievement and school performance. However, students meeting the highest college admission standards are increasingly selecting fields of study other than teaching. How can we increase teacher competence when many of our brightest teacher prospects are going into other fields?…

  8. Improving Student Achievement through Alternative Assessments.

    ERIC Educational Resources Information Center

    Durning, Jermaine; Matyasec, Maryann

    An attempt was made to improve students' academic grades and students' opinions of themselves as learners through the use of alternative assessments. The format of mastery learning using the direct instruction practice model was combined with performance-based assessment to increase achievement, self-esteem, and higher level thinking skills.…

  9. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself, and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements, and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle’s future position and its past positions, and then propose a p-order Gauss–Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss–Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle’s future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
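
    A compact sketch of the Yule-Walker/autoregressive machinery described above (fit to a synthetic AR(2) trace, not the vehicles' mobility data) estimates the coefficients of a p-order model and issues a one-step position prediction:

      import numpy as np

      def yule_walker(x, p):
          """Estimate AR(p) coefficients from the sample autocovariances."""
          x = np.asarray(x) - np.mean(x)
          acov = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
          R = np.array([[acov[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
          return np.linalg.solve(R, acov[1:])

      rng = np.random.default_rng(0)
      true_a = [0.8, 0.15]
      x = [0.0, 0.0]
      for _ in range(500):                           # simulate an AR(2) position-error series
          x.append(true_a[0] * x[-1] + true_a[1] * x[-2] + rng.normal(0, 0.1))

      a = yule_walker(x, p=2)
      prediction = a[0] * x[-1] + a[1] * x[-2]       # one-step-ahead prediction
      print("estimated coefficients:", a.round(2), "next-step prediction:", round(prediction, 3))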

  10. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…

  11. "Battleship Numberline": A Digital Game for Improving Estimation Accuracy on Fraction Number Lines

    ERIC Educational Resources Information Center

    Lomas, Derek; Ching, Dixie; Stampfer, Eliane; Sandoval, Melanie; Koedinger, Ken

    2011-01-01

    Given the strong relationship between number line estimation accuracy and math achievement, might a computer-based number line game help improve math achievement? In one study by Rittle-Johnson, Siegler and Alibali (2001), a simple digital game called "Catch the Monster" provided practice in estimating the location of decimals on a number line.…

  12. Improving Homework Accuracy: Interdependent Group Contingencies and Randomized Components

    ERIC Educational Resources Information Center

    Reinhardt, Danielle; Theodore, Lea A.; Bray, Melissa A.; Kehle, Thomas J.

    2009-01-01

    Homework is an often employed teaching strategy that has strong positive effects on academic achievement across grade levels, content areas, and student ability levels. To maximize academic learning, accuracy of homework should be addressed. The present investigation employed a multiple-baseline design across academic behaviors to examine the…

  13. Pancreatic cancer-improved care achievable

    PubMed Central

    Buanes, Trond A

    2014-01-01

    Pancreatic adenocarcinoma is one of the most aggressive cancers, and the decline in mortality observed in most other cancer diseases, has so far not taken place in pancreatic cancer. Complete tumor resection is a requirement for potential cure, and the reorganization of care in the direction of high patient-volume centers, offering multimodal treatment, has improved survival and Quality of Life. Also the rates and severity grade of complications are improving in high-volume pancreatic centers. One of the major problems worldwide is underutilization of surgery in resectable pancreatic cancer. Suboptimal investigation, follow up and oncological treatment outside specialized centers are additional key problems. New chemotherapeutic regimens like FOLFIRINOX have improved survival in patients with metastatic disease, and different adjuvant treatment options result in well documented survival benefit. Neoadjuvant treatment is highly relevant, but needs further evaluation. Also adjuvant immunotherapy, in the form of vaccination with synthetic K-Ras-peptides, has been shown to produce long term immunological memory in cytotoxic T-cells in long term survivors. Improvement in clinical outcome is already achievable and further progress is expected in the near future for patients treated with curative as well as palliative intention. PMID:25132756

  14. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    NASA Astrophysics Data System (ADS)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

    Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems transmuting these nuclei. In order to meet this requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as one of the "Innovative Nuclear Research and Development Program" projects in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  15. Improving Student Achievement in Math and Science

    NASA Technical Reports Server (NTRS)

    Sullivan, Nancy G.; Hamsa, Irene Schulz; Heath, Panagiota; Perry, Robert; White, Stacy J.

    1998-01-01

    As the new millennium approaches, a long-anticipated reckoning for the education system of the United States is forthcoming. Years of school reform initiatives have not yielded the anticipated results. A particularly perplexing problem involves the lack of significant improvement of student achievement in math and science. Three "Partnership" projects represent collaborative efforts between Xavier University (XU) of Louisiana, Southern University of New Orleans (SUNO), Mississippi Valley State University (MVSU), and the National Aeronautics and Space Administration (NASA), Stennis Space Center (SSC), to enhance student achievement in math and science. These "Partnerships" are focused on students and teachers in federally designated rural and urban empowerment zones and enterprise communities. The major goals of the "Partnerships" include: (1) The identification and dissemination of key indices of success that account for high performance in math and science; (2) The education of pre-service and in-service secondary teachers in knowledge, skills, and competencies that enhance the instruction of high school math and science; (3) The development of faculty to enhance the quality of math and science courses in institutions of higher education; and (4) The incorporation of technology-based instruction in institutions of higher education. These goals will be achieved by the accomplishment of the following objectives: (1) Delineate significant "best practices" that are responsible for enhancing student outcomes in math and science; (2) Recruit and retain pre-service teachers with undergraduate degrees in Biology, Math, Chemistry, or Physics in a graduate program, culminating with a Master of Arts in Curriculum and Instruction; (3) Provide faculty workshops and opportunities for travel to professional meetings for dissemination of NASA resources information; (4) Implement methodologies and assessment procedures utilizing performance-based applications of higher order

  16. Improving the accuracy of phase-shifting techniques

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Redondo-Galvan, Arturo

    2015-05-01

    The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and computation of the absolute phase map. Recently, a method was proposed that applied phase interpolation to corner detection at subpixel accuracy in the projector image to improve the camera-projector calibration. We propose a general strategy to improve the accuracy of the search for correspondence that can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can outperform the precision of the phase-shifting method.
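
    For context, the core phase-shifting computation that such methods build on can be written in a few lines; the sketch below (synthetic fringes, four patterns shifted by pi/2, not the authors' pipeline) recovers the wrapped phase and unwraps it along one image row:

      import numpy as np

      x = np.linspace(0, 4 * np.pi, 640)                 # one image row, 4 fringe periods
      true_phase = x
      shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
      I = [0.5 + 0.4 * np.cos(true_phase + s) for s in shifts]   # projected fringe patterns

      wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])     # wrapped phase in (-pi, pi]
      unwrapped = np.unwrap(wrapped)                     # 1-D unwrapping toward the absolute map
      print(np.allclose(unwrapped - unwrapped[0], true_phase - true_phase[0], atol=1e-6))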

  17. The Influence of Overt Practice, Achievement Level, and Explanatory Style on Calibration Accuracy and Performance

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; O'Shea, Patrick; Allen, Dwight

    2005-01-01

    The authors measured the influence of overt calibration practice, achievement level, and explanatory style on calibration accuracy and exam performance. Students (N = 356) were randomly assigned to either an overt practice or no-practice condition. Students in the overt practice condition made predictions and postdictions about their performance…

  18. Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Fukaya, Tatsushi

    2013-01-01

    The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…

  19. Operators, service companies improve horizontal drilling accuracy offshore

    SciTech Connect

    Lyle, D.

    1996-04-01

    Continuing efforts to get more and better measurement and logging equipment closer to the bit improve accuracy in offshore drilling. Using current technology, both in measurement while drilling and logging while drilling, a target can consistently be hit within five vertical feet.

  20. An improved method for determining force balance calibration accuracy

    NASA Astrophysics Data System (ADS)

    Ferris, Alice T.

    The results of an improved statistical method used at Langley Research Center for determining and stating the accuracy of a force balance calibration are presented. The application of the method for initial loads, initial load determination, auxiliary loads, primary loads, and proof loads is described. The data analysis is briefly addressed.

  1. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
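
    For reference, the unweighted Cohen's kappa quoted above is straightforward to compute; the sketch below uses made-up ratings from two raters over four ordered exposure categories (not the study's data):

      import numpy as np

      def cohens_kappa(r1, r2, n_cat=4):
          """Unweighted Cohen's kappa for two raters over n_cat categories."""
          confusion = np.zeros((n_cat, n_cat))
          for a, b in zip(r1, r2):
              confusion[a, b] += 1
          confusion /= confusion.sum()
          p_observed = np.trace(confusion)                              # observed agreement
          p_expected = confusion.sum(axis=1) @ confusion.sum(axis=0)    # chance agreement
          return (p_observed - p_expected) / (1 - p_expected)

      rater_a = [0, 1, 1, 2, 2, 2, 3, 1, 0, 2, 3, 3]
      rater_b = [0, 1, 2, 2, 2, 1, 3, 1, 0, 2, 3, 2]
      print(round(cohens_kappa(rater_a, rater_b), 2))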

  2. Improved accuracy of the reference network of Bosnia and Herzegovina

    NASA Astrophysics Data System (ADS)

    Mulic, M.; Bilajbegovic, A.

    2012-04-01

    Availability of the reprocessed IGS05 precise orbits opened the door to re-processing two GPS campaigns in Bosnia and Herzegovina, organized in the years 2000 and 2005. The GPS observation data were processed using the Bernese software, version 5.0. Results were in the IGS05 reference frame. Corrections for the delays of GPS signals passing through the troposphere were estimated every 2 hours, and their projection onto the observed height of the stations was calculated using wet Niell mapping functions, while horizontal gradients were estimated every 4 hours. Results of the reprocessing show improved accuracy. It can generally be said that the accuracy of all three components of the positions was within 10 mm, and the accuracy of the processed velocities for the identical stations was about 1 mm/year. A common campaign at the middle epoch was used to evaluate velocities of identical stations. For stations that were not observed in both campaigns, velocities were interpolated using a polynomial of third degree. Thus, re-processing of the campaigns resulted in improved accuracy of the realization of a geodetic reference network for Bosnia and Herzegovina.

  3. Do Charter Schools Improve Student Achievement?

    ERIC Educational Resources Information Center

    Clark, Melissa A.; Gleason, Philip M.; Tuttle, Christina Clark; Silverberg, Marsha K.

    2015-01-01

    This article presents findings from a lottery-based study of the impacts of a broad set of 33 charter middle schools across 13 states on student achievement. To estimate charter school impacts, we compare test score outcomes of students admitted to these schools through the randomized admissions lotteries with outcomes of applicants who were not…

  4. Proven Strategies for Improving Learning & Achievement.

    ERIC Educational Resources Information Center

    Brown, Duane

    The purpose of this book is to give student support personnel tools that: (1) will be recognized by educators as directly related to enhancing academic performance; (2) can be used with confidence that they will have the desired impact on achievement; and (3) are culturally sensitive. Chapters contain detailed presentation of the technology as…

  5. Improved Snow Mapping Accuracy with Revised MODIS Snow Algorithm

    NASA Technical Reports Server (NTRS)

    Riggs, George; Hall, Dorothy K.

    2012-01-01

    The MODIS snow cover products have been used in over 225 published studies. From those reports, and our ongoing analysis, we have learned about the accuracy and errors in the snow products. Revisions have been made in the algorithms to improve the accuracy of snow cover detection in Collection 6 (C6), the next processing/reprocessing of the MODIS data archive planned to start in September 2012. Our objective in the C6 revision of the MODIS snow-cover algorithms and products is to maximize the capability to detect snow cover while minimizing snow detection errors of commission and omission. While the basic snow detection algorithm will not change, new screens will be applied to alleviate snow detection commission and omission errors, and only the fractional snow cover (FSC) will be output (the binary snow cover area (SCA) map will no longer be included).

  6. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.
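
    Item (1) above amounts to adding a log-prior term to the Gaussian maximum likelihood discriminant; the sketch below (toy two-band class statistics, assumed values) shows how collateral, GIS-derived priors can flip the decision for an ambiguous pixel:

      import numpy as np

      def ml_classify(pixel, means, covs, priors):
          """Return the class maximizing the Gaussian log-likelihood plus log prior."""
          scores = []
          for mu, cov, prior in zip(means, covs, priors):
              diff = pixel - mu
              scores.append(np.log(prior)
                            - 0.5 * np.log(np.linalg.det(cov))
                            - 0.5 * diff @ np.linalg.inv(cov) @ diff)
          return int(np.argmax(scores))

      means = [np.array([40.0, 90.0]), np.array([55.0, 70.0])]        # class means in two bands
      covs = [np.eye(2) * 60.0, np.eye(2) * 60.0]
      pixel = np.array([46.0, 82.0])                                  # ambiguous pixel
      print(ml_classify(pixel, means, covs, priors=[0.5, 0.5]))       # equal priors -> class 0
      print(ml_classify(pixel, means, covs, priors=[0.2, 0.8]))       # GIS-informed priors -> class 1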

  7. Post-processing for improving hyperspectral anomaly detection accuracy

    NASA Astrophysics Data System (ADS)

    Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang

    2015-10-01

    Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. Firstly, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrated that the proposed method reduced the false alarm rates of the RX-based detector.
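
    A minimal sketch of the two stages described, a global RX detector followed by a morphological post-processing step; the threshold, structuring element, and synthetic scene below are illustrative assumptions, not the paper's settings:

        # Sketch of a global RX anomaly detector followed by a morphological
        # post-processing step that flags pixels adjacent to strong anomalies.
        import numpy as np
        from scipy.ndimage import binary_dilation

        def rx_scores(cube):
            """Mahalanobis distance of each pixel spectrum to the scene background."""
            h, w, b = cube.shape
            X = cube.reshape(-1, b)
            mu = X.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(b))
            d = X - mu
            return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

        def detect(cube, score_threshold, dilation_size=3):
            """Threshold RX scores, then dilate so neighbors of high-scoring
            anomaly pixels are also flagged (post-processing step)."""
            core = rx_scores(cube) > score_threshold
            return binary_dilation(core, structure=np.ones((dilation_size, dilation_size)))

        cube = np.random.default_rng(1).normal(size=(50, 50, 20))
        cube[25, 25, :] += 5.0                     # implanted anomaly
        print(detect(cube, score_threshold=60.0).sum())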

  8. Improved single-strand DNA sizing accuracy in capillary electrophoresis.

    PubMed Central

    Rosenblum, B B; Oaks, F; Menchen, S; Johnson, B

    1997-01-01

    Interpolation algorithms can be developed to size unknown single-stranded (ss) DNA fragments based on their electrophoretic mobilities, when they are compared with the mobilities of standard fragments of known sizes; however, sequence-specific anomalous electrophoretic migration can affect the accuracy and precision of the called sizes of the fragments. We used the anomalous migration of ssDNA fragments to optimize denaturation conditions for capillary electrophoresis. The capillary electrophoretic system uses a refillable polymer that both coats the capillary wall to suppress electro-osmotic flow and acts as the sieving matrix. The addition of 8 M urea to the polymer solution, as in slab gel electrophoresis, is insufficient to fully denature some anomalously migrating ssDNA fragments in this capillary electrophoresis system. The sizing accuracy of these fragments is significantly improved by the addition of 2-pyrrolidinone or increased capillary temperature (60 degrees C). The effect of these two denaturing strategies is additive, and the best accuracy and precision in sizing results are obtained with a combination of chemical and thermal denaturation. PMID:9380518

  9. Reshaping Personnel Policies to Improve Student Achievement

    ERIC Educational Resources Information Center

    Koppich, Julia E.; Gerstein, Amy

    2007-01-01

    The "Getting Down to Facts" (GDTF) studies released in March 2007 offered a clear diagnosis of the issues facing California's education system. Now, as California moves beyond the facts and begins the search for ways to improve the performance of California schools and students, the state faces a critical policy dilemma. On the one hand, the…

  10. Strategic School Funding for Improved Student Achievement

    ERIC Educational Resources Information Center

    Chambers, Jay G.; Brown, James R.; Levin, Jesse; Jubb, Steve; Harper, Dorothy; Tolleson, Ray; Manship, Karen

    2010-01-01

    This article features Strategic School Funding for Results (SSFR) project, a new joint initiative of the American Institutes for Research (AIR) and Pivot Learning Partners (PLP) aimed at improving school finance, human resources, and management systems in large urban school districts. The goal of the project is to develop and implement more…

  11. a Method to Achieve Large Volume, High Accuracy Photogrammetric Measurements Through the Use of AN Actively Deformable Sensor Mounting Platform

    NASA Astrophysics Data System (ADS)

    Sargeant, B.; Robson, S.; Szigeti, E.; Richardson, P.; El-Nounu, A.; Rafla, M.

    2016-06-01

    When using any optical measurement system, one important factor to consider is the placement of the sensors in relation to the workpiece being measured. When making decisions on sensor placement, compromises are necessary in selecting the best placement based on the shape and size of the object of interest and the desired resolution and accuracy. One such compromise is the distance at which the sensors are placed from the measurement surface: a smaller distance gives higher spatial resolution and local accuracy, while a greater distance reduces the number of measurements necessary to cover a large area, reducing the build-up of errors between measurements and increasing global accuracy. This paper proposes a photogrammetric approach whereby a number of sensors on a continuously flexible mobile platform are used to obtain local measurements while the position of the sensors is determined by a 6DoF tracking solution, and the results are combined to give a single set of measurement data within a continuous global coordinate system. The ability of this approach to achieve both high accuracy measurement and give results over a large volume is then tested, and areas of weakness to be improved upon are identified.
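
    The core fusion step, mapping each sensor's local measurements into a single global coordinate system using the tracked 6DoF pose, can be sketched as a rigid-body transform; the poses and points below are hypothetical:

        # Sketch of fusing local photogrammetric measurements into a global frame
        # using the platform's tracked 6DoF pose (names and values are illustrative).
        import numpy as np
        from scipy.spatial.transform import Rotation

        def to_global(points_local, pose_xyz, pose_rpy_deg):
            """Transform sensor-frame points into the tracker's global frame.

            points_local : (N, 3) points measured by the local sensor
            pose_xyz     : sensor origin in the global frame (from the 6DoF tracker)
            pose_rpy_deg : sensor orientation as roll/pitch/yaw in degrees
            """
            R = Rotation.from_euler('xyz', pose_rpy_deg, degrees=True).as_matrix()
            return points_local @ R.T + np.asarray(pose_xyz)

        # Two sensor stations observing the same surface patch.
        patch_a = np.array([[0.10, 0.02, 0.50], [0.12, 0.03, 0.49]])
        patch_b = np.array([[-0.05, 0.02, 0.48]])
        global_a = to_global(patch_a, pose_xyz=[1.0, 2.0, 0.5], pose_rpy_deg=[0, 0, 90])
        global_b = to_global(patch_b, pose_xyz=[1.3, 2.1, 0.5], pose_rpy_deg=[0, 0, 45])
        combined = np.vstack([global_a, global_b])   # single global coordinate set
        print(combined)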

  12. How Patients Can Improve the Accuracy of their Medical Records

    PubMed Central

    Dullabh, Prashila M.; Sondheimer, Norman K.; Katsh, Ethan; Evans, Michael A.

    2014-01-01

    Objectives: Assess (1) if patients can improve their medical records’ accuracy if effectively engaged using a networked Personal Health Record; (2) workflow efficiency and reliability for receiving and processing patient feedback; and (3) patient feedback’s impact on medical record accuracy. Background: Improving medical records’ accuracy and associated challenges have been documented extensively. Providing patients with useful access to their records through information technology gives them new opportunities to improve their records’ accuracy and completeness. A new approach supporting online contributions to their medication lists by patients of Geisinger Health Systems, an online patient-engagement advocate, revealed this can be done successfully. In late 2011, Geisinger launched an online process for patients to provide electronic feedback on their medication lists’ accuracy before a doctor visit. Patient feedback was routed to a Geisinger pharmacist, who reviewed it and followed up with the patient before changing the medication list shared by the patient and the clinicians. Methods: The evaluation employed mixed methods and consisted of patient focus groups (users, nonusers, and partial users of the feedback form), semi-structured interviews with providers and pharmacists, user observations with patients, and quantitative analysis of patient feedback data and pharmacists’ medication reconciliation logs. Findings/Discussion: (1) Patients were eager to provide feedback on their medications and saw numerous advantages. Thirty percent of patient feedback forms (457 of 1,500) were completed and submitted to Geisinger. Patients requested changes to the shared medication lists in 89 percent of cases (369 of 414 forms). These included frequency or dosage changes to existing prescriptions and requests for new medications (prescriptions and over-the-counter). (2) Patients provided useful and accurate online feedback. In a subsample of 107 forms

  13. MemBrain: Improving the Accuracy of Predicting Transmembrane Helices

    PubMed Central

    Shen, Hongbin; Chou, James J.

    2008-01-01

    Prediction of transmembrane helices (TMH) in α helical membrane proteins provides valuable information about the protein topology when the high resolution structures are not available. Many predictors have been developed based on either amino acid hydrophobicity scale or pure statistical approaches. While these predictors perform reasonably well in identifying the number of TMHs in a protein, they are generally inaccurate in predicting the ends of TMHs, or TMHs of unusual length. To improve the accuracy of TMH detection, we developed a machine-learning based predictor, MemBrain, which integrates a number of modern bioinformatics approaches including sequence representation by multiple sequence alignment matrix, the optimized evidence-theoretic K-nearest neighbor prediction algorithm, fusion of multiple prediction window sizes, and classification by dynamic threshold. MemBrain demonstrates an overall improvement of about 20% in prediction accuracy, particularly, in predicting the ends of TMHs and TMHs that are shorter than 15 residues. It also has the capability to detect N-terminal signal peptides. The MemBrain predictor is a useful sequence-based analysis tool for functional and structural characterization of helical membrane proteins; it is freely available at http://chou.med.harvard.edu/bioinf/MemBrain/. PMID:18545655

  14. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  15. Improving spatial updating accuracy in absence of external feedback.

    PubMed

    Mackrous, I; Simoneau, M

    2015-08-01

    Updating the position of an earth-fixed target during whole-body rotation seems to rely on cognitive processes such as the utilization of external feedback. According to perceptual learning models, improvement in performance can also occur without external feedback. The aim of this study was to assess spatial updating improvement in the absence and in the presence of external feedback. While being rotated counterclockwise (CCW), participants had to predict when their body midline had crossed the position of a memorized target. Four experimental conditions were tested: (1) Pre-test: the target was presented 30° in the CCW direction from participant's midline. (2) Practice: the target was located 45° in the CCW direction from participant's midline. One group received external feedback about their spatial accuracy (Mackrous and Simoneau, 2014) while the other group did not. (3) Transfer T(30)CCW: the target was presented 30° in the CCW direction to evaluate whether improvement in performance, during practice, generalized to other target eccentricity. (4) Transfer T(30)CW: the target was presented 30° in the clockwise (CW) direction and participants were rotated CW. This transfer condition evaluated whether improvement in performance generalized to the untrained rotation direction. With practice, performance improved in the absence of external feedback (p=0.004). Nonetheless, larger improvement occurred when external feedback was provided (ps=0.002). During T(30)CCW, performance remained better for the feedback than the no-feedback group (p=0.005). However, no group difference was observed for the untrained direction (p=0.22). We demonstrated that spatial updating improved without external feedback but less than when external feedback was given. These observations are explained by a mixture of calibration processes and supervised vestibular learning. PMID:25987200

  16. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  17. Improved accuracy of the remote sensing of sea surface temperature

    NASA Technical Reports Server (NTRS)

    Dalu, G.; Prabhakara, C.; Lo, R. C.

    1981-01-01

    A method is described for determining the water vapor content to within + or - 0.4 g/sq cm from remotely sensed radiances in three infrared channels, 11, 13, 18 microns. Using this method, it is possible to significantly improve the accuracy of sea surface temperature (SST) over what is obtainable with the two channel technique. A radiative computational scheme for the radiative transfer equation is used to study the manner in which the equivalent radiative temperature of the atmosphere changes as a function of wave number for different atmospheric conditions. Average climatological conditions are used to simulate the radiative response of the atmosphere. This radiative transfer simulation is used to compute brightness temperatures for radiosonde profiles obtained from oceanographic ships, which temperatures are in turn used to estimate the SST. Nimbus 4 IRIS spectral measurements corresponding to the profiles were used in the same way for purposes of comparison.

  18. Image accuracy improvements in microwave tomographic thermometry: phantom experience.

    PubMed

    Meaney, P M; Paulsen, K D; Fanning, M W; Li, D; Fang, Q

    2003-01-01

    Evaluation of a laboratory-scale microwave imaging system for non-invasive temperature monitoring has previously been reported with good results in terms of both spatial and temperature resolution. However, a new formulation of the reconstruction algorithm in terms of the log-magnitude and phase of the electric fields has dramatically improved the ability of the system to track the temperature-dependent electrical conductivity distribution. This algorithmic enhancement was originally implemented as a way of improving overall imaging capability in cases of large, high contrast permittivity scatterers, but has also proved to be sensitive to subtle conductivity changes as required in thermal imaging. Additional refinements in the regularization procedure have strengthened the reliability and robustness of image convergence. Imaging experiments were performed for a single heated target consisting of a 5.1 cm diameter PVC tube located within 15 and 25 cm diameter monopole antenna arrays, respectively. The performance of both log-magnitude/phase and complex-valued reconstructions when subjected to four different regularization schemes has been compared based on this experimental data. The results demonstrate a significant accuracy improvement (to 0.2 degrees C as compared with 1.6 degrees C for the previously published approach) in tracking thermal changes in phantoms where electrical properties vary linearly with temperature over a range relevant to hyperthermia cancer therapy. PMID:12944168

  19. Advantages of improved timing accuracy in PET cameras using LSOscintillator

    SciTech Connect

    Moses, William W.

    2002-12-02

    PET scanners based on LSO have the potential for significantly better coincidence timing resolution than the 6 ns fwhm typically achieved with BGO. This study analyzes the performance enhancements made possible by improved timing as a function of the coincidence time resolution. If 500 ps fwhm coincidence timing resolution can be achieved in a complete PET camera, the following four benefits can be realized for whole-body FDG imaging: 1) The random event rate can be reduced by using a narrower coincidence timing window, increasing the peak NECR by ~50 percent. 2) Using time-of-flight in the reconstruction algorithm will reduce the noise variance by a factor of 5. 3) Emission and transmission data can be acquired simultaneously, reducing the total scan time. 4) Axial blurring can be reduced by using time-of-flight to determine the correct axial plane that each event originated from. While time-of-flight was extensively studied in the 1980's, practical factors limited its effectiveness at that time and little attention has been paid to timing in PET since then. As these potential improvements are substantial and the advent of LSO PET cameras gives us the means to obtain them without other sacrifices, efforts to improve PET timing should resume after their long dormancy.
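
    The quoted factor-of-5 noise-variance reduction can be checked with the common rule of thumb that the time-of-flight gain is roughly the object diameter divided by the timing localization c*dt/2; the 40 cm diameter assumed below is illustrative:

        # Back-of-the-envelope check of the quoted time-of-flight benefit, using the
        # common rule of thumb variance_gain ~ D / (c * dt / 2) for an object of
        # diameter D (the 40 cm diameter assumed here is illustrative).
        C_CM_PER_S = 2.998e10          # speed of light

        def tof_variance_gain(object_diameter_cm, timing_fwhm_s):
            localization_cm = C_CM_PER_S * timing_fwhm_s / 2.0
            return object_diameter_cm / localization_cm

        print(tof_variance_gain(40.0, 500e-12))   # ~5.3, close to the quoted factor of 5
        print(tof_variance_gain(40.0, 6e-9))      # ~0.4: no TOF benefit at 6 ns (BGO)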

  20. Improvement of measuring accuracy of an optical CMM

    NASA Astrophysics Data System (ADS)

    Chao, Z. X.; Ong, S. S.; Tan, S. L.

    In this paper, a laser interferometer was used to evaluate the linear performance of an optical coordinate measuring machine with a measuring area of 400 mm x 400 mm at 20-mm intervals. The evaluation results show that the linear performance of the Y-axis varies considerably at different X positions, so an offline error compensation method was implemented. A 200 mm glass scale was used to verify the method. The results showed that the variations at different X positions improved from 0.35 μm to 0.15 μm within the compensated area. A better measurement uncertainty was also achieved.

  1. Glacier Mapping With Landsat Tm: Improvements and Accuracy

    NASA Astrophysics Data System (ADS)

    Paul, F.; Huggel, C.; Kaeaeb, A.; Maisch, M.

    The new Swiss Glacier Inventory for the year 2000 (SGI 2000) is presently derived from Landsat TM data. Glacier areas were obtained by segmentation of a ratio image from TM bands 4 and 5. This method has proven to be very simple and highly accurate - an essential requirement for world-wide application within the project GLIMS (Global Land Ice Measurements from Space). Misclassification with the TM4/TM5 ratio occurs for lakes, forests and areas with vegetation in cloud shadows. Digital image processing techniques are used to classify these regions separately and eliminate them from the glacier map. Automatic mapping of debris-covered glacier ice is difficult due to the spectral similarity with the surrounding terrain. For the SGI 2000, an attempt has been made to obtain the debris-covered area on glaciers by a combination of pixel-based image classification, digital terrain modelling, an object-oriented procedure and change detection analysis. First results of these improvements are presented. The accuracy of the TM-derived glacier outlines is assessed by a comparison with manually derived outlines from higher resolution data sets (pan bands from SPOT, IRS-1C and Ikonos). The overlay of outlines shows very good correspondence (within the georeferencing accuracy) and the comparison of glacier areas reveals differences smaller than 5% for debris-free ice. Since acquisition of the IRS-1C and Ikonos imagery is one year before and after the TM scene, respectively, small differences are also a result of glacier retreat. The automatically mapped debris-covered glacier areas are compared to the areas assigned manually on the TM image by visual interpretation. For most glaciers only a few pixels have to be corrected; for some others larger modifications are required.
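
    A minimal sketch of the band-ratio segmentation described above, with an extra visible-band test to screen the lake misclassification; the thresholds are illustrative, not the SGI 2000 settings:

        # Minimal sketch of the band-ratio glacier classification: threshold the
        # TM4/TM5 ratio, then mask out dark water with an additional TM1 test
        # (threshold values here are illustrative, not the SGI 2000 settings).
        import numpy as np

        def glacier_mask(tm4, tm5, tm1, ratio_threshold=2.0, tm1_threshold=0.1):
            """Glacier ice where TM4/TM5 is high; the TM1 test screens dark water."""
            ratio = tm4 / np.maximum(tm5, 1e-6)
            return (ratio > ratio_threshold) & (tm1 > tm1_threshold)

        tm4 = np.array([0.55, 0.30, 0.05])   # toy reflectances: ice, rock, lake
        tm5 = np.array([0.05, 0.25, 0.02])
        tm1 = np.array([0.60, 0.20, 0.04])
        print(glacier_mask(tm4, tm5, tm1))   # -> [ True False False]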

  2. An Effective Approach to Improving Low-Cost GPS Positioning Accuracy in Real-Time Navigation

    PubMed Central

    Islam, Md. Rashedul; Kim, Jong-Myon

    2014-01-01

    Positioning accuracy is a challenging issue for location-based applications using a low-cost global positioning system (GPS). This paper presents an effective approach to improving the positioning accuracy of a low-cost GPS receiver for real-time navigation. The proposed method precisely estimates position by combining vehicle movement direction, velocity averaging, and distance between waypoints using coordinate data (latitude, longitude, time, and velocity) of the GPS receiver. The previously estimated reference point, coordinate translation, and invalid data check also improve accuracy. In order to evaluate the performance of the proposed method, we conducted an experiment using a GARMIN GPS 19xHVS receiver attached to a car and used Google Maps to plot the processed data. The proposed method achieved an improvement of 4–10 meters in several experiments. In addition, we compared the proposed approach with two other state-of-the-art methods: recursive averaging and ARMA interpolation. The experimental results show that the proposed approach outperforms other state-of-the-art methods in terms of positioning accuracy. PMID:25136679
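
    As a rough illustration of the idea (not the authors' exact algorithm), each raw fix can be blended with a dead-reckoned prediction built from the averaged speed and movement direction of recent fixes:

        # Illustrative smoother in the spirit of the approach above (not the authors'
        # exact algorithm): blend each raw GPS fix with a dead-reckoned prediction
        # built from the averaged speed and heading of recent fixes.
        import math

        def smooth_track(fixes, window=3, blend=0.5):
            """fixes: list of (x, y, speed, heading_deg) in a local metric frame,
            one fix per second; heading averaged naively (wrap-around ignored)."""
            smoothed = [fixes[0][:2]]
            for i in range(1, len(fixes)):
                recent = fixes[max(0, i - window):i]
                v_avg = sum(f[2] for f in recent) / len(recent)
                h_avg = math.radians(sum(f[3] for f in recent) / len(recent))
                px = smoothed[-1][0] + v_avg * math.sin(h_avg)   # predicted 1 s step
                py = smoothed[-1][1] + v_avg * math.cos(h_avg)
                x, y = fixes[i][0], fixes[i][1]
                smoothed.append((blend * x + (1 - blend) * px,
                                 blend * y + (1 - blend) * py))
            return smoothed

        track = [(0, 0, 10, 90), (11, 1, 10, 90), (19, -2, 10, 90), (31, 0, 10, 90)]
        print(smooth_track(track))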

  3. An effective approach to improving low-cost GPS positioning accuracy in real-time navigation.

    PubMed

    Islam, Md Rashedul; Kim, Jong-Myon

    2014-01-01

    Positioning accuracy is a challenging issue for location-based applications using a low-cost global positioning system (GPS). This paper presents an effective approach to improving the positioning accuracy of a low-cost GPS receiver for real-time navigation. The proposed method precisely estimates position by combining vehicle movement direction, velocity averaging, and distance between waypoints using coordinate data (latitude, longitude, time, and velocity) of the GPS receiver. The previously estimated reference point, coordinate translation, and invalid data check also improve accuracy. In order to evaluate the performance of the proposed method, we conducted an experiment using a GARMIN GPS 19xHVS receiver attached to a car and used Google Maps to plot the processed data. The proposed method achieved an improvement of 4-10 meters in several experiments. In addition, we compared the proposed approach with two other state-of-the-art methods: recursive averaging and ARMA interpolation. The experimental results show that the proposed approach outperforms other state-of-the-art methods in terms of positioning accuracy. PMID:25136679

  4. Accuracy of Self-Reported College GPA: Gender-Moderated Differences by Achievement Level and Academic Self-Efficacy

    ERIC Educational Resources Information Center

    Caskie, Grace I. L.; Sutton, MaryAnn C.; Eckhardt, Amanda G.

    2014-01-01

    Assessments of college academic achievement tend to rely on self-reported GPA values, yet evidence is limited regarding the accuracy of those values. With a sample of 194 undergraduate college students, the present study examined whether accuracy of self-reported GPA differed based on level of academic performance or level of academic…

  5. Accuracy Improvement on the Measurement of Human-Joint Angles.

    PubMed

    Meng, Dai; Shoepe, Todd; Vejarano, Gustavo

    2016-03-01

    A measurement technique that decreases the root mean square error (RMSE) of measurements of human-joint angles using a personal wireless sensor network is reported. Its operation is based on virtual rotations of wireless sensors worn by the user, and it focuses on the arm, whose position is measured in 5 degrees of freedom (DOF). The wireless sensors use inertial magnetic units that measure the alignment of the arm with the earth's gravity and magnetic fields. Due to the biomechanical properties of human tissue (e.g., the skin's elasticity), the sensors' orientation is shifted, and this shift affects the accuracy of measurements. In the proposed technique, the change of orientation is first modeled from linear regressions of data collected from 15 participants at different arm positions. Then, out of eight body indices measured with dual-energy X-ray absorptiometry, the percentage of body fat is found to have the greatest correlation with the rate of change in the sensors' orientation. This finding enables us to estimate the change in the sensors' orientation from the user's body fat percentage. Finally, an algorithm virtually rotates the sensors using quaternion theory with the objective of reducing the error. The proposed technique is validated with experiments on five different participants. In the DOF whose error decreased the most, the RMSE decreased from 2.20(°) to 0.87(°). This is an improvement of 60%, and in the DOF whose error decreased the least, the RMSE decreased from 1.64(°) to 1.37(°). This is an improvement of 16%. On average, the RMSE improved by 44%. PMID:25622331
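
    The correction step can be sketched as applying the inverse of a regression-predicted orientation offset to the measured sensor quaternion; the regression coefficients and rotation axis below are hypothetical, not the paper's fitted values:

        # Sketch of the "virtual rotation" idea: estimate an orientation offset from a
        # linear model in body-fat percentage and apply its inverse to the measured
        # sensor quaternion (coefficients below are hypothetical, not the paper's fit).
        import numpy as np
        from scipy.spatial.transform import Rotation

        def estimated_offset_deg(body_fat_pct, slope=0.12, intercept=0.5):
            """Hypothetical linear regression: orientation shift vs. body fat (%)."""
            return slope * body_fat_pct + intercept

        def corrected_orientation(measured_quat_xyzw, body_fat_pct, axis=(0, 0, 1)):
            offset = Rotation.from_rotvec(
                np.radians(estimated_offset_deg(body_fat_pct)) * np.asarray(axis, float))
            return (offset.inv() * Rotation.from_quat(measured_quat_xyzw)).as_quat()

        measured = Rotation.from_euler('z', 30, degrees=True).as_quat()
        print(corrected_orientation(measured, body_fat_pct=25.0))
        # ~ rotation of 30 - (0.12 * 25 + 0.5) = 26.5 degrees about z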

  6. Measurement of Phospholipids May Improve Diagnostic Accuracy in Ovarian Cancer

    PubMed Central

    Davis, Lorelei; Han, Gang; Zhu, Weiwei; Molina, Ashley D.; Arango, Hector; LaPolla, James P.; Hoffman, Mitchell S.; Sellers, Thomas; Kirby, Tyler; Nicosia, Santo V.; Sutphen, Rebecca

    2012-01-01

    Background More than two-thirds of women who undergo surgery for suspected ovarian neoplasm do not have cancer. Our previous results suggest phospholipids as potential biomarkers of ovarian cancer. In this study, we measured the serum levels of multiple phospholipids among women undergoing surgery for suspected ovarian cancer to identify biomarkers that better predict whether an ovarian mass is malignant. Methodology/Principal Findings We obtained serum samples preoperatively from women with suspected ovarian cancer enrolled through a prospective, population-based rapid ascertainment system. Samples were analyzed from all women in whom a diagnosis of epithelial ovarian cancer (EOC) was confirmed and from benign disease cases randomly selected from the remaining (non-EOC) samples. We measured biologically relevant phospholipids using liquid chromatography/electrospray ionization mass spectrometry. We applied a powerful statistical and machine learning approach, Hybrid huberized support vector machine (HH-SVM) to prioritize phospholipids to enter the biomarker models, and used cross-validation to obtain conservative estimates of classification error rates. Results The HH-SVM model using the measurements of specific combinations of phospholipids supplements clinical CA125 measurement and improves diagnostic accuracy. Specifically, the measurement of phospholipids improved sensitivity (identification of cases with preoperative CA125 levels below 35) among two types of cases in which CA125 performance is historically poor - early stage cases and those of mucinous histology. Measurement of phospholipids improved the identification of early stage cases from 65% (based on CA125) to 82%, and mucinous cases from 44% to 88%. Conclusions/Significance Levels of specific serum phospholipids differ between women with ovarian cancer and those with benign conditions. If validated by independent studies in the future, these biomarkers may serve as an adjunct at the time of clinical

  7. Improving the Accuracy of Stamping Analyses Including Springback Deformations

    NASA Astrophysics Data System (ADS)

    Firat, Mehmet; Karadeniz, Erdal; Yenice, Mustafa; Kaya, Mesut

    2013-02-01

    An accurate prediction of sheet metal deformation including springback is one of the main issues in an efficient finite element (FE) simulation in automotive and stamping industries. Considering tooling design for newer class of high-strength steels, in particular, this requirement became an important aspect for springback compensation practices today. The sheet deformation modeling accounting Bauschinger effect is considered to be a key factor affecting the accuracy of FE simulations in this context. In this article, a rate-independent cyclic plasticity model is presented and implemented into LS-Dyna software for an accurate modeling of sheet metal deformation in stamping simulations. The proposed model uses Hill's orthotropic yield surface in the description of yield loci of planar and transversely anisotropic sheets. The strain-hardening behavior is calculated based on an additive backstress form of the nonlinear kinematic hardening rule. The proposed model is applied in stamping simulations of a dual-phase steel automotive part, and comparisons are presented in terms of part strain and thickness distributions calculated with isotropic plasticity and the proposed model. It is observed that both models produce similar plastic strain and thickness distributions; however, there appeared to be considerable differences in computed springback deformations. Part shapes computed with both plasticity models were evaluated with surface scanning of manufactured parts. A comparison of FE computed geometries with manufactured parts proved the improved performance of proposed model over isotropic plasticity for this particular stamping application.

  8. Forecasting space weather: Can new econometric methods improve accuracy?

    NASA Astrophysics Data System (ADS)

    Reikard, Gordon

    2011-06-01

    Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.

  9. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

    The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal to noise ratio (SNR), the noise being determined by the LO power in the shot noise limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a

  10. Science Achievement for All: Improving Science Performance and Closing Achievement Gaps

    ERIC Educational Resources Information Center

    Jackson, Julie K.; Ash, Gwynne

    2012-01-01

    This article addresses the serious and growing need to improve science instruction and science achievement for all students. We will describe the results of a 3-year study that transformed science instruction and student achievement at two high-poverty ethnically diverse public elementary schools in Texas. The school-wide intervention included…

  11. Accuracy of pitch matching significantly improved by live voice model.

    PubMed

    Granot, Roni Y; Israel-Kolatt, Rona; Gilboa, Avi; Kolatt, Tsafrir

    2013-05-01

    Singing is, undoubtedly, the most fundamental expression of our musical capacity, yet an estimated 10-15% of Western population sings "out-of-tune (OOT)." Previous research in children and adults suggests, albeit inconsistently, that imitating a human voice can improve pitch matching. In the present study, we focus on the potentially beneficial effects of the human voice and especially the live human voice. Eighteen participants varying in their singing abilities were required to imitate in singing a set of nine ascending and descending intervals presented to them in five different randomized blocked conditions: live piano, recorded piano, live voice using optimal voice production, recorded voice using optimal voice production, and recorded voice using artificial forced voice production. Pitch and interval matching in singing were much more accurate when participants repeated sung intervals as compared with intervals played to them on the piano. The advantage of the vocal over the piano stimuli was robust and emerged clearly regardless of whether piano tones were played live and in full view or were presented via recording. Live vocal stimuli elicited higher accuracy than recorded vocal stimuli, especially when the recorded vocal stimuli were produced in a forced vocal production. Remarkably, even those who would be considered OOT singers on the basis of their performance when repeating piano tones were able to pitch match live vocal sounds, with deviations well within the range of what is considered accurate singing (M=46.0, standard deviation=39.2 cents). In fact, those participants who were most OOT gained the most from the live voice model. Results are discussed in light of the dual auditory-motor encoding of pitch analogous to that found in speech. PMID:23528675

  12. Science Achievement for All: Improving Science Performance and Closing Achievement Gaps

    NASA Astrophysics Data System (ADS)

    Jackson, Julie K.; Ash, Gwynne

    2012-11-01

    This article addresses the serious and growing need to improve science instruction and science achievement for all students. We will describe the results of a 3-year study that transformed science instruction and student achievement at two high-poverty ethnically diverse public elementary schools in Texas. The school-wide intervention included purposeful planning, inquiry science instruction, and contextually rich academic science vocabulary development. In combination, these instructional practices rapidly improved student-science learning outcomes and narrowed achievement gaps across diverse student populations.

  13. Finishing the Job: Improving the Achievement of Vocational Students.

    ERIC Educational Resources Information Center

    Bottoms, Gene; Presson, Alice

    The Southern Regional Education Board (SREB)-State Vocational Education Consortium's commitment to higher standards for and greater achievement of vocational students has brought substantial gains in student performance. SREB's High Schools That Work (HSTW), a school improvement initiative, documents achievement gains by vocational students. Data…

  14. How Much Can Spatial Training Improve STEM Achievement?

    ERIC Educational Resources Information Center

    Stieff, Mike; Uttal, David

    2015-01-01

    Spatial training has been indicated as a possible solution for improving Science, Technology, Engineering, and Mathematics (STEM) achievement and degree attainment. Advocates for this approach have noted that the correlation between spatial ability and several measures of STEM achievement suggests that spatial training should focus on improving…

  15. Improvements in Interval Time Tracking and Effects on Reading Achievement

    ERIC Educational Resources Information Center

    Taub, Gordon E.; McGrew, Kevin S.; Keith, Timothy Z.

    2007-01-01

    This study examined the effect of improvements in timing/rhythmicity on students' reading achievement. 86 participants completed pre- and post-test measures of reading achievement (i.e., Woodcock-Johnson III, Comprehensive Test of Phonological Processing, Test of Word Reading Efficiency, and Test of Silent Word Reading Fluency). Students in the…

  16. Explicit numerical formulas of improved stability and accuracy for the solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Olstad, W. B.

    1979-01-01

    A class of explicit numerical formulas which involve next nearest neighbor as well as nearest neighbor points are explored in this paper. These formulas are formal approximations to the linear parabolic partial-differential equation of first order in time and second order in distance. It was found that some of these formulas can employ time steps as much as four times that for the conventional explicit technique without becoming unstable. Others showed improved accuracy for a given time step and spatial grid spacing. One formula achieved a steady-state solution of specified accuracy for an example problem in less than 4 percent of the total computational time required by the conventional explicit technique.
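
    The paper's next-nearest-neighbor formulas are not reproduced in the abstract, but the conventional explicit technique they are compared against is the standard FTCS scheme, whose stability limit r = a*dt/dx^2 <= 1/2 is what the extended stencils aim to relax. A minimal baseline sketch:

        # Baseline for comparison: the conventional explicit (FTCS) scheme for the
        # 1-D heat equation u_t = a*u_xx, stable only for r = a*dt/dx**2 <= 1/2.
        import numpy as np

        def ftcs_step(u, r):
            """One explicit time step; nearest-neighbor stencil, fixed end values."""
            u_new = u.copy()
            u_new[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            return u_new

        nx, a, dx = 21, 1.0, 0.05
        dt = 0.4 * dx**2 / a                 # r = 0.4, inside the stability limit
        u = np.zeros(nx); u[nx // 2] = 1.0   # initial spike
        for _ in range(200):
            u = ftcs_step(u, a * dt / dx**2)
        print(u.max())                        # spike diffuses smoothly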

  17. Achieving sub-pixel geolocation accuracy in support of MODIS land science

    USGS Publications Warehouse

    Wolfe, R.E.; Nishihama, M.; Fleig, A.J.; Kuyper, J.A.; Roy, D.P.; Storey, J.C.; Patt, F.S.

    2002-01-01

    The Moderate Resolution Imaging Spectroradiometer (MODIS) was launched in December 1999 on the polar orbiting Terra spacecraft and since February 2000 has been acquiring daily global data in 36 spectral bands-29 with 1 km, five with 500 m, and two with 250 m nadir pixel dimensions. The Terra satellite has on-board exterior orientation (position and attitude) measurement systems designed to enable geolocation of MODIS data to approximately 150 m (1??) at nadir. A global network of ground control points is being used to determine biases and trends in the sensor orientation. Biases have been removed by updating models of the spacecraft and instrument orientation in the MODIS geolocation software several times since launch and have improved the MODIS geolocation to approximately 50 m (1??) at nadir. This paper overviews the geolocation approach, summarizes the first year of geolocation analysis, and overviews future work. The approach allows an operational characterization of the MODIS geolocation errors and enables individual MODIS observations to be geolocated to the sub-pixel accuracies required for terrestrial global change applications. ?? 2002 Elsevier Science Inc. All rights reserved.

  18. A computational approach for prediction of donor splice sites with improved accuracy.

    PubMed

    Meher, Prabina Kumar; Sahu, Tanmaya Kumar; Rao, A R; Wahi, S D

    2016-09-01

    Identification of splice sites is important due to their key role in predicting the exon-intron structure of protein coding genes. Though several approaches have been developed for the prediction of splice sites, further improvement in the prediction accuracy will help predict gene structure more accurately. This paper presents a computational approach for prediction of donor splice sites with higher accuracy. In this approach, true and false splice sites were first encoded into numeric vectors and then used as input in artificial neural network (ANN), support vector machine (SVM) and random forest (RF) for prediction. ANN and SVM were found to perform equally and better than RF, while tested on HS3D and NN269 datasets. Further, the performance of ANN, SVM and RF were analyzed by using an independent test set of 50 genes and found that the prediction accuracy of ANN was higher than that of SVM and RF. All the predictors achieved higher accuracy while compared with the existing methods like NNsplice, MEM, MDD, WMM, MM1, FSPLICE, GeneID and ASSP, using the independent test set. We have also developed an online prediction server (PreDOSS) available at http://cabgrid.res.in:8080/predoss, for prediction of donor splice sites using the proposed approach. PMID:27302911
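
    The encode-then-classify pipeline can be sketched with one-hot encoded sequence windows and an SVM; the toy sequences and parameters below are assumptions, not the HS3D/NN269 data or the PreDOSS models:

        # Toy sketch of the encode-then-classify pipeline: one-hot encode fixed-length
        # windows around candidate donor sites and train an SVM (synthetic data only).
        import numpy as np
        from sklearn.svm import SVC

        BASES = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

        def one_hot(seq):
            v = np.zeros((len(seq), 4))
            v[np.arange(len(seq)), [BASES[b] for b in seq]] = 1.0
            return v.ravel()

        true_sites  = ['CAGGTAAGT', 'AAGGTGAGT', 'CAGGTATGT']   # contain the GT motif
        false_sites = ['CAGCTAAGT', 'AAGTCGAGT', 'CAGATATGT']
        X = np.array([one_hot(s) for s in true_sites + false_sites])
        y = np.array([1, 1, 1, 0, 0, 0])

        clf = SVC(kernel='rbf', gamma='scale').fit(X, y)
        print(clf.predict([one_hot('TAGGTAAGT')]))   # classify a new candidate window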

  19. Does Children's Academic Achievement Improve when Single Mothers Marry?

    ERIC Educational Resources Information Center

    Wagmiller, Robert L., Jr.; Gershoff, Elizabeth; Veliz, Philip; Clements, Margaret

    2010-01-01

    Promoting marriage, especially among low-income single mothers with children, is increasingly viewed as a promising public policy strategy for improving developmental outcomes for disadvantaged children. Previous research suggests, however, that children's academic achievement either does not improve or declines when single mothers marry. In this…

  20. An Action Plan for Improving Mediocre or Stagnant Student Achievement

    ERIC Educational Resources Information Center

    Redmond, Kimberley B.

    2013-01-01

    Although all of the schools in the target school system adhere to a school improvement process, achievement scores remain mediocre or stagnant within the overseas school in Italy that serves children of United States armed service members. To address this problem, this study explored the target school's improvement process to discover how…

  1. Professional Learning Communities That Initiate Improvement in Student Achievement

    ERIC Educational Resources Information Center

    Royer, Suzanne M.

    2012-01-01

    Quality teaching requires a strong practice of collaboration, an essential building block for educators to improve student achievement. Researchers have theorized that the implementation of a professional learning community (PLC) with resultant collaborative practices among teachers sustains academic improvement. The problem addressed specifically…

  2. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed for the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high order finite difference approximations, and semi-analytical calculation of boundary conditions are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with nonuniform mesh gives the best results. Also, the relative advantages of various methods are described when the speed of computation is an important consideration.
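
    The advantage of the fourth-order scheme at high accuracy can be illustrated by comparing second- and fourth-order central-difference approximations of a second derivative on a test function (a generic illustration, not the electromagnet solver itself):

        # Quick illustration of why a fourth-order scheme pays off at high accuracy:
        # compare second- and fourth-order central differences for d2f/dx2 of sin(x).
        import numpy as np

        def d2_second_order(f, x, h):
            return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

        def d2_fourth_order(f, x, h):
            return (-f(x - 2*h) + 16*f(x - h) - 30*f(x)
                    + 16*f(x + h) - f(x + 2*h)) / (12 * h**2)

        x, exact = 1.0, -np.sin(1.0)
        for h in (0.1, 0.05, 0.025):
            e2 = abs(d2_second_order(np.sin, x, h) - exact)
            e4 = abs(d2_fourth_order(np.sin, x, h) - exact)
            print(f"h={h:0.3f}  2nd-order error={e2:.2e}  4th-order error={e4:.2e}")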

  3. Correcting Memory Improves Accuracy of Predicted Task Duration

    ERIC Educational Resources Information Center

    Roy, Michael M.; Mitten, Scott T.; Christenfeld, Nicholas J. S.

    2008-01-01

    People are often inaccurate in predicting task duration. The memory bias explanation holds that this error is due to people having incorrect memories of how long previous tasks have taken, and these biased memories cause biased predictions. Therefore, the authors examined the effect on increasing predictive accuracy of correcting memory through…

  4. Improving Accuracy of Sleep Self-Reports through Correspondence Training

    ERIC Educational Resources Information Center

    St. Peter, Claire C.; Montgomery-Downs, Hawley E.; Massullo, Joel P.

    2012-01-01

    Sleep insufficiency is a major public health concern, yet the accuracy of self-reported sleep measures is often poor. Self-report may be useful when direct measurement of nonverbal behavior is impossible, infeasible, or undesirable, as it may be with sleep measurement. We used feedback and positive reinforcement within a small-n multiple-baseline…

  5. On combining reference data to improve imputation accuracy.

    PubMed

    Chen, Jun; Zhang, Ji-Gang; Li, Jian; Pei, Yu-Fang; Deng, Hong-Wen

    2013-01-01

    Genotype imputation is an important tool in human genetics studies, which uses reference sets with known genotypes and prior knowledge on linkage disequilibrium and recombination rates to infer un-typed alleles for human genetic variations at a low cost. The reference sets used by current imputation approaches are based on HapMap data, and/or based on recently available next-generation sequencing (NGS) data such as data generated by the 1000 Genomes Project. However, with different coverage and call rates for different NGS data sets, how to integrate NGS data sets of different accuracy as well as previously available reference data as references in imputation is not an easy task and has not been systematically investigated. In this study, we performed a comprehensive assessment of three strategies on using NGS data and previously available reference data in genotype imputation for both simulated data and empirical data, in order to obtain guidelines for optimal reference set construction. Briefly, we considered three strategies: strategy 1 uses one NGS data as a reference; strategy 2 imputes samples by using multiple individual data sets of different accuracy as independent references and then combines the imputed samples with samples based on the high accuracy reference selected when overlapping occurs; and strategy 3 combines multiple available data sets as a single reference after imputing each other. We used three software (MACH, IMPUTE2 and BEAGLE) for assessing the performances of these three strategies. Our results show that strategy 2 and strategy 3 have higher imputation accuracy than strategy 1. Particularly, strategy 2 is the best strategy across all the conditions that we have investigated, producing the best accuracy of imputation for rare variant. Our study is helpful in guiding application of imputation methods in next generation association analyses. PMID:23383238

  6. Laser ranging with the MéO telescope to improve orbital accuracy of space debris

    NASA Astrophysics Data System (ADS)

    Hennegrave, L.; Pyanet, M.; Haag, H.; Blanchet, G.; Esmiller, B.; Vial, S.; Samain, E.; Paris, J.; Albanese, D.

    2013-05-01

    Improving orbital accuracy of space debris is one of the major prerequisites to performing reliable collision prediction in low earth orbit. The objective is to avoid false alarms and useless maneuvers for operational satellites. This paper shows how laser ranging on debris can improve the accuracy of orbit determination. In March 2012 a joint OCA-Astrium team had the first laser echoes from space debris using the MéO (Métrologie Optique) telescope of the Observatoire de la Côte d'Azur (OCA), upgraded with a nanosecond pulsed laser. The experiment was conducted in full compliance with the procedures dictated by the French Civil Aviation Authorities. To perform laser ranging measurement on space debris, the laser link budget needed to be improved. Related technical developments were supported by implementation of a 2 J pulsed laser purchased by ASTRIUM and an adapted photo detection. To achieve acquisition of the target from low accuracy orbital data such as Two-Line Elements, a 2.3-degree field of view telescope was coupled to the original MéO telescope 3-arcmin narrow field of view. The wide field of view telescope aimed at pointing, adjusting and acquiring images of the space debris for astrometry measurement. The achieved set-up allowed performing laser ranging and angular measurements in parallel, on several rocket stages from past launches. After a brief description of the set-up, development issues and campaigns, the paper discusses the added value of laser ranging measurement when combined with angular measurement for accurate orbit determination. Comparison between different sets of experimental results as well as simulation results is given.

  7. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium

    PubMed Central

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619

  8. Accuracy of Genomic Prediction in Switchgrass (Panicum virgatum L.) Improved by Accounting for Linkage Disequilibrium.

    PubMed

    Ramstein, Guillaume P; Evans, Joseph; Kaeppler, Shawn M; Mitchell, Robert B; Vogel, Kenneth P; Buell, C Robin; Casler, Michael D

    2016-01-01

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families' parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs. PMID:26869619

  9. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  10. Improved time-domain accuracy standards for model gravitational waveforms

    SciTech Connect

    Lindblom, Lee; Baker, John G.

    2010-10-15

    Model gravitational waveforms must be accurate enough to be useful for detection of signals and measurement of their parameters, so appropriate accuracy standards are needed. Yet these standards should not be unnecessarily restrictive, making them impractical for the numerical and analytical modelers to meet. The work of Lindblom, Owen, and Brown [Phys. Rev. D 78, 124020 (2008)] is extended by deriving new waveform accuracy standards which are significantly less restrictive while still ensuring the quality needed for gravitational-wave data analysis. These new standards are formulated as bounds on certain norms of the time-domain waveform errors, which makes it possible to enforce them in situations where frequency-domain errors may be difficult or impossible to estimate reliably. These standards are less restrictive by about a factor of 20 than the previously published time-domain standards for detection, and up to a factor of 60 for measurement. These new standards should therefore be much easier to use effectively.
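
    A hedged sketch of the kind of time-domain check such standards imply: bound the relative L2 norm of the model-minus-reference waveform error; the toy waveforms and tolerance below are illustrative only:

        # Illustrative time-domain check: relative L2 norm of the error between a
        # model waveform and a reference, compared against an example tolerance
        # (the waveforms and the tolerance are toy values, not the paper's bounds).
        import numpy as np

        def relative_l2_error(h_model, h_reference, dt):
            err = np.sqrt(np.sum((h_model - h_reference)**2) * dt)
            ref = np.sqrt(np.sum(h_reference**2) * dt)
            return err / ref

        t = np.linspace(0.0, 1.0, 4096)
        h_ref = np.sin(200.0 * t) * np.exp(-2.0 * t)      # toy damped oscillation
        h_mod = np.sin(200.02 * t) * np.exp(-2.0 * t)     # slightly detuned model
        e = relative_l2_error(h_mod, h_ref, dt=t[1] - t[0])
        print(e, e < 1e-2)   # error and whether it meets the example tolerance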

  11. The Effects of Individual or Group Guidelines on the Calibration Accuracy and Achievement of High School Biology Students

    ERIC Educational Resources Information Center

    Bol, Linda; Hacker, Douglas J.; Walck, Camilla C.; Nunnery, John A.

    2012-01-01

    A 2 x 2 factorial design was employed in a quasi-experiment to investigate the effects of guidelines in group or individual settings on the calibration accuracy and achievement of 82 high school biology students. Significant main effects indicated that calibration practice with guidelines and practice in group settings increased prediction and…

  12. Improved accuracy of measurements of complex permittivity and permeability using transmission lines

    NASA Astrophysics Data System (ADS)

    Shemelin, V.; Valles, N.

    2014-12-01

    Strong damping of Higher-Order-Modes (HOMs) excited by the beam in accelerating cavities is a necessary condition for achievement of high currents and low emittances in storage rings, electron-positron colliders, and high average power Energy Recovery Linacs (ERLs). Characterization of the electromagnetic properties of lossy ceramics and ferrites used in HOM loads is therefore an essential part of constructing these accelerators. Here we show how to improve these measurements beyond the state of the art. In the past, significant discrepancies have been typical between measured properties for different batches of the same material. Here we show that these can be explained not only by technological deviations in the material production but also by errors in the dimensions of the measured samples. We identify the main source of errors and show how to improve the accuracy of measuring the electromagnetic parameters of absorbing materials.

  13. Individualized menu slips improve the accuracy of patient food trays.

    PubMed

    Myers, E F; Knoz, S A; Gregoire, M B

    1991-11-01

    We evaluated the effect of five menu slip formats on worker preference and accuracy of food trays in a simulated hospital tray line. Menu slip formats were either individualized or preprinted, and various combinations of color coding, large type, and bold print were used to code the type of diet and the menu choices to be placed on the tray. Student volunteers who had not worked in hospital foodservice were used as tray line workers to reduce the possibility of prior preference for a menu slip format. Results indicate that menu slip format significantly affects both worker preference and the accuracy of assembled food trays. Errors were significantly lower with individualized formats that identified menu selections in bold print and type of diet in either large type or colored ink. The highest error rate was found with preprinted formats. An individualized menu slip that identified menu selections and diet orders with large type and bold print received the highest worker preference rating and resulted in the most accurate tray assembly. PMID:1939982

  14. Improving Estimation Accuracy of Aggregate Queries on Data Cubes

    SciTech Connect

    Pourabbas, Elaheh; Shoshani, Arie

    2008-08-15

    In this paper, we investigate the problem of estimation of a target database from summary databases derived from a base data cube. We show that such estimates can be derived by choosing a primary database which uses a proxy database to estimate the results. This technique is common in statistics, but an important issue we are addressing is the accuracy of these estimates. Specifically, given multiple primary and multiple proxy databases, that share the same summary measure, the problem is how to select the primary and proxy databases that will generate the most accurate target database estimation possible. We propose an algorithmic approach for determining the steps to select or compute the source databases from multiple summary databases, which makes use of the principles of information entropy. We show that the source databases with the largest number of cells in common provide the more accurate estimates. We prove that this is consistent with maximizing the entropy. We provide some experimental results on the accuracy of the target database estimation in order to verify our results.

  15. Improving the Teaching of Economics: Achievements and Aspirations.

    ERIC Educational Resources Information Center

    Bach, G. L.; Kelley, Allen C.

    Achievements and possible future projects of the American Economic Association's Committee on Economic Education (CEE), whose goal is to improve teaching in college and university economics, are discussed. The Teacher Training Program (TTP) was established by the CEE in the 1970's to develop programs to train economic educators. To date the…

  16. Using Students' Cultural Heritage to Improve Academic Achievement in Writing

    ERIC Educational Resources Information Center

    Mendez, Gilbert

    2006-01-01

    This article discusses an approach to teaching used at Calexico Unified School District, a California-Mexican border high school, by a group of teachers working to make teaching and learning more relevant to Chicano and Mexican students' lives and to improve their academic achievement in writing. An off-shoot of a training program for English…

  17. Improving Students' Social Skills and Achievement through Cooperative Learning.

    ERIC Educational Resources Information Center

    Caparos, Jennifer; Cetera, Colleen; Ogden, Lynn; Rossett, Kathryn

    This action research project evaluated a program designed to increase the use of appropriate social skills and improve academic achievement. The targeted population was comprised of first through third graders in four separate communities located in northeast Illinois. Evidence of the problem included teacher observational checklists denoting…

  18. Using Site-Based Budgeting To Improve Student Achievement.

    ERIC Educational Resources Information Center

    Warden, Christina

    2002-01-01

    Advances the use of school-based budgeting to improve student achievement. Describes four steps to implementing school-based budgeting: Access the current situation, set priorities and make decisions, plan action steps and benchmarks, and analyze choices and build the budget. Includes case study of the implementation of school-based budgeting at…

  19. Systems Thinking: A Skill to Improve Student Achievement

    ERIC Educational Resources Information Center

    Thornton, Bill; Peltier, Gary; Perreault, George

    2004-01-01

    This article examines how schools can avoid barriers to systems thinking in relation to improving student achievement. It then illustrates common errors associated with non-systems thinking and recommends solutions. Educators who understand that schools are complex interdependent social systems can move their organizations forward. Unfortunately,…

  20. New Directions in Social Psychological Interventions to Improve Academic Achievement

    ERIC Educational Resources Information Center

    Wilson, Timothy D.; Buttrick, Nicholas R.

    2016-01-01

    Attempts to improve student achievement typically focus on changing the educational environment (e.g., better schools, better teachers) or on personal characteristics of students (e.g., intelligence, self-control). The 6 articles in this special issue showcase an additional approach, emanating from social psychology, which focuses on students'…

  1. Understanding the Change Styles of Teachers to Improve Student Achievement

    ERIC Educational Resources Information Center

    Bigby, Arlene May Green

    2009-01-01

    The topic of this dissertation is the understanding of teacher change styles to improve student achievement. Teachers from public schools in a state located in the northern plains were surveyed regarding their Change Styles (preferred approaches to change) and flexibility scores. The results were statistically analyzed to determine if there were…

  2. Improving Student Achievement through the Use of Music Strategies.

    ERIC Educational Resources Information Center

    Brogla-Krupke, Cheryl

    This report describes a program to improve student achievement through the use of music strategies. The targeted population was fifth-grade students in a small Iowa community. The absence of music integration into the social studies area was observed through data that displayed the lack of motivation and in-depth learning by the students. Analysis…

  3. Improving the measurement accuracy of mixed gas by optimizing carbon nanotube sensor's electrode separation

    NASA Astrophysics Data System (ADS)

    Hao, Huimin; Zhang, Yong; Quan, Long

    2015-10-01

    Because of its excellent properties, the triple-electrode carbon nanotube sensor performs well in the detection of multi-component gas mixtures. However, as one of the key factors affecting detection accuracy, the electrode separation of a carbon nanotube gas sensor with a triple-electrode structure is very difficult to determine. An optimization method is presented here to improve the measurement accuracy for gas mixtures. This method optimizes every separation between the three electrodes of the carbon nanotube sensors in the sensor array when testing the multi-component gas mixture. It collects the ionic current detected by a sensor array composed of carbon nanotube sensors with different electrode separations, and creates a kernel partial least squares regression (KPLSR) quantitative analysis model of the detected gases. The optimum electrode separations are those for which the root mean square error of prediction (RMSEP) of the test samples reaches its minimum value. Gas mixtures of CO and NO2 were measured using a sensor array composed of two carbon nanotube sensors with different electrode separations, and each electrode separation was optimized by the above method. The experimental results show that the proposed method selects the optimal distances between electrodes effectively and achieves higher measurement accuracy.
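
    A minimal sketch of the selection loop described above, assuming ordinary partial least squares regression as a stand-in for the paper's kernel PLS (KPLSR) and synthetic ionic-current data for each candidate electrode separation; all names and numbers here are illustrative.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        def rmsep_for_separation(currents, concentrations, n_components=2, seed=0):
            """Fit a PLS model and return the root mean square error of prediction."""
            X_tr, X_te, y_tr, y_te = train_test_split(
                currents, concentrations, test_size=0.3, random_state=seed)
            model = PLSRegression(n_components=n_components).fit(X_tr, y_tr)
            pred = model.predict(X_te)
            return float(np.sqrt(np.mean((pred - y_te) ** 2)))

        # candidate_data maps each candidate separation (illustrative units) to
        # (ionic-current features, known CO/NO2 concentrations).
        rng = np.random.default_rng(1)
        candidate_data = {
            sep: (rng.random((60, 4)), rng.random((60, 2))) for sep in (50, 100, 150)}

        best_sep = min(candidate_data,
                       key=lambda s: rmsep_for_separation(*candidate_data[s]))
        print("separation with minimum RMSEP:", best_sep)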

  4. Incorporating tracer-tracee differences into models to improve accuracy

    SciTech Connect

    Schoeller, D.A. )

    1991-05-01

    The ideal tracer for metabolic studies is one that behaves exactly like the tracee. Compounds labeled with isotopes come the closest to this ideal because they are chemically identical to the tracee except for the substitution of a stable or radioisotope at one or more positions. Even this substitution, however, can introduce a difference in metabolism that may be quantitatively important with regard to the development of the mathematical model used to interpret the kinetic data. The doubly labeled water method for the measurement of carbon dioxide production and hence energy expenditure in free-living subjects is a good example of how differences between the metabolism of the tracers and the tracee can influence the accuracy of the carbon dioxide production rate determined from the kinetic data.

  5. Accuracy of Teachers' Judgments of Students' Academic Achievement: A Meta-Analysis

    ERIC Educational Resources Information Center

    Sudkamp, Anna; Kaiser, Johanna; Moller, Jens

    2012-01-01

    This meta-analysis summarizes empirical results on the correspondence between teachers' judgments of students' academic achievement and students' actual academic achievement. The article further investigates theoretically and methodologically relevant moderators of the correlation between the two measures. Overall, 75 studies reporting…

  6. Design considerations for achieving high accuracy with the SHOALS bathymetric lidar system

    NASA Astrophysics Data System (ADS)

    Guenther, Gary C.; Thomas, Robert W. L.; LaRocque, Paul E.

    1996-11-01

    The ultimate accuracy of depths from an airborne laser hydrography system depends both on careful hardware design aimed at producing the best possible accuracy and precision of recorded data, along with insensitivity to environmental effects, and on post-flight data processing software which corrects for a number of unavoidable biases and provides for flexible operator interaction to handle special cases. The generic procedure for obtaining a depth from an airborne lidar pulse involves measurement of the time between the surface return and the bottom return. In practice, because both of these return times are biased due to a number of environmental and hardware effects, it is necessary to apply various correctors in order to obtain depth estimates which are sufficiently accurate to meet International Hydrographic Office standards. Potential false targets, also of both environmental and hardware origin, must be discriminated, and wave heights must be removed. It is important to have a depth confidence value matched to accuracy and to have warnings about or automatic deletion of pulses with questionable characteristics. Techniques, procedures, and algorithms developed for the SHOALS systems are detailed here.
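
    The core of the depth computation described above is the two-way travel time between the surface and bottom returns scaled by the speed of light in water, with bias correctors applied afterwards. The sketch below is a hedged illustration only; the refraction geometry and the actual SHOALS correctors are simplified to placeholders.

        # Speed of light in vacuum and a nominal refractive index of sea water.
        C_VACUUM = 299_792_458.0   # m/s
        N_WATER = 1.34             # dimensionless, approximate

        def raw_depth(t_surface_s, t_bottom_s):
            """Depth from the two-way travel time between surface and bottom returns."""
            two_way = t_bottom_s - t_surface_s
            return 0.5 * two_way * C_VACUUM / N_WATER

        def corrected_depth(t_surface_s, t_bottom_s, bias_correctors=()):
            """Apply additive correctors (environmental/hardware biases, wave height...)."""
            return raw_depth(t_surface_s, t_bottom_s) + sum(bias_correctors)

        # Example: returns separated by ~90 ns correspond to roughly 10 m of water.
        print(corrected_depth(0.0, 90e-9, bias_correctors=(-0.12, 0.05)))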

  7. Nested uncertainties and hybrid metrology to improve measurement accuracy

    NASA Astrophysics Data System (ADS)

    Silver, R. M.; Zhang, N. F.; Barnes, B. M.; Zhou, H.; Qin, J.; Dixson, R.

    2011-03-01

    In this paper we present a method to combine measurement techniques that reduce uncertainties and improve measurement throughput. The approach has immediate utility when performing model-based optical critical dimension (OCD) measurements. When modeling optical measurements, a library of curves is assembled through the simulation of a multi-dimensional parameter space. Parametric correlation and measurement noise lead to measurement uncertainty in the fitting process resulting in fundamental limitations due to parametric correlations. We provide a strategy to decouple parametric correlation and reduce measurement uncertainties. We also develop the rigorous underlying Bayesian statistical model to apply this methodology to OCD metrology. These statistical methods use a priori information rigorously to reduce measurement uncertainty, improve throughput and develop an improved foundation for comprehensive reference metrology.

  8. Geometric accuracy improvement and verification of remote sensing image product for the ZY-3 surveying and mapping satellite

    NASA Astrophysics Data System (ADS)

    Wang, Xia; Zhou, Ping; Guo, Li

    2015-12-01

    Based on the geometric characteristic of ZY3 surveying and mapping satellite, this paper analyses the main error sources of the geometric accuracy of ZY3 satellite image product, and proposes a key technique to improve the accuracy of geometric positioning of ZY-3 satellite image products without the Ground Control Points. Firstly, 556 ZY-3 satellite images distributed in the central western China, with an area of 350 million km2, were used for the planar positioning accuracy verification. The results show that the planar accuracy of ZY-3 image without the GCPs is about 10.8 meters (1σ), and more than 96.9% of experimental image without the GCPs have the planar accuracy higher than 25 meters. Subsequently, the Digital Surface Model (DSM) produced by the ZY-3 three linear array image in Shanxi without the GCPs and the high-precise Lidar-DEM were compared. The comparison shows that overall vertical accuracy of DSM is higher than 6 meters (1σ), and higher than 5.5 and 6.4 meters (1σ) in plane and mountainous area respectively. So the validation confirmed the overall accuracy of ZY-3 satellite images, indicating that ZY-3 satellite can achieve a higher geometric accuracy.

  9. Optimized diagnostic model combination for improving diagnostic accuracy

    NASA Astrophysics Data System (ADS)

    Kunche, S.; Chen, C.; Pecht, M. G.

    Identifying the most suitable classifier for diagnostics is a challenging task. In addition to using domain expertise, a trial and error method has been widely used to identify the most suitable classifier. Classifier fusion can be used to overcome this challenge, and it is widely known to perform better than a single classifier. Classifier fusion helps overcome the error due to the inductive bias of the various classifiers. The combination rule also plays a vital role in classifier fusion, and it has not been well studied which combination rules provide the best performance during classifier fusion. Good combination rules will achieve good generalizability while taking advantage of the diversity of the classifiers. In this work, we develop an approach for ensemble learning consisting of an optimized combination rule. Generalizability has been acknowledged to be a challenge when training a diverse set of classifiers, but it can be achieved by an optimal balance between bias and variance errors using the combination rule in this paper. Generalizability implies the ability of a classifier to learn the underlying model from the training data and to predict unseen observations. In this paper, cross validation has been employed during the performance evaluation of each classifier to get an unbiased performance estimate. An objective function is constructed and optimized based on the performance evaluation to achieve the optimal bias-variance balance; this function can be solved as a constrained nonlinear optimization problem. Sequential Quadratic Programming (SQP) based optimization, with better convergence properties, has been employed for the optimization. We have demonstrated the applicability of the algorithm by using support vector machines and neural networks as classifiers, but the methodology is broadly applicable to combining other classifier algorithms as well. The method has been applied to the fault diagnosis of analog circuits. The performance of the proposed
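
    The idea of optimizing a combination rule can be sketched with a weighted average of cross-validated classifier probabilities whose weights are tuned by SciPy's SLSQP solver (a sequential quadratic programming style method). The classifiers, data and the smooth log-loss surrogate are illustrative assumptions, not the authors' exact formulation.

        import numpy as np
        from scipy.optimize import minimize
        from sklearn.datasets import make_classification
        from sklearn.metrics import log_loss
        from sklearn.model_selection import cross_val_predict
        from sklearn.neural_network import MLPClassifier
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=10, random_state=0)
        classifiers = [SVC(probability=True, random_state=0),
                       MLPClassifier(max_iter=1000, random_state=0)]

        # Cross-validated class probabilities give an unbiased basis for weight tuning.
        probs = [cross_val_predict(clf, X, y, cv=5, method="predict_proba")
                 for clf in classifiers]

        def fused_loss(w):
            # Smooth surrogate (log loss) keeps the gradient-based solver well behaved.
            fused = sum(wi * pi for wi, pi in zip(w, probs))
            fused = np.clip(fused, 1e-12, None)
            fused /= fused.sum(axis=1, keepdims=True)
            return log_loss(y, fused)

        n = len(classifiers)
        result = minimize(fused_loss, x0=np.full(n, 1.0 / n), method="SLSQP",
                          bounds=[(0.0, 1.0)] * n,
                          constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
        print("fusion weights:", result.x, "cross-validated log loss:", result.fun)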

  10. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.
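
    A heavily simplified sketch of the ghost-node idea for a single cell pair: the head at a displaced (ghost) location is linearly interpolated between a cell and its neighbor, and the resulting extra flux term is applied as a right-hand-side correction so the symmetric finite-difference matrix is left untouched. This illustrates the concept only and is not the published GNC formulation.

        def ghost_node_rhs_correction(h_n, h_m, conductance, alpha):
            """Extra flux contribution when a boundary feature connects to a ghost
            location between cell n and neighbor m.

            alpha : fractional displacement of the ghost node from cell n toward m
                    (0 -> at cell n, 1 -> at cell m)
            Returns the term to add to the right-hand side of cell n's equation.
            """
            h_ghost = (1.0 - alpha) * h_n + alpha * h_m
            # The standard equation already uses h_n; the correction is the difference.
            return conductance * (h_ghost - h_n)   # == conductance * alpha * (h_m - h_n)

        # Example: heads of 10.0 and 9.2 m, conductance 50 m^2/d, ghost 30% toward m.
        print(ghost_node_rhs_correction(10.0, 9.2, 50.0, 0.3))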

  11. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284
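
    The key geometric step described above, forming a line-of-sight unit vector from the reported elevation and azimuth and rotating it into Earth-centered, Earth-fixed (ECEF) coordinates, can be sketched with a standard ENU-to-ECEF rotation. The receiver latitude/longitude and the satellite angles below are illustrative, and receiver clock columns of the observation matrix are omitted for brevity.

        import numpy as np

        def los_unit_vector_ecef(az_deg, el_deg, lat_deg, lon_deg):
            """Line-of-sight unit vector (receiver -> satellite) in ECEF coordinates."""
            az, el = np.radians(az_deg), np.radians(el_deg)
            lat, lon = np.radians(lat_deg), np.radians(lon_deg)
            # Unit vector in local East-North-Up coordinates.
            enu = np.array([np.cos(el) * np.sin(az),
                            np.cos(el) * np.cos(az),
                            np.sin(el)])
            # Rotation from ENU to ECEF at the receiver position.
            r = np.array([[-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
                          [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
                          [ 0.0,          np.cos(lat),               np.sin(lat)]])
            return r @ enu

        # One row of the multi-constellation observation matrix per tracked satellite.
        H = np.vstack([los_unit_vector_ecef(az, el, 37.5, 127.0)
                       for az, el in [(45, 30), (135, 60), (210, 20), (300, 75)]])
        print(H)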

  13. Singing Video Games May Help Improve Pitch-Matching Accuracy

    ERIC Educational Resources Information Center

    Paney, Andrew S.

    2015-01-01

    The purpose of this study was to investigate the effect of singing video games on the pitch-matching skills of undergraduate students. Popular games like "Rock Band" and "Karaoke Revolutions" rate players' singing based on the correctness of the frequency of their sung response. Players are motivated to improve their…

  14. SSA Sensor Tasking Approach for Improved Orbit Determination Accuracies and More Efficient Use of Ground Assets

    NASA Astrophysics Data System (ADS)

    Herz, A.; Stoner, F.

    2013-09-01

    Current SSA sensor tasking and scheduling is not centrally coordinated or optimized for either orbit determination quality or efficient use of sensor resources. By applying readily available capabilities for determining optimal tasking times and centrally generating de-conflicted schedules for all available sensors, both the quality of determined orbits (and thus situational awareness) and the use of sensor resources may be measurably improved. This paper provides an approach that is logically separated into two main sections. Part 1 focuses on the science of orbit determination based on tracking data and the approaches to tracking that result in improved orbit prediction quality (such as separating limited tracking passes in inertial space as much as possible). This part defines the goals for Part 2, which focuses on the details of an improved tasking and scheduling approach for sensor tasking. Centralized tasking and scheduling of sensor tracking assignments eliminates conflicting tasking requests up front and coordinates tasking to achieve (as much as possible within the physics of the problem and limited resources) the tracking goals defined in Part 1. The effectiveness of the proposed approach will be assessed based on improvements in the overall accuracy of the space catalog. Systems Tool Kit (STK) from Analytical Graphics and STK Scheduler from Orbit Logic are used for computations and to generate schedules for the existing and improved approaches.

  15. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
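
    One of the basic image-processing steps alluded to above, estimating pupil position to sub-pixel precision by thresholding the dark pupil and taking an intensity-weighted centroid, can be sketched as follows. This is a generic illustration rather than the authors' off-line pipeline, and the threshold value is an assumption.

        import numpy as np

        def pupil_centroid(frame, threshold=40):
            """Sub-pixel centroid of the dark pupil region in a grayscale frame."""
            mask = frame < threshold                 # pupil pixels are dark
            if not mask.any():
                return None
            weights = (threshold - frame) * mask     # darker pixels weigh more
            rows, cols = np.indices(frame.shape)
            total = weights.sum()
            return (float((rows * weights).sum() / total),
                    float((cols * weights).sum() / total))

        # Synthetic 64x64 frame: bright background with a dark disc near (30, 40).
        yy, xx = np.indices((64, 64))
        frame = np.full((64, 64), 200.0)
        frame[(yy - 30) ** 2 + (xx - 40) ** 2 < 36] = 10.0
        print(pupil_centroid(frame))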

  16. Does naming accuracy improve through self-monitoring of errors?

    PubMed

    Schwartz, Myrna F; Middleton, Erica L; Brecher, Adelyn; Gagliardi, Maureen; Garvey, Kelly

    2016-04-01

    This study examined spontaneous self-monitoring of picture naming in people with aphasia. Of primary interest was whether spontaneous detection or repair of an error constitutes an error signal or other feedback that tunes the production system to the desired outcome. In other words, do acts of monitoring cause adaptive change in the language system? A second possibility, not incompatible with the first, is that monitoring is indicative of an item's representational strength, and strength is a causal factor in language change. Twelve PWA performed a 615-item naming test twice, in separate sessions, without extrinsic feedback. At each timepoint, we scored the first complete response for accuracy and error type and the remainder of the trial for verbalizations consistent with detection (e.g., "no, not that") and successful repair (i.e., correction). Data analysis centered on: (a) how often an item that was misnamed at one timepoint changed to correct at the other timepoint, as a function of monitoring; and (b) how monitoring impacted change scores in the Forward (Time 1 to Time 2) compared to Backward (Time 2 to Time 1) direction. The Strength hypothesis predicts significant effects of monitoring in both directions. The Learning hypothesis predicts greater effects in the Forward direction. These predictions were evaluated for three types of errors--Semantic errors, Phonological errors, and Fragments--using mixed-effects regression modeling with crossed random effects. Support for the Strength hypothesis was found for all three error types. Support for the Learning hypothesis was found for Semantic errors. All effects were due to error repair, not error detection. We discuss the theoretical and clinical implications of these novel findings. PMID:26863091

  17. Improving the Accuracy of Estimation of Climate Extremes

    NASA Astrophysics Data System (ADS)

    Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.

    2010-12-01

    Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.

  18. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    DOE PAGES Beta

    Ramstein, Guillaume P.; Evans, Joseph; Kaeppler, Shawn M.; Mitchell, Robert B.; Vogel, Kenneth P.; Buell, C. Robin; Casler, Michael D.

    2016-02-11

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection (GS) is an attractive technology to generate rapid genetic gains in switchgrass, and meet the goals of a substantial displacement of petroleum use with biofuels in the near future. In this study, we empirically assessed prediction procedures for genomic selection in two different populations, consisting of 137 and 110 half-sib families of switchgrass, tested in two locations in the United States for three agronomic traits: dry matter yield, plant height, and heading date. Marker data were produced for the families’ parents by exome capture sequencing, generating up to 141,030 polymorphic markers with available genomic-location and annotation information. We evaluated prediction procedures that varied not only by learning schemes and prediction models, but also by the way the data were preprocessed to account for redundancy in marker information. More complex genomic prediction procedures were generally not significantly more accurate than the simplest procedure, likely due to limited population sizes. Nevertheless, a highly significant gain in prediction accuracy was achieved by transforming the marker data through a marker correlation matrix. Our results suggest that marker-data transformations and, more generally, the account of linkage disequilibrium among markers, offer valuable opportunities for improving prediction procedures in GS. Furthermore, some of the achieved prediction accuracies should motivate implementation of GS in switchgrass breeding programs.
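
    One plausible reading of the marker-correlation transformation is sketched below: the genotype matrix is post-multiplied by its own marker correlation matrix before fitting a standard whole-genome regression (ridge regression is used as a stand-in). The data are synthetic and the exact transformation used in the paper may differ.

        import numpy as np
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_families, n_markers = 137, 500            # sizes loosely echo the paper
        X = rng.integers(0, 3, size=(n_families, n_markers)).astype(float)  # 0/1/2 genotypes
        y = X[:, :20] @ rng.normal(size=20) + rng.normal(scale=2.0, size=n_families)

        # Transform the marker data through the marker correlation matrix.
        C = np.corrcoef(X, rowvar=False)            # n_markers x n_markers
        X_transformed = X @ C

        for name, data in [("raw markers", X), ("correlation-transformed", X_transformed)]:
            acc = cross_val_score(Ridge(alpha=1.0), data, y, cv=5, scoring="r2").mean()
            print(f"{name}: mean cross-validated R^2 = {acc:.3f}")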

  19. Novel FBG interrogation technique for achieving < 100 nɛ accuracies at remote distances > 70 km

    NASA Astrophysics Data System (ADS)

    Farrell, Tom; O'Connor, Peter; Levins, John; McDonald, David

    2005-06-01

    Due to the development of Fibre Bragg Grating (FBG) sensors for the measurement of temperature, strain and pressure, many markets can benefit from optical technology. These markets include the oil and gas industry, structural and civil engineering, rail and aerospace, to name a few. The advantage of optical sensing technology is that high accuracy measurements can be performed with a passive optical system. By running one fibre along the structure or down the well, multiple points along the fibre can be tested to measure strain, temperature and pressure. Of importance with these systems is the reach that can be obtained while maintaining accuracy. A major problem with long-reach systems is the back reflection due to SBS and Rayleigh scattering processes, which reflect part of the laser light back into the receiver and affect the sensitivity of the system. This paper shows a technique to enable a reach of >70 km by using a tunable laser and receiver. Techniques for the suppression of receiver noise from SBS and Rayleigh scattering are implemented. In addition, polarisation dependence of the FBG is considered, and results of techniques to limit the effect of polarisation at long and short reaches are shown.

  20. You are so beautiful... to me: seeing beyond biases and achieving accuracy in romantic relationships.

    PubMed

    Solomon, Brittany C; Vazire, Simine

    2014-09-01

    Do romantic partners see each other realistically, or do they have overly positive perceptions of each other? Research has shown that realism and positivity co-exist in romantic partners' perceptions (Boyes & Fletcher, 2007). The current study takes a novel approach to explaining this seemingly paradoxical effect when it comes to physical attractiveness--a highly evaluative trait that is especially relevant to romantic relationships. Specifically, we argue that people are aware that others do not see their partners as positively as they do. Using both mean differences and correlational approaches, we test the hypothesis that despite their own biased and idiosyncratic perceptions, people have 2 types of partner-knowledge: insight into how their partners see themselves (i.e., identity accuracy) and insight into how others see their partners (i.e., reputation accuracy). Our results suggest that romantic partners have some awareness of each other's identity and reputation for physical attractiveness, supporting theories that couple members' perceptions are driven by motives to fulfill both esteem- and epistemic-related needs (i.e., to see their partners positively and realistically). PMID:25133729

  1. Improved accuracy of 3D-printed navigational template during complicated tibial plateau fracture surgery.

    PubMed

    Huang, Huajun; Hsieh, Ming-Fa; Zhang, Guodong; Ouyang, Hanbin; Zeng, Canjun; Yan, Bin; Xu, Jing; Yang, Yang; Wu, Zhanglin; Huang, Wenhua

    2015-03-01

    This study aimed to improve the surgical accuracy of plating and screwing for complicated tibial plateau fractures, assisted by a 3D implant library and 3D-printed navigational templates. Clinical cases were performed whereby complicated tibial plateau fractures were imaged using computed tomography and reconstructed into 3D fracture prototypes. The preoperative planning of an anatomically matching plate with appropriate screw trajectories was performed with the help of the library of 3D models of implants. According to the optimal planning, patient-specific navigational templates produced by a 3D printer were used to accurately guide the real surgical implantation. The fixation outcomes, in terms of the deviations of screw placement between preoperative and postoperative screw trajectories, were measured and compared, including the screw lengths, entry point locations and screw directions. With virtual preoperative planning, we achieved optimal and accurate fixation outcomes in the real clinical surgeries. The deviation of screw length was 1.57 ± 5.77 mm, P > 0.05. The displacements of the entry points in the x-, y-, and z-axis were 0.23 ± 0.62, 0.83 ± 1.91, and 0.46 ± 0.67 mm, respectively, P > 0.05. The deviations of projection angle in the coronal (x-y) and transverse (x-z) planes were 6.34 ± 3.42° and 4.68 ± 3.94°, respectively, P > 0.05. There was no significant difference in the deviations of screw length, entry point and projection angle between the ideal and real screw trajectories. Ideal and accurate preoperative planning of plating and screwing can thus be achieved in the real surgery, assisted by the 3D implant model library and the patient-specific navigational template. This technology improves the accuracy and efficiency of personalized internal fixation surgery, and we have demonstrated this in our clinical applications. PMID:25663390

  2. Antibiotic Resistome: Improving Detection and Quantification Accuracy for Comparative Metagenomics.

    PubMed

    Elbehery, Ali H A; Aziz, Ramy K; Siam, Rania

    2016-04-01

    The unprecedented rise of life-threatening antibiotic resistance (AR), combined with the unparalleled advances in DNA sequencing of genomes and metagenomes, has pushed the need for in silico detection of the resistance potential of clinical and environmental metagenomic samples through the quantification of AR genes (i.e., genes conferring antibiotic resistance). Therefore, determining an optimal methodology to quantitatively and accurately assess AR genes in a given environment is pivotal. Here, we optimized and improved existing AR detection methodologies from metagenomic datasets to properly consider AR-generating mutations in antibiotic target genes. Through comparative metagenomic analysis of previously published AR gene abundance in three publicly available metagenomes, we illustrate how mutation-generated resistance genes are either falsely assigned or neglected, which alters the detection and quantitation of the antibiotic resistome. In addition, we inspected factors influencing the outcome of AR gene quantification using metagenome simulation experiments, and identified that genome size, AR gene length, total number of metagenomics reads and selected sequencing platforms had pronounced effects on the level of detected AR. In conclusion, our proposed improvements in the current methodologies for accurate AR detection and resistome assessment show reliable results when tested on real and simulated metagenomic datasets. PMID:27031878

  3. SPHGal: smoothed particle hydrodynamics with improved accuracy for galaxy simulations

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Yu; Naab, Thorsten; Walch, Stefanie; Moster, Benjamin P.; Oser, Ludwig

    2014-09-01

    We present the smoothed particle hydrodynamics (SPH) implementation SPHGal, which combines some recently proposed improvements in GADGET. This includes a pressure-entropy formulation with a Wendland kernel, a higher order estimate of velocity gradients, a modified artificial viscosity switch with a modified strong limiter, and artificial conduction of thermal energy. With a series of idealized hydrodynamic tests, we show that the pressure-entropy formulation is ideal for resolving fluid mixing at contact discontinuities but performs conspicuously worse at strong shocks due to the large entropy discontinuities. Including artificial conduction at shocks greatly improves the results. In simulations of Milky Way like disc galaxies a feedback-induced instability develops if too much artificial viscosity is introduced. Our modified artificial viscosity scheme prevents this instability and shows efficient shock capturing capability. We also investigate the star formation rate and the galactic outflow. The star formation rates vary slightly for different SPH schemes while the mass loading is sensitive to the SPH scheme and significantly reduced in our favoured implementation. We compare the accretion behaviour of the hot halo gas. The formation of cold blobs, an artefact of simple SPH implementations, can be eliminated efficiently with proper fluid mixing, either by conduction and/or by using a pressure-entropy formulation.

  4. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

    Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, indicating that the method is not yet broadly accepted within the medical community. This is because the use of contrast agents is typically reserved for subjects whose initial evaluation produced very low quality recordings. Thus high background noise and low contrast levels characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance, it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, an evaluation on artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.
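
    The short-time moving-window ensemble (MWE) correlation evaluated here can be illustrated schematically: instead of locating the displacement peak on a single noisy correlation plane, the correlation planes of several consecutive frame pairs are averaged before peak detection. The window length, synthetic images and whole-frame correlation below are simplifying assumptions.

        import numpy as np
        from scipy.signal import correlate

        def mwe_displacement(frames, window=5):
            """Ensemble-average cross-correlations over a short moving window of frame
            pairs, then locate the displacement peak once on the averaged plane."""
            planes = [correlate(frames[i + 1], frames[i], mode="same")
                      for i in range(window)]
            avg = np.mean(planes, axis=0)
            peak = np.unravel_index(np.argmax(avg), avg.shape)
            centre = np.array(avg.shape) // 2
            return np.array(peak) - centre           # (dy, dx) in pixels

        # Synthetic particle images shifted by (2, 1) pixels per frame, plus noise.
        rng = np.random.default_rng(0)
        base = rng.random((64, 64))
        frames = [np.roll(base, (2 * k, 1 * k), axis=(0, 1)) + 0.5 * rng.random((64, 64))
                  for k in range(6)]
        print(mwe_displacement(frames))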

  5. Strain mapping accuracy improvement using super-resolution techniques.

    PubMed

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Fernández-Reyes, D; González, D; Mayoral, A; Utrilla, A D; Ulloa, J M; Galindo, P L

    2016-04-01

    Super-resolution (SR) software-based techniques aim at generating a final image by combining several noisy frames with lower resolution from the same scene. A comparative study on high-resolution high-angle annular dark field images of InAs/GaAs QDs has been carried out in order to evaluate the performance of the SR technique. The obtained SR images present enhanced resolution and higher signal-to-noise (SNR) ratio and sharpness regarding the experimental images. In addition, SR is also applied in the field of strain analysis using digital image processing applications such as geometrical phase analysis and peak pairs analysis. The precision of the strain mappings can be improved when SR methodologies are applied to experimental images. PMID:26501744

  6. Resolution and quantitative accuracy improvements in ultrasound transmission imaging

    NASA Astrophysics Data System (ADS)

    Chenevert, T. L.

    The type of ultrasound transmission imaging referred to as ultrasonic computed tomography (UCT) reconstructs distributions of tissue speed of sound and sound attenuation properties from measurements of acoustic pulse time of flight (TOF) and energy received through tissue. Although clinical studies with experimental UCT scanners have demonstrated that UCT is sensitive to certain tissue pathologies not easily detected with conventional ultrasound imaging, they have also shown UCT to suffer from artifacts due to physical differences between the acoustic beam and its ray model implicit in image reconstruction algorithms. Artifacts are expressed as large quantitative errors in attenuation images, and as poor spatial resolution and size distortion (exaggerated size of high speed of sound regions) in speed of sound images. Methods are introduced and investigated which alleviate these problems in UCT imaging by providing improved measurements of pulse TOF and energy.

  7. On achieving sufficient dual station range accuracy for deep space navigation at zero declination

    NASA Technical Reports Server (NTRS)

    Siegel, H. L.; Christensen, C. S.; Green, D. W.; Winn, F. B.

    1977-01-01

    Since the Voyager Mission will encounter Saturn at a time when the planet will be nearly in the earth's equatorial plane, earth-based orbit determination will be more difficult than usual because of the so-called zero-declination singularity associated with conventional radiometric observations. Simulation studies show that in order to meet the required delivery accuracy at Saturn, a relative range measurement between the Goldstone and Canberra Deep Space Stations must be accurate to 4.5 times the square root of two meters. Topics discussed include the nature of error sources, the methodology and technology required for calibration, the verification process concerning the nearly simultaneous range capability, a description of the ranging system, and tracking strategy.

  8. Improving the accuracy of canal seepage detection through geospatial techniques

    NASA Astrophysics Data System (ADS)

    Arshad, Muhammad

    With climate change, many western states in the United States are experiencing drought conditions. Numerous irrigation districts are losing significant amounts of water from their canal systems due to leakage. Every year, on average, 2 million acres of prime cropland in the US is lost to soil erosion, waterlogging and salinity. Lining of canals could save an enormous amount of water for irrigating crops, but at present, due to soaring costs of construction and environmental mitigation, adopting such a program on a large scale would be prohibitively expensive. Conventional techniques of seepage detection are expensive, time consuming and labor intensive, besides being not very accurate. Technological advancements in remote sensing have made it possible to investigate irrigation canals for seepage site identification. In this research, band-9 in the [NIR] region and band-45 in the [TIR] region of airborne MASTER data have been utilized to highlight anomalies along an irrigation canal at Phoenix, Arizona. High resolution (1 to 4 meter pixel) satellite imagery, provided by private companies for scientific research and made available by Google to the public on Google Earth, is then successfully used to separate those anomalies into water activity sites, natural vegetation, and man-made structures, thereby greatly improving the seepage detection ability of airborne remote sensing. This innovative technique is much faster and more cost effective compared to conventional techniques and past airborne remote sensing techniques for verification of anomalies along irrigation canals. This technique also solves one of the long-standing problems of discriminating false impressions of seepage sites, due to dense natural vegetation, terrain relief and low depressions of natural drainages, from true water-related activity sites.

  9. FISM 2.0: Improved Spectral Range, Resolution, and Accuracy

    NASA Technical Reports Server (NTRS)

    Chamberlin, Phillip C.

    2012-01-01

    The Flare Irradiance Spectral Model (FISM) was first released in 2005 to provide accurate estimates of the solar VUV (0.1-190 nm) irradiance to the Space Weather community. This model was based on TIMED SEE as well as UARS and SORCE SOLSTICE measurements, and was the first model to include a 60 second temporal variation to estimate the variations due to solar flares. Along with flares, FISM also estimates the traditional solar cycle and solar rotational variations over months and decades back to 1947. This model has been highly successful in providing driving inputs to study the effect of solar irradiance variations on the Earth's ionosphere and thermosphere, lunar dust charging, as well as the Martian ionosphere. The second version of FISM, FISM2, is currently being updated to be based on the more accurate SDO/EVE data, which will provide much more accurate estimations in the 0.1-105 nm range, as well as extending the 'daily' model variation up to 300 nm based on the SOLSTICE measurements. With the spectral resolution of SDO/EVE, along with SOLSTICE and the TIMED and SORCE XPS 'model' products, the entire range from 0.1-300 nm will also be available at 0.1 nm resolution, allowing FISM2 to be improved to similar 0.1 nm spectral bins. FISM will also have a TSI component that will estimate the total radiated energy during flares based on the few TSI flares observed to date. Presented here will be initial results of the FISM2 modeling efforts, as well as some challenges that will need to be overcome in order for FISM2 to accurately model the solar variations on time scales of seconds to decades.

  10. Improving medical diagnostic accuracy of ultrasound Doppler signals by combining neural network models.

    PubMed

    Ubeyli, Elif Derya; Güler, Inan

    2005-07-01

    There are a number of different quantitative models that can be used in a medical diagnostic decision support system including parametric methods (linear discriminant analysis or logistic regression), nonparametric models (k nearest neighbor or kernel density) and several neural network models. The complexity of the diagnostic task is thought to be one of the prime determinants of model selection. Unfortunately, there is no theory available to guide model selection. This paper illustrates the use of combined neural network models to guide model selection for diagnosis of ophthalmic and internal carotid arterial disorders. The ophthalmic and internal carotid arterial Doppler signals were decomposed into time-frequency representations using discrete wavelet transform and statistical features were calculated to depict their distribution. The first-level networks were implemented for the diagnosis of ophthalmic and internal carotid arterial disorders using the statistical features as inputs. To improve diagnostic accuracy, the second-level networks were trained using the outputs of the first-level networks as input data. The combined neural network models achieved accuracy rates which were higher than that of the stand-alone neural network models. PMID:15780863
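
    A minimal sketch of the two-level scheme described above, using scikit-learn multilayer perceptrons on synthetic feature vectors in place of the wavelet-derived Doppler statistics; the stacking wiring is the point, not the specific data or architectures.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import cross_val_predict, train_test_split
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                                   random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

        # First-level networks trained on the statistical features.
        first_level = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=k)
                       for k in range(3)]

        # Their cross-validated outputs become the input data of the second-level network.
        meta_train = np.hstack([cross_val_predict(m, X_tr, y_tr, cv=5, method="predict_proba")
                                for m in first_level])
        for m in first_level:
            m.fit(X_tr, y_tr)
        meta_test = np.hstack([m.predict_proba(X_te) for m in first_level])

        second_level = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                     random_state=0).fit(meta_train, y_tr)
        print("combined accuracy:", accuracy_score(y_te, second_level.predict(meta_test)))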

  11. Two-step FEM-based Liver-CT registration: improving internal and external accuracy

    NASA Astrophysics Data System (ADS)

    Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2014-03-01

    To know the exact location of the internal structures of the organs, especially the vasculature, is of great importance for the clinicians. This information allows them to know which structures/vessels will be affected by certain therapy and therefore to better treat the patients. However the use of internal structures for registration is often disregarded especially in physical based registration methods. In this paper we propose an algorithm that uses finite element methods to carry out a registration of liver volumes that will not only have accuracy in the boundaries of the organ but also in the interior. Therefore a graph matching algorithm is used to find correspondences between the vessel trees of the two livers to be registered. In addition to this an adaptive volumetric mesh is generated that contains nodes in the locations in which correspondences were found. The displacements derived from those correspondences are the input for the initial deformation of the model. The first deformation brings the internal structures to their final deformed positions and the surfaces close to it. Finally, thin plate splines are used to refine the solution at the boundaries of the organ achieving an improvement in the accuracy of 71%. The algorithm has been evaluated in CT clinical images of the abdomen.

  12. Improving the Accuracy of Software-Based Energy Analysis for Residential Buildings (Presentation)

    SciTech Connect

    Polly, B.

    2011-09-01

    This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

  13. Prediction of soil properties using imaging spectroscopy: Considering fractional vegetation cover to improve accuracy

    NASA Astrophysics Data System (ADS)

    Franceschini, M. H. D.; Demattê, J. A. M.; da Silva Terra, F.; Vicente, L. E.; Bartholomeus, H.; de Souza Filho, C. R.

    2015-06-01

    Spectroscopic techniques have become attractive to assess soil properties because they are fast, require little labor and may reduce the amount of laboratory waste produced when compared to conventional methods. Imaging spectroscopy (IS) can have further advantages compared to laboratory or field proximal spectroscopic approaches, such as providing spatially continuous information with a high density. However, the accuracy of IS derived predictions decreases when the spectral mixture of soil with other targets occurs. This paper evaluates the use of spectral data obtained by an airborne hyperspectral sensor (ProSpecTIR-VS - Aisa dual sensor) for prediction of physical and chemical properties of Brazilian highly weathered soils (i.e., Oxisols). A methodology to assess the soil spectral mixture is adapted and a progressive spectral dataset selection procedure, based on bare soil fractional cover, is proposed and tested. Satisfactory performances are obtained especially for the quantification of clay, sand and CEC using airborne sensor data (R2 of 0.77, 0.79 and 0.54; RPD of 2.14, 2.22 and 1.50, respectively), after spectral data selection is performed; although results obtained for laboratory data are more accurate (R2 of 0.92, 0.85 and 0.75; RPD of 3.52, 2.62 and 2.04, for clay, sand and CEC, respectively). Most importantly, predictions based on airborne-derived spectra for which the bare soil fractional cover is not taken into account show considerably lower accuracy, for example for clay, sand and CEC (RPD of 1.52, 1.64 and 1.16, respectively). Therefore, hyperspectral remotely sensed data can be used to predict topsoil properties of highly weathered soils, although spectral mixture of bare soil with vegetation must be considered in order to achieve an improved prediction accuracy.
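
    The progressive selection step and the reported figures of merit can be sketched generically: pixels are retained only if their bare-soil fractional cover exceeds a threshold, a PLS model is fitted on the retained spectra, and accuracy is summarized with R2 and RPD (standard deviation of the reference values divided by the RMSE of prediction). The data and the cover threshold below are synthetic placeholders.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        def evaluate(spectra, clay, bare_soil_fraction, min_fraction=0.7):
            """Fit PLS on pixels with enough bare soil and report R2 and RPD."""
            keep = bare_soil_fraction >= min_fraction
            X, y = spectra[keep], clay[keep]
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
            model = PLSRegression(n_components=5).fit(X_tr, y_tr)
            pred = model.predict(X_te).ravel()
            rmse = np.sqrt(np.mean((pred - y_te) ** 2))
            r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
            rpd = y_te.std(ddof=1) / rmse
            return r2, rpd

        rng = np.random.default_rng(0)
        spectra = rng.random((500, 120))            # airborne reflectance bands
        clay = spectra[:, :10].sum(axis=1) * 40 + rng.normal(scale=5, size=500)
        cover = rng.random(500)                     # bare-soil fractional cover
        print(evaluate(spectra, clay, cover))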

  14. Improved accuracy of radar WPMM estimated rainfall upon application of objective classification criteria

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Daniel; Amitai, Eyal; Wolff, David B.

    1995-01-01

    Application of the window probability matching method to radar and rain gauge data that have been objectively classified into different rain types resulted in distinctly different Ze-R relationships for the various classifications. These classification parameters, in addition to the range from the radar, are (a) the horizontal radial reflectivity gradients (dB/km); (b) the cloud depth, as scaled by the effective efficiency; (c) the brightband fraction within the radar field window; and (d) the height of the freezing level. Combining physical parameters to identify the type of precipitation and statistical relations most appropriate to the precipitation types results in considerable improvement of both point and areal rainfall measurements. A limiting factor in the assessment of the improved accuracy is the inherent variance between the true rain intensity in the radar-measured volume and the rain intensity at the mouth of the rain gauge. Therefore, a very dense rain gauge network is required to validate most of the suggested realized improvement. A rather small sample size is required to achieve a stable Ze-R relationship (standard deviation of 15% of R for a given Ze) -- about 200 mm of rainfall accumulated in all gauges combined for each classification.
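
    The probability matching at the heart of the WPMM can be illustrated in a few lines: within a classified window, reflectivity and gauge rain-rate values are paired at equal cumulative probability (quantile against quantile), which yields an empirical Ze-R relationship for that rain type. This is a generic sketch of probability matching, not the authors' full windowing and classification procedure.

        import numpy as np

        def probability_matched_zr(ze_dbz, rain_mm_h, n_quantiles=50):
            """Pair Ze and R values at equal cumulative probabilities.

            Returns two arrays (Ze quantiles, R quantiles) defining an empirical
            Ze-R lookup for the classified window.
            """
            q = np.linspace(0.0, 1.0, n_quantiles)
            return np.quantile(ze_dbz, q), np.quantile(rain_mm_h, q)

        # Synthetic window: reflectivities and gauge rates for one rain type.
        rng = np.random.default_rng(0)
        ze = rng.normal(35.0, 8.0, size=2000)             # dBZ
        rr = rng.gamma(shape=2.0, scale=3.0, size=2000)   # mm/h
        ze_q, rr_q = probability_matched_zr(ze, rr)
        # Estimate rain rate for a measured reflectivity by interpolating the lookup.
        print(np.interp(40.0, ze_q, rr_q))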

  15. Improved Motor-Timing: Effects of Synchronized Metro-Nome Training on Golf Shot Accuracy

    PubMed Central

    Sommer, Marius; Rönnqvist, Louise

    2009-01-01

    This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. Twenty-six experienced male golfers (mean age 27 years; mean golf handicap 12.6) participated in this study. Pre- and post-test investigations of golf shots made with three different clubs were conducted by use of a golf simulator. The golfers were randomized into two groups: a SMT group and a Control group. After the pre-test, the golfers in the SMT group completed a 4-week SMT program designed to improve their motor timing, while the golfers in the Control group merely trained their golf swings during the same time period. No differences between the two groups were found from the pre-test outcomes, either for motor timing scores or for golf shot accuracy. However, the post-test results after the 4-week SMT showed evident motor timing improvements. Additionally, significant improvements in golf shot accuracy were found for the SMT group, with less variability in their performance. No such improvements were found for the golfers in the Control group. As with previous studies that used a SMT program, this study’s results provide further evidence that motor timing can be improved by SMT and that such timing improvement also improves golf accuracy. Key points: This study investigates the effect of synchronized metronome training (SMT) on motor timing and how this training might affect golf shot accuracy. A randomized control group design was used. The 4-week SMT intervention showed significant improvements in motor timing and golf shot accuracy, and led to less variability. We conclude that this study’s results provide further evidence that motor timing can be improved by SMT training and that such timing improvement also improves golf accuracy. PMID:24149608

  16. AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization and multithreading

    PubMed Central

    Trott, Oleg; Olson, Arthur J.

    2011-01-01

    AutoDock Vina, a new program for molecular docking and virtual screening, is presented. AutoDock Vina achieves an approximately two orders of magnitude speed-up compared to the molecular docking software previously developed in our lab (AutoDock 4), while also significantly improving the accuracy of the binding mode predictions, judging by our tests on the training set used in AutoDock 4 development. Further speed-up is achieved from parallelism, by using multithreading on multi-core machines. AutoDock Vina automatically calculates the grid maps and clusters the results in a way transparent to the user. PMID:19499576
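
    For orientation only, the sketch below drives a typical AutoDock Vina run from Python. The receptor/ligand file names and search-box values are placeholders, and the option set shown (--receptor, --ligand, --center_*, --size_*, --exhaustiveness, --cpu, --out) should be checked against the documentation of the Vina version in use.

```python
import subprocess

cmd = [
    "vina",
    "--receptor", "receptor.pdbqt",   # rigid receptor in PDBQT format (placeholder name)
    "--ligand", "ligand.pdbqt",       # flexible ligand in PDBQT format (placeholder name)
    "--center_x", "12.5", "--center_y", "3.0", "--center_z", "-7.8",
    "--size_x", "20", "--size_y", "20", "--size_z", "20",
    "--exhaustiveness", "8",          # search effort: higher is slower but more thorough
    "--cpu", "4",                     # multithreading across four cores
    "--out", "docked_poses.pdbqt",    # ranked binding modes are written here
]
subprocess.run(cmd, check=True)       # raises CalledProcessError if docking fails
```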

  17. The use of imprecise processing to improve accuracy in weather and climate prediction

    SciTech Connect

    Düben, Peter D.; McNamara, Hugh; Palmer, T.N.

    2014-08-15

    The use of stochastic processing hardware and low precision arithmetic in atmospheric models is investigated. Stochastic processors allow hardware-induced faults in calculations, sacrificing bit-reproducibility and precision in exchange for improvements in performance and potentially accuracy of forecasts, due to a reduction in power consumption that could allow higher resolution. A similar trade-off is achieved using low precision arithmetic, with improvements in computation and communication speed and savings in storage and memory requirements. As high-performance computing becomes more massively parallel and power intensive, these two approaches may be important stepping stones in the pursuit of global cloud-resolving atmospheric modelling. The impact of both hardware induced faults and low precision arithmetic is tested using the Lorenz '96 model and the dynamical core of a global atmosphere model. In the Lorenz '96 model there is a natural scale separation; the spectral discretisation used in the dynamical core also allows large and small scale dynamics to be treated separately within the code. Such scale separation allows the impact of lower-accuracy arithmetic to be restricted to components close to the truncation scales and hence close to the necessarily inexact parametrised representations of unresolved processes. By contrast, the larger scales are calculated using high precision deterministic arithmetic. Hardware faults from stochastic processors are emulated using a bit-flip model with different fault rates. Our simulations show that both approaches to inexact calculations do not substantially affect the large scale behaviour, provided they are restricted to act only on smaller scales. By contrast, results from the Lorenz '96 simulations are superior when small scales are calculated on an emulated stochastic processor than when those small scales are parametrised. This suggests that inexact calculations at the small scale could reduce computation and
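
    To make the fault-emulation idea concrete, the sketch below flips random low-order mantissa bits of float64 tendencies in a single-level Lorenz '96 model. It is a deliberately simplified stand-in for the paper's setup: the fault rate, affected bit range, time step, and forcing are assumed values, and the flips are applied to the whole tendency rather than only to separated small scales.

```python
import struct
import numpy as np

def flip_low_bits(values, fault_rate, n_low_bits=20, rng=None):
    """Randomly flip one of the lowest n_low_bits mantissa bits of some elements."""
    rng = rng or np.random.default_rng()
    out = np.empty_like(values)
    for i, v in enumerate(values.ravel()):
        if rng.random() < fault_rate:
            bits = struct.unpack("<Q", struct.pack("<d", float(v)))[0]
            bits ^= 1 << int(rng.integers(0, n_low_bits))   # flip one low-order bit
            v = struct.unpack("<d", struct.pack("<Q", bits))[0]
        out.flat[i] = v
    return out

def lorenz96_tendency(x, forcing=8.0):
    """Single-level Lorenz '96 tendency dX/dt with cyclic boundary conditions."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

# Forward-Euler integration where the tendency is computed "inexactly".
rng = np.random.default_rng(1)
x = 8.0 + rng.normal(0.0, 0.5, 40)
dt = 0.001
for _ in range(2000):
    x = x + dt * flip_low_bits(lorenz96_tendency(x), fault_rate=0.01, rng=rng)
print("mean state after integration:", round(float(x.mean()), 3))
```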

  18. A Method to Improve Mineral Identification Accuracy Based on Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Wang, Y. J.; Lin, Q. Z.; Wang, Q. J.; Chen, Y.

    2014-03-01

    To improve the mineral identification accuracy of the rapid quantitative identification model, noise was filtered in segments based on the wavelengths of the altered-mineral absorption peaks, and a regional spectral library suited to the study area was established. The filtered spectra were then analyzed using this regional spectral library. Compared with the original mineral identification results, the average efficiency rate improved by 5.1% and the average accuracy rate improved by 17.7%. The results were further optimized by a method based on the position of the altered-mineral absorption peak. Further gains in the average efficiency rate are expected to allow more minerals to be identified accurately.

  19. Techniques for improving the accuracy of cryogenic temperature measurement in ground test programs

    NASA Technical Reports Server (NTRS)

    Dempsey, Paula J.; Fabik, Richard H.

    1993-01-01

    The performance of a sensor is often evaluated by determining to what degree of accuracy a measurement can be made using this sensor. The absolute accuracy of a sensor is an important parameter considered when choosing the type of sensor to use in research experiments. Tests were performed to improve the accuracy of cryogenic temperature measurements by calibration of the temperature sensors when installed in their experimental operating environment. The calibration information was then used to correct for temperature sensor measurement errors by adjusting the data acquisition system software. This paper describes a method to improve the accuracy of cryogenic temperature measurements using corrections in the data acquisition system software such that the uncertainty of an individual temperature sensor is improved from plus or minus 0.90 deg R to plus or minus 0.20 deg R over a specified range.
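
    A minimal sketch of the kind of software correction described, assuming hypothetical in-place calibration points in degrees Rankine: the sensor error is fitted as a low-order polynomial of the indicated reading and added back to each measurement in the data acquisition software.

```python
import numpy as np

# (indicated reading, reference standard) pairs from an in-place calibration,
# both in degrees Rankine; values are hypothetical.
indicated = np.array([140.0, 160.0, 180.0, 200.0, 220.0])
reference = np.array([139.4, 159.5, 179.7, 200.1, 220.6])

# Fit the correction (reference - indicated) as a quadratic in the indicated value.
correction = np.polynomial.Polynomial.fit(indicated, reference - indicated, deg=2)

def corrected_temperature(reading_deg_r):
    """Apply the stored calibration correction to a raw sensor reading."""
    return reading_deg_r + correction(reading_deg_r)

print(f"corrected reading: {corrected_temperature(170.0):.2f} deg R")
```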

  20. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    PubMed

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-01

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve. Therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. For protein fold recognition (PFR) and structural class prediction (SCP), two steps are required - feature extraction step and classification step. Feature extraction techniques generally utilize syntactical-based information, evolutionary-based information and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme which aims to strategically select physicochemical attributes that will supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all the existing 544 physicochemical attributes using the proposed FCS scheme and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features, to show an improvement in the recognition and prediction performance on benchmark datasets. PMID:27164998
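
    In the spirit of the proposed scheme, though not the authors' exact Forward Consecutive Search, the sketch below performs greedy forward selection over candidate attribute feature blocks, keeping at each round the attribute that most improves cross-validated accuracy. The data, classifier, and stopping rule are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_attributes, feats_per_attr = 300, 8, 4
# One feature block per physicochemical attribute (synthetic stand-ins).
blocks = [rng.normal(size=(n_samples, feats_per_attr)) for _ in range(n_attributes)]
y = (blocks[2][:, 0] + blocks[7][:, 1] > 0).astype(int)  # labels depend on attrs 2 and 7

selected, best_score = [], 0.0
for _ in range(n_attributes):
    candidates = [a for a in range(n_attributes) if a not in selected]
    scores = {}
    for a in candidates:
        X = np.hstack([blocks[i] for i in selected + [a]])
        clf = RandomForestClassifier(n_estimators=30, random_state=0)
        scores[a] = cross_val_score(clf, X, y, cv=5).mean()
    best_candidate = max(scores, key=scores.get)
    if scores[best_candidate] <= best_score:   # stop once no attribute helps
        break
    selected.append(best_candidate)
    best_score = scores[best_candidate]
print("selected attributes:", selected, "cross-validated accuracy:", round(best_score, 3))
```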

  1. Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF

    NASA Astrophysics Data System (ADS)

    Hutson, Malcolm; Reiners, Dirk

    2010-01-01

    Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.
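
    Purely to make the role of the added inertial sensors concrete, the sketch below fuses gyroscope and accelerometer signals with a complementary filter for pitch estimation. This is a much simpler scheme than the SCAAT filter used by JanusVF; the sample rate, filter gain, and noise levels are assumed.

```python
import numpy as np

def complementary_pitch(gyro_rate, accel_pitch, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate (smooth but drifting) with accelerometer pitch (noisy but absolute)."""
    pitch = accel_pitch[0]
    estimates = []
    for w, a in zip(gyro_rate, accel_pitch):
        pitch = alpha * (pitch + w * dt) + (1.0 - alpha) * a
        estimates.append(pitch)
    return np.array(estimates)

# Synthetic signals: the true pitch oscillates; the gyro has a small bias,
# the accelerometer-derived pitch is noisy.
t = np.arange(0.0, 10.0, 0.01)
true_pitch = 0.2 * np.sin(0.5 * t)
gyro = np.gradient(true_pitch, t) + 0.01
accel = true_pitch + np.random.default_rng(0).normal(0.0, 0.05, t.size)
fused = complementary_pitch(gyro, accel)
print("RMS error (rad):", round(float(np.sqrt(np.mean((fused - true_pitch) ** 2))), 4))
```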

  2. Integration of classification methods for improvement of land-cover map accuracy

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Hua; Skidmore, A. K.; Van Oosten, H.

    Classifiers, which are used to recognize patterns in remotely sensed images, have complementary capabilities. This study tested whether integrating the results from individual classifiers improves classification accuracy. Two integrated approaches were undertaken. One approach used a consensus builder (CSB) to adjust classification output in the case of disagreement in classification between maximum likelihood classifier (MLC), expert system classifier (ESC) and neural network classifier (NNC). If the output classes for each individual pixel differed, the producer accuracies for each class were compared and the class with the highest producer accuracy was assigned to the pixel. The consensus builder approach resulted in a classification with a slightly lower accuracy (72%) when compared with the neural network classifier (74%), but it did significantly better than the maximum likelihood (62%) and expert system (59%) classifiers. The second approach integrated a rule-based expert system classifier and a neural network classifier. The output of the expert system classifier was used as one additional new input layer of the neural network classifier. A postprocessing using the producer accuracies and some additional expert rules was applied to improve the output of the integrated classifier. This is a relatively new approach in the field of image processing. This second approach produced the highest overall accuracy (80%). Thus, incorporating correct, complete and relevant expert knowledge in a neural network classifier leads to higher classification accuracy.
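
    A minimal sketch of one reading of the consensus-builder rule described above: when the three classifiers disagree on a pixel, the vote backed by the highest producer accuracy wins. The producer-accuracy table and class labels are hypothetical.

```python
# producer_accuracy[classifier][class_label]: producer accuracy from the training data
producer_accuracy = {
    "mlc": {"forest": 0.70, "water": 0.90, "urban": 0.55},
    "esc": {"forest": 0.60, "water": 0.80, "urban": 0.65},
    "nnc": {"forest": 0.85, "water": 0.88, "urban": 0.72},
}

def consensus(pixel_votes):
    """pixel_votes maps classifier name -> proposed class for one pixel."""
    classes = set(pixel_votes.values())
    if len(classes) == 1:                 # full agreement: keep the common class
        return classes.pop()
    # Disagreement: keep the vote backed by the highest producer accuracy.
    best_clf = max(pixel_votes, key=lambda clf: producer_accuracy[clf][pixel_votes[clf]])
    return pixel_votes[best_clf]

print(consensus({"mlc": "urban", "esc": "forest", "nnc": "forest"}))  # -> forest
```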

  3. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs from the combined adjustment are the improved dominating RPCs of images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and Worldview-1 imagery in Catalonia, Spain demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach offers an alternative method to improve geo-positioning accuracy of satellite images. The approach enables the integration of multi-source and multi-resolution satellite imagery for generating more precise and consistent 3D spatial information, which permits the comparative and synergistic use of multi-resolution satellite images from multiple sources.
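
    As a generic illustration of the ridge-regularized, weighted least-squares step underlying such an adjustment (not the paper's actual functional model of dominating RPC terms and tie points), the sketch below solves (A^T W A + k I) dx = A^T W b for parameter corrections; the design matrix, weights, and ridge parameter are placeholders.

```python
import numpy as np

def ridge_adjustment(A, b, weights, ridge_k):
    """Weighted least squares with a ridge (Tikhonov) term for a stable solution."""
    W = np.diag(weights)
    normal_matrix = A.T @ W @ A + ridge_k * np.eye(A.shape[1])
    return np.linalg.solve(normal_matrix, A.T @ W @ b)

# Toy example: six tie-point observation equations, three unknown correction parameters.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))
true_dx = np.array([0.5, -0.2, 0.1])
b = A @ true_dx + rng.normal(0.0, 0.01, 6)
print(ridge_adjustment(A, b, weights=np.ones(6), ridge_k=1e-3))
```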

  4. Improved Accuracy of the Inherent Shrinkage Method for Fast and More Reliable Welding Distortion Calculations

    NASA Astrophysics Data System (ADS)

    Mendizabal, A.; González-Díaz, J. B.; San Sebastián, M.; Echeverría, A.

    2016-05-01

    This paper describes the implementation of a simple strategy adopted for the inherent shrinkage method (ISM) to predict welding-induced distortion. This strategy not only makes it possible for the ISM to reach accuracy levels similar to the detailed transient analysis method (considered the most reliable technique for calculating welding distortion) but also significantly reduces the time required for these types of calculations. This strategy is based on the sequential activation of welding blocks to account for welding direction and transient movement of the heat source. As a result, a significant improvement in distortion prediction is achieved. This is demonstrated by experimentally measuring and numerically analyzing distortions in two case studies: a vane segment subassembly of an aero-engine, represented with 3D-solid elements, and a car body component, represented with 3D-shell elements. The proposed strategy proves to be a good alternative for quickly estimating the correct behaviors of large welded components and may have important practical applications in the manufacturing industry.

  6. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers

    PubMed Central

    Thompson, Clarissa A.; Opfer, John E.

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688

  7. Alaska Case Study: Scientists Venturing Into Field with Journalists Improves Accuracy

    NASA Astrophysics Data System (ADS)

    Ekwurzel, B.; Detjen, J.; Hayes, R.; Nurnberger, L.; Pavangadkar, A.; Poulson, D.

    2008-12-01

    Issues such as climate change, stem cell research, public health vaccination, etc., can be fraught with public misunderstanding, myths, as well as deliberate distortions of the fundamental science. Journalists are adept at creating print, radio, and video content that can be both compelling and informative to the public. Yet most scientists have little time or training to devote to developing media content for the public and spend little time with journalists who cover science stories. We conducted a case study to examine whether the time and funding invested in exposing journalists to scientists in the field over several days would improve accuracy of media stories about complex scientific topics. Twelve journalists were selected from the 70 who applied for a four-day environmental journalism fellowship in Alaska. The final group achieved the goal of a broad geographic spectrum of the media outlets (small regional to large national organizations), medium (print, radio, online), and experience (early career to senior producers). Reporters met with a diverse group of scientists. The lessons learned and successful techniques will be presented. Initial results demonstrate that stories were highly accurate and rich with audio or visual content for lay audiences. The journalists have also maintained contact with the scientists, asking for leads on emerging stories and seeking new experts that can assist in their reporting. Science-based institutions should devote more funding to foster direct journalist-scientist interactions in the lab and field. These positive goals can be achieved: (1) more accurate dissemination of science information to the public; (2) a broader portion of the scientific community will become a resource to journalists instead of the same eloquent few in the community; (3) scientists will appreciate the skill and pressures of those who survive the media downsizing and provide media savvy content; and (4) the public may incorporate science evidence

  8. Peaks, plateaus, numerical instabilities, and achievable accuracy in Galerkin and norm minimizing procedures for solving Ax=b

    SciTech Connect

    Cullum, J.

    1994-12-31

    Plots of the residual norms generated by Galerkin procedures for solving Ax = b often exhibit strings of irregular peaks. At seemingly erratic stages in the iterations, peaks appear in the residual norm plot, intervals of iterations over which the norms initially increase and then decrease. Plots of the residual norms generated by related norm minimizing procedures often exhibit long plateaus, sequences of iterations over which reductions in the size of the residual norm are unacceptably small. In an earlier paper the author discussed and derived relationships between such peaks and plateaus within corresponding Galerkin/Norm Minimizing pairs of such methods. In this paper, through a set of numerical experiments, the author examines connections between peaks, plateaus, numerical instabilities, and the achievable accuracy for such pairs of iterative methods. Three pairs of methods, GMRES/Arnoldi, QMR/BCG, and two bidiagonalization methods are studied.
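
    The sketch below reproduces the kind of residual-norm histories discussed for one Galerkin/norm-minimizing pair (BiCG vs. QMR), using SciPy's iterative solvers on a random nonsymmetric system; the matrix size and iteration limit are arbitrary choices, and SciPy's standard callback interface is assumed.

```python
import numpy as np
from scipy.sparse.linalg import bicg, qmr

rng = np.random.default_rng(0)
n = 200
A = np.eye(n) + 0.5 * rng.normal(size=(n, n)) / np.sqrt(n)   # mildly nonsymmetric system
b = rng.normal(size=n)

def residual_history(solver):
    """Record ||b - A x_k|| at every iteration via the solver callback."""
    history = []
    solver(A, b, callback=lambda xk: history.append(np.linalg.norm(b - A @ xk)),
           maxiter=200)
    return history

for name, solver in [("BiCG", bicg), ("QMR", qmr)]:
    hist = residual_history(solver)
    print(f"{name}: {len(hist)} iterations, final residual {hist[-1]:.2e}")
```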

  9. Improving forest cover classification accuracy from Landsat by incorporating topographic information

    NASA Technical Reports Server (NTRS)

    Strahler, A. H.; Logan, T. L.; Bryant, N. A.

    1978-01-01

    The paper shows that accuracies of computer classification of species-specific forest cover types from Landsat imagery can be improved by 27% or more through the incorporation of topographic information from digital terrain tapes registered to multidate Landsat imagery. The topographic information improves classification accuracies because many common forest tree species have preferred elevation ranges and slope aspects. These preferences allow the separation of forest cover types which have similar spectral signatures but different species compositions. It is noted that the development of a classification system which uses prior probabilities and sets of prior probabilities conditioned by one or two external variables represents a significant increase in classification power.
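
    A hedged sketch of folding topographic information into a Bayesian classification: per-pixel spectral likelihoods are multiplied by prior probabilities conditioned on an elevation band, echoing the prior-probability scheme described above. The class names, elevation bands, and prior values are invented for illustration.

```python
import numpy as np

classes = ["ponderosa_pine", "fir", "chaparral"]

# Prior probability of each class given the pixel's elevation band (rows: bands).
elevation_priors = np.array([
    [0.15, 0.05, 0.80],   # low elevation
    [0.60, 0.25, 0.15],   # mid elevation
    [0.25, 0.70, 0.05],   # high elevation
])

def classify(spectral_likelihoods, elevation_band):
    """Combine spectral likelihoods with elevation-conditioned priors (Bayes rule)."""
    posterior = spectral_likelihoods * elevation_priors[elevation_band]
    posterior /= posterior.sum()
    return classes[int(np.argmax(posterior))], posterior

# A pixel whose spectral signature is ambiguous between pine and fir,
# observed in the high-elevation band, is resolved toward fir by the prior.
label, post = classify(np.array([0.45, 0.45, 0.10]), elevation_band=2)
print(label, np.round(post, 3))
```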

  10. Verification and Improving Planimetric Accuracy of Airborne Laser Scanning Data with Using Photogrammetric Data

    NASA Astrophysics Data System (ADS)

    Bakuła, K.; Dominik, W.; Ostrowski, W.

    2014-03-01

    In this study, the planimetric accuracy of LIDAR data was verified using the intensity of the laser beam reflection and the results of point cloud modelling. The presented research was the basis for improving the accuracy of products derived from LIDAR data processing, which is particularly important in issues related to surveying measurements. In the experiment, a true-ortho generated from large-format aerial images with known exterior orientation was used to check the planimetric accuracy of the LIDAR data in two proposed approaches. The first analysis compared the positions of selected points identifiable on the true-ortho with the corresponding points in the raster of reflection intensity. The second method for verifying planimetric accuracy used roof ridges from 3D building models created automatically from the LIDAR data as intersections of surfaces fitted to the point cloud. Both analyses were carried out for three fragments of LIDAR strips. A detected systematic planimetric error of a few centimetres enabled an appropriate local correction to be applied to the analyzed data. The presented problem and proposed solutions provide an opportunity to improve the accuracy of LIDAR data. Such methods allow efficient use by specialists in other fields not directly related to the issues of orientation and accuracy of photogrammetric data during their acquisition and pre-processing.

  11. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  12. Improving the accuracy of brain tumor surgery via Raman-based technology.

    PubMed

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W; Sunney Xie, X; Orringer, Daniel A

    2016-03-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors. PMID:26926067

  13. Improving the accuracy of brain tumor surgery via Raman-based technology

    PubMed Central

    Hollon, Todd; Lewis, Spencer; Freudiger, Christian W.; Xie, X. Sunney; Orringer, Daniel A.

    2016-01-01

    Despite advances in the surgical management of brain tumors, achieving optimal surgical results and identification of tumor remains a challenge. Raman spectroscopy, a laser-based technique that can be used to nondestructively differentiate molecules based on the inelastic scattering of light, is being applied toward improving the accuracy of brain tumor surgery. Here, the authors systematically review the application of Raman spectroscopy for guidance during brain tumor surgery. Raman spectroscopy can differentiate normal brain from necrotic and vital glioma tissue in human specimens based on chemical differences, and has recently been shown to differentiate tumor-infiltrated tissues from noninfiltrated tissues during surgery. Raman spectroscopy also forms the basis for coherent Raman scattering (CRS) microscopy, a technique that amplifies spontaneous Raman signals by 10,000-fold, enabling real-time histological imaging without the need for tissue processing, sectioning, or staining. The authors review the relevant basic and translational studies on CRS microscopy as a means of providing real-time intraoperative guidance. Recent studies have demonstrated how CRS can be used to differentiate tumor-infiltrated tissues from noninfiltrated tissues and that it has excellent agreement with traditional histology. Under simulated operative conditions, CRS has been shown to identify tumor margins that would be undetectable using standard bright-field microscopy. In addition, CRS microscopy has been shown to detect tumor in human surgical specimens with near-perfect agreement to standard H & E microscopy. The authors suggest that as the intraoperative application and instrumentation for Raman spectroscopy and imaging matures, it will become an essential component in the neurosurgical armamentarium for identifying residual tumor and improving the surgical management of brain tumors. PMID:26926067

  14. Design of a platinum resistance thermometer temperature measuring transducer and improved accuracy of linearizing the output voltage

    SciTech Connect

    Malygin, V.M.

    1995-06-01

    An improved method is presented for designing a temperature measuring transducer, the electrical circuit of which comprises an unbalanced bridge, in one arm of which is a platinum resistance thermometer, and containing a differential amplifier with feedback. Values are given for the coefficients, the minimum linearization error is determined, and an example is also given of the practical design of the transducer, using the given coefficients. A determination is made of the limiting achievable accuracy in linearizing the output voltage of the measuring transducer, as a function of the range of measured temperature.
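
    To illustrate how the limiting linearization error of such a bridge can be estimated, the sketch below computes the output of an unbalanced Wheatstone bridge containing a Pt100 element over a temperature range and compares it with the best straight-line fit. The Callendar-Van Dusen coefficients are the standard IEC 60751 values; the bridge arm resistances and excitation voltage are assumptions, not the paper's design values.

```python
import numpy as np

R0, A, B = 100.0, 3.9083e-3, -5.775e-7      # IEC 60751 Pt100 coefficients (T >= 0 C)
R1 = R2 = R3 = 100.0                        # fixed bridge arms (ohms), assumed
V_EXC = 1.0                                 # bridge excitation voltage (V), assumed

def prt_resistance(t_celsius):
    """Callendar-Van Dusen resistance of the platinum element above 0 C."""
    return R0 * (1.0 + A * t_celsius + B * t_celsius**2)

def bridge_output(t_celsius):
    """Differential output of an unbalanced Wheatstone bridge with the PRT in one arm."""
    Rt = prt_resistance(t_celsius)
    return V_EXC * (Rt / (Rt + R3) - R2 / (R1 + R2))

t = np.linspace(0.0, 200.0, 401)
v = bridge_output(t)
slope, intercept = np.polyfit(t, v, 1)                  # best straight-line fit
error_celsius = (v - (slope * t + intercept)) / slope   # deviation expressed in deg C
print(f"max linearization error over 0-200 C: {np.max(np.abs(error_celsius)):.3f} C")
```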

  15. Improvement of Accuracy in Environmental Dosimetry by TLD Cards Using Three-dimensional Calibration Method

    PubMed Central

    HosseiniAliabadi, S. J.; Hosseini Pooya, S. M.; Afarideh, H.; Mianji, F.

    2015-01-01

    Introduction: The angular dependency of the response of TLD cards may cause deviations from the true value in environmental dosimetry results, since TLDs may be exposed to radiation at different angles of incidence from the surrounding area. Objective: A 3D arrangement of TLD cards was calibrated isotropically in a standard radiation field to evaluate the improvement in measurement accuracy for environmental dosimetry. Method: Three personal TLD cards were placed at right angles to one another in a cylindrical holder and calibrated using both 1D and 3D calibration methods. The dosimeter was then used simultaneously with a reference instrument in a real radiation field, measuring the accumulated dose within a time interval. Result: The results show that the accuracy of measurement was improved by 6.5% using the 3D calibration factor in comparison with the normal 1D calibration method. Conclusion: This system can be utilized in large-scale environmental monitoring with higher accuracy. PMID:26157729

  16. Evidence that Smaller Schools Do Not Improve Student Achievement

    ERIC Educational Resources Information Center

    Wainer, Howard; Zwerling, Harris L.

    2006-01-01

    If more small schools than "expected" are among the high achievers, then creating more small schools would raise achievement across the board, many proponents of small schools have argued. In this article, the authors challenge the faulty logic of such inferences. Many claims have been made about the advantages of smaller schools. One is that,…

  17. Improved accuracy with 3D planning and patient-specific instruments during simulated pelvic bone tumor surgery.

    PubMed

    Cartiaux, Olivier; Paul, Laurent; Francq, Bernard G; Banse, Xavier; Docquier, Pierre-Louis

    2014-01-01

    In orthopaedic surgery, resection of pelvic bone tumors can be inaccurate due to complex geometry, limited visibility and restricted working space of the pelvis. The present study investigated accuracy of patient-specific instrumentation (PSI) for bone-cutting during simulated tumor surgery within the pelvis. A synthetic pelvic bone model was imaged using a CT-scanner. The set of images was reconstructed in 3D and resection of a simulated periacetabular tumor was defined with four target planes (ischium, pubis, anterior ilium, and posterior ilium) with a 10-mm desired safe margin. Patient-specific instruments for bone-cutting were designed and manufactured using rapid-prototyping technology. Twenty-four surgeons (10 senior and 14 junior) were asked to perform tumor resection. After cutting, ISO1101 location and flatness parameters, achieved surgical margins and the time were measured. With PSI, the location accuracy of the cut planes with respect to the target planes averaged 1 and 1.2 mm in the anterior and posterior ilium, 2 mm in the pubis and 3.7 mm in the ischium (p < 0.0001). Results in terms of the location of the cut planes and the achieved surgical margins did not reveal any significant difference between senior and junior surgeons (p = 0.2214 and 0.8449, respectively). The maximum differences between the achieved margins and the 10-mm desired safe margin were found in the pubis (3.1 and 5.1 mm for senior and junior surgeons, respectively). Of the 24 simulated resections, there was no intralesional tumor cutting. This study demonstrates that using PSI technology during simulated bone cuts of the pelvis can provide good cutting accuracy. Compared to a previous report on computer assistance for pelvic bone cutting, PSI technology demonstrates a value-added for bone-cutting accuracy equivalent to that of navigation technology. When validated in vivo, PSI technology may improve pelvic bone tumor surgery by providing clinically acceptable margins. PMID

  18. Accuracy Feedback Improves Word Learning from Context: Evidence from a Meaning-Generation Task

    ERIC Educational Resources Information Center

    Frishkoff, Gwen A.; Collins-Thompson, Kevyn; Hodges, Leslie; Crossley, Scott

    2016-01-01

    The present study asked whether accuracy feedback on a meaning generation task would lead to improved contextual word learning (CWL). Active generation can facilitate learning by increasing task engagement and memory retrieval, which strengthens new word representations. However, forced generation results in increased errors, which can be…

  19. Recommendations to improve the accuracy of estimates of physical activity derived from self report

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. To provide recommendations to improve the accuracy of physical activity derived from self report. We provide an ov...

  20. Improving the accuracy and reliability of remote system-calibration-free eye-gaze tracking.

    PubMed

    Hennessey, Craig A; Lawrence, Peter D

    2009-07-01

    Remote eye-gaze tracking provides a means for nonintrusive tracking of the point-of-gaze (POG) of a user. For application as a user interface for the disabled, a remote system that is noncontact, reliable, and permits head motion is very desirable. The system-calibration-free pupil-corneal reflection (P-CR) vector technique for POG estimation is a popular method due to its simplicity; however, accuracy has been shown to degrade with head displacement. Model-based POG-estimation methods were developed, which improve system accuracy during head displacement; however, these methods require complex system calibration in addition to user calibration. In this paper, the use of multiple corneal reflections and point-pattern matching allows for a scaling correction of the P-CR vector for head displacements as well as an improvement in system robustness to corneal reflection distortion, leading to improved POG-estimation accuracy. To demonstrate the improvement in performance, the enhanced multiple corneal reflection P-CR method is compared to the monocular and binocular accuracy of the traditional single corneal reflection P-CR method, and a model-based method of POG estimation for various head displacements. PMID:19272975
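
    An illustrative sketch of the P-CR idea with a scaling correction derived from two corneal reflections: the pupil-to-glint vector is rescaled by the ratio of a reference inter-glint distance to the currently observed one before being mapped to screen coordinates. The linear mapping and all numeric values are hypothetical, and this is not the authors' full point-pattern matching method.

```python
import numpy as np

def scaled_pcr_vector(pupil, glint1, glint2, ref_glint_distance):
    """Scale the P-CR vector to compensate for head displacement along the camera axis."""
    glint_center = (glint1 + glint2) / 2.0
    scale = ref_glint_distance / np.linalg.norm(glint2 - glint1)
    return (pupil - glint_center) * scale

def point_of_gaze(pcr, coeffs_x, coeffs_y):
    """Map the scaled P-CR vector to screen coordinates with a simple linear model."""
    features = np.array([1.0, pcr[0], pcr[1]])
    return float(features @ coeffs_x), float(features @ coeffs_y)

# Hypothetical calibration coefficients and one video frame's image measurements.
coeffs_x, coeffs_y = np.array([960.0, 40.0, 2.0]), np.array([540.0, 2.0, 40.0])
pcr = scaled_pcr_vector(pupil=np.array([318.0, 250.0]),
                        glint1=np.array([305.0, 255.0]),
                        glint2=np.array([321.0, 255.0]),
                        ref_glint_distance=18.0)
print("estimated POG (pixels):", point_of_gaze(pcr, coeffs_x, coeffs_y))
```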

  1. On the use of Numerical Weather Models for improving SAR geolocation accuracy

    NASA Astrophysics Data System (ADS)

    Nitti, D. O.; Chiaradia, M.; Nutricato, R.; Bovenga, F.; Refice, A.; Bruno, M. F.; Petrillo, A. F.; Guerriero, L.

    2013-12-01

    Precise estimation and correction of the Atmospheric Path Delay (APD) is needed to ensure sub-pixel accuracy of geocoded Synthetic Aperture Radar (SAR) products, in particular for the new generation of high resolution side-looking SAR satellite sensors (TerraSAR-X, COSMO/SkyMED). The present work aims to assess the performances of operational Numerical Weather Prediction (NWP) Models as tools to routinely estimate the APD contribution, according to the specific acquisition beam of the SAR sensor for the selected scene on ground. The Regional Atmospheric Modeling System (RAMS) has been selected for this purpose. It is a finite-difference, primitive equation, three-dimensional non-hydrostatic mesoscale model, originally developed at Colorado State University [1]. In order to appreciate the improvement in target geolocation when accounting for APD, we need to rely on the SAR sensor orbital information. In particular, TerraSAR-X data are well-suited for this experiment, since recent studies have confirmed the few centimeter accuracy of their annotated orbital records (Science level data) [2]. A consistent dataset of TerraSAR-X stripmap images (Pol.:VV; Look side: Right; Pass Direction: Ascending; Incidence Angle: 34.0÷36.6 deg) acquired in Daunia in Southern Italy has been hence selected for this study, thanks also to the availability of six trihedral corner reflectors (CR) recently installed in the area covered by the imaged scenes and properly directed towards the TerraSAR-X satellite platform. The geolocation of CR phase centers is surveyed with cm-level accuracy using differential GPS (DGPS). The results of the analysis are shown and discussed. Moreover, the quality of the APD values estimated through NWP models will be further compared to those annotated in the geolocation grid (GEOREF.xml), in order to evaluate whether annotated corrections are sufficient for sub-pixel geolocation quality or not. Finally, the analysis will be extended to a limited number of

  2. Incorporating the effect of DEM resolution and accuracy for improved flood inundation mapping

    NASA Astrophysics Data System (ADS)

    Saksena, Siddharth; Merwade, Venkatesh

    2015-11-01

    Topography plays a major role in determining the accuracy of flood inundation areas. However, many areas in the United States and around the world do not have access to high quality topographic data in the form of Digital Elevation Models (DEM). For such areas, an improved understanding of the effects of DEM properties such as horizontal resolution and vertical accuracy on flood inundation maps may eventually lead to improved flood inundation modeling and mapping. This study attempts to relate the errors arising from DEM properties such as spatial resolution and vertical accuracy to flood inundation maps, and then use this relationship to create improved flood inundation maps from coarser resolution DEMs with low accuracy. The results from the five stream reaches used in this study show that water surface elevations (WSE) along the stream and the flood inundation area have a linear relationship with both DEM resolution and accuracy. This linear relationship is then used to extrapolate the water surface elevations from coarser resolution DEMs to get water surface elevations corresponding to a finer resolution DEM. Application of this approach shows that improved results can be obtained from flood modeling by using coarser and less accurate DEMs, including public domain datasets such as the National Elevation Dataset and Shuttle Radar Topography Mission (SRTM) DEMs. The improvement in the WSE and its application to obtain better flood inundation maps is dependent on the study reach characteristics such as land use, valley shape, reach length and width. Application of the approach presented in this study to more reaches may lead to development of guidelines for flood inundation mapping using coarser resolution and less accurate topographic datasets.
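
    A minimal sketch of the extrapolation idea, assuming hypothetical water surface elevations at one cross-section: a linear fit of WSE against DEM resolution is extrapolated to the finer target resolution.

```python
import numpy as np

dem_resolution_m = np.array([90.0, 30.0, 10.0])     # coarse DEMs actually available
wse_m = np.array([151.8, 151.1, 150.7])             # modelled WSE at one cross-section

slope, intercept = np.polyfit(dem_resolution_m, wse_m, 1)

def extrapolated_wse(target_resolution_m):
    """Estimate the WSE that a finer-resolution DEM would have produced."""
    return slope * target_resolution_m + intercept

print(f"estimated WSE for a 3 m DEM: {extrapolated_wse(3.0):.2f} m")
```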

  3. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging.

    PubMed

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J; Lu, Yang; Sellke, Eric W; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm’s burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm’s accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities. PMID:26305321
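
    A hedged simplification of the multistage procedure described above: the sketch below removes training samples whose value in any spectral band lies more than a chosen number of standard deviations from its class mean. The threshold and synthetic data are illustrative.

```python
import numpy as np

def remove_outliers(features, labels, z_threshold=3.0):
    """Drop samples whose value in any band deviates > z_threshold SDs from its class mean."""
    keep = np.ones(len(labels), dtype=bool)
    for cls in np.unique(labels):
        idx = labels == cls
        mu = features[idx].mean(axis=0)
        sigma = features[idx].std(axis=0) + 1e-12       # avoid division by zero
        z = np.abs((features[idx] - mu) / sigma)
        keep[idx] = (z < z_threshold).all(axis=1)
    return features[keep], labels[keep]

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 8))                 # 8 spectral bands
y = rng.integers(0, 3, size=500)                        # 3 tissue classes
X[::50] += 10.0                                         # inject gross outliers
X_clean, y_clean = remove_outliers(X, y)
print(f"kept {len(y_clean)} of {len(y)} training samples")
```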

  4. Outlier detection and removal improves accuracy of machine learning approach to multispectral burn diagnostic imaging

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Squiers, John J.; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2015-12-01

    Multispectral imaging (MSI) was implemented to develop a burn tissue classification device to assist burn surgeons in planning and performing debridement surgery. To build a classification model via machine learning, training data accurately representing the burn tissue was needed, but assigning raw MSI data to appropriate tissue classes is prone to error. We hypothesized that removing outliers from the training dataset would improve classification accuracy. A swine burn model was developed to build an MSI training database and study an algorithm's burn tissue classification abilities. After the ground-truth database was generated, we developed a multistage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data. Test accuracy was improved from 63% to 76%, matching the accuracy of clinical judgment of expert burn surgeons, the current gold standard in burn injury assessment. Given that there are few surgeons and facilities specializing in burn care, this technology may improve the standard of burn care for patients without access to specialized facilities.

  5. Improved speed and accuracy of calculations with a programmable calculator in pediatric emergency scenarios.

    PubMed

    Melzer-Lange, M; Wyatt, D; Walsh-Kelly, C; Smith, D; Hegenbarth, M A; Eisenberg, C S

    1991-03-01

    Both mathematical and selection errors may occur when ordering drug or fluid therapy in a busy emergency department. In an attempt to improve the speed and accuracy of such calculations, we programmed a hand-held calculator to assist in drug and intravenous fluid therapy dosages and rates for three emergency situations: diabetic ketoacidosis, asthma, and asystole. Performance by 58 subjects at various levels of training was compared when using either the programmable calculator or standard materials and methods. When standard methods were used, an average of 30.6 minutes was needed to complete the three scenarios, with an accuracy of 73%; by contrast, use of the programmable calculator resulted in a significant decline in the time needed to calculate doses (an average of only 8.5 minutes), with an improved accuracy of 98%. The use of a programmable calculator can result in a significant improvement in both speed and accuracy of drug and fluid selection and dosage and rate calculations, regardless of the level of the subject's medical training. PMID:1900657

  6. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage.

    PubMed

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user

  7. Convergence, Divergence, and Reconvergence in a Feedforward Network Improves Neural Speed and Accuracy.

    PubMed

    Jeanne, James M; Wilson, Rachel I

    2015-12-01

    One of the proposed canonical circuit motifs employed by the brain is a feedforward network where parallel signals converge, diverge, and reconverge. Here we investigate a network with this architecture in the Drosophila olfactory system. We focus on a glomerulus whose receptor neurons converge in an all-to-all manner onto six projection neurons that then reconverge onto higher-order neurons. We find that both convergence and reconvergence improve the ability of a decoder to detect a stimulus based on a single neuron's spike train. The first transformation implements averaging, and it improves peak detection accuracy but not speed; the second transformation implements coincidence detection, and it improves speed but not peak accuracy. In each case, the integration time and threshold of the postsynaptic cell are matched to the statistics of convergent spike trains. PMID:26586183

  8. Can simultaneously acquired electrodermal activity improve accuracy of fMRI detection of deception?

    PubMed

    Kozel, F Andrew; Johnson, Kevin A; Laken, Steven J; Grenesko, Emily L; Smith, Joshua A; Walker, John; George, Mark S

    2009-01-01

    Observation of changes in autonomic arousal was one of the first methodologies used to detect deception. Electrodermal activity (EDA) is a peripheral measure of autonomic arousal and one of the primary channels used in polygraph exams. In an attempt to develop a more central measure to identify lies, the use of functional magnetic resonance imaging (fMRI) to detect deception is being investigated. We wondered if adding EDA to our fMRI analysis would improve our diagnostic ability. For our approach, however, adding EDA did not improve the accuracy in a laboratory-based deception task. In testing for brain regions that replicated as correlates of EDA, we did find significant associations in right orbitofrontal and bilateral anterior cingulate regions. Further work is required to test whether EDA improves accuracy in other testing formats or with higher levels of jeopardy. PMID:18633826

  9. HEAT: High accuracy extrapolated ab initio thermochemistry. III. Additional improvements and overview.

    SciTech Connect

    Harding, M. E.; Vazquez, J.; Ruscic, B.; Wilson, A. K.; Gauss, J.; Stanton, J. F.; Chemical Sciences and Engineering Division; Universität Mainz; The Univ. of Texas; Univ. of North Texas

    2008-01-01

    Effects of increased basis-set size as well as a correlated treatment of the diagonal Born-Oppenheimer approximation are studied within the context of the high-accuracy extrapolated ab initio thermochemistry (HEAT) theoretical model chemistry. It is found that the addition of these ostensible improvements does little to increase the overall accuracy of HEAT for the determination of molecular atomization energies. Fortuitous cancellation of high-level effects is shown to give the overall HEAT strategy an accuracy that is, in fact, higher than most of its individual components. In addition, the issue of core-valence electron correlation separation is explored; it is found that approximate additive treatments of the two effects have limitations that are significant in the realm of <1 kJ mol{sup -1} theoretical thermochemistry.

  10. Healthcare systems engineering: an interdisciplinary approach to achieving continuous improvement.

    PubMed

    Wu, Bin; Klein, Cerry; Stone, Tamara T

    2006-01-01

    This paper argues that a systems approach can significantly enhance healthcare improvement efforts in patient safety, service quality and healthcare cost containment. The application of systems thinking to healthcare improvement encompasses three key principles: the systems perspective of healthcare processes, structured problem solving and the closed loop of continuous system improvement. These are encapsulated in a conceptual framework of continuous system improvement, which includes a reference architecture model and an analysis and design process model. Combined into a closed-loop, this framework allows users to understand and appropriately apply relevant functions, issues and analytical techniques. Practical applications of the framework are presented. PMID:18048245

  11. Cognitive Processing Profiles of School-Age Children Who Meet Low-Achievement, IQ-Discrepancy, or Dual Criteria for Underachievement in Oral Reading Accuracy

    ERIC Educational Resources Information Center

    Van Santen, Frank W.

    2012-01-01

    The purpose of this study was to compare the cognitive processing profiles of school-age children (ages 7 to 17) who met criteria for underachievement in oral reading accuracy based on three different methods: 1) use of a regression-based IQ-achievement discrepancy only (REGonly), 2) use of a low-achievement cutoff only (LAonly), and 3) use of a…

  12. A New Regional 3-D Velocity Model of the India-Pakistan Region for Improved Event Location Accuracy

    NASA Astrophysics Data System (ADS)

    Reiter, D.; Vincent, C.; Johnson, M.

    2001-05-01

    A 3-D velocity model for the crust and upper mantle (WINPAK3D) has been developed to improve regional event location in the India-Pakistan region. Results of extensive testing demonstrate that the model improves location accuracy for this region, specifically for the case of small regionally recorded events, for which teleseismic data may not be available. The model was developed by integrating the results of more than sixty previous studies related to crustal velocity structure in the region. We evaluated the validity of the 3-D model using the following methods: (1) cross validation analysis for a variety of events, (2) comparison of model determined hypocenters with known event location, and (3) comparison of model-derived and empirically-derived source-specific station corrections (SSSC) generated for the International Monitoring System (IMS) auxiliary seismic station located at Nilore. The 3-D model provides significant improvement in regional location compared to both global and regional 1-D models in this area of complex structural variability. For example, the epicenter mislocation for an event with a well known location was only 6.4 km using the 3-D model, compared with a mislocation of 13.0 km using an average regional 1-D model and 15.1 km for the IASPEI91 model. We will present these and other results to demonstrate that 3-D velocity models are essential to improving event location accuracy in regions with complicated crustal geology and structures. Such 3-D models will be a prerequisite for achieving improved location accuracies for regions of high monitoring interest.

  13. Analysis and improvement of accuracy, sensitivity, and resolution of the coherent gradient sensing method.

    PubMed

    Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin

    2016-06-10

    The coherent gradient sensing (CGS) method, a form of shear interferometry sensitive to surface slope, has been applied to full-field curvature measurement for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution of the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035

  14. Partnering through Training and Practice to Achieve Performance Improvement

    ERIC Educational Resources Information Center

    Lyons, Paul R.

    2010-01-01

    This article presents a partnership effort among managers, trainers, and employees to spring to life performance improvement using the performance templates (P-T) approach. P-T represents a process model as well as a method of training leading to performance improvement. Not only does it add to our repertoire of training and performance management…

  15. Capacity Building for a School Improvement Program, Achievement Directed Leadership.

    ERIC Educational Resources Information Center

    Graeber, Anna O.; And Others

    This report describes and evaluates efforts to enhance school districts' capacity to implement and institutionalize the monitoring and management system for an instructional leadership program called Achievement Directed Leadership (ADL). Chapter one introduces the report's methodology, limitations, and structure. Chapter two first states the…

  16. Improving Secondary School Students' Achievement using Intrinsic Motivation

    ERIC Educational Resources Information Center

    Albrecht, Erik; Haapanen, Rebecca; Hall, Erin; Mantonya, Michelle

    2009-01-01

    This report describes a program for increasing students' intrinsic motivation in an effort to increase academic achievement. The targeted population consisted of secondary level students in a middle to upper-middle class suburban area. The students of the targeted secondary level classes appeared to be disengaged from learning due to a lack of…

  17. Does Video-Autotutorial Instruction Improve College Student Achievement?

    ERIC Educational Resources Information Center

    Fisher, K. M.; And Others

    1977-01-01

    Compares student achievement in an upper-division college introductory course taught by the video-autotutorial method with that in two comparable courses taught by the lecture-discussion method. Pre-post tests of 623 students reveal that video-autotutorial students outperform lecture/discussion participants at all ability levels and that in…

  18. Middle School Practices Improve Student Achievement in High Poverty Schools.

    ERIC Educational Resources Information Center

    Mertens, Steven B.; Flowers, Nancy

    2003-01-01

    Examined how interdisciplinary team practices and classroom instructional practices affected student achievement in high poverty middle schools in Arkansas, Louisiana, and Mississippi. Found that when the combined effects of family poverty level, teaming and common planning time, and duration of teaming were considered, there was a relationship…

  19. Helping Students Improve Academic Achievement and School Success Behavior

    ERIC Educational Resources Information Center

    Brigman, Greg; Campbell, Chari

    2003-01-01

    This article describes a study evaluating the impact of school-counselor-led interventions on student academic achievement and school success behavior. A group counseling and classroom guidance model called student success skills (SSS) was the primary intervention. The focus of the SSS model was on three sets of skills identified in several…

  20. Instruction and Achievement in Chicago Elementary Schools. Improving Chicago's Schools.

    ERIC Educational Resources Information Center

    Smith, Julia B.; Lee, Valerie E.; Newmann, Fred M.

    This study focused on the link between different forms of instruction and learning in Chicago, Illinois, elementary schools. It used teachers' survey reports about their instruction in the 1997 school year and linked these reports with achievement gains. The study tested the common assumption that the nature of standardized assessments…

  1. An Effective Way to Improve Mathematics Achievement in Urban Schools

    ERIC Educational Resources Information Center

    Kim, Taik

    2010-01-01

    The local Gaining Early Awareness and Readiness for Undergraduate Programs (GEARUP) partnership serves 11 K-8 schools with the lowest achievement scores and the highest poverty rates in a large Midwestern urban district. Recently, GEARUP launched a specially designed teaching program, Mathematics Enhancement Group (MEG), for underachievers in…

  2. Improved accuracy for finite element structural analysis via a new integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  3. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation method for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  4. Improve the ZY-3 Height Accuracy Using Icesat/glas Laser Altimeter Data

    NASA Astrophysics Data System (ADS)

    Li, Guoyuan; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-06-01

    ZY-3 is the first civilian high-resolution stereo mapping satellite, launched on 9 January 2012. The aim of the ZY-3 mission is to obtain high-resolution stereo images and to support 1:50,000-scale national surveying and mapping. Although ZY-3 achieves very high accuracy in direct geo-location without GCPs (Ground Control Points), some GCPs are still indispensable for high-precision stereo mapping. GLAS (the Geoscience Laser Altimeter System) was carried on ICESat (the Ice, Cloud, and land Elevation Satellite), the first laser altimetry satellite for Earth observation. After its launch in 2003, GLAS played an important role in monitoring polar ice sheets and in measuring land topography and vegetation canopy heights. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by suitable criteria. In this paper, ICESat/GLAS laser altimeter data are used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high-precision GLAS elevation data, and two strategies for improving the ZY-3 height accuracy are introduced. The first is conventional bundle adjustment based on the RFM and a bias-compensated model, in which the GLAS footprint data are treated as height control. The second corrects the DSM (Digital Surface Model) directly by simple block adjustment, where the DSM is derived from the ZY-3 stereo images after free-network adjustment and dense image matching. The experimental results demonstrate that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 m after adding the GLAS elevation data. Moreover, the accuracy and efficiency of the two strategies are compared for practical application.

  5. Has the use of computers in radiation therapy improved the accuracy in radiation dose delivery?

    NASA Astrophysics Data System (ADS)

    Van Dyk, J.; Battista, J.

    2014-03-01

    Purpose: It is well recognized that computer technology has had a major impact on the practice of radiation oncology. This paper addresses the question as to how these computer advances have specifically impacted the accuracy of radiation dose delivery to the patient. Methods: A review was undertaken of all the key steps in the radiation treatment process ranging from machine calibration to patient treatment verification and irradiation. Using a semi-quantitative scale, each stage in the process was analysed from the point of view of gains in treatment accuracy. Results: Our critical review indicated that computerization related to digital medical imaging (ranging from target volume localization, to treatment planning, to image-guided treatment) has had the most significant impact on the accuracy of radiation treatment. Conversely, the premature adoption of intensity-modulated radiation therapy has actually degraded the accuracy of dose delivery compared to 3-D conformal radiation therapy. While computational power has improved dose calibration accuracy through Monte Carlo simulations of dosimeter response parameters, the overall impact in terms of percent improvement is relatively small compared to the improvements accrued from 3-D/4-D imaging. Conclusions: As a result of computer applications, we are better able to see and track the internal anatomy of the patient before, during and after treatment. This has yielded the most significant enhancement to the knowledge of "in vivo" dose distributions in the patient. Furthermore, a much richer set of 3-D/4-D co-registered dose-image data is thus becoming available for retrospective analysis of radiobiological and clinical responses.

  6. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
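
    The abstract does not reproduce the formulas, but the core idea of fitting a Gaussian to each column's gray-level profile and gating on fit quality can be sketched as follows (a minimal illustration; the threshold, the "structural similarity" index and the compensation step of the published method are replaced here by a plain correlation check, which is an assumption rather than the authors' exact procedure):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(y, a, mu, sigma, offset):
        return a * np.exp(-0.5 * ((y - mu) / sigma) ** 2) + offset

    def stripe_centers(image, fit_quality_threshold=0.9):
        """Estimate the laser-stripe center in every image column.

        Returns sub-pixel row positions (NaN where the fit is too poor to trust)."""
        rows = np.arange(image.shape[0], dtype=float)
        centers = np.full(image.shape[1], np.nan)
        for col in range(image.shape[1]):
            profile = image[:, col].astype(float)
            p0 = [profile.max() - profile.min(), rows[np.argmax(profile)], 2.0, profile.min()]
            try:
                popt, _ = curve_fit(gaussian, rows, profile, p0=p0, maxfev=2000)
            except RuntimeError:
                continue  # fit did not converge; leave NaN for this column
            # crude stand-in for the Gaussian-fitting similarity index:
            quality = np.corrcoef(profile, gaussian(rows, *popt))[0, 1]
            if quality >= fit_quality_threshold:
                centers[col] = popt[1]  # Gaussian mean = stripe center (sub-pixel)
        return centers
    ```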

  7. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    PubMed

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these

  8. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
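
    As a concrete, minimal illustration of PCA-regularized LDA classification of per-volume response patterns (the placeholder data and dimensions are assumptions; the study's preprocessing, voxel selection and cross-validation scheme are more elaborate):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # X: one row per fMRI volume (voxels as features); y: stimulus label per volume.
    # Random placeholder data, used only to make the sketch runnable.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 5000))      # 150 volumes x 5000 voxels
    y = np.repeat(np.arange(30), 5)       # 30 stimulus classes, 5 repetitions each

    clf = make_pipeline(
        PCA(n_components=50),             # regularize LDA by working in PCA space
        LinearDiscriminantAnalysis(),
    )
    print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```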

  9. Improving the accuracy of gene expression profile classification with Lorenz curves and Gini ratios.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    Microarrays are a new technology with great potential to provide accurate medical diagnostics, help to find the right treatment for many diseases such as cancers, and provide a detailed genome-wide molecular portrait of cellular states. In this chapter, we show how Lorenz curves and Gini ratios can be modified to improve the accuracy of gene expression profile classification. Experimental results with different classification algorithms, combined with additional accuracy-improving techniques such as principal component analysis, correlation-based feature subset selection, and consistency subset evaluation, on the task of classifying lung adenocarcinomas from gene expression data show that our method finds more optimal genes than SAM. PMID:21431549
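
    The abstract names Lorenz curves and Gini ratios without giving the modified formulas; the following sketch only shows the unmodified quantities for a single gene's expression vector, so the chapter's actual scoring and classification steps are not reproduced here:

    ```python
    import numpy as np

    def lorenz_curve(values):
        """Points of the Lorenz (cumulative-share) curve, starting at the origin."""
        v = np.sort(np.asarray(values, dtype=float))
        cum = np.cumsum(v)
        return np.insert(cum / cum[-1], 0, 0.0)

    def gini_ratio(values):
        """Sample Gini coefficient: 0 = perfectly even profile, close to 1 = highly
        concentrated in a few samples."""
        v = np.sort(np.asarray(values, dtype=float))
        n = v.size
        ranks = np.arange(1, n + 1)
        return (2.0 * np.sum(ranks * v)) / (n * np.sum(v)) - (n + 1.0) / n

    expression = [0.2, 0.4, 3.1, 0.3, 5.6, 0.1]   # toy expression values for one gene
    print(round(gini_ratio(expression), 3))
    ```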

  10. Natural language processing with dynamic classification improves P300 speller accuracy and bit rate

    NASA Astrophysics Data System (ADS)

    Speier, William; Arnold, Corey; Lu, Jessica; Taira, Ricky K.; Pouratian, Nader

    2012-02-01

    The P300 speller is an example of a brain-computer interface that can restore functionality to victims of neuromuscular disorders. Although the most common application of this system has been communicating language, the properties and constraints of the linguistic domain have not to date been exploited when decoding brain signals that pertain to language. We hypothesized that combining the standard stepwise linear discriminant analysis with a Naive Bayes classifier and a trigram language model would increase the speed and accuracy of typing with the P300 speller. With integration of natural language processing, we observed significant improvements in accuracy and 40-60% increases in bit rate for all six subjects in a pilot study. This study suggests that integrating information about the linguistic domain can significantly improve signal classification.
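
    The fusion of an EEG classifier score with a trigram language-model prior can be sketched as a simple Naive-Bayes-style sum of log-probabilities (an illustration only; the function names, the weighting parameter and the dynamic-classification details are assumptions rather than the authors' implementation):

    ```python
    import math

    def pick_character(candidates, history, classifier_logprob, trigram_logprob, lm_weight=1.0):
        """Select the most likely next character for the P300 speller.

        candidates        : iterable of candidate characters
        history           : the last two typed characters, e.g. ('t', 'h')
        classifier_logprob: dict char -> log P(EEG features | char was attended)
        trigram_logprob   : function (c1, c2, c3) -> log P(c3 | c1, c2)"""
        best, best_score = None, -math.inf
        for ch in candidates:
            score = classifier_logprob[ch] + lm_weight * trigram_logprob(history[0], history[1], ch)
            if score > best_score:
                best, best_score = ch, score
        return best
    ```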

  11. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.; Jiang, N.; Dunn, A. K.; Yeh, H.-C.

    2015-10-01

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of TSUNAMI microscope by 1.7 fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
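
    A generic maximum-likelihood position estimate of this kind can be written in a few lines, given a forward model of the expected photon counts per detection channel (the forward model below is a hypothetical stand-in, not the TSUNAMI instrument response):

    ```python
    import numpy as np
    from scipy.stats import poisson

    def mle_z(counts, expected_counts_model, z_grid):
        """Axial position that maximizes the Poisson log-likelihood of the
        photon counts observed in each (time-gated) detection channel."""
        log_like = [poisson.logpmf(counts, expected_counts_model(z)).sum() for z in z_grid]
        return z_grid[int(np.argmax(log_like))]

    # hypothetical Gaussian channel responses, for illustration only:
    model = lambda z: np.array([50.0, 80.0, 80.0, 50.0]) * np.exp(
        -0.5 * ((z - np.array([-1.0, -0.3, 0.3, 1.0])) / 0.8) ** 2)
    print(mle_z(np.array([20, 60, 75, 40]), model, np.linspace(-2.0, 2.0, 401)))
    ```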

  12. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    SciTech Connect

    Liu, C.; Liu, Y.-L.; Perillo, E. P.; Jiang, N.; Dunn, A. K.; Yeh, H.-C.

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of TSUNAMI microscope by 1.7 fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  13. Optical fiber spectral attenuation measurement by using tunable laser sources to improve accuracy and uncertainty

    NASA Astrophysics Data System (ADS)

    Seah, Chee Hwee; Zhang, Jing; Xiang, Ning

    2015-07-01

    With reference to the IEC 60793-1-140 international standard on optical fibre attenuation measurement methods and test procedures, we studied optical fibre attenuation measurement by the cut-back method using tunable laser sources. Using a power-stabilised laser source, we measured the fibre attenuation in the wavelength ranges from 1270 nm to 1350 nm and from 1520 nm to 1620 nm with the cut-back technique. The power measurements before and after cut-back show better repeatability. In addition, evaluating the splicing losses before and after cut-back, as well as the effective refractive index (Neff), improves the accuracy of the calculated fibre attenuation. Our method improves accuracy, reduces measurement uncertainty, and thus enables us to establish our own optical fibre spectral attenuation standard.
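
    For readers unfamiliar with the cut-back technique, the attenuation coefficient follows from the powers measured before and after the fibre is cut back (a minimal sketch; the splice-loss and effective-refractive-index corrections discussed above are reduced here to a single optional term, and the numbers are illustrative):

    ```python
    import math

    def cutback_attenuation_db_per_km(p_long_mw, p_short_mw, length_long_km,
                                      length_short_km, splice_loss_db=0.0):
        """Spectral attenuation coefficient at one wavelength.

        p_long_mw  : power transmitted through the full fibre length
        p_short_mw : power transmitted through the short length left after cut-back"""
        total_loss_db = 10.0 * math.log10(p_short_mw / p_long_mw) - splice_loss_db
        return total_loss_db / (length_long_km - length_short_km)

    # Example: a 2.0 km fibre cut back to 2 m, powers in milliwatts.
    print(cutback_attenuation_db_per_km(0.92, 1.00, 2.0, 0.002))  # ~0.18 dB/km
    ```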

  14. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  15. Improving the Accuracy and Efficiency of Respiratory Rate Measurements in Children Using Mobile Devices

    PubMed Central

    Chiu, Michelle; Dunsmuir, Dustin; Zhou, Guohai; Dumont, Guy A.; Ansermino, J. Mark

    2014-01-01

    The recommended method for measuring respiratory rate (RR) is counting breaths for 60 s using a timer. This method is not efficient in a busy clinical setting. There is an urgent need for a robust, low-cost method that can help front-line health care workers to measure RR quickly and accurately. Our aim was to develop a more efficient RR assessment method. RR was estimated by measuring the median time interval between breaths obtained from tapping on the touch screen of a mobile device. The estimation was continuously validated by measuring consistency (% deviation from the median) of each interval. Data from 30 subjects estimating RR from 10 standard videos with a mobile phone application were collected. A sensitivity analysis and an optimization experiment were performed to verify that a RR could be obtained in less than 60 s; that the accuracy improves when more taps are included into the calculation; and that accuracy improves when inconsistent taps are excluded. The sensitivity analysis showed that excluding inconsistent tapping and increasing the number of tap intervals improved the RR estimation. Efficiency (time to complete measurement) was significantly improved compared to traditional methods that require counting for 60 s. There was a trade-off between accuracy and efficiency. The most balanced optimization result provided a mean efficiency of 9.9 s and a normalized root mean square error of 5.6%, corresponding to 2.2 breaths/min at a respiratory rate of 40 breaths/min. The obtained 6-fold increase in mean efficiency combined with a clinically acceptable error makes this approach a viable solution for further clinical testing. The sensitivity analysis illustrating the trade-off between accuracy and efficiency will be a useful tool to define a target product profile for any novel RR estimation device. PMID:24919062
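
    The median-interval idea described above is easy to make concrete (a minimal sketch; the 20% consistency limit and the minimum number of taps are illustrative values, not necessarily those used in the study):

    ```python
    import numpy as np

    def respiratory_rate_from_taps(tap_times_s, consistency_limit=0.20, min_taps=5):
        """Estimate respiratory rate (breaths/min) from screen-tap timestamps:
        take the median inter-tap interval and drop intervals deviating from
        the median by more than the consistency limit."""
        taps = np.asarray(tap_times_s, dtype=float)
        if taps.size < min_taps:
            return None  # not enough taps for a reliable estimate
        intervals = np.diff(taps)
        median = np.median(intervals)
        consistent = intervals[np.abs(intervals - median) / median <= consistency_limit]
        if consistent.size < min_taps - 1:
            return None
        return 60.0 / np.median(consistent)

    print(respiratory_rate_from_taps([0.0, 1.5, 3.1, 4.4, 6.0, 7.5]))  # 40.0 breaths/min
    ```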

  16. Burn injury diagnostic imaging device's accuracy improved by outlier detection and removal

    NASA Astrophysics Data System (ADS)

    Li, Weizhi; Mo, Weirong; Zhang, Xu; Lu, Yang; Squiers, John J.; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffery E.

    2015-05-01

    Multispectral imaging (MSI) was implemented to develop a burn diagnostic device that will assist burn surgeons in planning and performing burn debridement surgery by classifying burn tissue. In order to build a burn classification model, training data that accurately represents the burn tissue is needed. Acquiring accurate training data is difficult, in part because the labeling of raw MSI data to the appropriate tissue classes is prone to errors. We hypothesized that these difficulties could be surmounted by removing outliers from the training dataset, leading to an improvement in the classification accuracy. A swine burn model was developed to build an initial MSI training database and study an algorithm's ability to classify clinically important tissues present in a burn injury. Once the ground-truth database was generated from the swine images, we then developed a multi-stage method based on Z-test and univariate analysis to detect and remove outliers from the training dataset. Using 10-fold cross validation, we compared the algorithm's accuracy when trained with and without the presence of outliers. The outlier detection and removal method reduced the variance of the training data from wavelength space, and test accuracy was improved from 63% to 76%. Establishing this simple method of conditioning for the training data improved the accuracy of the algorithm to match the current standard of care in burn injury assessment. Given that there are few burn surgeons and burn care facilities in the United States, this technology is expected to improve the standard of burn care for burn patients with less access to specialized facilities.
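
    A per-class z-score filter of the kind referred to above can be sketched as follows (the published procedure is multi-stage and combines a Z-test with univariate analysis; this is a simplified stand-in with an assumed threshold):

    ```python
    import numpy as np

    def remove_outliers(X, y, z_threshold=3.0):
        """Drop training samples whose spectral features lie far from their class mean.

        X: (n_samples, n_wavelengths) reflectance features; y: tissue-class labels."""
        keep = np.ones(len(y), dtype=bool)
        for label in np.unique(y):
            idx = np.where(y == label)[0]
            cls = X[idx]
            z = np.abs(cls - cls.mean(axis=0)) / (cls.std(axis=0) + 1e-12)
            keep[idx] = (z < z_threshold).all(axis=1)  # reject if any wavelength is extreme
        return X[keep], y[keep]
    ```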

  17. Improving Ocean Color Data Products using a Purely Empirical Approach: Reducing the Requirement for Radiometric Calibration Accuracy

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2008-01-01

    Radiometric calibration is the foundation upon which ocean color remote sensing is built. Quality derived geophysical products, such as chlorophyll, are assumed to be critically dependent upon the quality of the radiometric calibration. Unfortunately, the goals of radiometric calibration are not typically met in global and large-scale regional analyses, and are especially deficient in coastal regions. The consequences of the uncertainty in calibration are very large in terms of global and regional ocean chlorophyll estimates. In fact, stability in global chlorophyll requires calibration accuracy much greater than the goals, which is beyond modern capabilities. Using a purely empirical approach, we show that stable and consistent global chlorophyll values can be achieved over very wide ranges of uncertainty. Furthermore, the approach yields statistically improved comparisons with in situ data, suggesting improved quality. The results suggest that accuracy requirements for radiometric calibration can be reduced if alternative empirical approaches are used.

  18. Improved spatial accuracy of functional maps in the rat olfactory bulb using supervised machine learning approach.

    PubMed

    Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro

    2016-08-15

    Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. PMID:27236085

  19. Error compensation method for improving the accuracy of biomodels obtained from CBCT data.

    PubMed

    Santolaria, J; Jiménez, R; Rada, M; Loscos, F

    2014-03-01

    This paper presents a method of improving the accuracy of the tridimensional reconstruction of human bone biomodels by means of tomography, with a view to finite element modelling or surgical planning, and the subsequent manufacturing using rapid prototyping technologies. It is focused on the analysis and correction of the results obtained by means of cone beam computed tomography (CBCT), which is used to digitalize non-superficial biological parts along with a gauge part with calibrated dimensions. A correction of both the threshold and the voxel size in the tomographic images and the final reconstruction is proposed. Finally, a comparison between a reconstruction of a gauge part using the proposed method and the reconstruction of that same gauge part using a standard method is shown. The increase in accuracy in the biomodel allows an improvement in medical applications based on image diagnosis, more accurate results in computational modelling, and improvements in surgical planning in situations in which the required accuracy directly affects the procedure's results. Thus, the subsequent constructed biomodel will be affected mainly by dimensional errors due to the additive manufacturing technology utilized, not because of the 3D reconstruction or the image acquisition technology. PMID:24080232

  20. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    PubMed Central

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and one devoted to drift correction (a canard-based correction fuze). The course correction modules were characterized by aerodynamic computations and flight dynamic investigations in order to analyze how the projectile's aerodynamic parameters affect deflection. The simulation results show that the impact accuracy of a conventional projectile can be improved using these course correction modules. The drag ring brake is found to be highly capable for range correction; deploying the drag brake early in the trajectory results in a large range correction, and the deployment time can be predefined according to the required range correction. The canard-based correction fuze, on the other hand, has a stronger effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  1. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    PubMed

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications, such as the course correction fuze, have been proposed to correct the range and drift of artillery projectiles. Course correction fuze concepts could provide an attractive and cost-effective solution for improving munitions accuracy. In this paper, trajectory correction is obtained using two kinds of course correction modules: one devoted to range correction (a drag ring brake) and one devoted to drift correction (a canard-based correction fuze). The course correction modules were characterized by aerodynamic computations and flight dynamic investigations in order to analyze how the projectile's aerodynamic parameters affect deflection. The simulation results show that the impact accuracy of a conventional projectile can be improved using these course correction modules. The drag ring brake is found to be highly capable for range correction; deploying the drag brake early in the trajectory results in a large range correction, and the deployment time can be predefined according to the required range correction. The canard-based correction fuze, on the other hand, has a stronger effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873

  2. Organizational management practices for achieving software process improvement

    NASA Technical Reports Server (NTRS)

    Kandt, Ronald Kirk

    2004-01-01

    The crisis in developing software has been known for over thirty years. Problems that existed in developing software in the early days of computing still exist today. These problems include the delivery of low-quality products, actual development costs that exceed expected development costs, and actual development time that exceeds expected development time. Several solutions have been offered to overcome our inability to deliver high-quality software on time and within budget. One of these solutions involves software process improvement. However, such efforts often fail because of organizational management issues. This paper discusses business practices that organizations should follow to improve their chances of initiating and sustaining successful software process improvement efforts.

  3. In search of improving the numerical accuracy of the k - ɛ model by a transformation to the k - τ model

    NASA Astrophysics Data System (ADS)

    Dijkstra, Yoeri M.; Uittenbogaard, Rob E.; van Kester, Jan A. Th. M.; Pietrzak, Julie D.

    2016-08-01

    This study presents a detailed comparison between the k - ɛ and k - τ turbulence models. It is demonstrated that the numerical accuracy of the k - ɛ turbulence model can be improved in geophysical and environmental high Reynolds number boundary layer flows. This is achieved by transforming the k - ɛ model to the k - τ model, so that both models use the same physical parametrisation. The models therefore only differ in numerical aspects. A comparison between the two models is carried out using four idealised one-dimensional vertical (1DV) test cases. The advantage of a 1DV model is that it is feasible to carry out convergence tests with grids containing 5 to several thousands of vertical layers. It is shown that the k - τ model is more accurate than the k - ɛ model in stratified and non-stratified boundary layer flows for grid resolutions between 10 and 100 layers. The k - τ model also shows a more monotonic convergence behaviour than the k - ɛ model. The price for the improved accuracy is about 20% more computational time for the k - τ model, which is due to additional terms in the model equations. The improved performance of the k - τ model is explained by the linearity of τ in the boundary layer and the better defined boundary condition.
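
    For orientation, the transformation rests on the standard definitions below (the paper's closure coefficients and boundary treatment are not reproduced in the abstract, so only the generic relations are shown):

    ```latex
    % The k-\tau model replaces the dissipation rate \varepsilon by the
    % turbulence time scale \tau while keeping the same physical closure:
    \[
      \tau = \frac{k}{\varepsilon},
      \qquad
      \nu_t = c_\mu \frac{k^{2}}{\varepsilon} = c_\mu\, k\, \tau .
    \]
    % In the logarithmic boundary layer \varepsilon \sim 1/y while \tau grows
    % (approximately) linearly with the wall distance y, which is the numerical
    % advantage referred to above.
    ```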

  4. Achieving process control through improved grinding techniques for ferrite materials

    SciTech Connect

    Bruce, J.

    1995-09-01

    In manufacturing soft ferrite materials the particle size of the raw material has a significant impact on the reactivity of calcination. The control of particle size distribution and final formulation at wet milling after calcining impacts the reactivity during sintering and the magnetic properties of the final product. This paper will deal with steps taken to improve process control during the grinding operations of raw material and calcine in soft ferrite production. Equipment modifications as well as changes to the grinding and material handling techniques will be included. All examples of process control and improvements will be supported by data.

  5. Achieving improved cycle efficiency via pressure gain combustors

    SciTech Connect

    Gemmen, R.S.; Janus, M.C.; Richards, G.A.; Norton, T.S.; Rogers, W.A.

    1995-04-01

    As part of the Department of Energy's Advanced Gas Turbine Systems Program, an investigation is being performed to evaluate "pressure gain" combustion systems for gas turbine applications. This paper presents experimental pressure gain and pollutant emission data from such combustion systems. Numerical predictions for certain combustor geometries are also presented. It is reported that for suitable aerovalved pulse combustor geometries studied experimentally, an overall combustor pressure gain of nearly 1 percent can be achieved. It is also shown that for one combustion system operating under typical gas turbine conditions, NOx and CO emissions are about 30 ppmv and 8 ppmv, respectively.

  6. Feedback to achieve improved sign-out technique.

    PubMed

    Doers, Matthew E; Beniwal-Patel, Poonam; Kuester, Jessica; Fletcher, Kathlyn E

    2015-01-01

    To maximize the quality of sign-out documents within the internal medicine residency, a quality improvement intervention was developed and implemented. Written sign-outs were collected from general medicine ward teams and graded using an 11-point checklist; in-person feedback was then given directly to the ward teams. Documentation of many of the 11 elements improved: mental status (22% to 66%, P < .0001), decisionality (40% to 66%, P < .0001), lab/test results (63% to 69%, P < .0001), level of acuity (34% to 50%, P < .0001), anticipatory guidance (69% to 82%, P < .0001), and future plans (35% to 38%, P < .0005). The use of vague language declined (41% to 26%, P < .0001). The mean total scores improved from 7.0 to 8.2 out of a possible 11 (P < .0001). As new house staff rotated onto the services, improvement over time was sustained with 1 feedback session per team, per month. Similar interventions could be made in other programs and other institutions. PMID:24878514

  7. ACHIEVING IRRIGATION RETURN FLOW QUALITY CONTROL THROUGH IMPROVED LEGAL SYSTEMS

    EPA Science Inventory

    The key to irrigated agricultural return flow quality control is proper utilization and management of the resource itself, and an accepted tool in our society is the law. This project is designed to develop legal alternatives that will facilitate the implementation of improved wa...

  8. Community Schools Seek to Improve High School Achievement, College Readiness

    ERIC Educational Resources Information Center

    Gilroy, Marilyn

    2011-01-01

    The Coalition for Community Schools, an alliance of more than 150 national, state, and local organizations, is bringing public schools in partnership with community resources to improve student success. While that might seem like an abstract idea, it has very concrete goals, such as boosting high school graduation rates and college readiness.…

  9. Achieving Continuous Improvement: Theories that Support a System Change.

    ERIC Educational Resources Information Center

    Armel, Donald

    Focusing on improvement is different than focusing on quality, quantity, customer satisfaction, and productivity. This paper discusses Open System Theory, and suggests ways to change large systems. Changing a system (meaning the way all the parts are connected) requires a considerable amount of data gathering and analysis. Choosing the proper…

  10. Using Curriculum-Based Measurement to Improve Achievement

    ERIC Educational Resources Information Center

    Clarke, Suzanne

    2009-01-01

    Response to intervention (RTI) is on the radar screen of most principals these days--finding out what it is, how it can improve teaching and learning, and what needs to be done to implement it effectively. One critical component of RTI that will require particular attention from principals is student progress monitoring, which is required in every…

  11. Improving Student Achievement in Solving Mathematical Word Problems.

    ERIC Educational Resources Information Center

    Roti, Joan; Trahey, Carol; Zerafa, Susan

    This report describes a program for improving students' comprehension of the language of mathematical problems. The targeted population consists of 5th and 6th grade multi-age students and multi-age learners with special needs at a middle school located outside a major city in a Midwestern community. Evidence for the existence of this problem…

  12. Improving International Research with Clinical Specimens: 5 Achievable Objectives

    PubMed Central

    LaBaer, Joshua

    2012-01-01

    Our increased interest in translational research has created a large demand for blood, tissue and other clinical samples, which find use in a broad variety of research including genomics, proteomics, and metabolomics. Hundreds of millions of dollars have been invested internationally on the collection, storage and distribution of samples. Nevertheless, many researchers complain in frustration about their inability to obtain relevant and/or useful samples for their research. Lack of access to samples, poor condition of samples, and unavailability of appropriate control samples have slowed our progress in the study of diseases and biomarkers. In this editorial, I focus on five major challenges that thwart clinical sample use for translational research and propose near term objectives to address them. They include: (1) defining our biobanking needs; (2) increasing the use of and access to standard operating procedures; (3) mapping inter-observer differences for use in normalizing diagnoses; (4) identifying natural internal protein controls; and (5) redefining the clinical sample paradigm by building partnerships with the public. In each case, I believe that we have the tools at hand required to achieve the objective within 5 years. Potential paths to achieve these objectives are explored. However we solve these problems, the future of proteomics depends on access to high quality clinical samples, collected under standardized conditions, accurately annotated and shared under conditions that promote the research we need to do. PMID:22998582

  13. Improving Decision Speed, Accuracy and Group Cohesion through Early Information Gathering in House-Hunting Ants

    PubMed Central

    Stroeymeyt, Nathalie; Giurfa, Martin; Franks, Nigel R.

    2010-01-01

    Background Successful collective decision-making depends on groups of animals being able to make accurate choices while maintaining group cohesion. However, increasing accuracy and/or cohesion usually decreases decision speed and vice-versa. Such trade-offs are widespread in animal decision-making and result in various decision-making strategies that emphasize either speed or accuracy, depending on the context. Speed-accuracy trade-offs have been the object of many theoretical investigations, but these studies did not consider the possible effects of previous experience and/or knowledge of individuals on such trade-offs. In this study, we investigated how previous knowledge of their environment may affect emigration speed, nest choice and colony cohesion in emigrations of the house-hunting ant Temnothorax albipennis, a collective decision-making process subject to a classical speed-accuracy trade-off. Methodology/Principal Findings Colonies allowed to explore a high quality nest site for one week before they were forced to emigrate found that nest and accepted it faster than emigrating naïve colonies. This resulted in increased speed in single choice emigrations and higher colony cohesion in binary choice emigrations. Additionally, colonies allowed to explore both high and low quality nest sites for one week prior to emigration remained more cohesive, made more accurate decisions and emigrated faster than emigrating naïve colonies. Conclusions/Significance These results show that colonies gather and store information about available nest sites while their nest is still intact, and later retrieve and use this information when they need to emigrate. This improves colony performance. Early gathering of information for later use is therefore an effective strategy allowing T. albipennis colonies to improve simultaneously all aspects of the decision-making process – i.e. speed, accuracy and cohesion – and partly circumvent the speed-accuracy trade-off classically

  14. Improvement in the accuracy of respiratory-gated radiation therapy using a respiratory guiding system

    NASA Astrophysics Data System (ADS)

    Kang, Seong-Hee; Kim, Dong-Su; Kim, Tae-Ho; Suh, Tae-Suk; Yoon, Jai-Woong

    2013-01-01

    The accuracy of respiratory-gated radiation therapy (RGRT) depends on the respiratory regularity because external respiratory signals are used for gating the radiation beam at particular phases. Many studies have applied a respiratory guiding system to improve the respiratory regularity. This study aims to evaluate the effect of an in-house-developed respiratory guiding system on improving the respiratory regularity for RGRT. To verify the effectiveness of this system, we acquired respiratory signals from five volunteers. The improvement in respiratory regularity was analyzed by comparing the standard deviations of the amplitudes and the periods between free and guided breathing. The reduction in residual motion at each phase was analyzed by comparing the standard deviations of the sorted data within each corresponding phase bin as obtained from free and guided breathing. The results indicate that the respiratory guiding system improves the respiratory regularity and that most of the volunteers showed significantly less average residual motion at each phase. The average residual motion measured at phases of 40, 50, and 60%, which showed lower variation than the other phases, was reduced by 41%, 45%, and 44%, respectively, during guided breathing. The results show that the accuracy of RGRT can be improved by using the in-house-developed respiratory guiding system. Furthermore, this system should reduce artifacts caused by respiratory motion in 4D CT imaging.
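
    The per-phase residual-motion analysis described above amounts to binning the breathing trace by phase and taking the standard deviation of the amplitude within each bin; a minimal sketch (the array names and the number of bins are assumptions):

    ```python
    import numpy as np

    def residual_motion_per_phase(amplitude, phase_percent, n_bins=10):
        """Standard deviation of the respiratory amplitude within each phase bin
        (0-100%), i.e. the residual motion a beam gated at that phase would see."""
        amplitude = np.asarray(amplitude, dtype=float)
        bins = np.floor(np.asarray(phase_percent, dtype=float) / (100.0 / n_bins)).astype(int) % n_bins
        return np.array([amplitude[bins == b].std() for b in range(n_bins)])

    # hypothetical usage, comparing free vs. guided breathing traces:
    # residual_free = residual_motion_per_phase(amp_free, phase_free)
    # residual_guided = residual_motion_per_phase(amp_guided, phase_guided)
    ```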

  15. SDOCKER: a method utilizing existing X-ray structures to improve docking accuracy.

    PubMed

    Wu, Guosheng; Vieth, Michal

    2004-06-01

    This paper introduces a new strategy for structure-based drug design that combines high-quality docking with data from existing ligand-protein cocrystal X-ray structures. The main goal of SDOCKER, a new algorithm that implements this strategy, is docking accuracy improvement. In this new paradigm, simulated annealing molecular dynamics is used for conformational sampling and optimization and an additional similarity force is applied on the basis of the positions of ligands from X-ray data that focus the sampling on relevant regions of the active site. Because the structural information from both the ligand and protein active site is included, this approach is more effective in finding the optimal conformation for a ligand-protein complex than the classical docking or similarity overlays. Interestingly, it was found that a 3D similarity-only approach gives comparable docking accuracy to the regular force field approach used in classical docking, given the final structures are minimized in the presence of the protein. The combination of both, as implemented in SDOCKER, is shown here to be more accurate. A significant improvement in docking accuracy has been observed for three different test systems. Specifically an improvement of 10%, 17.5%, and 10% is seen for 37 HIV-1 protease, 32 thrombin, and 23 CDK2 ligands, respectively, compared to docking using the force field alone. In addition, SDOCKER's accuracy performance dependence on the similarity template is discussed. The strategy of utilizing existing ligand X-ray information should prove effective in light of the multitude of structures available from structural genomics approaches. PMID:15163194

  16. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-11-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases in the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias, i.e. the reflectivity bias that occurs when measuring rainfall, this study used a bias correction algorithm in which the reflectivity of the target single-pol radars is corrected against a reference dual-pol radar whose own hardware and software biases have been corrected. The study then applied two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system, and the accuracy of the RAR system improved after correcting the Z bias. By rainfall type, although the accuracy for the Changma front and local torrential cases was slightly improved even without the Z bias correction, the accuracy for the typhoon cases in particular got worse than the existing results. As a result of the rainfall estimation bias correction, the Z bias_LGC was clearly superior to the MFBC method because the LGC method applies a different rainfall bias to each grid rainfall amount. By rainfall type, the Z bias_LGC results show that the rainfall estimates for all types were more accurate than with the Z bias alone, and the outcomes for the typhoon cases in particular were vastly superior to the others.
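
    The Mean Field Bias Correction mentioned above reduces to a single multiplicative factor, which is why the gauge-by-gauge LGC can outperform it; a minimal sketch with toy numbers (the KMA operational implementation is more elaborate):

    ```python
    import numpy as np

    def mean_field_bias(gauge_mm, radar_mm):
        """One bias factor for the whole radar field: total gauge rainfall divided
        by the total collocated radar rainfall."""
        return np.asarray(gauge_mm, float).sum() / np.asarray(radar_mm, float).sum()

    def apply_mfbc(radar_field_mm, gauge_mm, radar_at_gauges_mm):
        """Mean Field Bias Correction: scale every radar grid value by the factor."""
        return np.asarray(radar_field_mm, float) * mean_field_bias(gauge_mm, radar_at_gauges_mm)

    print(mean_field_bias([5.0, 12.0, 3.5], [4.0, 9.0, 3.0]))  # ~1.28: radar underestimates
    # A Local Gauge Correction instead derives a bias per grid cell by interpolating
    # gauge/radar ratios (e.g. inverse-distance weighting) before applying it.
    ```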

  17. Dynamic sea surface topography, gravity and improved orbit accuracies from the direct evaluation of SEASAT altimeter data

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Lerch, F.; Koblinsky, C. J.; Klosko, S. M.; Robbins, J. W.; Williamson, R. G.; Patel, G. B.

    1989-01-01

    A method for the simultaneous solution of dynamic ocean topography, gravity and orbits using satellite altimeter data is described. A GEM-T1 based gravitational model called PGS-3337 that incorporates Seasat altimetry, surface gravimetry and satellite tracking data has been determined complete to degree and order 50. The altimeter data are utilized as a dynamic observation of the satellite's height above the sea surface, with a degree 10 model of dynamic topography being recovered simultaneously with the orbit parameters, gravity and tidal terms in this model. PGS-3337 has a geoid uncertainty of 60 cm root-mean-square (RMS) globally, with the uncertainty over the altimeter-tracked ocean being in the 25 cm range. Doppler-determined orbits for Seasat show large improvements, with sub-30 cm radial accuracies being achieved. When altimeter data are used in orbit determination, radial orbital accuracies of 20 cm are achieved. The RMS of fit to the altimeter data directly gives 30 cm fits for Seasat when using PGS-3337 and its geoid and dynamic topography model. This performance level is two to three times better than that achieved with earlier Goddard earth models (GEM) using the dynamic topography from long-term oceanographic averages. The recovered dynamic topography reveals the global long-wavelength circulation of the oceans with a resolution of 1500 km. The power in the dynamic topography recovery is now found to be closer to that of oceanographic studies than for previous satellite solutions. This is attributed primarily to the improved modeling of the geoid. Study of the altimeter residuals reveals regions where tidal models are poor and where sea state effects are major limitations.

  18. Improving geolocation and spatial accuracies with the modular integrated avionics group (MIAG)

    NASA Astrophysics Data System (ADS)

    Johnson, Einar; Souter, Keith

    1996-05-01

    The modular integrated avionics group (MIAG) is a single unit approach to combining position, inertial and baro-altitude/air data sensors to provide optimized navigation, guidance and control performance. Lear Astronics Corporation is currently working within the navigation community to upgrade existing MIAG performance with precise GPS positioning mechanization tightly integrated with inertial, baro and other sensors. Among the immediate benefits are the following: (1) accurate target location in dynamic conditions; (2) autonomous launch and recovery using airborne avionics only; (3) precise flight path guidance; and (4) improved aircraft and payload stability information. This paper will focus on the impact of using the MIAG with its multimode navigation accuracies on the UAV targeting mission. Gimbaled electro-optical sensors mounted on a UAV can be used to determine ground coordinates of a target at the center of the field of view by a series of vector rotation and scaling computations. The accuracy of the computed target coordinates is dependent on knowing the UAV position and the UAV-to-target offset computation. Astronics performed a series of simulations to evaluate the effects that the improved angular and position data available from the MIAG have on target coordinate accuracy.
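
    The vector computation behind target geolocation can be illustrated in a deliberately simplified form (flat terrain, the line-of-sight azimuth and depression angle already expressed in the local east-north-up frame, and no aircraft-attitude or Earth-curvature terms; the operational MIAG computation chains several additional rotations, as the abstract notes):

    ```python
    import math

    def target_coordinates_enu(uav_e, uav_n, uav_alt, terrain_alt, az_deg, depression_deg):
        """Ground intersection of the sensor line of sight.

        az_deg         : line-of-sight azimuth, clockwise from north
        depression_deg : angle below the horizontal (must be > 0)"""
        az = math.radians(az_deg)
        dep = math.radians(depression_deg)
        # unit line-of-sight vector in east-north-up coordinates
        los = (math.cos(dep) * math.sin(az), math.cos(dep) * math.cos(az), -math.sin(dep))
        slant = (uav_alt - terrain_alt) / math.sin(dep)  # range to the ground plane
        return uav_e + slant * los[0], uav_n + slant * los[1], terrain_alt

    # UAV 1000 m above flat terrain, looking 30 deg below the horizon toward the north-east:
    print(target_coordinates_enu(0.0, 0.0, 1000.0, 0.0, az_deg=45.0, depression_deg=30.0))
    ```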

  19. Student Achievement Goal Setting: Using Data to Improve Teaching and Learning

    ERIC Educational Resources Information Center

    Stronge, James H.; Grant, Leslie W.

    2009-01-01

    The first book in the James H. Stronge Research-to-Practice series focuses on improving student achievement through academic goal setting. It offers the tools and plan of action to use performance data to improve instructional practice and increase student achievement. The book is divided into three parts: (1) How Student Achievement Data Can Be…

  20. [Improving the Care Accuracy of Percutaneously Inserted Central Catheters Using Objective Structured Clinical Examination].

    PubMed

    Yang, Pei-Hsin; Hsu, Hsin-Chieh; Chiang, Chia-Chin; Tseng, Yun-Shan

    2016-06-01

    Approximately 9,800 adverse events related to medical tubing are reported in Taiwan every year. Most neonates in critical condition and premature infants acquire fluid, nutrition, and infusion solution using percutaneously inserted central catheters (PICCs). Objective structured clinical examination (OSCE) is an objective evaluative tool that may be used to measure the clinical competence of healthcare professionals. Very little is known about the effects of OSCE in Taiwan in terms of improving the accuracy of use of PICCs in nursing care and of reducing unexpected medical tubing removals. The present project aimed to explore the effects of an OSCE course on these two issues in the realms of standard operating procedures, care protocols, and training equipment at a neonatal intermediate unit in Taiwan. The duration of the present study ran from 2/20/2013 to 10/30/2013. The results showed that nurses' knowledge of PICCs improved from 87% to 91.5%; nurses' skill-care accuracy related to PICCs improved from 59.1% to 97.3%; and incidents of unexpected tube removals declined from 63.6% to 16.7%. This project demonstrated that OSCE courses improve the quality of PICC nursing care. Additionally, the instant feedback mechanism within the OSCE course benefited both teachers and students. PMID:27250965

  1. Improving the accuracy of convexity splitting methods for gradient flow equations

    NASA Astrophysics Data System (ADS)

    Glasner, Karl; Orizaga, Saulo

    2016-06-01

    This paper introduces numerical time discretization methods which significantly improve the accuracy of the convexity-splitting approach of Eyre (1998) [7], while retaining the same numerical cost and stability properties. A first order method is constructed by iteration of a semi-implicit method based upon decomposing the energy into convex and concave parts. A second order method is also presented based on backwards differentiation formulas. Several extrapolation procedures for iteration initialization are proposed. We show that, under broad circumstances, these methods have an energy decreasing property, leading to good numerical stability. The new schemes are tested using two evolution equations commonly used in materials science: the Cahn-Hilliard equation and the phase field crystal equation. We find that our methods can increase accuracy by many orders of magnitude in comparison to the original convexity-splitting algorithm. In addition, the optimal methods require little or no iteration, making their computation cost similar to the original algorithm.
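
    For context, the baseline first-order convexity-splitting step being improved upon can be stated compactly (assuming the usual decomposition of the free energy into convex and concave parts; the paper's iterated and BDF variants are not reproduced here):

    ```latex
    % Gradient flow  u_t = -\,\nabla_u E(u)  with  E = E_c + E_e,
    % E_c convex (treated implicitly), E_e concave (treated explicitly):
    \[
      \frac{u^{\,n+1} - u^{\,n}}{\Delta t}
      \;=\;
      -\,\frac{\delta E_c}{\delta u}\!\left(u^{\,n+1}\right)
      \;-\;\frac{\delta E_e}{\delta u}\!\left(u^{\,n}\right).
    \]
    % This step is unconditionally gradient-stable but only first-order accurate;
    % the methods above iterate and extrapolate it (and use BDF formulas) to cut
    % the time-discretization error at essentially the same cost.
    ```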

  2. A calibration procedure for load cells to improve accuracy of mini-lysimeters in monitoring evapotranspiration

    NASA Astrophysics Data System (ADS)

    Misra, R. K.; Padhi, J.; Payero, J. O.

    2011-08-01

    We used twelve load cells (20 kg capacity) in a mini-lysimeter system to measure evapotranspiration simultaneously from twelve plants growing in separate pots in a glasshouse. A data logger combined with a multiplexer was used to connect all load cells with the full-bridge excitation mode to acquire the load-cell signals. Each load cell was calibrated using fixed loads within the range of 0-0.8 times the full load capacity of the load cells. Performance of all load cells was assessed on the basis of signal settling time, excitation compensation, hysteresis and temperature. Final calibration of the load cells included statistical consideration of these effects to allow prediction of lysimeter weights and evapotranspiration over short time intervals for improved accuracy and sustained performance. Analysis of the costs for the mini-lysimeter system indicates that evapotranspiration can be measured economically at a reasonable accuracy and sufficient resolution with a robust method of load-cell calibration.
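    A minimal sketch of the kind of least-squares calibration described, fitting load as a linear function of the load-cell signal and temperature. The model form, variable names and simulated data are assumptions for illustration; the authors' statistical treatment of settling time, excitation compensation and hysteresis is not reproduced.

    ```python
    import numpy as np

    def fit_calibration(signal_mv_v, temperature_c, known_load_kg):
        """Least-squares fit: load = a*signal + b*temperature + c.

        signal_mv_v   : load-cell output (e.g. mV/V) at each calibration point
        temperature_c : air temperature recorded at each point
        known_load_kg : the fixed reference loads applied
        Returns (coefficients, RMS residual in kg).
        """
        A = np.column_stack([signal_mv_v, temperature_c, np.ones_like(signal_mv_v)])
        coef, *_ = np.linalg.lstsq(A, known_load_kg, rcond=None)
        rms = np.sqrt(np.mean((A @ coef - known_load_kg) ** 2))
        return coef, rms

    def predict_load(coef, signal_mv_v, temperature_c):
        return coef[0] * signal_mv_v + coef[1] * temperature_c + coef[2]

    # Hypothetical calibration run: five fixed loads repeated at two temperatures.
    loads = np.array([0, 4, 8, 12, 16, 0, 4, 8, 12, 16], dtype=float)
    temps = np.array([18] * 5 + [28] * 5, dtype=float)
    signal = 0.12 * loads + 0.003 * (temps - 20) + np.random.normal(0, 0.01, loads.size)
    coef, rms = fit_calibration(signal, temps, loads)
    print("coefficients:", coef, "RMS residual (kg):", rms)
    ```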

  3. An Initial Study of Airport Arrival Capacity Benefits Due to Improved Scheduling Accuracy

    NASA Technical Reports Server (NTRS)

    Meyn, Larry; Erzberger, Heinz

    2005-01-01

    The long-term growth rate in air-traffic demand leads to future air-traffic densities that are unmanageable by today's air-traffic control system. In order to accommodate such growth, new technology and operational methods will be needed in the next generation air-traffic control system. One proposal for such a system is the Automated Airspace Concept (AAC). One of the precepts of AAC is to direct aircraft using trajectories that are sent via an air-ground data link. This greatly improves the accuracy in directing aircraft to specific waypoints at specific times. Studies of the Center-TRACON Automation System (CTAS) have shown that increased scheduling accuracy enables increased arrival capacity at CTAS equipped airports.

  4. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The
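    The quantity participants were asked to judge is the 95th percentile of a log-normal exposure distribution estimated from a small data set. The paper's specific rule of thumb is not given in the abstract; the sketch below shows the standard parametric estimate, exp(mean(ln x) + 1.645·sd(ln x)), which is the reference quantity such rules approximate. The sample values are hypothetical.

    ```python
    import numpy as np

    def lognormal_p95(samples_mg_m3):
        """Parametric estimate of the 95th percentile of a log-normal exposure
        distribution from a small sample: exp(mean(ln x) + 1.645 * sd(ln x))."""
        logs = np.log(np.asarray(samples_mg_m3, dtype=float))
        gm = np.exp(logs.mean())                 # geometric mean
        gsd = np.exp(logs.std(ddof=1))           # geometric standard deviation
        return gm * gsd ** 1.645, gm, gsd

    # Hypothetical set of five air samples (mg/m^3) for one task-chemical pair.
    x95, gm, gsd = lognormal_p95([0.12, 0.25, 0.08, 0.31, 0.18])
    print(f"GM={gm:.3f}, GSD={gsd:.2f}, estimated 95th percentile={x95:.3f} mg/m^3")
    ```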

  5. Improving image accuracy of region-of-interest in cone-beam CT using prior image.

    PubMed

    Lee, Jiseoc; Kim, Jin Sung; Cho, Seungryong

    2014-01-01

    In diagnostic follow-ups of diseases, such as calcium scoring in kidney or fat content assessment in liver using repeated CT scans, quantitatively accurate and consistent CT values are desirable at a low cost of radiation dose to the patient. The region-of-interest (ROI) imaging technique is considered a reasonable dose reduction method in CT scans for its shielding geometry outside the ROI. However, image artifacts in the reconstructed images caused by missing data outside the ROI may degrade overall image quality and, more importantly, can decrease image accuracy of the ROI substantially. In this study, we propose a method to increase image accuracy of the ROI and to reduce imaging radiation dose by utilizing the outside-ROI data from prior scans in repeated CT applications. We performed both numerical and experimental studies to validate our proposed method. In a numerical study, we used an XCAT phantom with its liver and stomach changing their sizes from one scan to another. Image accuracy of the liver was improved as the error decreased from 44.4 HU to -0.1 HU by the proposed method, compared to an existing method of data extrapolation to compensate for the missing data outside the ROI. Repeated cone-beam CT (CBCT) images of a patient who went through daily CBCT scans for radiation therapy were also used to demonstrate the performance of the proposed method experimentally. The results showed improved image accuracy inside the ROI. The magnitude of error decreased from -73.2 HU to 18 HU, and image artifacts were effectively reduced throughout the entire image. PMID:24710451

  6. Improving CID, HCD, and ETD FT MS/MS degradome-peptidome identifications using high accuracy mass information

    SciTech Connect

    Shen, Yufeng; Tolic, Nikola; Purvine, Samuel O.; Smith, Richard D.

    2011-11-07

    The peptidome (i.e. processed and degraded forms of proteins) of e.g. blood can potentially provide insights into disease processes, as well as a source of candidate biomarkers that are unobtainable using conventional bottom-up proteomics approaches. MS dissociation methods, including CID, HCD, and ETD, can each contribute distinct identifications using conventional peptide identification methods (Shen et al. J. Proteome Res. 2011), but such samples still pose significant analysis and informatics challenges. In this work, we explored a simple approach for better utilization of high accuracy fragment ion mass measurements provided e.g. by FT MS/MS and demonstrate significant improvements relative to conventional descriptive and probabilistic scoring methods. For example, at the same FDR level we identified 20-40% more peptides than SEQUEST and Mascot scoring methods using high accuracy fragment ion information (e.g., <10 mass errors) from CID, HCD, and ETD spectra. Species identified covered >90% of all those identified from SEQUEST, Mascot, and MS-GF scoring methods. Additionally, we found that merging the different fragment spectra provided >60% more species using the UStags method than achieved previously, and enabled >1000 peptidome components to be identified from a single human blood plasma sample with a 0.6% peptide-level FDR, providing an improved basis for investigation of potentially disease-related peptidome components.

  7. Improving Neural Network Prediction Accuracy for PM10 Individual Air Quality Index Pollution Levels

    PubMed Central

    Feng, Qi; Wu, Shengjun; Du, Yun; Xue, Huaiping; Xiao, Fei; Ban, Xuan; Li, Xiaodong

    2013-01-01

    Fugitive dust deriving from construction sites is a serious local source of particulate matter (PM) that leads to air pollution in cities undergoing rapid urbanization in China. In spite of this fact, no study has yet been published relating to prediction of high levels of PM with diameters <10 μm (PM10), as adjudicated by the Individual Air Quality Index (IAQI), attributable to fugitive dust from nearby construction sites. To combat this problem, the Construction Influence Index (Ci) is introduced in this article to improve forecasting models based on three neural network models (multilayer perceptron, Elman, and support vector machine) in predicting daily PM10 IAQI one day in advance. To obtain acceptable forecasting accuracy, measured time series data were decomposed into wavelet representations and wavelet coefficients were predicted. The effectiveness of these forecasters was tested using a time series recorded between January 1, 2005, and December 31, 2011, at six monitoring stations situated within the urban area of the city of Wuhan, China. Experimental trials showed that the improved models provided lower root mean square error and mean absolute error values in comparison to the original models. In addition, these improved models resulted in higher values of the coefficient of determination and AHPC (the accuracy rate of high PM10 IAQI caused by nearby construction activity) compared to the original models when predicting high PM10 IAQI levels attributable to fugitive dust from nearby construction sites. PMID:24381481

  8. Application of Digital Image Correlation Method to Improve the Accuracy of Aerial Photo Stitching

    NASA Astrophysics Data System (ADS)

    Tung, Shih-Heng; Jhou, You-Liang; Shih, Ming-Hsiang; Hsiao, Han-Wei; Sung, Wen-Pei

    2016-04-01

    Satellite images and traditional aerial photos have been used in remote sensing for a long time. However, these images have some problems: the resolution of satellite images is insufficient, the cost of obtaining traditional aerial photos is relatively high, and traditional flights also carry human safety risks. These issues limit the application of such images. In recent years, control technology for unmanned aerial vehicles (UAVs) has developed rapidly, and UAVs are now widely used for obtaining aerial photos. Compared to satellite images and traditional aerial photos, aerial photos obtained using a UAV have the advantages of higher resolution and lower cost. Because there is no crew on board, a UAV can still take aerial photos under unstable weather conditions. Images first have to be orthorectified and their distortion corrected. Then, with the help of image matching techniques and control points, these images can be stitched or used to establish a DEM of the ground surface. These images or DEM data can be used to monitor landslides or estimate landslide volume. For image matching, methods such as the Harris corner detector, SIFT, or SURF can be used to extract and match feature points. However, the matching accuracy of these methods is at the pixel or sub-pixel level, whereas the digital image correlation (DIC) method can reach about 0.01 pixel. Therefore, this study applies the digital image correlation method to match the extracted feature points, and the stitched images are then examined to judge the improvement. This study uses aerial photos of a reservoir area, stitched with and without the help of DIC. The results show that misplacement in the stitched image is significantly reduced when DIC is used to match the feature points. This shows that the use of DIC to match feature points can actually improve the accuracy of

  9. A simple method for improving the time-stepping accuracy in atmosphere and ocean models

    NASA Astrophysics Data System (ADS)

    Williams, P. D.

    2012-12-01

    In contemporary numerical simulations of the atmosphere and ocean, evidence suggests that time-stepping errors may be a significant component of total model error, on both weather and climate time-scales. This presentation will review the available evidence, and will then suggest a simple but effective method for substantially improving the time-stepping numerics at no extra computational expense. A common time-stepping method in atmosphere and ocean models is the leapfrog scheme combined with the Robert-Asselin (RA) filter. This method is used in the following models (and many more): ECHAM, MAECHAM, MM5, CAM, MESO-NH, HIRLAM, KMCM, LIMA, SPEEDY, IGCM, PUMA, COSMO, FSU-GSM, FSU-NRSM, NCEP-GFS, NCEP-RSM, NSEAM, NOGAPS, RAMS, and CCSR/NIES-AGCM. Although the RA filter controls the time-splitting instability, it also introduces non-physical damping and reduces the accuracy. This presentation proposes a simple modification to the RA filter, which has become known as the RAW filter (Williams 2009, 2011). When used in conjunction with the leapfrog scheme, the RAW filter eliminates the non-physical damping and increases the amplitude accuracy by two orders, yielding third-order accuracy. (The phase accuracy remains second-order.) The RAW filter can easily be incorporated into existing models, typically via the insertion of just a single line of code. Better simulations are obtained at no extra computational expense. Results will be shown from recent implementations of the RAW filter in various models, including SPEEDY and COSMO. For example, in SPEEDY, the skill of weather forecasts is found to be significantly improved. In particular, in tropical surface pressure predictions, five-day forecasts made using the RAW filter have approximately the same skill as four-day forecasts made using the RA filter (Amezcua, Kalnay & Williams 2011). These improvements are encouraging for the use of the RAW filter in other atmosphere and ocean models. References PD Williams (2009) A
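    A minimal sketch of the leapfrog scheme with the RA/RAW time filter applied to the oscillation test equation du/dt = iωu. Setting alpha = 1 recovers the classical Robert-Asselin filter, while alpha ≈ 0.53 corresponds to the RAW variant; the filter coefficient, step size and step count are illustrative assumptions, and the exact published formulation should be taken from Williams (2009, 2011).

    ```python
    import numpy as np

    def leapfrog_oscillator(omega=1.0, dt=0.2, nsteps=2000, nu=0.1, alpha=0.53):
        """Leapfrog integration of du/dt = i*omega*u with the RA/RAW time filter.

        alpha = 1.0 reproduces the classical Robert-Asselin filter;
        alpha ~ 0.53 is the RAW variant.  Returns |u| at the end, which would be
        exactly 1 for a perfectly amplitude-preserving scheme.
        """
        u_old = 1.0 + 0.0j
        u_now = np.exp(1j * omega * dt)          # exact value at t = dt to start leapfrog
        for _ in range(nsteps):
            u_new = u_old + 2.0 * dt * 1j * omega * u_now
            d = 0.5 * nu * (u_old - 2.0 * u_now + u_new)   # filter displacement
            u_old = u_now + alpha * d                      # filtered "present" value
            u_new = u_new + (alpha - 1.0) * d              # RAW also nudges the "future" value
            u_now = u_new
        return abs(u_now)

    print("RA  amplitude after 2000 steps:", leapfrog_oscillator(alpha=1.0))
    print("RAW amplitude after 2000 steps:", leapfrog_oscillator(alpha=0.53))
    ```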

  10. Accuracy improvement in peak positioning of spectrally distorted fiber Bragg grating sensors by Gaussian curve fitting

    SciTech Connect

    Lee, Hyun-Wook; Park, Hyoung-Jun; Lee, June-Ho; Song, Minho

    2007-04-20

    To improve measurement accuracy of spectrally distorted fiber Bragg grating temperature sensors, reflection profiles were curve fitted to Gaussian shapes, of which center positions were transformed into temperature information. By applying the Gaussian curve-fitting algorithm in a tunable bandpass filter demodulation scheme, approximately 0.3 °C temperature resolution was obtained with a severely distorted grating sensor, which was much better than that obtained using the highest peak search algorithm. A binary search was also used to retrieve the optimal fitting curves with the least amount of processing time.
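    A minimal SciPy sketch of the peak-positioning idea: fit a Gaussian to a (here synthetic and deliberately distorted) reflection profile and take the fitted centre rather than the highest sample as the peak wavelength. The spectrum, wavelength grid and initial guesses are hypothetical; the binary-search acceleration described in the abstract is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(lam, amp, center, width, offset):
        return amp * np.exp(-0.5 * ((lam - center) / width) ** 2) + offset

    # Hypothetical, slightly distorted FBG reflection profile sampled by a tunable filter.
    lam = np.linspace(1549.6, 1550.4, 81)                       # wavelength, nm
    true_center = 1550.03
    spectrum = gaussian(lam, 1.0, true_center, 0.08, 0.02)
    spectrum += 0.15 * gaussian(lam, 1.0, true_center - 0.12, 0.03, 0.0)   # distortion lobe
    spectrum += np.random.normal(0, 0.01, lam.size)

    popt, _ = curve_fit(gaussian, lam, spectrum,
                        p0=[1.0, lam[np.argmax(spectrum)], 0.05, 0.0])
    print(f"highest-sample peak: {lam[np.argmax(spectrum)]:.3f} nm, "
          f"Gaussian-fit centre: {popt[1]:.3f} nm (true {true_center} nm)")
    ```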

  11. Color camera computed tomography imaging spectrometer for improved spatial-spectral image accuracy

    NASA Technical Reports Server (NTRS)

    Wilson, Daniel W. (Inventor); Bearman, Gregory H. (Inventor); Johnson, William R. (Inventor)

    2011-01-01

    Computed tomography imaging spectrometers ("CTISs") having color focal plane array detectors are provided. The color FPA detector may comprise a digital color camera including a digital image sensor, such as a Foveon X3® digital image sensor or a Bayer color filter mosaic. In another embodiment, the CTIS includes a pattern imposed either directly on the object scene being imaged or at the field stop aperture. The use of a color FPA detector and the pattern improves the accuracy of the captured spatial and spectral information.

  12. Processing data, for improved accuracy, from device for measuring speed of sound in a gas

    DOEpatents

    Owen, Thomas E.

    2006-09-19

    A method, used in connection with a pulse-echo type sensor for determining the speed of sound in a gas, for improving the accuracy of speed of sound measurements. The sensor operates on the principle that speed of sound can be derived from the difference between the two-way travel time of signals reflected from two different target faces of the sensor. This time difference is derived by computing the cross correlation between the two reflections. The cross correlation function may be fitted to a parabola whose vertex represents the optimum time coordinate of the coherence peak, thereby providing an accurate measure of the two-way time difference.
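    The patented processing can be sketched generically as follows: cross-correlate the two reflections, locate the correlation peak, and refine the lag by fitting a parabola through the peak and its two neighbours, the vertex giving a sub-sample estimate of the two-way time difference. The synthetic pulse and sampling rate below are hypothetical.

    ```python
    import numpy as np

    def subsample_delay(sig_a, sig_b, fs):
        """Delay of sig_b relative to sig_a (seconds) from the cross-correlation peak,
        refined by fitting a parabola through the peak and its two neighbours."""
        c = np.correlate(sig_b, sig_a, mode="full")
        i = int(np.argmax(c))
        y0, y1, y2 = c[i - 1], c[i], c[i + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)     # vertex of the fitted parabola
        lag = (i - (len(sig_a) - 1)) + frac
        return lag / fs

    # Hypothetical echo pair: the same pulse shifted by 12.3 samples at 10 MHz.
    fs, true_shift = 10e6, 12.3
    t = np.arange(400) / fs
    pulse = np.exp(-((t - 8e-6) / 2e-6) ** 2) * np.sin(2 * np.pi * 1e6 * t)
    echo = np.interp(np.arange(400) - true_shift, np.arange(400), pulse)
    print("estimated delay (samples):", subsample_delay(pulse, echo, fs) * fs)
    ```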

  13. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future desig
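    The abstract does not give the novel form of the Ergun equation used; for reference, the standard Ergun relation from which bulk void fraction and particle density can be back-calculated from packed-bed pressure-drop data is:

    ```latex
    \frac{\Delta P}{L}
      = 150\,\frac{(1-\varepsilon)^{2}}{\varepsilon^{3}}\,\frac{\mu\,U}{d_p^{2}}
      + 1.75\,\frac{1-\varepsilon}{\varepsilon^{3}}\,\frac{\rho_g\,U^{2}}{d_p}
    ```

    where ΔP/L is the pressure gradient across the bed, ε the void fraction, U the superficial gas velocity, μ and ρ_g the gas viscosity and density, and d_p the particle diameter.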

  14. Accounting for filter bandwidth improves the quantitative accuracy of bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Mason, Suzannah K. G.; Glinton, Sophie L.; Cobbold, Mark; Dehghani, Hamid

    2015-09-01

    Bioluminescence imaging is a noninvasive technique whereby surface weighted images of luminescent probes within animals are used to characterize cell count and function. Traditionally, data are collected over the entire emission spectrum of the source using no filters and are used to evaluate cell count/function over the entire spectrum. Alternatively, multispectral data over several wavelengths can be incorporated to perform tomographic reconstruction of source location and intensity. However, bandpass filters used for multispectral data acquisition have a specific bandwidth, which is ignored in the reconstruction. In this work, ignoring the bandwidth is shown to introduce a dependence of the recovered source intensity on the bandwidth of the filters. A method of accounting for the bandwidth of filters used during multispectral data acquisition is presented and its efficacy in increasing the quantitative accuracy of bioluminescence tomography is demonstrated through simulation and experiment. It is demonstrated that while using filters with a large bandwidth can dramatically decrease the data acquisition time, if not accounted for, errors of up to 200% in quantitative accuracy are introduced in two-dimensional planar imaging, even after normalization. For tomographic imaging, the use of this method to account for filter bandwidth dramatically improves the quantitative accuracy.

  15. Improved photomask accuracy with a high-productivity DUV laser pattern generator

    NASA Astrophysics Data System (ADS)

    Öström, Thomas; Måhlén, Jonas; Karawajczyk, Andrzej; Rosling, Mats; Carlqvist, Per; Askebjer, Per; Karlin, Tord; Sallander, Jesper; Österberg, Anders

    2006-10-01

    A strategy for sub-100 nm technology nodes is to maximize the use of high-speed deep-UV laser pattern generators, reserving e-beam tools for the most critical photomask layers. With a 248 nm excimer laser and 0.82 NA projection optics, the Sigma7500 increases the application space of laser pattern generators. A programmable spatial light modulator (SLM) is imaged with partially coherent optics to compose the photomask pattern. Image profiles are enhanced with phase shifting in the pattern generator, and features below 200 nm are reliably printed. The Sigma7500 extends the SLM-based architecture with improvements to CD uniformity and placement accuracy, resulting from an error budget-based methodology. Among these improvements is a stiffer focus stage design with digital servos, resulting in improved focus stability. Tighter climate controls and improved dose control reduce drift during mask patterning. As a result, global composite CD uniformity below 5 nm (3σ) has been demonstrated, with placement accuracy below 10 nm (3σ) across the mask. Self-calibration methods are used to optimize and monitor system performance, reducing the need to print test plates. The SLM calibration camera views programmed test patterns, making it possible to evaluate image metrics such as CD uniformity and line edge roughness. The camera is also used to characterize image placement over the optical field. A feature called ProcessEqualizer TM has been developed to correct long-range CD errors arising from process effects on production photomasks. Mask data is sized in real time to compensate for pattern-dependent errors related to local pattern density, as well as for systematic pattern-independent errors such as radial CD signatures. Corrections are made in the pixel domain in the advanced adjustments processor, which also performs global biasing, stamp distortion compensation, and corner enhancement. In the Sigma7500, the mask pattern is imaged with full edge addressability in each

  16. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Ryan B.; Bell, James F., III; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-04-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ~ 3 wt.%. The statistical significance of these improvements was ~ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically fabricated
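    A minimal scikit-learn sketch of method (2) from the list above: k-means clustering of the training spectra followed by a separate PLS2 model per cluster, with test spectra routed to the model of their nearest cluster. The array shapes, cluster count and component count are placeholders, and random data stand in for the LIBS spectra and oxide compositions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    def fit_clustered_pls(train_spectra, train_compositions, n_clusters=5, n_components=10):
        """k-means on the training spectra, then one PLS2 model per cluster."""
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(train_spectra)
        models = {}
        for c in range(n_clusters):
            mask = km.labels_ == c
            pls = PLSRegression(n_components=min(n_components, mask.sum() - 1))
            pls.fit(train_spectra[mask], train_compositions[mask])
            models[c] = pls
        return km, models

    def predict_clustered_pls(km, models, spectra):
        labels = km.predict(spectra)
        return np.vstack([models[c].predict(s[None, :]) for c, s in zip(labels, spectra)])

    # Hypothetical shapes: 200 training spectra x 6144 channels, 9 oxide abundances.
    X = np.random.rand(200, 6144)
    Y = np.random.rand(200, 9)
    km, models = fit_clustered_pls(X, Y)
    print(predict_clustered_pls(km, models, np.random.rand(3, 6144)).shape)   # -> (3, 9)
    ```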

  17. SU-E-J-133: Autosegmentation of Linac CBCT: Improved Accuracy Via Penalized Likelihood Reconstruction

    SciTech Connect

    Chen, Y

    2015-06-15

    Purpose: To improve the quality of kV X-ray cone beam CT (CBCT) for use in radiotherapy delivery assessment and re-planning by using penalized likelihood (PL) iterative reconstruction and auto-segmentation accuracy of the resulting CBCTs as an image quality metric. Methods: Present filtered backprojection (FBP) CBCT reconstructions can be improved upon by PL reconstruction with image formation models and appropriate regularization constraints. We use two constraints: 1) image smoothing via an edge preserving filter, and 2) a constraint minimizing the differences between the reconstruction and a registered prior image. Reconstructions of prostate therapy CBCTs were computed with constraint 1 alone and with both constraints. The prior images were planning CTs(pCT) deformable-registered to the FBP reconstructions. Anatomy segmentations were done using atlas-based auto-segmentation (Elekta ADMIRE). Results: We observed small but consistent improvements in the Dice similarity coefficients of PL reconstructions over the FBP results, and additional small improvements with the added prior image constraint. For a CBCT with anatomy very similar in appearance to the pCT, we observed these changes in the Dice metric: +2.9% (prostate), +8.6% (rectum), −1.9% (bladder). For a second CBCT with a very different rectum configuration, we observed +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). For a third case with significant lateral truncation of the field of view, we observed: +0.8% (prostate), +8.9% (rectum), −1.2% (bladder). Adding the prior image constraint raised Dice measures by about 1%. Conclusion: Efficient and practical adaptive radiotherapy requires accurate deformable registration and accurate anatomy delineation. We show here small and consistent patterns of improved contour accuracy using PL iterative reconstruction compared with FBP reconstruction. However, the modest extent of these results and the pattern of differences across CBCT cases suggest that

  18. Enhanced CT images by the wavelet transform improving diagnostic accuracy of chest nodules.

    PubMed

    Guo, Xiuhua; Liu, Xiangye; Wang, Huan; Liang, Zhigang; Wu, Wei; He, Qian; Li, Kuncheng; Wang, Wei

    2011-02-01

    The objective of this study was to compare the diagnostic accuracy in the interpretation of chest nodules using original CT images versus enhanced CT images based on the wavelet transform. The CT images of 118 patients with cancers and 60 with benign nodules were used in this study. All images were enhanced through an algorithm based on the wavelet transform. Two experienced radiologists interpreted all the images in two reading sessions. The reading sessions were separated by a minimum of 1 month in order to minimize the effect of observer's recall. The Mann-Whitney U nonparametric test was used to analyze the interpretation results between original and enhanced images. The Kruskal-Wallis H nonparametric test of K independent samples was used to investigate the related factors which could affect the diagnostic accuracy of observers. The area under the ROC curves for the original and enhanced images was 0.681 and 0.736, respectively. There is significant difference in diagnosing the malignant nodules between the original and enhanced images (z = 7.122, P < 0.001), whereas there is no significant difference in diagnosing the benign nodules (z = 0.894, P = 0.371). The results showed that there is significant difference between original and enhancement images when the size of nodules was larger than 2 cm (Z = -2.509, P = 0.012, indicating the size of the nodules is a critical evaluating factor of the diagnostic accuracy of observers). This study indicated that the image enhancement based on wavelet transform could improve the diagnostic accuracy of radiologists for the malignant chest nodules. PMID:19937084
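    The enhancement algorithm itself is not specified in the abstract beyond being wavelet-based; a generic sketch of detail-coefficient boosting with PyWavelets is shown below as one common way such enhancement is implemented. The wavelet, decomposition level and gain are assumptions, not the published parameters.

    ```python
    import numpy as np
    import pywt

    def wavelet_enhance(image, wavelet="db4", level=2, gain=1.5):
        """Boost wavelet detail coefficients to sharpen fine structure (e.g. nodule edges).

        The wavelet, decomposition level and gain are illustrative choices only.
        """
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        enhanced = [coeffs[0]]                                   # keep the approximation band
        for (cH, cV, cD) in coeffs[1:]:
            enhanced.append((gain * cH, gain * cV, gain * cD))   # amplify detail bands
        out = pywt.waverec2(enhanced, wavelet)
        return out[: image.shape[0], : image.shape[1]]           # crop padding, if any

    img = np.random.rand(512, 512)          # stand-in for a CT slice
    print(wavelet_enhance(img).shape)
    ```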

  19. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    SciTech Connect

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-03-15

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
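    The core mapping step can be sketched as follows, assuming registration has already been performed: learn a lookup from MRI intensity to Hounsfield units on a nearby artifact-free slice (here simply by binned medians) and use it to replace HU values inside the artifact region of the corrupted slice. The binning scheme and demo arrays are assumptions; the paper's registration pipeline and comprehensive prediction analysis are not reproduced.

    ```python
    import numpy as np

    def build_mri_to_hu_lut(mri_clean, ct_clean, n_bins=64):
        """Median HU per MRI-intensity bin, learned on a coregistered artifact-free slice."""
        edges = np.linspace(mri_clean.min(), mri_clean.max(), n_bins + 1)
        idx = np.clip(np.digitize(mri_clean.ravel(), edges) - 1, 0, n_bins - 1)
        lut = np.array([np.median(ct_clean.ravel()[idx == b]) if np.any(idx == b) else np.nan
                        for b in range(n_bins)])
        valid = ~np.isnan(lut)                        # fill empty bins by interpolation
        lut = np.interp(np.arange(n_bins), np.flatnonzero(valid), lut[valid])
        return edges, lut

    def correct_slice(ct_corrupt, mri_corrupt, artifact_mask, edges, lut):
        """Replace HU inside the artifact mask using the MRI-intensity lookup table."""
        idx = np.clip(np.digitize(mri_corrupt, edges) - 1, 0, len(lut) - 1)
        corrected = ct_corrupt.copy()
        corrected[artifact_mask] = lut[idx][artifact_mask]
        return corrected

    # Tiny synthetic demo with hypothetical arrays.
    mri = np.random.rand(64, 64)
    ct = 1000 * mri - 200 + np.random.normal(0, 5, mri.shape)   # fake MRI-HU relationship
    edges, lut = build_mri_to_hu_lut(mri, ct)
    mask = np.zeros_like(ct, dtype=bool); mask[20:40, 20:40] = True
    print(correct_slice(ct + 800 * mask, mri, mask, edges, lut)[30, 30], ct[30, 30])
    ```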

  20. Motion correction for improving the accuracy of dual-energy myocardial perfusion CT imaging

    NASA Astrophysics Data System (ADS)

    Pack, Jed D.; Yin, Zhye; Xiong, Guanglei; Mittal, Priya; Dunham, Simon; Elmore, Kimberly; Edic, Peter M.; Min, James K.

    2016-03-01

    Coronary Artery Disease (CAD) is the leading cause of death globally [1]. Modern cardiac computed tomography angiography (CCTA) is highly effective at identifying and assessing coronary blockages associated with CAD. The diagnostic value of this anatomical information can be substantially increased in combination with a non-invasive, low-dose, correlative, quantitative measure of blood supply to the myocardium. While CT perfusion has shown promise of providing such indications of ischemia, artifacts due to motion, beam hardening, and other factors confound clinical findings and can limit quantitative accuracy. In this paper, we investigate the impact of applying a novel motion correction algorithm to correct for motion in the myocardium. This motion compensation algorithm (originally designed to correct for the motion of the coronary arteries in order to improve CCTA images) has been shown to provide substantial improvements in both overall image quality and diagnostic accuracy of CCTA. We have adapted this technique for application beyond the coronary arteries and present an assessment of its impact on image quality and quantitative accuracy within the context of dual-energy CT perfusion imaging. We conclude that motion correction is a promising technique that can help foster the routine clinical use of dual-energy CT perfusion. When combined, the anatomical information of CCTA and the hemodynamic information from dual-energy CT perfusion should facilitate better clinical decisions about which patients would benefit from treatments such as stent placement, drug therapy, or surgery and help other patients avoid the risks and costs associated with unnecessary, invasive, diagnostic coronary angiography procedures.

  1. COMPASS server for homology detection: improved statistical accuracy, speed and functionality

    PubMed Central

    Sadreyev, Ruslan I.; Tang, Ming; Kim, Bong-Hyun; Grishin, Nick V.

    2009-01-01

    COMPASS is a profile-based method for the detection of remote sequence similarity and the prediction of protein structure. Here we describe a recently improved public web server of COMPASS, http://prodata.swmed.edu/compass. The server features three major developments: (i) improved statistical accuracy; (ii) increased speed from parallel implementation; and (iii) new functional features facilitating structure prediction. These features include visualization tools that allow the user to quickly and effectively analyze specific local structural region predictions suggested by COMPASS alignments. As an application example, we describe the structural, evolutionary and functional analysis of a protein with unknown function that served as a target in the recent CASP8 (Critical Assessment of Techniques for Protein Structure Prediction round 8). URL: http://prodata.swmed.edu/compass PMID:19435884

  2. COMPASS server for homology detection: improved statistical accuracy, speed and functionality.

    PubMed

    Sadreyev, Ruslan I; Tang, Ming; Kim, Bong-Hyun; Grishin, Nick V

    2009-07-01

    COMPASS is a profile-based method for the detection of remote sequence similarity and the prediction of protein structure. Here we describe a recently improved public web server of COMPASS, http://prodata.swmed.edu/compass. The server features three major developments: (i) improved statistical accuracy; (ii) increased speed from parallel implementation; and (iii) new functional features facilitating structure prediction. These features include visualization tools that allow the user to quickly and effectively analyze specific local structural region predictions suggested by COMPASS alignments. As an application example, we describe the structural, evolutionary and functional analysis of a protein with unknown function that served as a target in the recent CASP8 (Critical Assessment of Techniques for Protein Structure Prediction round 8). URL: http://prodata.swmed.edu/compass. PMID:19435884

  3. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    PubMed

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715

  4. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  5. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  6. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES Beta

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  7. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes.

    PubMed

    Makeyev, Oleksandr; Besio, Walter G

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933
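    For context, the way ring averages yield a surface-Laplacian estimate follows from the 2-D Taylor expansion of the potential averaged over a ring of radius r:

    ```latex
    \bar v_r - v_0 = \frac{r^{2}}{4}\,\nabla^{2}v + \frac{r^{4}}{64}\,\nabla^{4}v + O(r^{6}),
    \qquad
    \nabla^{2}v \;\approx\; \frac{16\,(\bar v_r - v_0) - (\bar v_{2r} - v_0)}{3\,r^{2}}
    ```

    The weighted combination over rings at r and 2r (the classical tripolar, constant inter-ring-distance case) cancels the leading truncation term; the variable inter-ring-distance configurations proposed in the paper modify these weights to shrink the truncation error further. The expansion above is a standard result quoted for orientation, not the paper's derivation.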

  8. Case studies on forecasting for innovative technologies: frequent revisions improve accuracy.

    PubMed

    Lerner, Jeffrey C; Robertson, Diane C; Goldstein, Sara M

    2015-02-01

    Health technology forecasting is designed to provide reliable predictions about costs, utilization, diffusion, and other market realities before the technologies enter routine clinical use. In this article we address three questions central to forecasting's usefulness: Are early forecasts sufficiently accurate to help providers acquire the most promising technology and payers to set effective coverage policies? What variables contribute to inaccurate forecasts? How can forecasters manage the variables to improve accuracy? We analyzed forecasts published between 2007 and 2010 by the ECRI Institute on four technologies: single-room proton beam radiation therapy for various cancers; digital breast tomosynthesis imaging technology for breast cancer screening; transcatheter aortic valve replacement for serious heart valve disease; and minimally invasive robot-assisted surgery for various cancers. We then examined revised ECRI forecasts published in 2013 (digital breast tomosynthesis) and 2014 (the other three topics) to identify inaccuracies in the earlier forecasts and explore why they occurred. We found that five of twenty early predictions were inaccurate when compared with the updated forecasts. The inaccuracies pertained to two technologies that had more time-sensitive variables to consider. The case studies suggest that frequent revision of forecasts could improve accuracy, especially for complex technologies whose eventual use is governed by multiple interactive factors. PMID:25646112

  9. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations.

    PubMed

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-01-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364
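    A minimal numpy sketch of the selection criterion described: for each candidate pair of sensor locations, fit a linear map from the two bending-sensor outputs to the two CMC-joint angles by least squares and keep the placement with the smallest residual. The linear model, array shapes and simulated crosstalk matrices are assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def placement_residual(sensor_outputs, true_angles):
        """Fit [flexion, abduction] = A @ [s1, s2, 1] by least squares and return the
        RMS angle error (degrees) for one candidate pair of sensor locations.

        sensor_outputs : (n_postures, 2) bending-sensor readings at this placement
        true_angles    : (n_postures, 2) reference CMC flexion/abduction angles
        """
        X = np.column_stack([sensor_outputs, np.ones(len(sensor_outputs))])
        A, *_ = np.linalg.lstsq(X, true_angles, rcond=None)
        return float(np.sqrt(np.mean((X @ A - true_angles) ** 2)))

    def pick_best_placement(candidates):
        """candidates: dict mapping placement id -> (sensor_outputs, true_angles)."""
        return min(candidates, key=lambda k: placement_residual(*candidates[k]))

    # Hypothetical data: placement "B" has less crosstalk, so its residual tends to be smaller.
    rng = np.random.default_rng(0)
    angles = rng.uniform(0, 50, size=(30, 2))
    cand = {
        "A": (angles @ np.array([[1.0, 0.6], [0.5, 1.0]]) + rng.normal(0, 2, (30, 2)), angles),
        "B": (angles @ np.array([[1.0, 0.1], [0.1, 1.0]]) + rng.normal(0, 2, (30, 2)), angles),
    }
    print("best placement:", pick_best_placement(cand))
    ```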

  10. Improving Kinematic Accuracy of Soft Wearable Data Gloves by Optimizing Sensor Locations

    PubMed Central

    Kim, Dong Hyun; Lee, Sang Wook; Park, Hyung-Soon

    2016-01-01

    Bending sensors enable compact, wearable designs when used for measuring hand configurations in data gloves. While existing data gloves can accurately measure angular displacement of the finger and distal thumb joints, accurate measurement of thumb carpometacarpal (CMC) joint movements remains challenging due to crosstalk between the multi-sensor outputs required to measure the degrees of freedom (DOF). To properly measure CMC-joint configurations, sensor locations that minimize sensor crosstalk must be identified. This paper presents a novel approach to identifying optimal sensor locations. Three-dimensional hand surface data from ten subjects was collected in multiple thumb postures with varied CMC-joint flexion and abduction angles. For each posture, scanned CMC-joint contours were used to estimate CMC-joint flexion and abduction angles by varying the positions and orientations of two bending sensors. Optimal sensor locations were estimated by the least squares method, which minimized the difference between the true CMC-joint angles and the joint angle estimates. Finally, the resultant optimal sensor locations were experimentally validated. Placing sensors at the optimal locations, CMC-joint angle measurement accuracies improved (flexion, 2.8° ± 1.9°; abduction, 1.9° ± 1.2°). The proposed method for improving the accuracy of the sensing system can be extended to other types of soft wearable measurement devices. PMID:27240364

  11. Hyperspectral image preprocessing with bilateral filter for improving the classification accuracy of support vector machines

    NASA Astrophysics Data System (ADS)

    Sahadevan, Anand S.; Routray, Aurobinda; Das, Bhabani S.; Ahmad, Saquib

    2016-04-01

    Bilateral filter (BF) theory is applied to integrate spatial contextual information into the spectral domain for improving the accuracy of the support vector machine (SVM) classifier. The proposed classification framework is a two-stage process. First, an edge-preserved smoothing is carried out on a hyperspectral image (HSI). Then, the SVM multiclass classifier is applied on the smoothed HSI. One of the advantages of the BF-based implementation is that it considers the spatial as well as spectral closeness for smoothing the HSI. Therefore, the proposed method provides better smoothing in the homogeneous region and preserves the image details, which in turn improves the separability between the classes. The performance of the proposed method is tested using benchmark HSIs obtained from the airborne-visible-infrared-imaging-spectrometer (AVIRIS) and the reflective-optics-system-imaging-spectrometer (ROSIS) sensors. Experimental results demonstrate the effectiveness of the edge-preserved filtering in the classification of the HSI. Average accuracies (with 10% training samples) of the proposed classification framework are 99.04%, 98.11%, and 96.42% for AVIRIS-Salinas, ROSIS-Pavia University, and AVIRIS-Indian Pines images, respectively. Since the proposed method follows a combination of BF and the SVM formulations, it will be quite simple and practical to implement in real applications.
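    A minimal sketch of the two-stage framework: edge-preserving smoothing of each band followed by a pixel-wise SVM. OpenCV's bilateral filter is applied band by band as a simplification of the joint spatial-spectral formulation described in the paper, and the scene, labels and SVM parameters are placeholders.

    ```python
    import numpy as np
    import cv2
    from sklearn.svm import SVC

    def smooth_hsi(cube, d=5, sigma_color=0.1, sigma_space=5):
        """Apply an edge-preserving bilateral filter to every band of an HSI cube
        (rows x cols x bands).  Band-by-band filtering is a simplification of the
        spatial-spectral formulation described in the paper."""
        out = np.empty_like(cube, dtype=np.float32)
        for b in range(cube.shape[2]):
            out[:, :, b] = cv2.bilateralFilter(cube[:, :, b].astype(np.float32),
                                               d, sigma_color, sigma_space)
        return out

    def train_pixel_svm(cube, labels):
        """Train an SVM on the labelled pixels (labels > 0)."""
        mask = labels > 0
        return SVC(kernel="rbf", C=100, gamma="scale").fit(cube[mask], labels[mask])

    # Hypothetical 60x60 scene with 50 bands and two labelled classes.
    cube = np.random.rand(60, 60, 50).astype(np.float32)
    labels = np.zeros((60, 60), dtype=int)
    labels[:10, :10], labels[-10:, -10:] = 1, 2
    smoothed = smooth_hsi(cube)
    clf = train_pixel_svm(smoothed, labels)
    print("predicted classes for 3 pixels:", clf.predict(smoothed.reshape(-1, 50)[:3]))
    ```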

  12. Optimizing stepwise rotation of dodecahedron sound source to improve the accuracy of room acoustic measures.

    PubMed

    Martellotta, Francesco

    2013-09-01

    Dodecahedron sound sources are widely used for acoustical measurement purposes as they produce a good approximation of omnidirectional radiation. Evidence shows that such an assumption is acceptable only in the low-frequency range (namely below 1 kHz), while at higher frequencies sound radiation is far from being uniform. In order to improve the accuracy of acoustical measurements obtained from dodecahedron sources, international standard ISO 3382 suggests an averaging of results after a source rotation. This paper investigates the effects of such rotations, both in terms of variations in acoustical parameters and spatial distribution of sound reflections. Taking advantage of a spherical microphone array, the different reflection patterns were mapped as a function of source rotation, showing that some reflections may be considerably attenuated for different aiming directions. This paper investigates the concept of averaging results while changing rotation angles and the minimum number of rotations required to improve the accuracy of the average value. Results show that averages of three measurements carried out at 30° angular steps are closer to actual values and show much less fluctuation. In addition, an averaging of the directional intensity components of the selected responses stabilizes the spatial distribution of the reflections. PMID:23967936

  13. Improving the Accuracy of Laplacian Estimation with Novel Variable Inter-Ring Distances Concentric Ring Electrodes

    PubMed Central

    Makeyev, Oleksandr; Besio, Walter G.

    2016-01-01

    Noninvasive concentric ring electrodes are a promising alternative to conventional disc electrodes. Currently, the superiority of tripolar concentric ring electrodes over disc electrodes, in particular, in accuracy of Laplacian estimation, has been demonstrated in a range of applications. In our recent work, we have shown that accuracy of Laplacian estimation can be improved with multipolar concentric ring electrodes using a general approach to estimation of the Laplacian for an (n + 1)-polar electrode with n rings using the (4n + 1)-point method for n ≥ 2. This paper takes the next step toward further improving the Laplacian estimate by proposing novel variable inter-ring distances concentric ring electrodes. Derived using a modified (4n + 1)-point method, linearly increasing and decreasing inter-ring distances tripolar (n = 2) and quadripolar (n = 3) electrode configurations are compared to their constant inter-ring distances counterparts. Finite element method modeling and analytic results are consistent and suggest that increasing inter-ring distances electrode configurations may decrease the truncation error resulting in more accurate Laplacian estimates compared to respective constant inter-ring distances configurations. For currently used tripolar electrode configuration, the truncation error may be decreased more than two-fold, while for the quadripolar configuration more than a six-fold decrease is expected. PMID:27294933

  14. Improving accuracy of markerless tracking of lung tumours in fluoroscopic video by incorporating diaphragm motion

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Teske, H.; Stoll, M.; Bendl, Rolf

    2014-03-01

    Purpose: Conformal irradiation of moving tumours is a challenging task in radiotherapy. Tumour motion induced by respiration can be visualized in fluoroscopic images recorded during patient breathing. Markerless methods based on registration techniques can be used to estimate tumour motion; however, registration may fail when the tumour is hidden by ribs. Using the motion of anatomical surrogates, such as the diaphragm, is a promising way to model tumour motion. Methods: A sequence of 116 fluoroscopic images was analyzed and the tumour positions were manually defined by three experts. A block matching (BM) technique was used to calculate the displacement vector relative to a selected reference image of the first breathing cycle. An enhanced method was developed: positions obtained when the tumour is not located behind a rib are taken as valid estimates of the tumour position, and these valid estimates are used to establish a linear model relating tumour position to diaphragm motion. For invalid estimates, the calculated tumour positions are discarded and the model is used instead to determine tumour motion. Results: Enhancing BM with a model of tumour motion derived from diaphragm motion improves the tracking accuracy when the tumour moves behind a rib. The error (mean ± SD) in the longitudinal dimension was 2.0 ± 1.5 mm using only BM and 1.0 ± 1.1 mm when the enhanced approach was used. Conclusion: The enhanced tracking technique improves tracking accuracy compared with BM when the tumour is occluded by ribs.
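    The fallback logic can be sketched as follows, taking the block-matching output and a rib-occlusion flag as given: fit a linear model of tumour position against diaphragm position on the trusted frames and substitute the model prediction on the occluded frames. The breathing trace and noise levels below are hypothetical.

    ```python
    import numpy as np

    def track_with_diaphragm_model(bm_positions, diaphragm_positions, valid):
        """Combine block-matching (BM) tumour positions with a diaphragm-based model.

        bm_positions        : (n_frames,) longitudinal tumour position from BM (mm)
        diaphragm_positions : (n_frames,) diaphragm position in the same frames (mm)
        valid               : (n_frames,) bool, True where the tumour is not hidden by a rib
        Returns the fused position estimate per frame.
        """
        a, b = np.polyfit(diaphragm_positions[valid], bm_positions[valid], deg=1)
        fused = bm_positions.copy()
        fused[~valid] = a * diaphragm_positions[~valid] + b    # model replaces unreliable BM
        return fused

    # Hypothetical breathing trace: tumour follows the diaphragm with ratio 0.6.
    t = np.linspace(0, 20, 116)
    diaphragm = 15 * np.sin(2 * np.pi * t / 4)
    tumour_true = 0.6 * diaphragm
    valid = np.abs(np.sin(2 * np.pi * t / 4)) < 0.9            # pretend ribs hide the extremes
    bm = tumour_true + np.random.normal(0, 0.5, t.size)
    bm[~valid] += np.random.normal(0, 4, (~valid).sum())       # BM degrades behind ribs
    fused = track_with_diaphragm_model(bm, diaphragm, valid)
    print("BM error (mm):", np.abs(bm - tumour_true)[~valid].mean(),
          "fused error (mm):", np.abs(fused - tumour_true)[~valid].mean())
    ```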

  15. Improving the Accuracy of Assessment of Total Equivalent Dose Rate during Air Travel

    NASA Astrophysics Data System (ADS)

    Dorenskiy, Sergey; Minligareev, Vladimir

    To ensure radiation safety at typical flight altitudes of 8-11 km, a methodology for calculating the total equivalent dose rate (EDR) is needed to prevent excess exposure of airline passengers and crews. During its development it became necessary to assess all components affecting the EDR calculation. A comprehensive analysis of the problem is presented, based on a software framework developed to automate the calculations as well as on an assessment of the statistical data. The results show that: (1) the limiting error of the geomagnetic cutoff rigidity (GCR) over the period 2005-2010 was 5%, which is not significant for the problems considered; (2) seasonal variations of atmospheric parameters must be taken into account in the EDR calculation, since the difference in the computed dose rate can reach 31%, and accounting for diurnal variations of atmospheric parameters is proposed to further improve the reliability of EDR estimates; (3) additional parameters (the Kp index of geomagnetic activity, etc.) should be introduced into the GCR calculations to improve the reliability and accuracy of EDR estimation along flight routes.

  16. Leveraging Improvements in Precipitation Measuring from GPM Mission to Achieve Prediction Improvements in Climate, Weather and Hydrometeorology

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.

    2002-01-01

    the way for what ultimately is expected to become an internationally-organized operational global precipitation observing system. Notably, the broad societal applications of GPM are reflected in the United Nations' identification of this mission as a foremost candidate for its Peaceful Uses of Space Program. In this presentation, an overview of the GPM mission design will be presented, followed by an explanation of its scientific agenda as an outgrowth of making improvements in rain retrieval accuracy, microphysics dexterity, sampling frequency, and global coverage. All of these improvements offer new means to observe variability in precipitation and water cycle fluxes and to achieve improved predictability of weather, climate, and hydrometeorology. Specifically, the scientific agenda of GPM has been designed to leverage the measurement improvements to improve prognostic model performance, particularly quantitative precipitation forecasting and its linked phenomena at short, intermediate, and extended time scales. The talk will address how GPM measurements will enable better detection of accelerations and decelerations in regional and global water cycle processes and their relationship to climate variability, better impacts of precipitation data assimilation on numerical weather prediction and global climate reanalysis, and better performance from basin scale hydrometeorological models for short and long term flood-drought forecasting and seasonal fresh water resource assessment. Improved hydrometeorological forecasting will be possible by using continuous global precipitation observations to obtain better closure in water budgets and to generate more realistic forcing of the models themselves to achieve more accurate estimates of interception, infiltration, evaporation/transpiration fluxes, storage, and runoff.

  17. Contrast and harmonic imaging improves accuracy and efficiency of novice readers for dobutamine stress echocardiography

    NASA Technical Reports Server (NTRS)

    Vlassak, Irmien; Rubin, David N.; Odabashian, Jill A.; Garcia, Mario J.; King, Lisa M.; Lin, Steve S.; Drinko, Jeanne K.; Morehead, Annitta J.; Prior, David L.; Asher, Craig R.; Klein, Allan L.; Thomas, James D.

    2002-01-01

    BACKGROUND: Newer contrast agents as well as tissue harmonic imaging enhance left ventricular (LV) endocardial border delineation, and therefore, improve LV wall-motion analysis. Interpretation of dobutamine stress echocardiography is observer-dependent and requires experience. This study was performed to evaluate whether these new imaging modalities would improve endocardial visualization and enhance accuracy and efficiency of the inexperienced reader interpreting dobutamine stress echocardiography. METHODS AND RESULTS: Twenty-nine consecutive patients with known or suspected coronary artery disease underwent dobutamine stress echocardiography. Both fundamental (2.5 MHZ) and harmonic (1.7 and 3.5 MHZ) mode images were obtained in four standard views at rest and at peak stress during a standard dobutamine infusion stress protocol. Following the noncontrast images, Optison was administered intravenously in bolus (0.5-3.0 ml), and fundamental and harmonic images were obtained. The dobutamine echocardiography studies were reviewed by one experienced and one inexperienced echocardiographer. LV segments were graded for image quality and function. Time for interpretation also was recorded. Contrast with harmonic imaging improved the diagnostic concordance of the novice reader to the expert reader by 7.1%, 7.5%, and 12.6% (P < 0.001) as compared with harmonic imaging, fundamental imaging, and fundamental imaging with contrast, respectively. For the novice reader, reading time was reduced by 47%, 55%, and 58% (P < 0.005) as compared with the time needed for fundamental, fundamental contrast, and harmonic modes, respectively. With harmonic imaging, the image quality score was 4.6% higher (P < 0.001) than for fundamental imaging. Image quality scores were not significantly different for noncontrast and contrast images. CONCLUSION: Harmonic imaging with contrast significantly improves the accuracy and efficiency of the novice dobutamine stress echocardiography reader. The use

  18. Does probabilistic modelling of linkage disequilibrium evolution improve the accuracy of QTL location in animal pedigree?

    PubMed Central

    2010-01-01

    Background Since 2001, the use of more and more dense maps has made researchers aware that combining linkage and linkage disequilibrium enhances the feasibility of fine-mapping genes of interest. So, various method types have been derived to include concepts of population genetics in the analyses. One major drawback of many of these methods is their computational cost, which is very significant when many markers are considered. Recent advances in technology, such as SNP genotyping, have made it possible to deal with huge amount of data. Thus the challenge that remains is to find accurate and efficient methods that are not too time consuming. The study reported here specifically focuses on the half-sib family animal design. Our objective was to determine whether modelling of linkage disequilibrium evolution improved the mapping accuracy of a quantitative trait locus of agricultural interest in these populations. We compared two methods of fine-mapping. The first one was an association analysis. In this method, we did not model linkage disequilibrium evolution. Therefore, the modelling of the evolution of linkage disequilibrium was a deterministic process; it was complete at time 0 and remained complete during the following generations. In the second method, the modelling of the evolution of population allele frequencies was derived from a Wright-Fisher model. We simulated a wide range of scenarios adapted to animal populations and compared these two methods for each scenario. Results Our results indicated that the improvement produced by probabilistic modelling of linkage disequilibrium evolution was not significant. Both methods led to similar results concerning the location accuracy of quantitative trait loci which appeared to be mainly improved by using four flanking markers instead of two. Conclusions Therefore, in animal half-sib designs, modelling linkage disequilibrium evolution using a Wright-Fisher model does not significantly improve the accuracy of the

  19. Effects of Improvements in Interval Timing on the Mathematics Achievement of Elementary School Students

    ERIC Educational Resources Information Center

    Taub, Gordon E.; McGrew, Kevin S.; Keith, Timothy Z.

    2015-01-01

    This article examines the effect of improvements in timing/rhythmicity on mathematics achievement. A total of 86 participants attending 1st through 4th grades completed pre- and posttest measures of mathematics achievement from the Woodcock-Johnson III Tests of Achievement. Students in the experimental group participated in a 4-week intervention…

  20. Finite element analysis of transonic flows in cascades: Importance of computational grids in improving accuracy and convergence

    NASA Technical Reports Server (NTRS)

    Ecer, A.; Akay, H. U.

    1981-01-01

    The finite element method is applied for the solution of transonic potential flows through a cascade of airfoils. Convergence characteristics of the solution scheme are discussed. Accuracy of the numerical solutions is investigated for various flow regions in the transonic flow configuration. The design of an efficient finite element computational grid is discussed for improving accuracy and convergence.

  1. On improving the accuracy of the M2 barotropic tides embedded in a high-resolution global ocean circulation model

    NASA Astrophysics Data System (ADS)

    Ngodock, Hans; Wallcraft, Alan; Souopgui, Innocent; Richman, James; Shriver, Jay; Arbic, Brian

    2016-04-01

    The ocean tidal velocity and elevation can be estimated concurrently with the ocean circulation by adding the astronomical tidal forcing, parameterized topographic internal wave drag, and self-attraction and loading to the general circulation physics. However, the accuracy of these tidal estimates does not yet match accuracies in the best data-assimilative barotropic tidal models. This paper investigates the application of an Augmented State Ensemble Kalman Filter (ASEnKF) to improve the accuracy of M2 barotropic tides embedded in a 1/12.5° three-dimensional ocean general circulation model. The ASEnKF is an alternative to the techniques typically used with linearized tide-only models; such techniques cannot be applied to the embedded tides in a nonlinear eddying circulation. An extra term, meant to correct for errors in the tide model due to imperfectly known topography and damping terms, is introduced into the tidal forcing. Ensembles of the model are created with stochastically generated forcing correction terms. The discrepancies for each ensemble member with TPXO, an existing data-assimilative tide model, are computed. The ASEnKF method yields an optimal estimate of the model forcing correction terms, that minimizes resultant root mean square (RMS) tidal sea surface elevation error with respect to TPXO, as well as an estimate of the tidal elevation. The deep-water, global area-averaged RMS sea surface elevation error of the principal lunar semidiurnal tide M2 is reduced from 4.4 cm in a best-case non-assimilative solution to 2.6 cm. The largest elevation errors in both the non-assimilative and ASEnKF solutions are in the North Atlantic, a highly resonant basin. Possible pathways for achieving further reductions in the RMS error are discussed.

  2. On improving the accuracy of the M2 barotropic tides embedded in a high-resolution global ocean circulation model

    NASA Astrophysics Data System (ADS)

    Ngodock, Hans E.; Souopgui, Innocent; Wallcraft, Alan J.; Richman, James G.; Shriver, Jay F.; Arbic, Brian K.

    2016-01-01

    The ocean tidal velocity and elevation can be estimated concurrently with the ocean circulation by adding the astronomical tidal forcing, parameterized topographic internal wave drag, and self-attraction and loading to the general circulation physics. However, the accuracy of these tidal estimates does not yet match accuracies in the best data-assimilative barotropic tidal models. This paper investigates the application of an augmented state ensemble Kalman Filter (ASEnKF) to improve the accuracy of M2 barotropic tides embedded in a 1/12.5° three-dimensional ocean general circulation model. The ASEnKF is an alternative to the techniques typically used with linearized tide-only models; such techniques cannot be applied to the embedded tides in a nonlinear eddying circulation. An extra term, meant to correct for errors in the tide model due to imperfectly known topography and damping terms, is introduced into the tidal forcing. Ensembles of the model are created with stochastically generated forcing correction terms. The discrepancies for each ensemble member with TPXO, an existing data-assimilative tide model, are computed. The ASEnKF method yields an optimal estimate of the model forcing correction terms, that minimizes resultant root mean square (RMS) tidal sea surface elevation error with respect to TPXO, as well as an estimate of the tidal elevation. The deep-water, global area-averaged RMS sea surface elevation error of the principal lunar semidiurnal tide M2 is reduced from 4.4 cm in a best-case non-assimilative solution to 2.6 cm. The largest elevation errors in both the non-assimilative and ASEnKF solutions are in the North Atlantic, a highly resonant basin. Possible pathways for achieving further reductions in the RMS error are discussed.
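
    The augmented-state ensemble Kalman update described in the two records above can be summarized in a few lines of linear algebra. The sketch below is illustrative only and is not the authors' code: it assumes hypothetical arrays `beta` (one stochastic forcing-correction vector per ensemble member) and `h_err` (each member's tidal-elevation misfit to a reference solution such as TPXO), and applies a standard ensemble cross-covariance update to estimate the correction that drives the expected misfit toward zero.

    ```python
    import numpy as np

    def asenkf_update(beta, h_err):
        """Estimate the forcing correction minimizing the expected misfit.

        beta  : (n_ens, n_param) stochastic forcing-correction terms
        h_err : (n_ens, n_obs) tidal elevation misfit of each member vs. a reference
        """
        dB = beta - beta.mean(axis=0)          # parameter anomalies
        dE = h_err - h_err.mean(axis=0)        # misfit anomalies
        n = beta.shape[0] - 1
        P_be = dB.T @ dE / n                   # parameter-misfit cross-covariance
        P_ee = dE.T @ dE / n                   # misfit covariance
        K = P_be @ np.linalg.pinv(P_ee)        # ensemble "Kalman gain"
        return beta.mean(axis=0) - K @ h_err.mean(axis=0)

    # Random placeholders standing in for actual ensemble model output
    rng = np.random.default_rng(0)
    beta = rng.normal(size=(32, 10))
    h_err = beta @ rng.normal(size=(10, 50)) + 0.1 * rng.normal(size=(32, 50))
    beta_opt = asenkf_update(beta, h_err)
    print(beta_opt.shape)   # (10,)
    ```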

  3. Increasing cutaneous afferent feedback improves proprioceptive accuracy at the knee in patients with sensory ataxia.

    PubMed

    Macefield, Vaughan G; Norcliffe-Kaufmann, Lucy; Goulding, Niamh; Palma, Jose-Alberto; Fuente Mora, Cristina; Kaufmann, Horacio

    2016-02-01

    Hereditary sensory and autonomic neuropathy type III (HSAN III) features disturbed proprioception and a marked ataxic gait. We recently showed that joint angle matching error at the knee is positively correlated with the degree of ataxia. Using intraneural microelectrodes, we also documented that these patients lack functional muscle spindle afferents but have preserved large-diameter cutaneous afferents, suggesting that patients with better proprioception may be relying more on proprioceptive cues provided by tactile afferents. We tested the hypothesis that enhancing cutaneous sensory feedback by stretching the skin at the knee joint using unidirectional elasticity tape could improve proprioceptive accuracy in patients with a congenital absence of functional muscle spindles. Passive joint angle matching at the knee was used to assess proprioceptive accuracy in 25 patients with HSAN III and 9 age-matched control subjects, with and without taping. Angles of the reference and indicator knees were recorded with digital inclinometers and the absolute error, gradient, and correlation coefficient between the two sides calculated. Patients with HSAN III performed poorly on the joint angle matching test [mean matching error 8.0 ± 0.8° (±SE); controls 3.0 ± 0.3°]. Following application of tape bilaterally to the knee in an X-shaped pattern, proprioceptive performance improved significantly in the patients (mean error 5.4 ± 0.7°) but not in the controls (3.0 ± 0.2°). Across patients, but not controls, significant increases in gradient and correlation coefficient were also apparent following taping. We conclude that taping improves proprioception at the knee in HSAN III, presumably via enhanced sensory feedback from the skin. PMID:26655817

  4. SU-E-J-101: Improved CT to CBCT Deformable Registration Accuracy by Incorporating Multiple CBCTs

    SciTech Connect

    Godley, A; Stephans, K; Olsen, L Sheplan

    2015-06-15

    Purpose: Combining prior day CBCT contours with STAPLE was previously shown to improve automated prostate contouring. These accurate STAPLE contours are now used to guide the planning CT to pre-treatment CBCT deformable registration. Methods: Six IGRT prostate patients with daily kilovoltage CBCT had their original planning CT and 9 CBCTs contoured by the same physician. These physician contours for the planning CT and each prior CBCT are deformed to match the current CBCT anatomy, producing multiple contour sets. These sets are then combined using STAPLE into one optimal set (e.g. for day 3 CBCT, combine contours produced using the plan plus day 1 and 2 CBCTs). STAPLE computes a probabilistic estimate of the true contour from this collection of contours by maximizing sensitivity and specificity. The deformation field from planning CT to CBCT registration is then refined by matching its deformed contours to the STAPLE contours. ADMIRE (Elekta Inc.) was used for this. The refinement does not force perfect agreement of the contours; typically a Dice’s Coefficient (DC) of > 0.9 is obtained, and the image difference metric remains in the optimization of the deformable registration. Results: The average DC between physician delineated CBCT contours and deformed planning CT contours for the bladder, rectum and prostate was 0.80, 0.79 and 0.75, respectively. The accuracy significantly improved to 0.89, 0.84 and 0.84 (P<0.001 for all) when using the refined deformation field. The average time to run STAPLE with five scans and refine the planning CT deformation was 66 seconds on a Tesla K20c GPU. Conclusion: Accurate contours generated from multiple CBCTs provided guidance for CT to CBCT deformable registration, significantly improving registration accuracy as measured by contour DC. A more accurate deformation field is now available for transferring dose or electron density to the CBCT for adaptive planning. Research grant from Elekta.
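
    The contour agreement metric quoted above, Dice's Coefficient, is straightforward to compute. The following is a generic illustration (not the ADMIRE workflow); the mask arrays are made-up placeholders for a physician contour and a deformed planning-CT contour.

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """DC = 2|A ∩ B| / (|A| + |B|) for boolean masks of equal shape."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    physician = np.zeros((64, 64), dtype=bool); physician[20:40, 20:40] = True
    deformed = np.zeros((64, 64), dtype=bool); deformed[22:42, 21:41] = True
    print(round(dice_coefficient(physician, deformed), 3))   # ≈ 0.855
    ```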

  5. High accuracy switched-current circuits using an improved dynamic mirror

    NASA Technical Reports Server (NTRS)

    Zweigle, G.; Fiez, T.

    1991-01-01

    The switched-current technique, a recently developed circuit approach to analog signal processing, has emerged as an alternative/complement to the well-established switched-capacitor circuit technique. High speed switched-current circuits offer potential cost and power savings over slower switched-capacitor circuits. Accuracy improvements are a primary concern at this stage in the development of the switched-current technique. Use of the dynamic current mirror has produced circuits that are insensitive to transistor matching errors. The dynamic current mirror has been limited by other sources of error including clock-feedthrough and voltage transient errors. In this paper we present an improved switched-current building block using the dynamic current mirror. Utilizing current feedback, the errors due to current imbalance in the dynamic current mirror are reduced. Simulations indicate that this feedback can reduce total harmonic distortion by as much as 9 dB. Additionally, we have developed a clock-feedthrough reduction scheme for which simulations reveal a potential 10 dB total harmonic distortion improvement. The clock-feedthrough reduction scheme also significantly reduces offset errors and allows for cancellation with a constant current source. Experimental results confirm the simulated improvements.

  6. Knee joint secondary motion accuracy improved by quaternion-based optimizer with bony landmark constraints.

    PubMed

    Wang, Hongsheng; Zheng, Naiqaun Nigel

    2010-12-01

    Skin marker-based motion analysis has been widely used in biomechanical studies and clinical applications. Unfortunately, the accuracy of knee joint secondary motions is largely limited by the nonrigidity nature of human body segments. Numerous studies have investigated the characteristics of soft tissue movement. Utilizing these characteristics, we may improve the accuracy of knee joint motion measurement. An optimizer was developed by incorporating the soft tissue movement patterns at special bony landmarks into constraint functions. Bony landmark constraints were assigned to the skin markers at femur epicondyles, tibial plateau edges, and tibial tuberosity in a motion analysis algorithm by limiting their allowed position space relative to the underlying bone. The rotation matrix was represented by quaternion, and the constrained optimization problem was solved by Fletcher's version of the Levenberg-Marquardt optimization technique. The algorithm was validated by using motion data from both skin-based markers and bone-mounted markers attached to fresh cadavers. By comparing the results with the ground truth bone motion generated from the bone-mounted markers, the new algorithm had a significantly higher accuracy (root-mean-square (RMS) error: 0.7 ± 0.1 deg in axial rotation and 0.4 ± 0.1 deg in varus-valgus) in estimating the knee joint secondary rotations than algorithms without bony landmark constraints (RMS error: 1.7 ± 0.4 deg in axial rotation and 0.7 ± 0.1 deg in varus-valgus). Also, it predicts a more accurate medial-lateral translation (RMS error: 0.4 ± 0.1 mm) than the conventional techniques (RMS error: 1.2 ± 0.2 mm). The new algorithm, using bony landmark constrains, estimates more accurate secondary rotations and medial-lateral translation of the underlying bone. PMID:21142329
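
    As a rough sketch of the kind of constrained pose fitting described above (not the authors' implementation), the snippet below fits a unit-quaternion rotation plus translation to marker data with a soft penalty that keeps hypothetical "bony landmark" markers within an allowed distance of their measured positions. It uses SciPy's general least-squares solver in place of the Fletcher Levenberg-Marquardt variant named in the abstract; all data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def residuals(params, local_pts, measured_pts, landmark_idx, tol_mm):
        q = params[:4] / np.linalg.norm(params[:4])      # unit quaternion (x, y, z, w)
        t = params[4:]
        predicted = Rotation.from_quat(q).apply(local_pts) + t
        res = (predicted - measured_pts).ravel()
        # Soft bony-landmark constraint: penalize excursions beyond tol_mm
        excess = np.linalg.norm(predicted[landmark_idx] - measured_pts[landmark_idx],
                                axis=1) - tol_mm
        return np.concatenate([res, 10.0 * np.maximum(excess, 0.0)])

    # Synthetic marker coordinates in the bone frame and in the lab frame
    rng = np.random.default_rng(1)
    local_pts = rng.uniform(-50, 50, size=(6, 3))
    true_R = Rotation.from_euler("xyz", [5.0, -3.0, 10.0], degrees=True)
    measured = true_R.apply(local_pts) + [2.0, 1.0, -4.0] + rng.normal(0, 0.5, (6, 3))

    x0 = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])   # identity pose
    fit = least_squares(residuals, x0, args=(local_pts, measured, [0, 1], 3.0))
    q_hat = fit.x[:4] / np.linalg.norm(fit.x[:4])
    print(Rotation.from_quat(q_hat).as_euler("xyz", degrees=True))  # ≈ [5, -3, 10]
    ```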

  7. METACOGNITIVE SCAFFOLDS IMPROVE SELF-JUDGMENTS OF ACCURACY IN A MEDICAL INTELLIGENT TUTORING SYSTEM

    PubMed Central

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S.

    2013-01-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were selected from the domain of Nodular and Diffuse Dermatitides. Both groups worked with a version of SlideTutor that provided immediate feedback on their actions for two hours before proceeding to solve cases in either the Considering Alternatives or Playback condition. No immediate feedback was provided on actions performed by participants in the scaffolding mode. Measurements included learning gains (pre-test and post-test), as well as metacognitive performance, including Goodman-Kruskal Gamma correlation, bias, and discrimination. Results showed that participants in both conditions improved significantly in terms of their diagnostic scores from pre-test to post-test. More importantly, participants in the Considering Alternatives condition outperformed those in the Playback condition in the accuracy of their confidence judgments and the discrimination of the correctness of their assertions while solving cases. The results suggested that presenting participants with their diagnostic decision paths and highlighting correct and incorrect paths helps them to become more metacognitively accurate in their confidence judgments. PMID:24532850
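
    For readers unfamiliar with the metacognitive metrics named above, the Goodman-Kruskal gamma is a rank correlation between confidence ratings and correctness based on concordant and discordant pairs. A generic illustration follows (not the study's analysis code); the ratings are invented.

    ```python
    from itertools import combinations

    def goodman_kruskal_gamma(confidence, correct):
        """Gamma = (C - D) / (C + D) over all pairs, ignoring ties."""
        concordant = discordant = 0
        for (c1, k1), (c2, k2) in combinations(zip(confidence, correct), 2):
            s = (c1 - c2) * (k1 - k2)
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
        return (concordant - discordant) / (concordant + discordant)

    confidence = [1, 3, 2, 5, 4, 2, 5]   # self-judgments on a 1-5 scale
    correct = [0, 1, 1, 1, 0, 1, 1]      # correctness of each diagnosis
    print(goodman_kruskal_gamma(confidence, correct))   # 0.4
    ```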

  8. Triarylmethyl Labels: Toward Improving the Accuracy of EPR Nanoscale Distance Measurements in DNAs.

    PubMed

    Shevelev, Georgiy Yu; Krumkacheva, Olesya A; Lomzov, Alexander A; Kuzhelev, Andrey A; Trukhin, Dmitry V; Rogozhnikova, Olga Yu; Tormyshev, Victor M; Pyshnyi, Dmitrii V; Fedin, Matvey V; Bagryanskaya, Elena G

    2015-10-29

    Triarylmethyl (trityl, TAM) based spin labels represent a promising alternative to nitroxides for EPR distance measurements in biomolecules. Herewith, we report synthesis and comparative study of series of model DNA duplexes, 5'-spin-labeled with TAMs and nitroxides. We have found that the accuracy (width) of distance distributions obtained by double electron-electron resonance (DEER/PELDOR) strongly depends on the type of radical. Replacement of both nitroxides by TAMs in the same spin-labeled duplex allows narrowing of the distance distributions by a factor of 3. Replacement of one nitroxide by TAM (orthogonal labeling) leads to a less pronounced narrowing but at the same time gains sensitivity in DEER experiment due to efficient pumping on the narrow EPR line of TAM. Distance distributions in nitroxide/nitroxide pairs are influenced by the structure of the linker: the use of a short amine-based linker improves the accuracy by a factor of 2. At the same time, a negligible dependence on the linker length is found for the distribution width in TAM/TAM pairs. Molecular dynamics calculations indicate greater conformational disorder of nitroxide labels compared to TAM ones, thus rationalizing the experimentally observed trends. Thereby, we conclude that double spin-labeling using TAMs allows obtaining narrower spin-spin distance distributions and potentially more precise distances between labeling sites compared to traditional nitroxides. PMID:26011022

  9. Improving the accuracy of skin elasticity measurement by using Q-parameters in Cutometer.

    PubMed

    Qu, Di; Seehra, G Paul

    2016-01-01

    The skin elasticity parameters (Ue, Uv, Uf, Ur, Ua, and R0 through R9) in the Cutometer are widely used for in vivo measurement of skin elasticity. Their accuracy, however, is impaired by the inadequacy of the definition of a key parameter, the time point of 0.1 s, which separates the elastic and viscoelastic responses of human skin. This study shows why an inflection point (t(IP)) should be calculated from each individual response curve to define skin elasticity, and how the Q-parameters are defined in the Cutometer. By analyzing the strain versus time curves of some pure elastic standards and of a population of 746 human volunteers, a method of determining the t(IP) from each mode 1 response curve was established. The results showed a wide distribution of this parameter ranging from 0.11 to 0.19 s, demonstrating that the current single-valued empirical parameter of 0.1 s was not adequate to represent this property of skin. A set of area-based skin viscoelastic parameters were also defined. The biological elasticity thus obtained correlated well with the study volunteers' chronological age which was statistically significant. We conclude that the Q-parameters are more accurate than the U and R parameters and should be used to improve measurement accuracy of human skin elasticity. PMID:27319059

  10. Does an Adolescent’s Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    PubMed Central

    Kerr, Deborah A.; Wright, Janine L.; Dhaliwal, Satvinder S.; Boushey, Carol J.

    2015-01-01

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents’ accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years and lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05 quick list; p < 0.01 forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents’ accuracy to recall food items declined with a second 24-h recall when repeated over two non-consecutive days. PMID:25984743

  11. Does an Adolescent's Accuracy of Recall Improve with a Second 24-h Dietary Recall?

    PubMed

    Kerr, Deborah A; Wright, Janine L; Dhaliwal, Satvinder S; Boushey, Carol J

    2015-05-01

    The multiple-pass 24-h dietary recall is used in most national dietary surveys. Our purpose was to assess if adolescents' accuracy of recall improved when a 5-step multiple-pass 24-h recall was repeated. Participants (n = 24) were Chinese-American youths aged between 11 and 15 years and lived in a supervised environment as part of a metabolic feeding study. The 24-h recalls were conducted on two occasions during the first five days of the study. The four steps (quick list; forgotten foods; time and eating occasion; detailed description of the food/beverage) of the 24-h recall were assessed for matches by category. Differences were observed in the matching for the time and occasion step (p < 0.01), detailed description (p < 0.05) and portion size matching (p < 0.05). Omission rates were higher for the second recall (p < 0.05 quick list; p < 0.01 forgotten foods). The adolescents over-estimated energy intake on the first (11.3% ± 22.5%; p < 0.05) and second recall (10.1% ± 20.8%) compared with the known food and beverage items. These results suggest that the adolescents' accuracy to recall food items declined with a second 24-h recall when repeated over two non-consecutive days. PMID:25984743

  12. Does routine repeat testing of critical laboratory values improve their accuracy?

    PubMed Central

    Baradaran Motie, Pooya; Zare-Mirzaie, Ali; Shayanfar, Nasrin; Kadivar, Maryam

    2015-01-01

    Background: Routine repeat testing of critical laboratory values is very common these days to increase their accuracy and to avoid reporting false or infeasible results. We investigated whether repeat testing of critical laboratory values provides any benefit. Methods: We examined 2233 repeated critical laboratory values in 13 different hematology and chemistry tests including: hemoglobin, white blood cell, platelet, international normalized ratio, partial thromboplastin time, glucose, potassium, sodium, phosphorus, magnesium, calcium, total bilirubin and direct bilirubin. The absolute difference and the percentage of change between the two tests for each critical value were calculated and then compared with the College of American Pathologists/Clinical Laboratory Improvement Amendments allowable error. Results: Repeat testing yielded results that were within the allowable error on 2213 of 2233 specimens (99.1%). There was only one outlier (0.2%) in the white blood cell test category, 9 (2.9%) in the platelet test category, 5 (4%) in the partial thromboplastin time test category, 5 (4.8%) in the international normalized ratio test category and none in other test categories. Conclusion: Routine repeat testing of critical hemoglobin, white blood cell, platelet, international normalized ratio, partial thromboplastin time, glucose, potassium, sodium, phosphorus, magnesium, calcium, total bilirubin and direct bilirubin results does not provide any benefit in increasing their accuracy. PMID:26034729
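
    As a schematic of the comparison described in the Methods above (not the study's code), a repeated critical value can be flagged when the percentage change between the two runs exceeds an allowable-error limit; the limits used here are placeholders, not the actual CAP/CLIA table.

    ```python
    # Hypothetical allowable total error limits, expressed as percentages
    ALLOWABLE_PERCENT = {"glucose": 10.0, "hemoglobin": 7.0, "potassium": 12.0}

    def repeat_is_outlier(test, first, repeat):
        percent_change = abs(repeat - first) / first * 100.0
        return percent_change > ALLOWABLE_PERCENT[test]

    print(repeat_is_outlier("glucose", first=40, repeat=43))   # 7.5% change -> False
    print(repeat_is_outlier("glucose", first=40, repeat=52))   # 30% change  -> True
    ```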

  13. Technical Highlight: NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools

    SciTech Connect

    Ridouane, E.H.

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes.

  14. A Bayesian statistical model for hybrid metrology to improve measurement accuracy

    NASA Astrophysics Data System (ADS)

    Silver, R. M.; Zhang, N. F.; Barnes, B. M.; Qin, J.; Zhou, H.; Dixson, R.

    2011-05-01

    We present a method to combine measurements from different techniques that reduces uncertainties and can improve measurement throughput. The approach directly integrates the measurement analysis of multiple techniques that can include different configurations or platforms. This approach has immediate application when performing model-based optical critical dimension (OCD) measurements. When modeling optical measurements, a library of curves is assembled through the simulation of a multi-dimensional parameter space. Parametric correlation and measurement noise lead to measurement uncertainty in the fitting process with fundamental limitations resulting from the parametric correlations. A strategy to decouple parametric correlation and reduce measurement uncertainties is described. We develop the rigorous underlying Bayesian statistical model and apply this methodology to OCD metrology. We then introduce an approach to damp the regression process to achieve more stable and rapid regression fitting. These methods that use a priori information are shown to reduce measurement uncertainty and improve throughput while also providing an improved foundation for comprehensive reference metrology.
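
    The core idea of combining measurements from different techniques can be illustrated, in highly simplified form, by an inverse-variance (Gaussian) combination of two estimates of the same dimension. This is only a sketch of the concept, not the rigorous Bayesian model developed in the paper; the numbers are invented.

    ```python
    def combine(x1, u1, x2, u2):
        """Inverse-variance weighted estimate and its standard uncertainty."""
        w1, w2 = 1.0 / u1**2, 1.0 / u2**2
        x = (w1 * x1 + w2 * x2) / (w1 + w2)
        u = (w1 + w2) ** -0.5
        return x, u

    # e.g. an OCD linewidth of 45.2 nm ± 1.5 nm combined with a reference
    # measurement of 44.1 nm ± 0.6 nm
    print(combine(45.2, 1.5, 44.1, 0.6))   # ≈ (44.25 nm, 0.56 nm)
    ```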

  15. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  16. The necessity of and strategies for improving confidence in the accuracy of western blots

    PubMed Central

    Ghosh, Rajeshwary; Gilda, Jennifer E.; Gomes, Aldrin V.

    2016-01-01

    Summary Western blotting is one of the most commonly used laboratory techniques for identifying proteins and semi-quantifying protein amounts, however, several recent findings suggest that western blots may not be as reliable as previously assumed. This is not surprising since many labs are unaware of the limitations of western blotting. In this manuscript we review essential strategies for improving confidence in the accuracy of western blots. These strategies include selecting the best normalization standard, proper sample preparation, determining the linear range for antibodies and protein stains relevant to the sample of interest, confirming the quality of the primary antibody, preventing signal saturation and accurately quantifying the signal intensity of the target protein. Although western blotting is a powerful and indispensable scientific technique that can be used to accurately quantify relative protein levels, it is necessary that proper experimental techniques and strategies are employed. PMID:25059473

  17. The necessity of and strategies for improving confidence in the accuracy of western blots.

    PubMed

    Ghosh, Rajeshwary; Gilda, Jennifer E; Gomes, Aldrin V

    2014-10-01

    Western blotting is one of the most commonly used laboratory techniques for identifying proteins and semi-quantifying protein amounts; however, several recent findings suggest that western blots may not be as reliable as previously assumed. This is not surprising since many labs are unaware of the limitations of western blotting. In this manuscript, we review essential strategies for improving confidence in the accuracy of western blots. These strategies include selecting the best normalization standard, proper sample preparation, determining the linear range for antibodies and protein stains relevant to the sample of interest, confirming the quality of the primary antibody, preventing signal saturation and accurately quantifying the signal intensity of the target protein. Although western blotting is a powerful and indispensable scientific technique that can be used to accurately quantify relative protein levels, it is necessary that proper experimental techniques and strategies are employed. PMID:25059473

  18. Speed and accuracy improvements in FLAASH atmospheric correction of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael W.; Berk, Alexander; Bernstein, Lawrence S.; Lee, Jamine; Fox, Marsha

    2012-11-01

    Remotely sensed spectral imagery of the earth's surface can be used to fullest advantage when the influence of the atmosphere has been removed and the measurements are reduced to units of reflectance. Here, we provide a comprehensive summary of the latest version of the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes atmospheric correction algorithm. We also report some new code improvements for speed and accuracy. These include the re-working of the original algorithm in C-language code parallelized with message passing interface and containing a new radiative transfer look-up table option, which replaces executions of the MODTRAN model. With computation times now as low as ~10 s per image per computer processor, automated, real-time, on-board atmospheric correction of hyper- and multi-spectral imagery is within reach.

  19. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage.

    PubMed

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D-long range stage during its design phase. The nanopositioning platform NanoPla is here presented. Its specifications (e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated after a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in X-, Y- and Z-axis, respectively. PMID:26761014

  20. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique to improve the measurement accuracy of a 2D-long range stage during its design phase. The nanopositioning platform NanoPla is here presented. Its specifications (e.g., an XY-travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid body behavior, which is demonstrated after a finite element analysis verification. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in X-, Y- and Z-axis, respectively. PMID:26761014
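
    The error budget methodology referred to in the two records above ultimately reduces to combining the estimated contributions of independent sources, commonly as a root sum of squares. The sketch below is purely illustrative; the contributor names and magnitudes are placeholders rather than the NanoPla values.

    ```python
    import math

    # Hypothetical uncertainty contributors for one axis, in nanometres
    contributors_nm = {
        "interferometer resolution": 2.0,
        "plane mirror flatness": 15.0,
        "thermal drift (±0.1 °C)": 25.0,
        "alignment / Abbe error": 20.0,
    }

    combined = math.sqrt(sum(v**2 for v in contributors_nm.values()))
    print(f"combined positioning error ≈ {combined:.0f} nm")   # ≈ 35 nm
    ```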

  1. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    NASA Astrophysics Data System (ADS)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18 % were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous experimentally and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a

  2. Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.

    PubMed

    Pearce, John A

    2015-12-01

    The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. Additionally, evidence is presented
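
    The modification described above can be written compactly: the usual single-step Arrhenius survival decay is simply postponed by a temperature-dependent delay. The snippet below is a minimal sketch of that form; the frequency factor, activation energy, and delay law are hypothetical placeholders to be fitted from the constant-rate region of measured survival curves, not values from the paper.

    ```python
    import math

    R = 8.314                # gas constant, J/(mol K)
    A, Ea = 1.0e70, 4.5e5    # hypothetical frequency factor (1/s) and activation energy (J/mol)

    def surviving_fraction(t_s, temp_c, delay_fn):
        """S(t) = 1 for t <= t_d(T), else exp(-k(T) * (t - t_d(T)))."""
        k = A * math.exp(-Ea / (R * (temp_c + 273.15)))
        t_d = delay_fn(temp_c)
        return 1.0 if t_s <= t_d else math.exp(-k * (t_s - t_d))

    # Toy shoulder: the delay shrinks as temperature rises and vanishes by 50 °C
    delay = lambda temp_c: max(0.0, 60.0 * (50.0 - temp_c))

    print(surviving_fraction(600, 45.0, delay))    # just past the delay, ≈ 0.96
    print(surviving_fraction(1800, 45.0, delay))   # well past the delay, ≈ 0.82
    ```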

  3. Evaluating an educational intervention to improve the accuracy of death certification among trainees from various specialties

    PubMed Central

    Villar, Jesús; Pérez-Méndez, Lina

    2007-01-01

    Background The inaccuracy of death certification can lead to the misallocation of resources in health care programs and research. We evaluated the rate of errors in the completion of death certificates among medical residents from various specialties, before and after an educational intervention which was designed to improve the accuracy in the certification of the cause of death. Methods A 90-min seminar was delivered to seven mixed groups of medical trainees (n = 166) from several health care institutions in Spain. Physicians were asked to read and anonymously complete the same case scenario of death certification before and after the seminar. We compared the rates of errors and the impact of the educational intervention before and after the seminar. Results A total of 332 death certificates (166 completed before and 166 completed after the intervention) were audited. Death certificates were completed with errors by 71.1% of the physicians before the educational intervention. Following the seminar, the proportion of death certificates with errors decreased to 9% (p < 0.0001). The most common error in the completion of death certificates was the listing of the mechanism of death instead of the cause of death. Before the seminar, 56.8% listed respiratory or cardiac arrest as the immediate cause of death. None of the participants listed any mechanism of death after the educational intervention (p < 0.0001). Conclusion Major errors in the completion of the correct cause of death on death certificates are common among medical residents. A simple educational intervention can dramatically improve the accuracy in the completion of death certificates by physicians. PMID:18005414

  4. Improving the accuracy of flood forecasting with transpositions of ensemble NWP rainfall fields considering orographic effects

    NASA Astrophysics Data System (ADS)

    Yu, Wansik; Nakakita, Eiichi; Kim, Sunmin; Yamaguchi, Kosei

    2016-08-01

    The use of meteorological ensembles to produce sets of hydrological predictions increased the capability to issue flood warnings. However, the spatial scale of the hydrological domain is still much finer than that of the meteorological model, and NWP models have challenges with displacement. The main objective of this study is to enhance the transposition method proposed in Yu et al. (2014) and to suggest a post-processing ensemble flood forecasting method for the real-time updating and the accuracy improvement of flood forecasts that considers the separation of the orographic rainfall and the correction of misplaced rain distributions using additional ensemble information through the transposition of rain distributions. In the first step of the proposed method, ensemble forecast rainfalls from a numerical weather prediction (NWP) model are separated into orographic and non-orographic rainfall fields using atmospheric variables and the extraction of the topographic effect. Then the non-orographic rainfall fields are examined by the transposition scheme to produce additional ensemble information, and new ensemble NWP rainfall fields are calculated by recombining the transposition results of the non-orographic rain fields with the separated orographic rainfall fields to generate place-corrected ensemble information. Then, the additional ensemble information is applied to a hydrologic model for post-flood forecasting with a 6-h interval. The newly proposed method has a clear advantage in improving the accuracy of the mean value of the ensemble flood forecasts. Our study is carried out and verified using the largest flood event, caused by typhoon 'Talas' in 2011, over two catchments, the Futatsuno (356.1 km2) and Nanairo (182.1 km2) dam catchments of the Shingu river basin (2360 km2), located in the Kii peninsula, Japan.

  5. Improvements in the accuracy and the repeatability of long trace profiler measurements

    SciTech Connect

    Takacs, P.Z.; Church, E.L.; Bresloff, C.J.; Assoufid, L.

    1999-09-01

    Modifications of the long trace profiler at the Advanced Photon Source at Argonne National Laboratory have significantly improved its accuracy and repeatability for measuring the figure of large flat and long-radius mirrors. Use of a Dove prism in the reference beam path corrects phasing problems between mechanical errors and thermally induced system errors. A single reference correction now completely removes both of these error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within ±0.1 °C over a 24-h period and has significantly improved the stability and the repeatability of the measurements. Long-radius surface curvatures can now be measured absolutely with a high degree of confidence. These improved capabilities are illustrated with a series of measurements of a 500-mm-long mirror with a 5-km radius of curvature. The standard deviation in the average of ten slope profile scans is 0.3 μrad, and the corresponding standard deviation in the height error is 4.6 nm. © 1999 Optical Society of America

  6. Improvements in the accuracy and the repeatability of long trace profiler measurements.

    PubMed

    Takacs, P Z; Church, E L; Bresloff, C J; Assoufid, L

    1999-09-01

    Modifications of the long trace profiler at the Advanced Photon Source at Argonne National Laboratory have significantly improved its accuracy and repeatability for measuring the figure of large flat and long-radius mirrors. Use of a Dove prism in the reference beam path corrects phasing problems between mechanical errors and thermally induced system errors. A single reference correction now completely removes both of these error signals from the measured surface profile. The addition of a precision air conditioner keeps the temperature in the metrology enclosure constant to within +/-0.1 degrees C over a 24-h period and has significantly improved the stability and the repeatability of the measurements. Long-radius surface curvatures can now be measured absolutely with a high degree of confidence. These improved capabilities are illustrated with a series of measurements of a 500-mm-long mirror with a 5-km radius of curvature. The standard deviation in the average of ten slope profile scans is 0.3 microrad, and the corresponding standard deviation in the height error is 4.6 nm. PMID:18324056

  7. The Effects of Contingent Praise Upon the Achievement of a Deficit Junior High School Student in Oral Reading Accuracy in Probes Above Her Functional Grade Level.

    ERIC Educational Resources Information Center

    Proe, Susan; Wade, David

    Evaluated was the effectiveness of three training procedures (imitation training, imitation training with praise, and imitation training with points for an art supply contingency) in improving the oral reading accuracy and reading comprehension of a 13-year-old girl whose functional reading was at the second grade level. The procedures were…

  8. Accuracy of Suture Passage During Arthroscopic Remplissage—What Anatomic Landmarks Can Improve It?

    PubMed Central

    Garcia, Grant H.; Degen, Ryan M.; Liu, Joseph N.; Kahlenberg, Cynthia A.; Dines, Joshua S.

    2016-01-01

    Background: Recent data suggest that inaccurate suture passage during remplissage may contribute to a loss of external rotation, with the potential to cause posterior shoulder pain because of the proximity to the musculotendinous junction. Purpose: To evaluate the accuracy of suture passage during remplissage and identify surface landmarks to improve accuracy. Study Design: Descriptive laboratory study. Methods: Arthroscopic remplissage was performed on 6 cadaveric shoulder specimens. Two single-loaded suture anchors were used for each remplissage. After suture passage, position was recorded in reference to the posterolateral acromion (PLA), with entry perpendicular to the humeral surface. After these measurements, the location of posterior cuff penetration was identified by careful surgical dissection. Results: Twenty-four sutures were passed in 6 specimens: 6 sutures (25.0%) were correctly passed through the infraspinatus tendon, 12 (50%) were through the infraspinatus muscle or musculotendinous junction (MTJ), and 6 (25%) were through the teres minor. Sutures passing through the infraspinatus were on average 25 ± 5.4 mm inferior to the PLA, while sutures passing through the teres minor were on average 35.8 ± 5.7 mm inferior to the PLA. There was an odds ratio of 25 (95% CI, 2.1-298.3; P < .001) that the suture would be through the infraspinatus if the passes were less than 3 cm inferior to the PLA. Sutures passing through muscle and the MTJ were significantly more medial than those passing through tendon, measuring on average 8.1 ± 5.1 mm lateral to the PLA compared with 14.5 ± 5.5 mm (P < .02). If suture passes were greater than 1 cm lateral to the PLA, it was significantly more likely to be in tendon (P = .013). Conclusion: We found remplissage suture passage was inaccurate, with only 25% of sutures penetrating the infraspinatus tendon. Passing sutures 1 cm lateral and within 3 cm inferior of the PLA improves the odds of successful infraspinatus tenodesis

  9. Improvement in Interobserver Accuracy in Delineation of the Lumpectomy Cavity Using Fiducial Markers

    SciTech Connect

    Shaikh, Talha; Chen Ting; Khan, Atif; Yue, Ning J.

    2010-11-15

    Purpose: To determine whether the presence of gold fiducial markers would improve the inter- and intraphysician accuracy in the delineation of the surgical cavity compared with a matched group of patients who did not receive gold fiducial markers in the setting of accelerated partial-breast irradiation (APBI). Methods and Materials: Planning CT images of 22 lumpectomy cavities were reviewed in a cohort of 22 patients; 11 patients received four to six gold fiducial markers placed at the time of surgery. Three physicians categorized the seroma cavity according to cavity visualization score criteria and delineated each of the 22 seroma cavities and the clinical target volume. Distance between centers of mass, percentage overlap, and average surface distance for all patients were assessed. Results: The mean seroma volume was 36.9 cm³ and 34.2 cm³ for fiducial patients and non-fiducial patients, respectively (p = ns). Fiducial markers improved the mean cavity visualization score, to 3.6 ± 1.0 from 2.5 ± 1.3 (p < 0.05). The mean distance between centers of mass, average surface distance, and percentage overlap for the seroma and clinical target volume were significantly improved in the fiducial marker patients as compared with the non-fiducial marker patients (p < 0.001). Conclusions: The placement of gold fiducial markers at the time of lumpectomy improves interphysician identification and delineation of the seroma cavity and clinical target volume. This has implications in radiotherapy treatment planning for accelerated partial-breast irradiation and for boost after whole-breast irradiation.

  10. In silico instrumental response correction improves precision of label-free proteomics and accuracy of proteomics-based predictive models.

    PubMed

    Lyutvinskiy, Yaroslav; Yang, Hongqian; Rutishauser, Dorothea; Zubarev, Roman A

    2013-08-01

    In the analysis of proteome changes arising during the early stages of a biological process (e.g. disease or drug treatment) or from the indirect influence of an important factor, the biological variations of interest are often small (∼10%). The corresponding requirements for the precision of proteomics analysis are high, and this often poses a challenge, especially when employing label-free quantification. One of the main contributors to the inaccuracy of label-free proteomics experiments is the variability of the instrumental response during LC-MS/MS runs. Such variability might include fluctuations in the electrospray current, transmission efficiency from the air-vacuum interface to the detector, and detection sensitivity. We have developed an in silico post-processing method of reducing these variations, and have thus significantly improved the precision of label-free proteomics analysis. For abundant blood plasma proteins, a coefficient of variation of approximately 1% was achieved, which allowed for sex differentiation in pooled samples and ≈90% accurate differentiation of individual samples by means of a single LC-MS/MS analysis. This method improves the precision of measurements and increases the accuracy of predictive models based on the measurements. The post-acquisition nature of the correction technique and its generality promise its widespread application in LC-MS/MS-based methods such as proteomics and metabolomics. PMID:23589346

  11. Improving diagnostic accuracy using EHR in emergency departments: A simulation-based study.

    PubMed

    Ben-Assuli, Ofir; Sagi, Doron; Leshno, Moshe; Ironi, Avinoah; Ziv, Amitai

    2015-06-01

    It is widely believed that Electronic Health Records (EHR) improve medical decision-making by enabling medical staff to access medical information stored in the system. It remains unclear, however, whether EHR indeed fulfills this claim under the severe time constraints of Emergency Departments (EDs). We assessed whether accessing EHR in an ED actually improves decision-making by clinicians. A simulated ED environment was created at the Israel Center for Medical Simulation (MSR). Four different actors were trained to simulate four specific complaints and behavior and 'consulted' 26 volunteer ED physicians. Each physician treated half of the cases (randomly) with access to EHR, and their medical decisions were compared to those where the physicians had no access to EHR. Comparison of diagnostic accuracy with and without access showed that accessing the EHR led to an increase in the quality of the clinical decisions. Physicians accessing EHR were more highly informed and thus made more accurate decisions. The percentage of correct diagnoses was higher and these physicians were more confident in their diagnoses and made their decisions faster. PMID:25817921

  12. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument-response-function (IRF) - which cause temporal broadening of the measured temporal-point-spread-function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved to recover the distribution of time-of-flights (DTOFs) of the detected photons. The microscopic Beer-Lambert law was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduced its sensitivity to absorption changes occurring in deep layers. We also demonstrated that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
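
    As a pared-down illustration of the microscopic Beer-Lambert step applied to a single time window (not the authors' processing chain), the absorption change can be obtained from the photon counts before and after the change and the pathlength implied by the photon time of flight; all names and numbers below are assumptions.

    ```python
    import numpy as np

    def delta_mua_per_mm(counts_before, counts_after, t_ns, refractive_index=1.4):
        """Microscopic Beer-Lambert: delta_mua = ln(N0/N) / (v * t)."""
        v_mm_per_ns = 299.792458 / refractive_index    # speed of light in tissue
        pathlength_mm = v_mm_per_ns * t_ns             # pathlength for photons detected at time t
        return np.log(counts_before / counts_after) / pathlength_mm

    # A late 2-ns window sees a small drop in counts after an absorption increase
    print(delta_mua_per_mm(counts_before=1.0e4, counts_after=9.2e3, t_ns=2.0))  # ≈ 1.9e-4 per mm
    ```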

  13. Haptic Guidance Needs to Be Intuitive Not Just Informative to Improve Human Motor Accuracy

    PubMed Central

    Mugge, Winfred; Kuling, Irene A.; Brenner, Eli; Smeets, Jeroen B. J.

    2016-01-01

    Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists one to make the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects made reaches to visible targets without time constraints. They did so in a no-guidance condition, and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects’ errors in reproducing a prior position to the same extent as do forces rotated by 90 degrees or 180 degrees, as it might because the forces provide the same information in all three cases. Without vision of the arm, both the accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, the movements tended to be faster when guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, while non-intuitively presented information did not lead to any improvements and seemed to be ignored even in our simple paradigm with static targets and no time constraints. PMID:26982481

  14. Improving the thermal, radial, and temporal accuracy of the analytical ultracentrifuge through external references.

    PubMed

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H; Lewis, Marc S; Brautigam, Chad A; Schuck, Peter; Zhao, Huaying

    2013-09-01

    Sedimentation velocity (SV) is a method based on first principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton temperature logger to directly measure the temperature of a spinning rotor and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration that were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., Anal. Biochem., 437 (2013) 104-108), and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from 11 instruments displayed a significantly reduced standard deviation of approximately 0.7%. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  15. A method for improved accuracy in three dimensions for determining wheel/rail contact points

    NASA Astrophysics Data System (ADS)

    Yang, Xinwen; Gu, Shaojie; Zhou, Shunhua; Zhou, Yu; Lian, Songliang

    2015-11-01

    Searching for the contact points between wheels and rails is important because these points represent the points at which contact forces are exerted. In order to obtain an accurate contact point and an in-depth description of the wheel/rail contact behaviour on a curved track or in a turnout, a method with improved accuracy in three dimensions is proposed to determine the contact points and the contact patches between the wheel and the rail, considering the effect of the yaw angle and the roll angle on the motion of the wheel set. The proposed method, which requires no curve fitting of the wheel and rail profiles, can accurately, directly, and comprehensively determine the contact interface distances between the wheel and the rail. A range iteration algorithm is used to improve the computational efficiency and reduce the calculation required. The method is applied to the analysis of the contact between CHINA (CHN) 75 kg/m rails and the wearing-type-tread wheel sets of China's freight cars. The results of the proposed method are consistent with those of Kalker's program CONTACT, and the maximum deviation in the wheel/rail contact patch area between the two methods is approximately 5%. The proposed method can also be used to investigate static wheel/rail contact. Several wheel/rail contact points and contact patch distributions, including those for non-worn and worn wheel and rail profiles, are discussed and assessed.
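    As a rough illustration of the geometric search the abstract describes, the sketch below rotates a discretized wheel surface by the yaw and roll angles and picks the wheel/rail point pair with the smallest vertical clearance. The brute-force nearest-neighbour search stands in for the paper's range iteration algorithm, and the angle conventions and point-cloud inputs are assumptions.

```python
import numpy as np

def rotation(yaw, roll):
    """Rotation matrix for a wheelset yaw (about z) followed by roll (about x);
    the angle conventions here are illustrative assumptions."""
    cy, sy, cr, sr = np.cos(yaw), np.sin(yaw), np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rx @ Rz

def contact_point(wheel_pts, rail_pts, yaw, roll, drop=0.0):
    """Locate the wheel/rail contact point as the pair of surface points with
    the smallest vertical separation after the wheel is yawed/rolled and
    lowered by `drop`.  wheel_pts, rail_pts: (N,3) and (M,3) point clouds."""
    w = wheel_pts @ rotation(yaw, roll).T
    w[:, 2] -= drop
    # vertical clearance between every wheel point and the horizontally
    # nearest rail point (brute force; the paper's range iteration algorithm
    # prunes this search)
    d_xy = np.linalg.norm(w[:, None, :2] - rail_pts[None, :, :2], axis=2)
    nearest = np.argmin(d_xy, axis=1)
    clearance = w[:, 2] - rail_pts[nearest, 2]
    i = np.argmin(clearance)
    return w[i], rail_pts[nearest[i]], clearance[i]
```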

  16. Improving the Thermal, Radial and Temporal Accuracy of the Analytical Ultracentrifuge through External References

    PubMed Central

    Ghirlando, Rodolfo; Balbo, Andrea; Piszczek, Grzegorz; Brown, Patrick H.; Lewis, Marc S.; Brautigam, Chad A.; Schuck, Peter; Zhao, Huaying

    2013-01-01

    Sedimentation velocity (SV) is a method based on first-principles that provides a precise hydrodynamic characterization of macromolecules in solution. Due to recent improvements in data analysis, the accuracy of experimental SV data emerges as a limiting factor in its interpretation. Our goal was to unravel the sources of experimental error and develop improved calibration procedures. We implemented the use of a Thermochron iButton® temperature logger to directly measure the temperature of a spinning rotor, and detected deviations that can translate into an error of as much as 10% in the sedimentation coefficient. We further designed a precision mask with equidistant markers to correct for instrumental errors in the radial calibration, which were observed to span a range of 8.6%. The need for an independent time calibration emerged with use of the current data acquisition software (Zhao et al., doi 10.1016/j.ab.2013.02.011) and we now show that smaller but significant time errors of up to 2% also occur with earlier versions. After application of these calibration corrections, the sedimentation coefficients obtained from eleven instruments displayed a significantly reduced standard deviation of ∼ 0.7 %. This study demonstrates the need for external calibration procedures and regular control experiments with a sedimentation coefficient standard. PMID:23711724

  17. Ensemble-Based Network Aggregation Improves the Accuracy of Gene Network Reconstruction

    PubMed Central

    Xiao, Guanghua; Xie, Yang

    2014-01-01

    Reverse engineering approaches to constructing gene regulatory networks (GRNs) based on genome-wide mRNA expression data have led to significant biological findings, such as the discovery of novel drug targets. However, the reliability of the reconstructed GRNs needs to be improved. Here, we propose an ensemble-based network aggregation approach to improving the accuracy of network topologies constructed from mRNA expression data. To evaluate the performances of different approaches, we created dozens of simulated networks from combinations of gene-set sizes and sample sizes and also tested our methods on three Escherichia coli datasets. We demonstrate that the ensemble-based network aggregation approach can be used to effectively integrate GRNs constructed from different studies – producing more accurate networks. We also apply this approach to building a network from epithelial mesenchymal transition (EMT) signature microarray data and identify hub genes that might be potential drug targets. The R code used to perform all of the analyses is available in an R package entitled “ENA”, accessible on CRAN (http://cran.r-project.org/web/packages/ENA/). PMID:25390635
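    The core idea of ensemble aggregation, combining edge rankings from several reconstructions so that consistently high-ranked edges dominate, can be sketched in a few lines. The snippet below is a rank-product aggregation written from the abstract's description, not the ENA package's actual code; the input score matrices and their orientation (higher score = more confident edge) are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def aggregate_networks(score_matrices):
    """Ensemble aggregation of gene-network predictions: each input is a
    (genes x genes) matrix of edge confidence scores from one reconstruction
    method or dataset.  Edges are converted to ranks per matrix and combined
    with a rank product, so edges consistently ranked high by several methods
    rise to the top (a sketch of the idea, not the ENA implementation)."""
    n = score_matrices[0].shape[0]
    log_rank_sum = np.zeros(n * n)
    for m in score_matrices:
        # higher score gets a better (smaller) rank
        ranks = rankdata(-m.ravel(), method="average")
        log_rank_sum += np.log(ranks)
    rank_product = np.exp(log_rank_sum / len(score_matrices))
    return rank_product.reshape(n, n)   # lower value = stronger consensus edge
```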

  18. Improving the Accuracy of Automatic Detections at Seismic Stations via Machine Learning

    NASA Astrophysics Data System (ADS)

    Riggelsen, Carsten; Ohrnberger, Matthias

    2010-05-01

    We present a Machine Learning approach aimed at improving the accuracy of automatic detections of noise and signal at 3-component seismic stations. Using supervised learning in conjunction with the multivariate framework of Dynamic Bayesian Networks (DBNs), we make use of historical data obtained from the LEB bulletin to train a classifier to capture the intrinsic characteristics of signal and noise patterns appearing in seismic data streams. On a per-station basis this yields generative statistical models that essentially summarize and generalize the information implicitly contained in the LEB, allowing for classification of future and previously unseen seismic data of the same kind. Also, the system provides a numerical value reflecting the classification confidence, potentially aiding the analyst in correcting or identifying events that are non-typical. The system has the potential for being implemented in real time: both feature computation/extraction and classification work on data segments/windows and seismic patterns of varying length, e.g., 12 sec. Various features are considered, including spectral features, polarization information, and statistical moments and moment ratios. All features are derived from a time-frequency-(amplitude) decomposition of the raw waveform data for each component, taking the 6 frequency bands currently in use at IDC into account. These different feature sets give rise to different DBN structures (model-feature scenarios) that probabilistically relate the features to each other depending on empirical observations and physical knowledge available. One week of waveform data is considered for training both the signal and noise classes. The performance of the classifier is measured on a separate test set from the same week of data, but also on a 1-month data set, where 4 weeks of data are distributed over a one-year period. In the system evaluation, both a static approach and a sliding-window approach are taken. Binary classification

  19. Study on Improvement of Accuracy in Inertial Photogrammetry by Combining Images with Inertial Measurement Unit

    NASA Astrophysics Data System (ADS)

    Kawasaki, Hideaki; Anzai, Shojiro; Koizumi, Toshio

    2016-06-01

    Inertial photogrammetry is defined as photogrammetry that involves using a camera on which an inertial measurement unit (IMU) is mounted. In inertial photogrammetry, the position and inclination of a shooting camera are calculated using the IMU. An IMU is characterized by error growth caused by time accumulation, because acceleration is integrated with respect to time. This study examines a procedure to estimate the position of the camera accurately while shooting, using the IMU and structure from motion (SfM) technology, which is applied in many fields such as computer vision. When neither the coordinates of the camera position nor those of the feature points are known, SfM recovers the positional relationship between the camera and the feature points only up to an unknown scale; therefore, the actual length of the positional coordinates is not determined. Even if the actual length of the camera position is unknown, the camera acceleration can be obtained by calculating the second-order derivative of the camera position with respect to the shooting time. The authors had previously determined the actual length by assigning the IMU position to the SfM-calculated position; hence, accuracy decreased because of the error growth that is characteristic of the IMU. In order to solve this problem, a new calculation method is proposed. Using this method, the difference between the IMU-measured acceleration and the camera-derived acceleration is minimized by the method of least squares, yielding the magnification required for calculating the actual dimensions from the camera positions. The actual length can then be calculated by multiplying all the SfM point groups by the obtained magnification factor. This calculation method suppresses the error growth due to time accumulation in the IMU and improves the accuracy of inertial photogrammetry.
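    The proposed least-squares scale recovery reduces to a one-parameter linear fit between the IMU acceleration and the second time derivative of the SfM camera trajectory. A minimal sketch follows, assuming the IMU accelerations have already had gravity removed and been rotated into the SfM frame, and that both signals are synchronized and evenly sampled.

```python
import numpy as np

def sfm_scale_from_imu(p_sfm, a_imu, dt):
    """Estimate the metric scale of an SfM reconstruction from IMU data.
    p_sfm : (N,3) camera positions in arbitrary SfM units, sampled every dt [s]
    a_imu : (N,3) accelerations at the same times in m/s^2 (gravity removed
            and rotated into the SfM frame; both are assumptions of this sketch)
    The second derivative of the scaled positions should match the measured
    acceleration, so the magnification s is a linear least-squares fit."""
    # central-difference second derivative of the SfM trajectory
    a_sfm = (p_sfm[2:] - 2 * p_sfm[1:-1] + p_sfm[:-2]) / dt**2
    a_ref = a_imu[1:-1]
    s = np.sum(a_sfm * a_ref) / np.sum(a_sfm * a_sfm)
    return s   # multiply all SfM point groups by s to obtain metric lengths
```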

  20. Improving the accuracy of MODIS 8-day snow products with in situ temperature and precipitation data

    NASA Astrophysics Data System (ADS)

    Dong, Chunyu; Menzel, Lucas

    2016-03-01

    MODIS snow data are appropriate for a wide range of eco-hydrological studies and applications in the fields of snow-related hazards, early warning systems and water resources management. However, the high spatio-temporal resolution of the remotely sensed data is often biased by snow misclassifications, and cloud cover frequently limits the availability of the MODIS-based snow cover information. In this study, we applied a four-step methodology that aims to optimize the accuracy of MODIS snow data. To reduce the cloud fraction, 8-day MODIS data from both the Aqua and Terra satellites were combined. Neighborhood analysis was applied as well for this purpose, and it also contributed to the retrieval of some omitted snow. Two meteorological filters were then applied, combining information from station-based measurements of minimum ground temperature, precipitation and air temperature; this procedure helped to reduce the overestimation of snow cover. To test this technique, the methodology was applied to the Rhineland-Palatinate region in southwestern Germany (approximately 20,000 km²), where cloud cover is especially high during winter and surface heterogeneity is complex. The results show that mean annual cloud coverage (reference period 2002-2013) of the 8-day MODIS snow maps could be reduced using this methodology from approximately 14% to 4.5%. During the snow season, obstruction by clouds could be reduced to an even higher degree, but still remains at about 11%. Further, the overall snow overestimation declined from 11.0-11.9% (using the original Aqua-Terra data) to 1.0-1.5%. The method is able to improve the overall accuracy of the 8-day MODIS snow product from originally 78% to 89%, and even to 93% during cloud-free periods.
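    Two of the four steps, the Terra/Aqua combination and the minimum-ground-temperature filter, are easy to express directly on classified snow maps. The sketch below uses illustrative class codes and an assumed 2 °C threshold; the actual MODIS product codes, the neighborhood analysis, and the precipitation/air-temperature filter are not reproduced here.

```python
import numpy as np

SNOW, NO_SNOW, CLOUD = 1, 0, 255   # illustrative class codes, not MODIS codes

def combine_and_filter(terra, aqua, t_min_ground, tmin_thresh=2.0):
    """Combine 8-day Terra and Aqua snow maps and apply a simple
    minimum-ground-temperature filter (a sketch of two of the four steps).
    terra, aqua  : 2-D arrays of class codes defined above
    t_min_ground : 2-D array of interpolated minimum ground temperature (degC)"""
    combined = terra.copy()
    # fill Terra cloud pixels with the Aqua observation
    cloud = combined == CLOUD
    combined[cloud] = aqua[cloud]
    # snow is implausible where the minimum ground temperature is clearly
    # above freezing, so reclassify as snow-free (reduces overestimation)
    too_warm = (combined == SNOW) & (t_min_ground > tmin_thresh)
    combined[too_warm] = NO_SNOW
    return combined
```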

  1. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    NASA Astrophysics Data System (ADS)

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small field applications, however, is limited due to the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for the purpose of dosimetry in small megavoltage photon beams with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (μCT) was investigated using four nominally identical small-volume (0.56 cm³) cylindrical ion chambers. Agreement between their measured relative volumes and ionization measurements (within 2%) demonstrated the feasibility of volume determination through μCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise for accurate ion chamber calibration based partially on μCT. The accuracy of relative dose factor measurements in 6 MV stereotactic x-ray fields (5 to 40 mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20 mm). Chamber- and field-size-specific correction factors (CSFQ), which account for perturbation of the secondary electron fluence, were calculated using Monte Carlo simulation methods (BEAM/EGSnrc simulations). These correction factors (e.g., CSFQ = 1.76 for the 2 mm chamber in a 5 mm field) allow for accurate relative dose factor (RDF) measurement when applied to ionization readings under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate. Characterization was based on an adaptation of the

  2. Improving Reliability and Validity of "Achievement via Conformance" Through Computer Applications.

    ERIC Educational Resources Information Center

    Riggs, Donald E.

    This paper describes an experiment conducted in order to improve the reliability and validity of the Achievement via Conformance (AC) scale of the California Psychological Inventory (CPI). The primary goal of AC is to identify those factors of interest and motivation which facilitate achievement in any setting where conformance is positive…

  3. Effective Strategies Urban Superintendents Utilize That Improve the Academic Achievement for African American Males

    ERIC Educational Resources Information Center

    Prioleau, Lushandra

    2013-01-01

    This study examined the effective strategies, resources, and programs urban superintendents utilize to improve the academic achievement for African-American males. This study employed a mixed-methods approach to answer the following research questions regarding urban superintendents and the academic achievement for African-American males: What…

  4. Improving Education Achievement and Attainment in Luxembourg. OECD Economics Department Working Papers, No. 508

    ERIC Educational Resources Information Center

    Carey, David; Ernst, Ekkehard

    2006-01-01

    Improving education achievement in Luxembourg is a priority for strengthening productivity growth and enhancing residents' employment prospects in the private sector, where employers mainly hire cross-border workers. Student achievement in Luxembourg is below the OECD average according to the 2003 OECD PISA study, with the performance gap between…

  5. Improving Student Motivation and Achievement in Mathematics through Teaching to the Multiple Intelligences.

    ERIC Educational Resources Information Center

    Bednar, Janet; Coughlin, Jane; Evans, Elizabeth; Sievers, Theresa

    This action research project described strategies for improving student motivation and achievement in mathematics through multiple intelligences. The targeted population consisted of kindergarten, third, fourth, and fifth grade students located in two major Midwestern cities. Documentation proving low student motivation and achievement in…

  6. Breaking through barriers: using technology to address executive function weaknesses and improve student achievement.

    PubMed

    Schwartz, David M

    2014-01-01

    Assistive technologies provide significant capabilities for improving student achievement. Improved accessibility, cost, and diversity of applications make integration of technology a powerful tool to compensate for executive function weaknesses and deficits and their impact on student performance, learning, and achievement. These tools can be used to compensate for decreased working memory, poor time management, poor planning and organization, poor initiation, and decreased memory. Assistive technology provides mechanisms to assist students with diverse strengths and weaknesses in mastering core curricular concepts. PMID:25010083

  7. Teachers' Perception of Their Principal's Leadership Style and the Effects on Student Achievement in Improving and Non-Improving Schools

    ERIC Educational Resources Information Center

    Hardman, Brenda Kay

    2011-01-01

    Teachers' perceptions of their school leaders influence student achievement in their schools. The extent of this influence is examined in this study. This quantitative study examined teachers' perceptions of the leadership style of their principals as transformational, transactional or passive-avoidant in improving and non-improving schools in…

  8. Efforts to improve the diagnostic accuracy of endoscopic ultrasound-guided fine-needle aspiration for pancreatic tumors

    PubMed Central

    Yamabe, Akane; Irisawa, Atsushi; Bhutani, Manoop S.; Shibukawa, Goro; Fujisawa, Mariko; Sato, Ai; Yoshida, Yoshitsugu; Arakawa, Noriyuki; Ikeda, Tsunehiko; Igarashi, Ryo; Maki, Takumi; Yamamoto, Shogo

    2016-01-01

    Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) is widely used to obtain a definitive diagnosis of pancreatic tumors. Good results have been reported for its diagnostic accuracy, with high sensitivity and specificity of around 90%; however, technological developments and adaptations to improve it still further are currently underway. The endosonographic technique can be improved when several tips and tricks for overcoming the challenges of EUS-FNA are known. This review describes various techniques and equipment for improving the diagnostic accuracy of EUS-FNA. PMID:27503153

  9. Integrating empowerment evaluation and quality improvement to achieve healthcare improvement outcomes

    PubMed Central

    Wandersman, Abraham; Alia, Kassandra Ann; Cook, Brittany; Ramaswamy, Rohit

    2015-01-01

    While the body of evidence-based healthcare interventions grows, the ability of health systems to deliver these interventions effectively and efficiently lags behind. Quality improvement approaches, such as the model for improvement, have demonstrated some success in healthcare but their impact has been lessened by implementation challenges. To help address these challenges, we describe the empowerment evaluation approach that has been developed by programme evaluators and a method for its application (Getting To Outcomes (GTO)). We then describe how GTO can be used to implement healthcare interventions. An illustrative healthcare quality improvement example that compares the model for improvement and the GTO method for reducing hospital admissions through improved diabetes care is described. We conclude with suggestions for integrating GTO and the model for improvement. PMID:26178332

  10. Strategies for Achieving High Sequencing Accuracy for Low Diversity Samples and Avoiding Sample Bleeding Using Illumina Platform

    PubMed Central

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    analysis can be repeated from saved sequencing images using the Long Template Protocol to increase accuracy. PMID:25860802

  11. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    SciTech Connect

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-11-01

    FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

  12. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique.

    PubMed

    Velazquez, L; Castro-Palacio, J C

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms that are based on canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review about ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. Discussed is the compatibility of these results with the continuous character of temperature-driven phase transition when L → +∞. PMID:25871247

  13. Free Form Deformation–Based Image Registration Improves Accuracy of Traction Force Microscopy

    PubMed Central

    Jorge-Peñas, Alvaro; Izquierdo-Alvarez, Alicia; Aguilar-Cuenca, Rocio; Vicente-Manzanares, Miguel; Garcia-Aznar, José Manuel; Van Oosterwyck, Hans; de-Juan-Pardo, Elena M.; Ortiz-de-Solorzano, Carlos; Muñoz-Barrutia, Arrate

    2015-01-01

    Traction Force Microscopy (TFM) is a widespread method used to recover cellular tractions from the deformation that they cause in their surrounding substrate. Particle Image Velocimetry (PIV) is commonly used to quantify the substrate’s deformations, due to its simplicity and efficiency. However, PIV relies on a block-matching scheme that easily underestimates the deformations. This is especially relevant in the case of large, locally non-uniform deformations as those usually found in the vicinity of a cell’s adhesions to the substrate. To overcome these limitations, we formulate the calculation of the deformation of the substrate in TFM as a non-rigid image registration process that warps the image of the unstressed material to match the image of the stressed one. In particular, we propose to use a B-spline-based Free Form Deformation (FFD) algorithm that uses a connected deformable mesh to model a wide range of flexible deformations caused by cellular tractions. Our FFD approach is validated in 3D fields using synthetic (simulated) data as well as with experimental data obtained using isolated endothelial cells lying on a deformable, polyacrylamide substrate. Our results show that FFD outperforms PIV providing a deformation field that allows a better recovery of the magnitude and orientation of tractions. Together, these results demonstrate the added value of the FFD algorithm for improving the accuracy of traction recovery. PMID:26641883
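    A minimal way to reproduce the workflow described above is to run an off-the-shelf B-spline registration between the bead images and sample the fitted deformation. The sketch below uses SimpleITK as a stand-in for the authors' FFD implementation; the file names, mesh size, and optimizer settings are illustrative assumptions.

```python
import numpy as np
import SimpleITK as sitk

def ffd_displacements(unstressed_path, stressed_path, mesh_size=(8, 8), step=16):
    """Register the unstressed bead image to the stressed one with a B-spline
    free-form deformation and sample the fitted deformation on a coarse grid.
    This is a generic SimpleITK sketch of the FFD idea, not the paper's code."""
    fixed = sitk.ReadImage(stressed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(unstressed_path, sitk.sitkFloat32)

    tx = sitk.BSplineTransformInitializer(fixed, list(mesh_size))
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetInitialTransform(tx, inPlace=True)
    reg.SetInterpolator(sitk.sitkLinear)
    out_tx = reg.Execute(fixed, moving)

    # sample the fitted deformation on a coarse grid (physical coordinates)
    pts, disp = [], []
    for j in range(0, fixed.GetSize()[1], step):
        for i in range(0, fixed.GetSize()[0], step):
            p = fixed.TransformIndexToPhysicalPoint((i, j))
            q = out_tx.TransformPoint(p)
            pts.append(p)
            disp.append(np.subtract(q, p))
    return np.array(pts), np.array(disp)
```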

  14. Analysis of Scattering Components from Fully Polarimetric SAR Images for Improving Accuracies of Urban Density Estimation

    NASA Astrophysics Data System (ADS)

    Susaki, J.

    2016-06-01

    In this paper, we analyze probability density functions (PDFs) of scatterings derived from fully polarimetric synthetic aperture radar (SAR) images for improving the accuracies of estimated urban density. We have reported a method for estimating urban density that uses an index Tv+c obtained by normalizing the sum of volume and helix scatterings Pv+c. Validation results showed that estimated urban densities have a high correlation with building-to-land ratios (Kajimoto and Susaki, 2013b; Susaki et al., 2014). While the method is found to be effective for estimating urban density, it is not clear why Tv+c is more effective than indices derived from other scatterings, such as surface or double-bounce scatterings, observed in urban areas. In this research, we focus on PDFs of scatterings derived from fully polarimetric SAR images in terms of scattering normalization. First, we introduce a theoretical PDF that assumes that image pixels have scatterers showing random backscattering. We then generate PDFs of scatterings derived from observations of concrete blocks with different orientation angles, and from a satellite-based fully polarimetric SAR image. The analysis of the PDFs and the derived statistics reveals that the curves of the PDFs of Pv+c are the most similar to the normal distribution among all the scatterings derived from fully polarimetric SAR images. It was found that Tv+c works most effectively because of its similarity to the normal distribution.

  15. Multimodal nonlinear optical microscopy improves the accuracy of early diagnosis of squamous intraepithelial neoplasia

    NASA Astrophysics Data System (ADS)

    Teh, Seng Khoon; Zheng, Wei; Li, Shuxia; Li, Dong; Zeng, Yan; Yang, Yanqi; Qu, Jianan Y.

    2013-03-01

    We explore the diagnostic utility of a multicolor excitation multimodal nonlinear optical (NLO) microscopy for noninvasive detection of squamous epithelial precancer in vivo. The 7,12-dimethylbenz(a)anthracene-treated hamster cheek pouch was used as an animal model of carcinogenesis. The NLO microscope system employed was equipped with the ability to collect multiple tissue endogenous NLO signals, such as two-photon excited fluorescence of keratin, nicotinamide adenine dinucleotide, collagen, and tryptophan, and second harmonic generation of collagen, in spectral and time domains simultaneously. A total of 34 (11 control and 23 treated) Golden Syrian hamsters with 62 in vivo spatially distinct measurement sites were assessed in this study. High-resolution label-free NLO images were acquired from the stratum corneum, stratum granulosum-stratum basale, and stroma for all tissue measurement sites. A total of nine and eight features from the 745 and 600 nm excitation wavelengths, respectively, involving tissue structural and intrinsic biochemical properties, were found to contain significant diagnostic information for precancer detection (p<0.05). In particular, the 600 nm excited tryptophan fluorescence signal emanating from the stratum corneum was revealed to provide remarkable diagnostic utility. Multivariate statistical techniques confirmed that the integration of diagnostically significant features from multicolor excitation wavelengths yielded improved diagnostic accuracy as compared to using the individual wavelengths alone.

  16. A Method to Improve the Accuracy of Particle Diameter Measurements from Shadowgraph Images

    NASA Astrophysics Data System (ADS)

    Erinin, Martin A.; Wang, Dan; Liu, Xinan; Duncan, James H.

    2015-11-01

    A method to improve the accuracy of the measurement of the diameter of particles using shadowgraph images is discussed. To obtain data for analysis, a transparent glass calibration reticle, marked with black circular dots of known diameters, is imaged with a high-resolution digital camera using backlighting separately from both a collimated laser beam and diffuse white light. The diameter and intensity of each dot is measured by fitting an inverse hyperbolic tangent function to the particle image intensity map. Using these calibration measurements, a relationship between the apparent diameter and intensity of the dot and its actual diameter and position relative to the focal plane of the lens is determined. It is found that the intensity decreases and apparent diameter increases/decreases (for collimated/diffuse light) with increasing distance from the focal plane. Using the relationships between the measured properties of each dot and its actual size and position, an experimental calibration method has been developed to increase the particle-diameter-dependent range of distances from the focal plane for which accurate particle diameter measurements can be made. The support of the National Science Foundation under grant OCE0751853 from the Division of Ocean Sciences is gratefully acknowledged.
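    The edge-fitting step can be sketched with a standard nonlinear least-squares fit to a radial intensity profile. The paper parameterizes the dot edge with an inverse hyperbolic tangent; the snippet below fits the equivalent tanh-shaped edge profile, which recovers the same apparent radius and intensity contrast, and its model form and initial-guess values are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def edge_model(r, r0, width, i_in, i_out):
    """Tanh-shaped radial intensity profile of a backlit dot: i_in inside the
    particle, i_out in the background, with a transition of scale `width` at
    radius r0 (written in tanh form; the paper parameterizes the same edge
    with an inverse hyperbolic tangent)."""
    return i_in + (i_out - i_in) * 0.5 * (1.0 + np.tanh((r - r0) / width))

def fit_diameter(radii, intensities, guess_radius):
    """Fit the edge model to a radial intensity profile extracted from a
    shadowgraph image and return the apparent diameter and edge contrast."""
    p0 = [guess_radius, 2.0, intensities.min(), intensities.max()]
    (r0, width, i_in, i_out), _ = curve_fit(edge_model, radii, intensities, p0=p0)
    return 2.0 * r0, i_out - i_in
```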

  17. Accuracy improvement in a calibration test bench for accelerometers by a vision system

    NASA Astrophysics Data System (ADS)

    D'Emilia, Giulio; Di Gasbarro, David; Gaspari, Antonella; Natale, Emanuela

    2016-06-01

    A procedure is described in this paper for improving the accuracy of calibration of low-cost accelerometers in a prototype rotary test bench, driven by a brushless servo-motor and operating in a low frequency range of vibrations (0 to 5 Hz). Vibration measurements by a vision system based on a low frequency camera have been carried out in order to reduce the uncertainty in evaluating the real acceleration at the installation point of the sensor to be calibrated. A preliminary test device has been realized and operated in order to evaluate the metrological performance of the vision system, showing satisfactory behavior when the measurement uncertainty is taken into account. A combination of suitable settings of the control parameters of the motion control system and of the information gained by the vision system made it possible to tailor the information about the reference acceleration at the installation point to the needs of the procedure for static and dynamic calibration of three-axis accelerometers.

  18. Extended canonical Monte Carlo methods: Improving accuracy of microcanonical calculations using a reweighting technique

    NASA Astrophysics Data System (ADS)

    Velazquez, L.; Castro-Palacio, J. C.

    2015-03-01

    Velazquez and Curilef [J. Stat. Mech. (2010) P02002, 10.1088/1742-5468/2010/02/P02002; J. Stat. Mech. (2010) P04026, 10.1088/1742-5468/2010/04/P04026] have proposed a methodology to extend Monte Carlo algorithms that are based on canonical ensemble. According to our previous study, their proposal allows us to overcome slow sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review about ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically, the well-known multihistograms method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989), 10.1103/PhysRevLett.63.1195]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L × L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site q_L during the occurrence of temperature-driven phase transition of this model, whose size dependence seems to follow a power law q_L(L) ∝ (1/L)^z with exponent z ≃ 0.26 ± 0.02. Discussed is the compatibility of these results with the continuous character of temperature-driven phase transition when L → +∞.
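    The quoted size dependence is a straight-line fit on log-log axes; a minimal sketch of how such an exponent would be extracted from a table of lattice sizes and latent-heat estimates follows (the numbers in the example are synthetic, generated to follow the power law exactly, not data from the paper).

```python
import numpy as np

def latent_heat_exponent(L_values, q_L):
    """Fit q_L(L) = A * (1/L)**z on log-log axes to estimate the finite-size
    exponent z; the input arrays are the user's own multihistogram estimates."""
    x = np.log(1.0 / np.asarray(L_values, dtype=float))
    y = np.log(np.asarray(q_L, dtype=float))
    z, log_A = np.polyfit(x, y, 1)
    return z, np.exp(log_A)

# example with synthetic data constructed to follow z = 0.26 exactly
sizes = np.array([16, 32, 64, 128])
q = 0.5 * (1.0 / sizes) ** 0.26
print(latent_heat_exponent(sizes, q))   # -> (approximately 0.26, 0.5)
```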

  19. Improving Calculation Accuracies of Accumulation-Mode Fractions Based on Spectral of Aerosol Optical Depths

    NASA Astrophysics Data System (ADS)

    Ying, Zhang; Zhengqiang, Li; Yan, Wang

    2014-03-01

    Anthropogenic aerosols are released into the atmosphere, where they cause scattering and absorption of incoming solar radiation, thus exerting a direct radiative forcing on the climate system. Anthropogenic Aerosol Optical Depth (AOD) calculations are important in the research of climate change. Accumulation-Mode Fractions (AMFs), an anthropogenic aerosol parameter defined as the fraction of the AOD due to particulates with diameters smaller than 1 μm relative to that of the total particulates, can be calculated with an AOD spectral deconvolution algorithm, and the anthropogenic AODs are then obtained using the AMFs. In this study, we present a parameterization method coupled with an AOD spectral deconvolution algorithm to calculate AMFs in Beijing over 2011. All data are derived from the AErosol RObotic NETwork (AERONET) website. The parameterization method is used to improve the accuracy of the AMFs compared with the constant truncation radius method. We find a good correlation using the parameterization method, with a squared correlation coefficient of 0.96 and a mean deviation of the AMFs of 0.028. The parameterization method can also effectively correct the underestimation of AMFs in winter. It is suggested that the variations of the Angstrom indexes in the coarse mode have significant impacts on AMF inversions.

  20. Evaluation of Simultaneous GPS/LEO Orbit Estimation for Improved GPS Orbit Accuracy

    NASA Astrophysics Data System (ADS)

    Weiss, J. P.; Bertiger, W.; Desai, S. D.; Haines, B.; Sibthorpe, A.

    2011-12-01

    We present results for combined precise orbit determination of the GPS constellation and low-Earth orbiters (LEO) and assess the quality of the resulting GPS orbit and clock solutions. The addition of LEO-based GPS receivers to standard ground network/GPS processing is attractive for several reasons: they provide excellent tracking geometry over both hemispheres, their range measurements are not subject to tropospheric delays, and the LEO multipath environments are relatively benign. In this work we include both GRACE and Jason-2/OSTM in otherwise standard JPL IGS analysis center orbit processing and evaluate the impacts on the GPS solutions. We assess GPS orbit and clock accuracy by way of internal metrics for solution precision, ambiguity resolution performance, and postfit residuals, as well as comparisons to independent orbit and clock products. Initial results show that orbit precision improves from 1.4 cm to 1.2 cm in the median (1D) RMS sense, and clock estimate precision improves from 1.9 cm to 1.7 cm (median RMS). In addition, we compare the GPS-based terrestrial reference frame to ITRF/IGS08 and show improvements in the Z-origin in terms of both reduced annual signals and a 33% reduction in scatter with LEOs in the solution. We also analyze the frequency content of the orbit errors. Peaks at fortnightly and draconitic periods are of particular interest, and we take advantage of the LEOs' unique spatio-temporal sampling of the GPS constellation to identify possible causes of these signals.

  1. 4D microscope-integrated OCT improves accuracy of ophthalmic surgical maneuvers

    NASA Astrophysics Data System (ADS)

    Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Shen, Liangbo; Todorich, Bozho; Shieh, Christine; Kuo, Anthony; Toth, Cynthia; Izatt, Joseph A.

    2016-03-01

    Ophthalmic surgeons manipulate micron-scale tissues using stereopsis through an operating microscope and instrument shadowing for depth perception. While ophthalmic microsurgery has benefitted from rapid advances in instrumentation and techniques, the basic principles of the stereo operating microscope have not changed since the 1930's. Optical Coherence Tomography (OCT) has revolutionized ophthalmic imaging and is now the gold standard for preoperative and postoperative evaluation of most retinal and many corneal procedures. We and others have developed initial microscope-integrated OCT (MIOCT) systems for concurrent OCT and operating microscope imaging, but these are limited to 2D real-time imaging and require offline post-processing for 3D rendering and visualization. Our previously presented 4D MIOCT system can record and display the 3D surgical field stereoscopically through the microscope oculars using a dual-channel heads-up display (HUD) at up to 10 micron-scale volumes per second. In this work, we show that 4D MIOCT guidance improves the accuracy of depth-based microsurgical maneuvers (with statistical significance) in mock surgery trials in a wet lab environment. Additionally, 4D MIOCT was successfully performed in 38/45 (84%) posterior and 14/14 (100%) anterior eye human surgeries, and revealed previously unrecognized lesions that were invisible through the operating microscope. These lesions, such as residual and potentially damaging retinal deformation during pathologic membrane peeling, were visualized in real-time by the surgeon. Our integrated system provides an enhanced 4D surgical visualization platform that can improve current ophthalmic surgical practice and may help develop and refine future microsurgical techniques.

  2. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  3. Multiple lead recordings improve accuracy of bio-impedance plethysmographic technique.

    PubMed

    Kauppinen, P K; Hyttinen, J A; Kööbi, T; Malmivuo, J

    1999-06-01

    We have developed the theory and instrumentation of multiple multi-electrode bio-impedance (BI) measurements based on a lead field theoretical approach. To derive reliable information from BI data, a number of measurements should be taken with electrode configurations possessing regional measurement sensitivity. An apparatus has been developed with an eye to the requirements imposed by the theoretical aspects of achieving multiple multi-electrode BI measurements. It has features that compensate for electrode-contact-related errors and for errors due to imbalance between the conductive pathways when multiple electrodes are utilised for BI measurement. The proposed design allows simultaneous multi-electrode BI and bioelectric recording with the same electrode system. Initial operating experience in a clinical environment indicates that the device functions as intended and allows user-friendly utilisation of multiple BI measurements. The contributions presented to BI methodology and instrumentation improve the reliability of BI measurements. PMID:10576427

  4. Improvement in accuracy of defect size measurement by automatic defect classification

    NASA Astrophysics Data System (ADS)

    Samir, Bhamidipati; Pereira, Mark; Paninjath, Sankaranarayanan; Jeon, Chan-Uk; Chung, Dong-Hoon; Yoon, Gi-Sung; Jung, Hong-Yul

    2015-10-01

    accurately estimating the size of the defect from the inspection images automatically. The sensitivity to weak defect signals, filtering out noise to identify the defect signals, and locating the defect in the images are key success factors. The performance of the tool is assessed on programmable defect masks and production masks from the HVM production flow. Implementation of Calibre® MDPAutoClassify™ is projected to improve the accuracy of defect sizing as compared to what is reported by the inspection machine, which is very critical for production, and the classification of defects will aid in arriving at appropriate dispositions such as SEM review, repair, and scrap.

  5. Improving the Accuracy of Outdoor Educators' Teaching Self-Efficacy Beliefs through Metacognitive Monitoring

    ERIC Educational Resources Information Center

    Schumann, Scott; Sibthorp, Jim

    2016-01-01

    Accuracy in emerging outdoor educators' teaching self-efficacy beliefs is critical to student safety and learning. Overinflated self-efficacy beliefs can result in delayed skilled development or inappropriate acceptance of risk. In an outdoor education context, neglecting the accuracy of teaching self-efficacy beliefs early in an educator's…

  6. School Improvement Plans and Student Achievement: Preliminary Evidence from the Quality and Merit Project in Italy

    ERIC Educational Resources Information Center

    Caputo, Andrea; Rastelli, Valentina

    2014-01-01

    This study provides preliminary evidence from an Italian in-service training program addressed to lower secondary school teachers which supports school improvement plans (SIPs). It aims at exploring the association between characteristics/contents of SIPs and student improvement in math achievement. Pre-post standardized tests and text analysis of…

  7. Training Theory of Mind and Executive Control: A Tool for Improving School Achievement?

    ERIC Educational Resources Information Center

    Kloo, Daniela; Perner, Josef

    2008-01-01

    In the preschool years, there are marked improvements in theory of mind (ToM) and executive functions. And, children's competence in these two core cognitive domains is associated with their academic achievement. Therefore, training ToM and executive control could be a valuable tool for improving children's success in school. This article reviews…

  8. Using Force-Matched Potentials To Improve the Accuracy of Density Functional Tight Binding for Reactive Conditions

    NASA Astrophysics Data System (ADS)

    Goldman, Nir

    In this work, we show that force matching can be used to determine accurate density functional tight binding (DFTB) models for reactive materials under extreme conditions. Determination of chemical reactivity in high-pressure experiments is an unsolved problem that can span timescales orders of magnitude longer than what can be achieved with standard quantum simulation approaches, such as Kohn-Sham Density Functional Theory. DFTB holds promise as a semi-empirical quantum simulation method that yields a high degree of computational efficiency while potentially retaining the accuracy of these higher order methods. Here, we show that force matching can be used to determine accurate repulsive energies for DFTB for chemical reactivity in condensed phases. Our new models yield improved predictions for physical properties of molten liquid carbon, as well as small molecule production in phenolic polymer combustion. Our approach is general and can be implemented as a way to extend quantum simulations to several orders of magnitude longer timescales than previously possible, allowing for direct comparison with experiments.
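    Force matching itself is a linear least-squares problem once a basis for the repulsive pair force is chosen. The sketch below fits polynomial pair-force coefficients to residual forces (reference forces minus the electronic DFTB forces); the polynomial basis, cutoff handling, and input format are illustrative assumptions rather than the DFTB repulsive-spline convention.

```python
import numpy as np

def fit_repulsion(positions_list, residual_forces_list, cutoff, n_basis=6):
    """Force-matching sketch: fit a pairwise repulsive force f(r) = sum_k c_k r^k
    (for r < cutoff) so that the summed pair forces reproduce the residual
    forces F_ref - F_electronic over a set of atomic configurations."""
    rows, rhs = [], []
    for pos, f_res in zip(positions_list, residual_forces_list):
        n = len(pos)
        design = np.zeros((n, 3, n_basis))
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                rij = pos[i] - pos[j]
                r = np.linalg.norm(rij)
                if r < cutoff:
                    unit = rij / r
                    for k in range(n_basis):
                        design[i, :, k] += (r ** k) * unit
        rows.append(design.reshape(3 * n, n_basis))
        rhs.append(f_res.reshape(3 * n))
    A, b = np.vstack(rows), np.concatenate(rhs)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```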

  9. Mismatch extension of DNA polymerases and high-accuracy single nucleotide polymorphism diagnostics by gold nanoparticle-improved isothermal amplification.

    PubMed

    Chen, Feng; Zhao, Yue; Fan, Chunhai; Zhao, Yongxi

    2015-09-01

    Sequence mismatches may induce nonspecific extension reactions, causing false results in SNP diagnostics. Herein, we systematically investigated the impact of various 3'-terminal mismatches on isothermal amplification catalyzed by representative DNA polymerases. Despite their diverse efficiencies, which depend on the type of mismatch and the kind of DNA polymerase, all 12 kinds of single 3'-terminal mismatches induced the extension reaction. Generally, only a few primer-template mismatches (C-C, G-A, A-G, and A-A) present an observable inhibitory effect on the amplification reaction, whereas other mismatches trigger amplified signals as high as those of Watson-Crick pairs. The related mechanism is discussed in depth, and a primer-design guideline for specific SNP analysis is summarized. Furthermore, we found that the addition of appropriate gold nanoparticles (AuNPs) can significantly inhibit mismatch extension and enhance the amplification specificity. High-accuracy SNP analysis of human blood genomic DNA was also demonstrated by AuNP-improved isothermal amplification, and the result was verified by sequencing (the gold standard method for SNP assays). Collectively, this work provides mechanistic insight into mismatch behavior and achieves accurate SNP diagnostics, holding great potential for application in molecular diagnostics and personalized medicine. PMID:26249366

  10. [Physiological Basis of the Improvement of Movement Accuracy with the Use of Stabilographic Training with Biological Feedback].

    PubMed

    Kapilevich, L V; Koshelskaya, E V; Krivoschekov, S G

    2015-01-01

    We studied the physiological parameters of ball hitting by volleyball players in the unsupported position and the opportunities for their improvement by training with biological feedback. Physiological and biomechanical parameters of a direct attack hit from the supported position correlate with the biomechanical features of jump shots. At the same time, the physiological basis of shot accuracy consists of the improvement of trunk and arm movement coordination in the flight phase, the factors of intramuscular and intermuscular coordination of the hitting arm, and the change in the displacement of the center of pressure. The use of computer stabilography training with biological feedback helps to optimize the physiological and biomechanical parameters of physical actions in the unsupported position, which ultimately increases the accuracy of jump hitting of the ball. The obtained results open prospects for applying the method of computer stabilography to improve the performance of accuracy-targeted actions in the unsupported position in various sports. PMID:26485791

  11. Improvements in dose accuracy delivered with static-MLC IMRT on an integrated linear accelerator control system

    SciTech Connect

    Li Ji; Wiersma, Rodney D.; Stepaniak, Christopher J.; Farrey, Karl J.; Al-Hallaq, Hania A.

    2012-05-15

    Purpose: Dose accuracy has been shown to vary with dose per segment and dose rate when delivered with static multileaf collimator (SMLC) intensity modulated radiation therapy (IMRT) by Varian C-series MLC controllers. The authors investigated the impact of monitor units (MUs) per segment and dose rate on the dose delivery accuracy of SMLC-IMRT fields on a Varian TrueBeam linear accelerator (LINAC), which delivers dose and manages motion of all components using a single integrated controller. Methods: An SMLC sequence was created consisting of ten identical 10 × 10 cm² segments with identical MUs. Beam holding between segments was achieved by moving one out-of-field MLC leaf pair. Measurements were repeated for various combinations of MU/segment ranging from 1 to 40 and dose rates of 100-600 MU/min for a 6 MV photon beam (6X) and dose rates of 800-2400 MU/min for a 10 MV flattening-filter-free photon (10XFFF) beam. All measurements were made with a Farmer (0.6 cm³) ionization chamber placed at the isocenter in a solid-water phantom at 10 cm depth. The measurements were performed on two Varian LINACs: C-series Trilogy and TrueBeam. Each sequence was delivered three times and the dose readings for the corresponding segments were averaged. The effects of MU/segment, dose rate, and LINAC type on the relative dose variation (Δi) were compared using F-tests (α = 0.05). Results: On the Trilogy, large Δi was observed in small MU segments: at 1 MU/segment, the maximum Δi was 10.1% and 57.9% at 100 MU/min and 600 MU/min, respectively. Also, the first segment of each sequence consistently overshot (Δi > 0), while the last segment consistently undershot (Δi < 0). On the TrueBeam, at 1 MU/segment, Δi ranged from 3.0% to 4.5% at 100 and 600 MU/min; no obvious overshoot/undershoot trend was observed. F-tests showed a statistically significant difference [(1 − β) = 1.0000] between the
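    The reported quantity Δi is simply each segment's deviation from the mean segment dose, and the statistical comparison is an F-test. A hedged sketch of both follows, using a one-way test between pooled |Δi| values as a stand-in for the abstract's factorial F-tests; the function and argument names are this sketch's own.

```python
import numpy as np
from scipy.stats import f_oneway

def relative_dose_variation(segment_doses):
    """Per-segment relative dose variation Delta_i = (D_i - mean(D)) / mean(D),
    in percent, for one ten-segment delivery (chamber readings averaged over
    the repeated deliveries before calling this, as in the abstract)."""
    d = np.asarray(segment_doses, dtype=float)
    return 100.0 * (d - d.mean()) / d.mean()

def compare_linacs(trilogy_deliveries, truebeam_deliveries):
    """One-way F-test on the magnitude of Delta_i pooled over deliveries
    (alpha = 0.05); each argument is a list of ten-element reading arrays."""
    a = np.concatenate([np.abs(relative_dose_variation(d)) for d in trilogy_deliveries])
    b = np.concatenate([np.abs(relative_dose_variation(d)) for d in truebeam_deliveries])
    return f_oneway(a, b)   # returns (F statistic, p value)
```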

  12. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  13. Improvement of brain segmentation accuracy by optimizing non-uniformity correction using N3.

    PubMed

    Zheng, Weili; Chee, Michael W L; Zagorodnov, Vitali

    2009-10-15

    Smoothly varying and multiplicative intensity variations within MR images that are artifactual, can reduce the accuracy of automated brain segmentation. Fortunately, these can be corrected. Among existing correction approaches, the nonparametric non-uniformity intensity normalization method N3 (Sled, J.G., Zijdenbos, A.P., Evans, A.C., 1998. Nonparametric method for automatic correction of intensity nonuniformity in MRI data. IEEE Trans. Med. Imag. 17, 87-97.) is one of the most frequently used. However, at least one recent study (Boyes, R.G., Gunter, J.L., Frost, C., Janke, A.L., Yeatman, T., Hill, D.L.G., Bernstein, M.A., Thompson, P.M., Weiner, M.W., Schuff, N., Alexander, G.E., Killiany, R.J., DeCarli, C., Jack, C.R., Fox, N.C., 2008. Intensity non-uniformity correction using N3 on 3-T scanners with multichannel phased array coils. NeuroImage 39, 1752-1762.) suggests that its performance on 3 T scanners with multichannel phased-array receiver coils can be improved by optimizing a parameter that controls the smoothness of the estimated bias field. The present study not only confirms this finding, but additionally demonstrates the benefit of reducing the relevant parameter values to 30-50 mm (default value is 200 mm), on white matter surface estimation as well as the measurement of cortical and subcortical structures using FreeSurfer (Martinos Imaging Centre, Boston, MA). This finding can help enhance precision in studies where estimation of cerebral cortex thickness is critical for making inferences. PMID:19559796

  14. A new way in intelligent recognition improves control accuracy and efficiency for spacecrafts' rendezvous and docking

    NASA Astrophysics Data System (ADS)

    Wang, JiaQing; Lu, Yaodong; Wang, JiaFa

    2013-08-01

    Rendezvous and docking (RVD) of spacecraft under human or autonomous control is a complicated and difficult problem, especially in the final approach stage, and present control methods have key technological weaknesses. A necessary, important, and difficult step in RVD is aligning the chaser spacecraft with the target spacecraft along a coaxial line by aiming at a three-dimensional bulge cross target; at present, there is no technology that quantifies this alignment through image recognition. We present a new practical autonomous method to improve the accuracy and efficiency of RVD control by adding an image recognition algorithm in place of human aiming and control. The target spacecraft carries a bulge cross target designed for accurate aiming by the chaser spacecraft, with two center points: a plate surface center point (PSCP) and a bulge cross center point (BCCP). The chaser spacecraft has a monitoring ruler cross center point (RCCP) in the video telescope optical system used for aiming. If the three center points coincide in the monitoring image, the two spacecraft remain aligned, which is suitable for closing to docking. The chaser spacecraft's video telescope optical system acquires real-time monitoring images of the target spacecraft's bulge cross target. Image processing and intelligent recognition algorithms remove interference sources and compute, in real time, the coordinates of the three center points and the exact digital offset of the two spacecraft's relative position and attitude, which is used to control the chaser spacecraft's pneumatic driving system to adjust the spacecraft's position and attitude precisely in all six degrees of freedom (up/down, front/back, and left/right translation, plus pitch, drift, and roll). This approach is also practical and economical because it requires no additional hardware; only real-time image recognition software needs to be added to the spacecraft's existing video system. It is suitable for both autonomous control and human control.
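    The aiming step reduces to locating the target's two center marks in the monitoring image and comparing them with the ruler cross center. The toy sketch below does this with Otsu thresholding and contour centroids in OpenCV; the blob-selection rule, the assumption that the ruler cross sits at the image center, and the 8-bit grayscale input are all simplifications of the paper's recognition algorithm.

```python
import cv2
import numpy as np

def cross_target_offsets(gray_frame):
    """Toy aiming sketch: segment the bright cross-target markings in an 8-bit
    grayscale frame, take the two largest blobs as the plate-surface and
    bulge-cross marks, and compare their centroids with the ruler cross center
    (assumed here to be the image center)."""
    _, mask = cv2.threshold(gray_frame, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # OpenCV 4.x returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    centers = []
    for c in contours:
        m = cv2.moments(c)
        centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    pscp, bccp = centers                 # plate-surface / bulge-cross centers
    h, w = gray_frame.shape
    rccp = (w / 2.0, h / 2.0)            # ruler cross center point
    # pixel offsets fed to the attitude controller
    return np.subtract(pscp, rccp), np.subtract(bccp, rccp)
```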

  15. [Accuracy Improvement of Temperature Calculation of the Laser-Induced Plasma Using Wavelet Transform Baseline Subtraction].

    PubMed

    Liu, Li; Xiao, Ping-ping

    2016-02-01

    Temperature is one of the most important parameters in the study of laser-induced plasma characteristics. To reduce the impact of the continuous background on the calculation of temperatures using Boltzmann plots, the wavelet transform was used to decompose the spectra, and the low-frequency signals representing the spectral baseline were subtracted using a soft-threshold method. Selecting the appropriate wavelet decomposition level L and threshold coefficient a can increase the linear regression coefficient R² of the Boltzmann plots, thereby improving the accuracy of the calculated plasma temperature. The LIBS spectra of a low alloy steel sample in the region from 417 to 445 nm were decomposed using the db4 wavelet, and then baseline subtraction and signal reconstruction were carried out, respectively. Twelve Fe atomic lines were chosen to establish the Boltzmann plots, and the temperatures were calculated from the slope of the fitted lines in the plots. The values of L and a were optimized according to R²; the results showed that the 8-layer db4 wavelet decomposition yields a high R², while the optimal value of a depends on the delay time td: the optimum a corresponding to the maximum R² is 0.3 when td ≤ 4.0 µs, decreases with increasing td, and reaches 0 when td ≥ 6.0 µs. The interference of the baseline with the spectral characteristic lines gradually decreases with increasing td, and therefore a decreases as td increases. After the baseline was subtracted, the temperature calculated from the Boltzmann plots decreased by about 2000 to 3000 K. The temperature gradually decreased with increasing td, and the temperature fluctuation was reduced after baseline subtraction; these results are consistent with the physical process of plasma expansion. PMID:27209766
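    A compact sketch of the described pipeline, wavelet estimation of the continuum followed by a Boltzmann-plot temperature fit, is given below using PyWavelets and NumPy. The way the threshold coefficient a is applied here (as a fraction of the baseline maximum) is an assumption about the paper's parameterization, and the line data arrays are the user's own inputs.

```python
import numpy as np
import pywt

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K

def subtract_baseline(spectrum, level=8, a=0.3, wavelet="db4"):
    """Estimate the slowly varying continuum from the level-`level` db4
    approximation, soft-threshold it by a fraction `a` of its maximum, and
    subtract it from the spectrum (a sketch of the L / a parameterization)."""
    coeffs = pywt.wavedec(spectrum, wavelet, level=level)
    # keep only the approximation coefficients -> low-frequency baseline
    baseline_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    baseline = pywt.waverec(baseline_coeffs, wavelet)[: len(spectrum)]
    baseline = pywt.threshold(baseline, a * baseline.max(), mode="soft")
    return spectrum - baseline

def boltzmann_temperature(intensities, lambdas_nm, g_k, a_ki, e_k_ev):
    """Boltzmann-plot temperature from line intensities: fit
    ln(I*lambda/(g*A)) = -E_k/(k_B*T) + C and return T in kelvin."""
    y = np.log(intensities * lambdas_nm / (g_k * a_ki))
    slope, _ = np.polyfit(e_k_ev, y, 1)
    return -1.0 / (K_B_EV * slope)
```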

  16. Improvement of a Shape Accuracy Considering the Characteristics of Cable Network Systems for Large Mesh Antenna Reflectors

    NASA Astrophysics Data System (ADS)

    Harada, Satoshi; Meguro, Akira; Ueba, Masazumi

    This paper addresses cable network configurations that improve the surface accuracy of large mesh antennas. The study focuses on manufacturing error, assembly error, and sensitivity to displacements of support points. To facilitate accuracy management through cable lengths, high-stiffness cables were used in the surface cable network and low-stiffness cables were used for the remaining cables. The effect of cable-length errors was estimated by analysis and by experiments on a test model. A micro-gravity test was also carried out to estimate the accuracy of shape adjustment on the ground. The sensitivity was calculated for several variations of cable stiffness. The results show that the proposed configuration reduces surface distortion and maintains high surface accuracy.

  17. Accuracy Improvement by the Least Squares Image Matching Evaluated on the CARTOSAT-1

    NASA Astrophysics Data System (ADS)

    Afsharnia, H.; Azizi, A.; Arefi, H.

    2015-12-01

    Generating accurate elevation data from satellite images is a prerequisite for applications that involve disaster forecasting and management on GIS platforms. In this respect, high-resolution satellite optical sensors may be regarded as one of the prime and most valuable sources for generating accurate and up-to-date elevation information. However, one of the main drawbacks of conventional approaches to automatic elevation generation from these satellite optical data using image matching techniques is the lack of flexibility in the image matching functional models to dynamically take into account the geometric and radiometric dissimilarities between homologous stereo image points. The classical least squares image matching (LSM) method, on the other hand, is quite flexible in incorporating the geometric and radiometric variations of image pairs into its functional model. The main objective of this paper is to evaluate and compare the potential of the LSM technique for generating disparity maps from high-resolution satellite images with sub-pixel precision. To evaluate the rate of success of the LSM, the size of the y-disparities between homologous points is taken as the precision criterion. The evaluation is performed on Cartosat-1 along-track stereo images over highly mountainous terrain, and the precision improvement is judged from the standard deviation and the scatter pattern of the y-disparity data. The analysis of the results indicates that LSM achieved a matching precision of about 0.18 pixels, clearly superior to manual pointing, which yielded a precision of 0.37 pixels.
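
    The paper's implementation is not given in the abstract, but the core of LSM is a Gauss-Newton adjustment of geometric and radiometric parameters. The simplified Python sketch below uses a shift-only geometric model plus a linear gain/offset (the full method also carries affine shape parameters) and is only an illustration of the idea, not the authors' code.

      import numpy as np
      from scipy import ndimage

      def lsm_match(template, search, max_iter=20, tol=1e-4):
          """Simplified least squares image matching: estimate the subpixel shift
          (dy, dx) of `search` relative to `template` together with a radiometric
          gain and offset."""
          dy = dx = 0.0
          gain, offset = 1.0, 0.0
          yy, xx = np.mgrid[0:template.shape[0], 0:template.shape[1]].astype(float)
          for _ in range(max_iter):
              warped = ndimage.map_coordinates(search, [yy + dy, xx + dx],
                                               order=1, mode="nearest")
              gy, gx = np.gradient(warped)
              resid = template - (gain * warped + offset)
              # Columns of the design matrix: d(model)/d(dy, dx, gain, offset)
              A = np.column_stack([(gain * gy).ravel(), (gain * gx).ravel(),
                                   warped.ravel(), np.ones(warped.size)])
              delta, *_ = np.linalg.lstsq(A, resid.ravel(), rcond=None)
              dy, dx = dy + delta[0], dx + delta[1]
              gain, offset = gain + delta[2], offset + delta[3]
              if np.abs(delta[:2]).max() < tol:
                  break
          return dy, dx, gain, offset

      # Usage on synthetic patches: `search` is the same scene shifted by a fraction of a pixel
      rng = np.random.default_rng(1)
      scene = ndimage.gaussian_filter(rng.random((80, 80)), 2.0)
      template = scene[20:52, 20:52]
      search = ndimage.shift(scene, (-0.4, 0.7))[20:52, 20:52]
      print(lsm_match(template, search))   # recovers a shift of roughly (-0.4, 0.7)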

  18. Effects of simulated interventions to improve school entry academic skills on socioeconomic inequalities in educational achievement.

    PubMed

    Chittleborough, Catherine R; Mittinty, Murthy N; Lawlor, Debbie A; Lynch, John W

    2014-01-01

    Randomized controlled trial evidence shows that interventions before age 5 can improve skills necessary for educational success; the effect of these interventions on socioeconomic inequalities is unknown. Using trial effect estimates, and marginal structural models with data from the Avon Longitudinal Study of Parents and Children (n = 11,764, imputed), simulated effects of plausible interventions to improve school entry academic skills on socioeconomic inequality in educational achievement at age 16 were examined. Progressive universal interventions (i.e., more intense intervention for those with greater need) to improve school entry academic skills could raise population levels of educational achievement by 5% and reduce absolute socioeconomic inequality in poor educational achievement by 15%. PMID:25327718

  19. Effects of Simulated Interventions to Improve School Entry Academic Skills on Socioeconomic Inequalities in Educational Achievement

    PubMed Central

    Chittleborough, Catherine R; Mittinty, Murthy N; Lawlor, Debbie A; Lynch, John W

    2014-01-01

    Randomized controlled trial evidence shows that interventions before age 5 can improve skills necessary for educational success; the effect of these interventions on socioeconomic inequalities is unknown. Using trial effect estimates, and marginal structural models with data from the Avon Longitudinal Study of Parents and Children (n = 11,764, imputed), simulated effects of plausible interventions to improve school entry academic skills on socioeconomic inequality in educational achievement at age 16 were examined. Progressive universal interventions (i.e., more intense intervention for those with greater need) to improve school entry academic skills could raise population levels of educational achievement by 5% and reduce absolute socioeconomic inequality in poor educational achievement by 15%. PMID:25327718

  20. The Consequences of "School Improvement": Examining the Association between Two Standardized Assessments Measuring School Improvement and Student Science Achievement

    ERIC Educational Resources Information Center

    Maltese, Adam V.; Hochbein, Craig D.

    2012-01-01

    For more than half a century concerns about the ability of American students to compete in a global workplace focused policymakers' attention on improving school performance generally, and student achievement in science, technology, engineering, and mathematics (STEM) specifically. In its most recent form--No Child Left Behind--there is evidence…

  1. Improved accuracy of acute graft-versus-host disease staging among multiple centers.

    PubMed

    Levine, John E; Hogan, William J; Harris, Andrew C; Litzow, Mark R; Efebera, Yvonne A; Devine, Steven M; Reshef, Ran; Ferrara, James L M

    2014-01-01

    The clinical staging of acute graft-versus-host disease (GVHD) varies significantly among bone marrow transplant (BMT) centers, but adherence to long-standing practices poses formidable barriers to standardization among centers. We have analyzed the sources of variability and developed a web-based remote data entry system that can be used by multiple centers simultaneously and that standardizes data collection in key areas. This user-friendly, intuitive interface resembles an online shopping site and eliminates error-prone entry of free text with drop-down menus and pop-up detailed guidance available at the point of data entry. Standardized documentation of symptoms and therapeutic response reduces errors in grade assignment and allows creation of confidence levels regarding the diagnosis. Early review and adjudication of borderline cases improves consistency of grading and further enhances consistency among centers. If this system achieves widespread use it may enhance the quality of data in multicenter trials to prevent and treat acute GVHD. PMID:25455279

  2. Improved accuracy of acute graft-versus-host disease staging among multiple centers

    PubMed Central

    Levine, John E.; Hogan, William J.; Harris, Andrew C.; Litzow, Mark R.; Efebera, Yvonne A.; Devine, Steven M.; Reshef, Ran; Ferrara, James L.M.

    2015-01-01

    The clinical staging of acute graft-versus-host disease (GVHD) varies significantly among bone marrow transplant (BMT) centers, but adherence to long-standing practices poses formidable barriers to standardization among centers. We have analyzed the sources of variability and developed a web-based remote data entry system that can be used by multiple centers simultaneously and that standardizes data collection in key areas. This user-friendly, intuitive interface resembles an online shopping site and eliminates error-prone entry of free text with drop-down menus and pop-up detailed guidance available at the point of data entry. Standardized documentation of symptoms and therapeutic response reduces errors in grade assignment and allows creation of confidence levels regarding the diagnosis. Early review and adjudication of borderline cases improves consistency of grading and further enhances consistency among centers. If this system achieves widespread use it may enhance the quality of data in multicenter trials to prevent and treat acute GVHD. PMID:25455279

  3. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy.

    PubMed

    Monaghan, Kieran A

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. Differences in body size between taxa from the respective tolerance classes are a common feature of indicator systems; in some they represent a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g., mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa, and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an

  4. Four Reasons to Question the Accuracy of a Biotic Index; the Risk of Metric Bias and the Scope to Improve Accuracy

    PubMed Central

    Monaghan, Kieran A.

    2016-01-01

    Natural ecological variability and analytical design can bias the derived value of a biotic index through the variable influence of indicator body size, abundance, richness, and ascribed tolerance scores. Descriptive statistics highlight this risk for 26 aquatic indicator systems; detailed analysis is provided for contrasting weighted-average indices using the example of the BMWP, which has the best supporting data. Differences in body size between taxa from the respective tolerance classes are a common feature of indicator systems; in some they represent a trend ranging from comparatively small pollution-tolerant to larger intolerant organisms. Under this scenario, the propensity to collect a greater proportion of smaller organisms is associated with negative bias; however, positive bias may occur when equipment (e.g., mesh size) selectively samples larger organisms. Biotic indices are often derived from systems where indicator taxa are unevenly distributed along the gradient of tolerance classes. Such skews in indicator richness can distort index values in the direction of taxonomically rich indicator classes, with the subsequent degree of bias related to the treatment of abundance data. The misclassification of indicator taxa causes bias that varies with the magnitude of the misclassification, the relative abundance of misclassified taxa, and the treatment of abundance data. These artifacts of assessment design can compromise the ability to monitor biological quality. The statistical treatment of abundance data and the manipulation of indicator assignment and class richness can be used to improve index accuracy. While advances in methods of data collection (i.e. DNA barcoding) may facilitate improvement, the scope to reduce systematic bias is ultimately limited to a strategy of optimal compromise. The shortfall in accuracy must be addressed by statistical pragmatism. At any particular site, the net bias is a probabilistic function of the sample data, resulting in an
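
    For readers unfamiliar with weighted-average indices of the BMWP type, the short Python sketch below computes a BMWP-style score and its average score per taxon (ASPT), with an optional abundance weighting to show how the treatment of abundance data changes the derived value. The tolerance scores listed are illustrative examples, not the full official table.

      # Illustrative family tolerance scores in the style of the BMWP system
      BMWP_SCORES = {"Heptageniidae": 10, "Gammaridae": 6, "Baetidae": 4,
                     "Asellidae": 3, "Chironomidae": 2, "Oligochaeta": 1}

      def biotic_index(sample_counts, scores=BMWP_SCORES, abundance_weighted=False):
          """Return (BMWP-style score, ASPT) for a sample of family -> abundance.

          With abundance_weighted=True the ASPT is weighted by abundance,
          illustrating how abundance treatment shifts the index value.
          """
          present = {f: n for f, n in sample_counts.items() if f in scores and n > 0}
          if not present:
              return 0, float("nan")
          bmwp = sum(scores[f] for f in present)            # presence/absence sum
          if abundance_weighted:
              total = sum(present.values())
              aspt = sum(scores[f] * n for f, n in present.items()) / total
          else:
              aspt = bmwp / len(present)
          return bmwp, aspt

      sample = {"Baetidae": 120, "Chironomidae": 300, "Gammaridae": 15, "Oligochaeta": 40}
      print(biotic_index(sample), biotic_index(sample, abundance_weighted=True))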

  5. A Comparison of Group-Oriented Contingencies and Randomized Reinforcers to Improve Homework Completion and Accuracy for Students with Disabilities

    ERIC Educational Resources Information Center

    Lynch, AnnMarie; Theodore, Lea A.; Bray, Melissa A.; Kehle, Thomas J.

    2009-01-01

    The present study employed an alternating-treatments design to compare the differential effect of group contingencies on the improvement of homework completion and accuracy of students with disabilities in a self-contained fifth-grade classroom. Generally, past investigations have indicated a positive association between homework performance and…

  6. The Role of Incidental Unfocused Prompts and Recasts in Improving English as a Foreign Language Learners' Accuracy

    ERIC Educational Resources Information Center

    Rahimi, Muhammad; Zhang, Lawrence Jun

    2016-01-01

    This study was designed to investigate the effects of incidental unfocused prompts and recasts on improving English as a foreign language (EFL) learners' grammatical accuracy as measured in students' oral interviews and the Test of English as a Foreign Language (TOEFL) grammar test. The design of the study was quasi-experimental with pre-tests,…

  7. Improving accuracy of cell and chromophore concentration measurements using optical density

    PubMed Central

    2013-01-01

    Background UV–vis spectrophotometric optical density (OD) is the most commonly-used technique for estimating chromophore formation and cell concentration in liquid culture. OD wavelength is often chosen with little thought given to its effect on the quality of the measurement. Analysis of the contributions of absorption and scattering to the measured optical density provides a basis for understanding variability among spectrophotometers and enables a quantitative evaluation of the applicability of the Beer-Lambert law. This provides a rational approach for improving the accuracy of OD measurements used as a proxy for direct dry weight (DW), cell count, and pigment levels. Results For pigmented organisms, the choice of OD wavelength presents a tradeoff between the robustness and the sensitivity of the measurement. The OD at a robust wavelength is primarily the result of light scattering and does not vary with culture conditions; whereas, the OD at a sensitive wavelength is additionally dependent on light absorption by the organism’s pigments. Suitably robust and sensitive wavelengths are identified for a wide range of organisms by comparing their spectra to the true absorption spectra of dyes. The relative scattering contribution can be reduced either by measurement at higher OD, or by the addition of bovine serum albumin. Reduction of scattering or correlation with off-peak light attenuation provides for more accurate assessment of chromophore levels within cells. Conversion factors between DW, OD, and colony-forming unit density are tabulated for 17 diverse organisms to illustrate the scope of variability of these correlations. Finally, an inexpensive short pathlength LED-based flow cell is demonstrated for the online monitoring of growth in a bioreactor at culture concentrations greater than 5 grams dry weight per liter which would otherwise require off-line dilutions to obtain non-saturated OD measurements. Conclusions OD is most accurate as a time
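
    As a simple illustration of how such tabulated factors are used, the sketch below converts an OD reading to dry weight under the Beer-Lambert assumption. The conversion factor and linearity limit shown are placeholders only; appropriate values are organism- and instrument-specific (the paper tabulates them for 17 organisms).

      def dry_weight_from_od(od_measured, pathlength_cm=1.0, dilution=1.0,
                             factor_g_per_l_per_od=0.4, linear_limit_od=0.5):
          """Convert an OD reading to dry weight (g/L).

          factor_g_per_l_per_od and linear_limit_od are placeholder values; they
          must be calibrated for the organism, wavelength, and spectrophotometer.
          """
          if od_measured > linear_limit_od:
              raise ValueError("Reading outside the linear (Beer-Lambert) range; "
                               "dilute the sample or use a shorter pathlength.")
          od_true = od_measured * dilution / pathlength_cm   # undiluted, 1 cm equivalent
          return od_true * factor_g_per_l_per_od

      # A 10x-diluted culture reading 0.35 corresponds to ~1.4 g/L with this factor
      print(dry_weight_from_od(0.35, dilution=10.0))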

  8. Incorporation of Inter-Subject Information to Improve the Accuracy of Subject-Specific P300 Classifiers.

    PubMed

    Xu, Minpeng; Liu, Jing; Chen, Long; Qi, Hongzhi; He, Feng; Zhou, Peng; Wan, Baikun; Ming, Dong

    2016-05-01

    Although inter-subject information has been demonstrated to be effective for rapid calibration of the P300-based brain-computer interface (BCI), it has never been comprehensively tested to determine whether the incorporation of heterogeneous data could enhance accuracy. This study aims to improve the subject-specific P300 classifier by adding other subjects' data. A classifier calibration strategy, weighted ensemble learning generic information (WELGI), was developed, in which elementary classifiers were constructed using both intra- and inter-subject information and then integrated into a strong classifier with a weight assessment. Fifty-five subjects were recruited to spell 20 characters offline using the conventional P300-based BCI, i.e., the P300 speller. Four different metrics (P300 accuracy, P300 precision, round accuracy, and character accuracy) were evaluated for a comprehensive investigation. The results revealed that the classifier constructed on the training dataset combined with other subjects' data was significantly superior to that without the inter-subject information. Therefore, WELGI is an effective classifier calibration strategy that uses inter-subject information to improve the accuracy of subject-specific P300 classifiers, and it could also be applied to other BCI paradigms. PMID:27005002
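
    The weighting rule is not described in the abstract. The sketch below conveys the general WELGI idea of combining intra- and inter-subject elementary classifiers; the choice of scikit-learn LDA classifiers, validation-accuracy weights, and the synthetic data are all assumptions rather than the authors' method.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def welgi_like_ensemble(target_X, target_y, other_subjects, n_val=100):
          """Weighted ensemble in the spirit of WELGI.

          Elementary classifiers are built from the target subject's training data
          alone and from its combination with each other subject's data, then
          weighted by accuracy on a held-out slice of the target data.
          """
          X_tr, y_tr = target_X[:-n_val], target_y[:-n_val]
          X_val, y_val = target_X[-n_val:], target_y[-n_val:]
          members = [LinearDiscriminantAnalysis().fit(X_tr, y_tr)]
          for X_o, y_o in other_subjects:
              members.append(LinearDiscriminantAnalysis().fit(
                  np.vstack([X_tr, X_o]), np.concatenate([y_tr, y_o])))
          weights = np.array([m.score(X_val, y_val) for m in members])
          weights = weights / weights.sum()
          def p300_score(X):
              # Weighted sum of the members' decision values (target vs. non-target)
              return sum(w * m.decision_function(X) for w, m in zip(weights, members))
          return p300_score

      # Synthetic demonstration with fake "subjects"
      rng = np.random.default_rng(0)
      def fake_subject(n=400, shift=0.0):
          y = rng.integers(0, 2, n)
          X = rng.normal(0, 1, (n, 10)) + np.outer(y, np.linspace(0.5, 1.0, 10)) + shift
          return X, y
      tX, ty = fake_subject()
      scorer = welgi_like_ensemble(tX, ty, [fake_subject(shift=0.3), fake_subject(shift=-0.2)])
      print(scorer(tX[:5]), ty[:5])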

  9. Audit-based education: a potentially effective program for improving guideline achievement in CKD patients.

    PubMed

    de Goeij, Moniek C M; Rotmans, Joris I

    2013-09-01

    The achievement of treatment guidelines in patients with chronic kidney disease is poor, and more efforts are needed to improve this. Audit-based education is a program that may contribute to this improvement. de Lusignan et al. investigated whether audit-based education is effective in lowering systolic blood pressure in a primary-care setting. Although the program is inventive and promising, several adjustments are needed before it can be applied as an effective strategy. PMID:23989357

  10. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional to global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  11. Improving Mathematics Achievement for All California Students: The Report of the California Mathematics Task Force.

    ERIC Educational Resources Information Center

    California State Dept. of Education, Sacramento.

    This document is the result of the California Mathematics Task Force assigned to address the need to improve the mathematics achievement of California's students. Five recommendations are made and discussed: (1) The State Superintendent of Public Instruction (SSPI) must establish clear and specific content and performance standards for mathematics…

  12. Supporting Instructional Improvement in Low-Performing Schools to Increase Students' Academic Achievement

    ERIC Educational Resources Information Center

    Bellei, Cristian

    2013-01-01

    This is an impact evaluation of the Technical Support to Failing Schools Program, a Chilean compensatory program that provided 4-year in-school technical assistance to low-performing schools to improve students' academic achievement. The author implemented a quasi-experimental design by using difference-in-differences estimation combined with…

  13. Dynamic Geometry Software Improves Mathematical Achievement: Systematic Review and Meta-Analysis

    ERIC Educational Resources Information Center

    Chan, Kan Kan; Leung, Siu Wai

    2014-01-01

    Dynamic geometry software (DGS) aims to enhance mathematics education. This systematic review and meta-analysis evaluated the quasi-experimental studies on the effectiveness of DGS-based instruction in improving students' mathematical achievement. Research articles published between 1990 and 2013 were identified from major databases according to a…

  14. What Matters for Elementary Literacy Coaching? Guiding Principles for Instructional Improvement and Student Achievement

    ERIC Educational Resources Information Center

    L'Allier, Susan; Elish-Piper, Laurie; Bean, Rita M.

    2010-01-01

    Literacy coaches provide job-embedded professional development for teachers, and the number of literacy coaches in elementary schools is increasing. Although literacy coaching offers promise in terms of improving teacher practice and student achievement, guidance is needed regarding the qualifications, activities, and roles of literacy coaches.…

  15. Instructional Leadership Influence on Collective Teacher Efficacy to Improve School Achievement

    ERIC Educational Resources Information Center

    Fancera, Samuel F.; Bliss, James R.

    2011-01-01

    The purpose of this study was to examine whether instructional leadership functions, as defined in Hallinger's Principal Instructional Management Rating Scale, positively influence collective teacher efficacy to improve school achievement. Teachers from sample schools provided data for measures of collective teacher efficacy and instructional…

  16. Investing in Educator Data Literacy Improves Student Achievement. Evidence of Impact: The Oregon Data Project

    ERIC Educational Resources Information Center

    Data Quality Campaign, 2012

    2012-01-01

    Since 2007 the Oregon DATA Project has been investing resources to provide educators on-the-job training around effective data use to improve student achievement. New evidence shows that their efforts are paying off. A 2011 Oregon DATA Project report detailed the impact of their investment in the state's educators, finding the following: (1)…

  17. Improving Student Interest and Achievement in Social Studies Using a Multiple Intelligence Approach.

    ERIC Educational Resources Information Center

    Hanley, Chris; Hermiz, Carmen; Lagioia-Peddy, Jennifer; Levine-Albuck, Valerie

    This action research paper describes a program initiated by teacher researchers to improve academic achievement and interest in social studies. The targeted group consisted of fifth graders in a lower middle class community in the Midwest. Analysis of the problem-causes data shows three main factors: curriculum, attitude, and effect. In regard to…

  18. Staff Development Designed To Improve the Achievement of Students with Disabilities.

    ERIC Educational Resources Information Center

    Byrnes, MaryAnn; Majors, Martha

    This paper describes how the University of Massachusetts (Boston) developed partnership programs to improve achievement of students with significant disabilities just beginning to participate in a standards-based general curriculum. Fundamental to the effort was development of a 12-credit graduate certificate program focused on adapting the…

  19. Improving Teaching Capacity to Increase Student Achievement: The Key Role of Data Interpretation by School Leaders

    ERIC Educational Resources Information Center

    Lynch, David; Smith, Richard; Provost, Steven; Madden, Jake

    2016-01-01

    Purpose: This paper argues that in a well-organised school with strong leadership and vision coupled with a concerted effort to improve the teaching performance of each teacher, student achievement can be enhanced. The purpose of this paper is to demonstrate that while macro-effect sizes such as "whole of school" metrics are useful for…

  20. Improving High School Students' Mathematics Achievement through the Use of Motivational Strategies.

    ERIC Educational Resources Information Center

    Portal, Jamie; Sampson, Lisa

    This report describes a program for motivating students in mathematics in order to improve achievement at the high school level. The targeted population consisted of high school students in a middle class community located in a suburb of a large metropolitan area. The problems of underachievement were documented through data collected from surveys…

  1. Improving Achievement in Low-Performing Schools: Key Results for School Leaders

    ERIC Educational Resources Information Center

    Ward, Randolph E.; Burke, Mary Ann

    2004-01-01

    As accountability in schools becomes more crucial, educators are looking for comprehensive and innovative management practices that respond to challenges and realities of student academic achievement. In order to improve academic performance and the quality of instruction, the entire school community needs to be involved. This book provides six…

  2. Analyzing Academic Achievement of Junior High School Students by an Improved Rough Set Model

    ERIC Educational Resources Information Center

    Pai, Ping-Feng; Lyu, Yi-Jia; Wang, Yu-Min

    2010-01-01

    Rough set theory (RST) is an emerging technique used to deal with problems in data mining and knowledge acquisition. However, the RST approach has not been widely explored in the field of academic achievement. This investigation developed an improved RST (IMRST) model, which employs linear discriminant analysis to determine a reduct of RST, and…

  3. Metacognitive Scaffolds Improve Self-Judgments of Accuracy in a Medical Intelligent Tutoring System

    ERIC Educational Resources Information Center

    Feyzi-Behnagh, Reza; Azevedo, Roger; Legowski, Elizabeth; Reitmeyer, Kayse; Tseytlin, Eugene; Crowley, Rebecca S.

    2014-01-01

    In this study, we examined the effect of two metacognitive scaffolds on the accuracy of confidence judgments made while diagnosing dermatopathology slides in SlideTutor. Thirty-one (N = 31) first- to fourth-year pathology and dermatology residents were randomly assigned to one of the two scaffolding conditions. The cases used in this study were…

  4. Improving the Accuracy of Teacher Self-Evaluation through Staff Development.

    ERIC Educational Resources Information Center

    Smylie, Mark

    This study examined the impact of a staff development program on increasing the accuracy of teachers' self-evaluation of classroom performance. Teachers were provided with specific formal feedback through a program called The Effective Use of Time Program (EUOT). The program had four components: (1) observation in the classroom and feedback about…

  5. Improving Accuracy Is Not the Only Reason for Writing, and Even If It Were...

    ERIC Educational Resources Information Center

    Bruton, Anthony

    2009-01-01

    For research into language development in L2 writing to have any relevance, it has to be situated within a framework of decisions in writing pedagogy. Furthermore, a perspective on L2 language development cannot be limited only to accuracy levels. Even if this is the case, it is counter-intuitive that further input may be detrimental to language…

  6. Bureau of Indian Affairs Schools: New Facilities Management Information System Promising, but Improved Data Accuracy Needed.

    ERIC Educational Resources Information Center

    General Accounting Office, Washington, DC.

    A General Accounting Office (GAO) study evaluated the Bureau of Indian Affairs' (BIA) new facilities management information system (FMIS). Specifically, the study examined whether the new FMIS addresses the old system's weaknesses and meets BIA's management needs, whether BIA has finished validating the accuracy of data transferred from the old…

  7. Improving the accuracy of Weyl-Heisenberg wavelet and symmetrized Gaussian representations using customized phase-space-region operators.

    PubMed

    Lombardini, Richard; Poirier, Bill

    2006-09-01

    A particular basis set method developed by one of the authors, involving maximally localized orthogonal Weyl-Heisenberg wavelets (or "weylets") and a phase space truncation scheme, has been successfully applied to exact quantum calculations for many degrees of freedom (DOF's) [B. Poirier and A. Salam, J. Chem. Phys. 121, 1740 (2004)]. However, limitations in accuracy arise in the many-DOF case, owing to memory limits on conventional computers. This paper addresses this accuracy limitation by introducing phase space region operators (PSRO's) that customize individual weylet basis functions for the problem of interest. The construction of the PSRO's is straightforward, and does not require a priori knowledge of the desired eigenstates. The PSRO, when applied to weylets, as well as to simple phase space Gaussian basis functions, exhibits remarkable improvements in accuracy, reducing computed eigenvalue errors by orders of magnitude. The method is applied to various model systems at varying DOF's. PMID:17025784

  8. Does Children’s Academic Achievement Improve when Single Mothers Marry?

    PubMed Central

    Wagmiller, Robert L.; Gershoff, Elizabeth; Veliz, Philip; Clements, Margaret

    2011-01-01

    Promoting marriage, especially among low-income single mothers with children, is increasingly viewed as a promising public policy strategy for improving developmental outcomes for disadvantaged children. Previous research suggests, however, that children’s academic achievement either does not improve or declines when single mothers marry. In this paper, we argue that previous research may understate the benefits of mothers’ marriages to children from single-parent families because (1) the short-term and long-term developmental consequences of marriage are not adequately distinguished and (2) child and family contexts in which marriage is likely to confer developmental advantages are not differentiated from those that do not. Using multiple waves of data from the ECLS-K, we find that single mothers’ marriages are associated with modest but statistically significant improvements in their children’s academic achievement trajectories. However, only children from more advantaged single-parent families benefit from their mothers’ marriage. PMID:21611134

  9. Approaches for achieving long-term accuracy and precision of δ18O and δ2H for waters analyzed using laser absorption spectrometers.

    PubMed

    Wassenaar, Leonard I; Coplen, Tyler B; Aggarwal, Pradeep K

    2014-01-21

    Measurements of δ(2)H and δ(18)O in water samples by laser absorption spectroscopy (LAS) are increasingly adopted in hydrologic and environmental studies. Although LAS instrumentation is easy to use, its incorporation into laboratory operations is not as easy, owing to the extensive offline data manipulation required for outlier detection, derivation and application of algorithms to correct for between-sample memory, correction of linear and nonlinear instrumental drift, VSMOW-SLAP scale normalization, and maintenance of long-term QA/QC audits. Here we propose a series of standardized water-isotope LAS performance tests and routine sample analysis templates, recommended procedural guidelines, and new data processing software (LIMS for Lasers) that together enable new and current LAS users to achieve and sustain long-term δ(2)H and δ(18)O accuracy and precision for these important isotopic assays. PMID:24328223

  10. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2009-01-01

    at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai (2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai's reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  11. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Astrophysics Data System (ADS)

    Justh, H. L.; Justus, C. G.; Badger, A. M.

    2009-12-01

    at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented of the work underway to derive better multipliers by including possible variation with latitude and/or Ls. This is achieved by comparison of Mars-GRAM MapYear=0 output with TES limb data. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths. Answers may also be provided to the issues raised in a recent study by Desai(2008). Desai has shown that the actual landing sites of Mars Pathfinder, the Mars Exploration Rovers and the Phoenix Mars Lander have been further downrange than predicted by models prior to landing. Desai’s reconstruction of their entries into the Martian atmosphere showed that the models consistently predicted higher densities than those found upon EDL. The solution of this problem would be important to the Mars Program since future exploration of Mars by landers and rovers will require more accurate landing capabilities, especially for the proposed Mars Sample Return mission.

  12. Improved solution accuracy for Landsat-4 (TDRSS-user) orbit determination

    NASA Technical Reports Server (NTRS)

    Oza, D. H.; Niklewski, D. J.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.

    1994-01-01

    This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using a Prototype Filter Smoother (PFS), with the accuracy of an established batch-least-squares system, the Goddard Trajectory Determination System (GTDS). The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances for the sequential case) of solutions produced by the batch and sequential methods. The filtered and smoothed PFS orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 15 meters.

  13. A neural network based ensemble approach for improving the accuracy of meteorological fields used for regional air quality modeling.

    PubMed

    Cheng, Shuiyuan; Li, Li; Chen, Dongsheng; Li, Jianbing

    2012-12-15

    A neural network based ensemble methodology was presented in this study to improve the accuracy of meteorological input fields for regional air quality modeling. Through nonlinear integration of simulation results from two meteorological models (MM5 and WRF), the ensemble approach focused on the optimization of meteorological variable values (temperature, surface air pressure, and wind field) in the vertical layer near ground. To illustrate the proposed approach, a case study in northern China during two selected air pollution events, in 2006, was conducted. The performances of the MM5, the WRF, and the ensemble approach were assessed using different statistical measures. The results indicated that the ensemble approach had a higher simulation accuracy than the MM5 and the WRF model. Performance was improved by more than 12.9% for temperature, 18.7% for surface air pressure field, and 17.7% for wind field. The atmospheric PM(10) concentrations in the study region were also simulated by coupling the air quality model CMAQ with the MM5 model, the WRF model, and the ensemble model. It was found that the modeling accuracy of the ensemble-CMAQ model was improved by more than 7.0% and 17.8% when compared to the MM5-CMAQ and the WRF-CMAQ models, respectively. The proposed neural network based meteorological modeling approach holds great potential for improving the performance of regional air quality modeling. PMID:23000477
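
    The ensemble is described only at a high level in the abstract. The sketch below shows the basic idea of nonlinearly mapping co-located MM5 and WRF predictions of a near-surface variable to the observed value; the network size, the use of scikit-learn's MLPRegressor, and the synthetic data are assumptions, not the authors' configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def train_met_ensemble(mm5_vals, wrf_vals, observed):
          """Fit a small neural network that blends two models' co-located
          predictions of one near-surface variable into a single estimate."""
          X = np.column_stack([mm5_vals, wrf_vals])
          model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
          model.fit(X, observed)
          return model

      # Synthetic illustration with temperature anomalies (K): both models are
      # biased, noisy versions of the "observed" values
      rng = np.random.default_rng(0)
      truth = 10.0 * rng.random(500)
      mm5 = truth + 1.5 + rng.normal(0.0, 1.0, truth.size)
      wrf = truth - 2.0 + rng.normal(0.0, 1.2, truth.size)
      ensemble = train_met_ensemble(mm5, wrf, truth)
      print(ensemble.predict(np.column_stack([mm5[:3], wrf[:3]])), truth[:3])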

  14. Travel-time source-specific station correction improves location accuracy

    NASA Astrophysics Data System (ADS)

    Giuntini, Alessandra; Materni, Valerio; Chiappini, Stefano; Carluccio, Roberto; Console, Rodolfo; Chiappini, Massimo

    2013-04-01

    Accurate earthquake locations are crucial for investigating seismogenic processes, as well as for applications like verifying compliance with the Comprehensive Test Ban Treaty (CTBT). Earthquake location accuracy is related to the degree of knowledge about the 3-D structure of seismic wave velocity in the Earth. It is well known that modeling errors in calculated travel times may shift the computed epicenters far from the real locations, by a distance even larger than the size of the statistical error ellipses, regardless of the accuracy in picking seismic phase arrivals. The consequences of large mislocations of seismic events are particularly critical in the context of CTBT verification, where they affect the decision to trigger a possible On-Site Inspection (OSI). In fact, the Treaty establishes that an OSI area cannot be larger than 1000 km2 and that its largest linear dimension cannot exceed 50 km. Moreover, depth accuracy is crucial for the application of the depth event screening criterion. In the present study, we develop a method of source-specific travel-time corrections based on a set of well-located events recorded by dense national seismic networks in seismically active regions. The applications concern seismic sequences recorded in Japan, Iran, and Italy. We show that mislocations of the order of 10-20 km affecting the epicenters, as well as larger mislocations in hypocentral depth, calculated from a global seismic network using the standard IASPEI91 travel times, can be effectively removed by applying source-specific station corrections.
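
    A minimal sketch of the correction step is shown below, assuming the residuals (observed minus IASPEI91-predicted travel times) for well-located reference events are already available. Averaging them per station is one simple choice; robust alternatives such as the median or a trimmed mean are equally plausible, and the paper does not specify which estimator it uses.

      import numpy as np
      from collections import defaultdict

      def station_corrections(residuals):
          """Source-specific station corrections from well-located reference events.

          residuals: iterable of (station, observed_minus_predicted_travel_time_s)
          pairs accumulated over reference events in the source region. The
          per-station mean residual is returned as the correction to add to the
          predicted travel time when locating new events in that region.
          """
          acc = defaultdict(list)
          for sta, res in residuals:
              acc[sta].append(res)
          return {sta: float(np.mean(r)) for sta, r in acc.items()}

      # Hypothetical residuals from two stations and two reference events
      corr = station_corrections([("MAJO", 0.8), ("MAJO", 0.6), ("ARU", -1.1), ("ARU", -0.9)])
      # Corrected prediction for a new event: t_corrected = t_iaspei91 + corr[station]
      print(corr)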

  15. Two Simple Rules for Improving the Accuracy of Empiric Treatment of Multidrug-Resistant Urinary Tract Infections

    PubMed Central

    Strymish, Judith; Gupta, Kalpana

    2015-01-01

    The emergence of multidrug-resistant (MDR) uropathogens is making the treatment of urinary tract infections (UTIs) more challenging. We sought to evaluate the accuracy of empiric therapy for MDR UTIs and the utility of prior culture data in improving the accuracy of the therapy chosen. The electronic health records from three U.S. Department of Veterans Affairs facilities were retrospectively reviewed for the treatments used for MDR UTIs over 4 years. An MDR UTI was defined as an infection caused by a uropathogen resistant to three or more classes of drugs and identified by a clinician to require therapy. Previous data on culture results, antimicrobial use, and outcomes were captured from records from inpatient and outpatient settings. Among 126 patient episodes of MDR UTIs, the choices of empiric therapy against the index pathogen were accurate in 66 (52%) episodes. For the 95 patient episodes for which prior microbiologic data were available, when empiric therapy was concordant with the prior microbiologic data, the rate of accuracy of the treatment against the uropathogen improved from 32% to 76% (odds ratio, 6.9; 95% confidence interval, 2.7 to 17.1; P < 0.001). Genitourinary tract (GU)-directed agents (nitrofurantoin or sulfa agents) were equally as likely as broad-spectrum agents to be accurate (P = 0.3). Choosing an agent concordant with previous microbiologic data significantly increased the chance of accuracy of therapy for MDR UTIs, even if the previous uropathogen was a different species. Also, GU-directed or broad-spectrum therapy choices were equally likely to be accurate. The accuracy of empiric therapy could be improved by the use of these simple rules. PMID:26416859
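
    The findings suggest a simple heuristic: prefer an agent concordant with the patient's most recent prior culture, and do not assume a broad-spectrum agent is more likely to be accurate than a GU-directed one. The sketch below is purely illustrative; the agent list, preference order, and default are assumptions and not clinical guidance.

      def choose_empiric_agent(prior_culture_susceptible, default_agent="nitrofurantoin"):
          """Pick an empiric agent concordant with the most recent prior culture.

          prior_culture_susceptible: agents the last cultured uropathogen was
          susceptible to (empty if no prior culture). GU-directed agents are
          listed before broad-spectrum ones; this ordering is an assumption.
          """
          preference = ["nitrofurantoin", "trimethoprim-sulfamethoxazole",
                        "ciprofloxacin", "ceftriaxone", "piperacillin-tazobactam"]
          for agent in preference:
              if agent in (prior_culture_susceptible or []):
                  return agent
          return default_agent

      print(choose_empiric_agent(["trimethoprim-sulfamethoxazole", "ceftriaxone"]))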

  16. Improving Mars-GRAM: Increasing the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    Extensively utilized for numerous mission applications, the Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model. In a Monte-Carlo mode, Mars-GRAM's perturbation modeling capability is used to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM has been found to be inexact when used during the Mars Science Laboratory (MSL) site selection process for sensitivity studies for MapYear=0 and large optical depth values such as tau=3. Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM) from the surface to 80 km altitude. Mars-GRAM with the MapYear parameter set to 0 utilizes results from a MGCM run with a fixed value of tau=3 at all locations for the entire year. Imprecise atmospheric density and pressure at all altitudes is a consequence of this use of MGCM with tau=3. Density factor values have been determined for tau=0.3, 1 and 3 as a preliminary fix to this pressure-density problem. These factors adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. These density factors are fixed values for all latitudes and Ls and are included in Mars-GRAM Release 1.3. Work currently being done, to derive better multipliers by including variations with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data, will be highlighted in the presentation. The TES limb data utilized in this process has been validated by a comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS). This comparison study was undertaken for locations on Mars of varying latitudes, Ls, and LTST. The more precise density factors will be included in Mars-GRAM 2005 Release 1.4 and thus improve the results of future sensitivity studies done for large

  17. Improved protein identification using automated high mass measurement accuracy MALDI FT-ICR MS peptide mass fingerprinting

    NASA Astrophysics Data System (ADS)

    Horn, David M.; Peters, Eric C.; Klock, Heath; Meyers, Andrew; Brock, Ansgar

    2004-11-01

    A comparison between automated peptide mass fingerprinting systems using MALDI-TOF and MALDI FT-ICR MS is presented using 86 overexpressed proteins from Thermotoga maritima. The high mass measurement accuracy of FT-ICR MS greatly reduces the probability of an incorrect assignment of a protein in peptide mass fingerprinting by significantly decreasing the score and peptide sequence coverage of the highest ranked random protein match from the database. This improved mass accuracy led to the identification of all 86 proteins with the FT-ICR data versus 84 proteins using the TOF data against the T. maritima database. The beneficial effect of mass accuracy becomes much more evident with the addition of variable modifications and an increase in the size of the database used in the search. A search of the same data against the T. maritima database with the addition of a variable modification resulted in 77 identifications using MALDI-TOF and 84 identifications using MALDI FT-ICR MS. When searching the NCBInr database, the FT-ICR based system identified 82 of 86 proteins while the TOF based system could only identify 73. The MALDI FT-ICR based system has the further advantage of producing fewer unassigned masses in each peptide mass fingerprint, resulting in greatly reduced sequence coverage and score for the highest ranked random match and improving confidence in the correctly assigned top scoring protein. Finally, the use of rms error as a measure for instrumental mass accuracy is discussed.
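
    The effect of mass accuracy on fingerprinting can be seen with a toy matcher: tightening the tolerance from tens of ppm (TOF-like) to a few ppm (FT-ICR-like) removes most random matches. The matched-fraction scoring below is a simplification of real search engines, and the peptide masses shown are hypothetical.

      def match_peptides(observed_masses, theoretical_masses, tol_ppm=5.0):
          """Match observed peptide masses to a protein's theoretical tryptic
          peptide masses within a ppm tolerance, and return the matched masses
          together with the fraction of observed masses that were matched."""
          matched = [m for m in observed_masses
                     if any(abs(m - t) / t * 1e6 <= tol_ppm for t in theoretical_masses)]
          coverage = len(matched) / len(observed_masses) if observed_masses else 0.0
          return matched, coverage

      obs = [842.5094, 1045.5637, 1296.6853]           # hypothetical monoisotopic masses
      theo = [842.5099, 1296.6848, 1500.7321]
      print(match_peptides(obs, theo, tol_ppm=5.0))    # vs. tol_ppm=50.0 for TOF-like data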

  18. Improving the Accuracy of Computer-aided Diagnosis for Breast MR Imaging by Differentiating between Mass and Nonmass Lesions.

    PubMed

    Gallego-Ortiz, Cristina; Martel, Anne L

    2016-03-01

    for mass and nonmass lesions improves the accuracy of CAD for breast MR imaging. By cascading classifiers, we achieved a significant improvement in performance with respect to the use of a one-shot classifier. Our cascaded classifier may provide an advantage for screening women at high risk for breast cancer, in whom the ability to diagnose cancers at an early stage is of primary importance. (©) RSNA, 2015 Online supplemental material is available for this article. PMID:26383229

  19. Physician involvement enhances coding accuracy to ensure national standards: an initiative to improve awareness among new junior trainees.

    PubMed

    Nallasivan, S; Gillott, T; Kamath, S; Blow, L; Goddard, V

    2011-06-01

    Record Keeping Standards is a development led by the Royal College of Physicians of London (RCP) Health Informatics Unit and funded by the National Health Service (NHS) Connecting for Health. A supplementary report produced by the RCP makes a number of recommendations based on a study held at an acute hospital trust. We audited the medical notes and coding to assess the accuracy, documentation by the junior doctors and also to correlate our findings with the RCP audit. Northern Lincolnshire & Goole Hospitals NHS Foundation Trust has 114,000 'finished consultant episodes' per year. A total of 100 consecutive medical (50) and rheumatology (50) discharges from Diana Princess of Wales Hospital from August-October 2009 were reviewed. The results showed an improvement in coding accuracy (10% errors), comparable to the RCP audit but with 5% documentation errors. Physician involvement needs enhancing to improve the effectiveness and to ensure clinical safety. PMID:21677911

  20. Significant Improvement of Puncture Accuracy and Fluoroscopy Reduction in Percutaneous Transforaminal Endoscopic Discectomy With Novel Lumbar Location System

    PubMed Central

    Fan, Guoxin; Guan, Xiaofei; Zhang, Hailong; Wu, Xinbo; Gu, Xin; Gu, Guangfei; Fan, Yunshan; He, Shisheng

    2015-01-01

    Abstract Prospective nonrandomized control study. The study aimed to investigate the implication of the HE's Lumbar LOcation (HELLO) system in improving the puncture accuracy and reducing fluoroscopy in percutaneous transforaminal endoscopic discectomy (PTED). Percutaneous transforaminal endoscopic discectomy is one of the most popular minimally invasive spine surgeries that heavily depend on repeated fluoroscopy. Increased fluoroscopy will induce higher radiation exposure to surgeons and patients. Accurate puncture in PTED can be achieved by accurate preoperative location and definite trajectory. The HELLO system mainly consists of self-made surface locator and puncture-assisted device. The surface locator was used to identify the exact puncture target and the puncture-assisted device was used to optimize the puncture trajectory. Patients who had single L4/5 or L5/S1 lumbar intervertebral disc herniation and underwent PTED were included the study. Patients receiving the HELLO system were assigned in Group A, and those taking conventional method were assigned in Group B. Study primary endpoint was puncture times and fluoroscopic time, and the secondary endpoint was location time and operation time. A total of 62 patients who received PTED were included in this study. The average age was 45.35 ± 8.70 years in Group A and 46.61 ± 7.84 years in Group B (P = 0.552). There were no significant differences in gender, body mass index, conservative time, and surgical segment between the 2 groups (P > 0.05). The puncture time(s) were 1.19 ± 0.48 in Group A and 6.03 ± 1.87 in Group B (P < 0.001). The fluoroscopic times were 14.03 ± 2.54 in Group A and 25.19 ± 4.28 in Group B (P < 0.001). The preoperative location time was 4.67 ± 1.41 minutes in Group A and 6.98 ± 0.94 minutes in Group B (P < 0.001). The operation time was 79.42 ± 10.15 minutes in Group A and 89.65 ± 14.06 minutes in Group B (P

  1. Unstructured grids in 3D and 4D for a time-dependent interface in front tracking with improved accuracy

    SciTech Connect

    Glimm, J.; Grove, J. W.; Li, X. L.; Li, Y.; Xu, Z.

    2002-01-01

    Front tracking traces the dynamic evolution of an interface separating different materials or fluid components. In this paper, they describe three types of grid generation methods used in the front tracking method. One is an unstructured surface grid. The second is a structured, grid-based reconstruction method. The third is a time-space grid, also grid based, for a conservative tracking algorithm with improved accuracy.

  2. Algorithms for the reconstruction of the singular wave front of laser radiation: analysis and improvement of accuracy

    SciTech Connect

    Aksenov, V P; Kanev, F Yu; Izmailov, I V; Starikov, F A

    2008-07-31

    The possibility of reconstructing a singular wave front of laser beams by the local tilts of the wave front measured with a Hartmann sensor is considered. The accuracy of the reconstruction algorithm described by Fried is estimated and its modification is proposed, which allows one to improve the reliability of the phase reconstruction. Based on the Fried algorithm and its modification, a combined algorithm is constructed whose advantages are demonstrated in numerical experiments. (control of laser radiation parameters)

  3. Development of an Automated Bone Mineral Density Software Application: Facilitation Radiologic Reporting and Improvement of Accuracy.

    PubMed

    Tsai, I-Ta; Tsai, Meng-Yuan; Wu, Ming-Ting; Chen, Clement Kuen-Huang

    2016-06-01

    The conventional method of bone mineral density (BMD) report production by dictation and transcription is time consuming and prone to error. We developed an automated BMD reporting system based on the raw data from a dual energy X-ray absorptiometry (DXA) scanner for facilitating the report generation. The automated BMD reporting system, a web application, digests the DXA's raw data and automatically generates preliminary reports. In Jan. 2014, 500 examinations were randomized into an automatic group (AG) and a manual group (MG), and the speed of report generation was compared. For evaluation of the accuracy and analysis of errors, 5120 examinations during Jan. 2013 and Dec. 2013 were enrolled retrospectively, and the context of automatically generated reports (AR) was compared with the formal manual reports (MR). The average time spent for report generation in AG and in MG was 264 and 1452 s, respectively (p < 0.001). The accuracy of calculation of T and Z scores in AR is 100 %. The overall accuracy of AR and MR is 98.8 and 93.7 %, respectively (p < 0.001). The mis-categorization rate in AR and MR is 0.039 and 0.273 %, respectively (p = 0.0013). Errors occurred in AR and can be grouped into key-in errors by technicians and need for additional judgements. We constructed an efficient and reliable automated BMD reporting system. It facilitates current clinical service and potentially prevents human errors from technicians, transcriptionists, and radiologists. PMID:26644156
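
    The quantities in such reports rest on standard DXA definitions. As a minimal sketch (the reference means and standard deviations are scanner- and population-specific inputs, not values from the paper), the T- and Z-scores and the WHO T-score categories can be computed as follows.

      def bmd_scores(bmd, young_adult_mean, young_adult_sd, age_matched_mean, age_matched_sd):
          """T- and Z-scores from a DXA BMD value (g/cm^2) and reference statistics.

          The T-score compares against a young-adult reference population and the
          Z-score against an age-matched one; the WHO categories below apply to
          the T-score. Reference means/SDs must be supplied for the scanner/site.
          """
          t = (bmd - young_adult_mean) / young_adult_sd
          z = (bmd - age_matched_mean) / age_matched_sd
          if t >= -1.0:
              category = "normal"
          elif t > -2.5:
              category = "osteopenia (low bone mass)"
          else:
              category = "osteoporosis"
          return round(t, 1), round(z, 1), category

      # Hypothetical lumbar spine measurement and reference values
      print(bmd_scores(0.82, young_adult_mean=1.00, young_adult_sd=0.11,
                       age_matched_mean=0.90, age_matched_sd=0.11))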

  4. Space Age Geodesy: Global Earth Observations of Ever Improving resolution and Accuracy

    NASA Astrophysics Data System (ADS)

    Carter, W. E.

    2007-12-01

    The launch of Sputnik-I by the USSR in 1957, and the resulting competitive US-USSR space exploration and weapons programs, led to the need for global geodetic measurements of unprecedented accuracy, and the means to develop new observing techniques to meet those needs. By the 1970s the geodetic community developed very long baseline interferometry (VLBI), lunar laser ranging (LLR), and satellite laser ranging (SLR), and launched international tests that led to the establishment of the International Earth Rotation Service (IERS). Today the IERS provides a stable International Celestial Reference Frame (ICRF), and accurate earth orientation parameters (EOP) values, using a combination of VLBI, LLR, SLR, and the Global Positioning System (GPS). There are hundreds of continuously operating GPS stations around the world, providing centimeter station locations and millimeter per year station velocities, in the International Terrestrial Reference Frame (ITRF). The location of any point on earth can be determined relative to the ITRF to within a few centimeters from a few days of GPS observations, and using kinematic GPS, the positions of moving objects can be tracked to a few centimeters at distances of tens of kilometers from the nearest GPS ground stations. This geodetic infrastructure and space age technology has led to the development of new airborne topographic mapping techniques, most significantly, airborne laser swath mapping (ALSM). With ALSM, it is now possible to map thousands of square kilometers of terrain with sub-decimeter vertical accuracy in hours. For example, the entire length of the San Andreas fault, in California, was mapped in a few hundred hours of flying time. Within the next few decades, global ALSM observations will make it possible for scientists to immediately access (by the internet) data bases containing the locations (cm accuracy) and rates of motion (mm per year accuracy) of points on the surface of earth, with sub-meter spatial resolution

  5. Improving the accuracy of migration age detail in multiple-area population forecasts

    SciTech Connect

    Schroeder, E.C.; Pittenger, D.B.

    1983-05-01

    Population projections are often required for many geographical areas, and must be prepared with maximal computer and minimal analytical effort. At the same time, realistic age detail forecasts require a flexible means of treating age-specific net migration. This report presents a migration projection technique compatible with these constraints. A simplified version of Pittenger's model is used, where future migration patterns are automatically assigned from characteristics of historical patterns. A comparative test of age-pattern accuracy for 1970-1980 indicates that this technique is superior to the commonly used plus-minus adjustment to historical rates. 13 references, 6 figures, 4 tables.

  6. Improvements are needed in reporting of accuracy studies for diagnostic tests used for detection of finfish pathogens.

    PubMed

    Gardner, Ian A; Burnley, Timothy; Caraguel, Charles

    2014-12-01

    Indices of test accuracy, such as diagnostic sensitivity and specificity, are important considerations in test selection for a defined purpose (e.g., screening or confirmation) and affect the interpretation of test results. Many biomedical journals recommend that authors clearly and transparently report test accuracy studies following the Standards for Reporting of Diagnostic Accuracy (STARD) guidelines ( www.stard-statement.org ). This allows readers to evaluate overall study validity and assess potential bias in diagnostic sensitivity and specificity estimates. The purpose of the present study was to evaluate the reporting quality of studies evaluating test accuracy for finfish diseases using the 25 items in the STARD checklist. Based on a database search, 11 studies that included estimates of diagnostic accuracy were identified for independent evaluation by three reviewers. For each study, STARD checklist items were scored as "yes," "no," or "not applicable." Only 10 of the 25 items were consistently reported in most (≥80%) papers, and reporting of the other items was highly variable (mostly between 30% and 60%). Three items ("number, training, and expertise of readers and testers"; "time interval between index tests and reference standard"; and "handling of indeterminate results, missing data, and outliers of the index tests") were reported in less than 10% of papers. Two items ("time interval between index tests and reference standard" and "adverse effects from testing") were considered minimally relevant to fish health because test samples usually are collected postmortem. Modification of STARD to fit finfish studies should increase use by authors and thereby improve the overall reporting quality regardless of how the study was designed. Furthermore, the use of STARD may lead to the improved design of future studies. PMID:25252270

  7. Recipe for Success: An Updated Parents' Guide to Improving Colorado Schools and Student Achievement. Second Edition.

    ERIC Educational Resources Information Center

    Taher, Bonnie; Durr, Pamela

    This guide describes ways that parents can help improve student achievement and school quality. It answers such questions as how to choose the right early-education opportunity for a preschooler, how to make sure a 5-year-old is ready for school, how to help a daughter do well in school, how to work with a daughter's or son's teachers, how to help…

  8. Improved reticle requalification accuracy and efficiency via simulation-powered automated defect classification

    NASA Astrophysics Data System (ADS)

    Paracha, Shazad; Eynon, Benjamin; Noyes, Ben F.; Nhiev, Anthony; Vacca, Anthony; Fiekowsky, Peter; Fiekowsky, Dan; Ham, Young Mog; Uzzel, Doug; Green, Michael; MacDonald, Susan; Morgan, John

    2014-04-01

    Advanced IC fabs must inspect critical reticles on a frequent basis to ensure high wafer yields. These necessary requalification inspections have traditionally carried high risk and expense. Manually reviewing sometimes hundreds of potentially yield-limiting detections is a very high-risk activity because of the likelihood of human error, the worst of which is the accidental passing of a real, yield-limiting defect. Painfully high cost is incurred as a result, but high cost is also realized on a daily basis while reticles are being manually classified on inspection tools, since these tools often remain in a non-productive state during classification. An automatic defect analysis system (ADAS) has been implemented at a 20 nm node wafer fab to automate reticle defect classification by simulating each defect's printability under the intended illumination conditions. In this paper, we study and present results showing the positive impact that an automated reticle defect classification system has on the reticle requalification process, specifically on defect classification speed and accuracy. To verify accuracy, detected defects of interest were analyzed with lithographic simulation software and compared to the results of both AIMS™ optical simulation and actual wafer prints.

  9. Improved accuracy of computed tomography in local staging of rectal cancer using water enema.

    PubMed

    Lupo, L; Angelelli, G; Pannarale, O; Altomare, D; Macarini, L; Memeo, V

    1996-01-01

    A new computed tomography technique for the preoperative staging of rectal cancer, using a water enema to promote full distension of the rectum, was compared with standard CT in a non-randomised blind study. One hundred and twenty-one patients were enrolled: 57 in the water enema CT group and 64 in the standard group. The stage of the disease was assessed following strict criteria and tested against the pathological examination of the resected specimen. Water enema CT was significantly more accurate than standard CT, with an accuracy of 84.2% vs. 62.5% (kappa: 0.56 vs. 0.33; weighted kappa: 0.93 vs. 0.84). The diagnostic gain was mainly evident in the identification of rectal wall invasion within or beyond the muscle layer (94.7% vs. 61%). The increase in accuracy was 33.7% (95% CI: 17-49%; p < 0.001). The results indicate that water enema CT should replace standard CT for staging rectal cancer and may offer an alternative to endorectal ultrasound. PMID:8739828

  10. Quantification of terrestrial laser scanner (TLS) elevation accuracy in oil palm plantation for IFSAR improvement

    NASA Astrophysics Data System (ADS)

    Muhadi, N. A.; Abdullah, A. F.; Kassim, M. S. M.

    2016-06-01

    To ensure high oil palm productivity, the plantation site should be chosen wisely. Slope is one of the essential factors that needs to be taken into consideration during site selection. A high-quality map of the plantation area with elevation information is needed for decision-making, especially when dealing with hilly and steep terrain. Therefore, accurate digital elevation models (DEMs) are required. This research aims to increase the accuracy of Interferometric Synthetic Aperture Radar (IFSAR) DEMs by integrating Terrestrial Laser Scanner (TLS) data. The focus of this paper, however, is to evaluate the z-value accuracy of TLS data against Real-Time Kinematic GPS (RTK-GPS) used as a reference. This paper also examines the importance of the filtering process in developing accurate DEMs. From this study, it was concluded that the differences in z-values between TLS and IFSAR were small when the points were located on the route and the TLS data had been filtered. The paper also concludes that the laser scanner should be set up on the route to reduce elevation error.

  11. FDG-PET improves accuracy in distinguishing frontotemporal dementia and Alzheimer's disease.

    PubMed

    Foster, Norman L; Heidebrink, Judith L; Clark, Christopher M; Jagust, William J; Arnold, Steven E; Barbas, Nancy R; DeCarli, Charles S; Turner, R Scott; Koeppe, Robert A; Higdon, Roger; Minoshima, Satoshi

    2007-10-01

    Distinguishing Alzheimer's disease (AD) and frontotemporal dementia (FTD) currently relies on a clinical history and examination, but positron emission tomography with [(18)F] fluorodeoxyglucose (FDG-PET) shows different patterns of hypometabolism in these disorders that might aid differential diagnosis. Six dementia experts with variable FDG-PET experience made independent, forced choice, diagnostic decisions in 45 patients with pathologically confirmed AD (n = 31) or FTD (n = 14) using five separate methods: (1) review of clinical summaries, (2) a diagnostic checklist alone, (3) summary and checklist, (4) transaxial FDG-PET scans and (5) FDG-PET stereotactic surface projection (SSP) metabolic and statistical maps. In addition, we evaluated the effect of the sequential review of a clinical summary followed by SSP. Visual interpretation of SSP images was superior to clinical assessment and had the best inter-rater reliability (mean kappa = 0.78) and diagnostic accuracy (89.6%). It also had the highest specificity (97.6%) and sensitivity (86%), and positive likelihood ratio for FTD (36.5). The addition of FDG-PET to clinical summaries increased diagnostic accuracy and confidence for both AD and FTD. It was particularly helpful when raters were uncertain in their clinical diagnosis. Visual interpretation of FDG-PET after brief training is more reliable and accurate in distinguishing FTD from AD than clinical methods alone. FDG-PET adds important information that appropriately increases diagnostic confidence, even among experienced dementia specialists. PMID:17704526

  12. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  13. Accuracy of genomic prediction in switchgrass (Panicum virgatum L.) improved by accounting for linkage disequilibrium

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Switchgrass is a relatively high-yielding and environmentally sustainable biomass crop, but further genetic gains in biomass yield must be achieved to make it an economically viable bioenergy feedstock. Genomic selection is an attractive technology to generate rapid genetic gains in switchgrass and ...

  14. Reconciling multiple data sources to improve accuracy of large-scale prediction of forest disease incidence

    USGS Publications Warehouse

    Hanks, E.M.; Hooten, M.B.; Baker, F.A.

    2011-01-01

    Ecological spatial data often come from multiple sources, varying in extent and accuracy. We describe a general approach to reconciling such data sets through the use of the Bayesian hierarchical framework. This approach provides a way for the data sets to borrow strength from one another while allowing for inference on the underlying ecological process. We apply this approach to study the incidence of eastern spruce dwarf mistletoe (Arceuthobium pusillum) in Minnesota black spruce (Picea mariana). A Minnesota Department of Natural Resources operational inventory of black spruce stands in northern Minnesota found mistletoe in 11% of surveyed stands, while a small, specific-pest survey found mistletoe in 56% of the surveyed stands. We reconcile these two surveys within a Bayesian hierarchical framework and predict that 35-59% of black spruce stands in northern Minnesota are infested with dwarf mistletoe. ?? 2011 by the Ecological Society of America.
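
    As a rough illustration of how one survey can borrow strength from another, the following Python sketch reconciles two binomial surveys on a Bayesian grid, assuming a toy model in which the operational inventory detects infested stands with an unknown sensitivity while the small pest-specific survey is treated as accurate. The sample sizes and the model itself are assumptions for illustration only, not the authors' hierarchical spatial model.

      import numpy as np
      from scipy import stats

      # Toy reconciliation of two surveys of dwarf mistletoe incidence by grid
      # approximation.  Assumed data: the operational inventory flags 110 of
      # 1000 stands (11%) but has unknown sensitivity s; the pest-specific
      # survey finds 28 of 50 stands (56%) and is treated as accurate.
      p_grid = np.linspace(0.01, 0.99, 197)   # true proportion of infested stands
      s_grid = np.linspace(0.01, 0.99, 197)   # sensitivity of the operational inventory
      P, S = np.meshgrid(p_grid, s_grid, indexing="ij")

      log_post = (stats.binom.logpmf(110, 1000, P * S)   # operational inventory
                  + stats.binom.logpmf(28, 50, P))       # pest-specific survey
      post = np.exp(log_post - log_post.max())
      post /= post.sum()

      p_marginal = post.sum(axis=1)                      # marginal over sensitivity
      p_mean = np.sum(p_grid * p_marginal)
      cdf = np.cumsum(p_marginal)
      lo, hi = p_grid[np.searchsorted(cdf, [0.025, 0.975])]
      print(f"posterior mean {p_mean:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")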

  15. Primary measurement of total ultrasonic power with improved accuracy in rf voltage measurement.

    PubMed

    Dubey, P K; Kumar, Ashok; Kumar, Yudhisther; Gupta, Reeta; Joshi, Deepa

    2010-10-01

    Among the various existing ultrasonic power measurement techniques, the radiation force balance method using a microbalance is the most widely used in the low power (below 1 W) regime. The major source of uncertainty associated with this technique is the error in measuring the ac voltage applied to the transducer for the generation of ultrasonic waves. The sources that degrade the ac voltage measurement accuracy include cable length and impedance mismatch. We introduce a new differential peak-to-peak measurement approach to reduce the ac voltage measurement error. The method holds the average peak amplitude of each polarity, and an ultralow-offset difference amplifier is used to measure the peak-to-peak voltage. The method is insensitive to variations in the dc offset of the source. The functionality of this method has been tested and compared with the conventional rf voltage measurement method. The output of the proposed technique is dc, which can be measured with an error of less than 0.1%. PMID:21034111

  16. Accuracy of Numerical Simulations of Tip Clearance Flow in Transonic Compressor Rotors Improved Dramatically

    NASA Technical Reports Server (NTRS)

    VanZante, Dale E.; Strazisar, Anthony J.; Wood, Jerry R.; Hathaway, Michael D.; Okiishi, Theodore H.

    2000-01-01

    The tip clearance flows of transonic compressor rotors have a significant impact on rotor and stage performance. Although numerical simulations of these flows are quite sophisticated, they are seldom verified through rigorous comparisons of numerical and measured data because, in high-speed machines, measurements acquired in sufficient detail to be useful are rare. Researchers at the NASA Glenn Research Center at Lewis Field compared measured tip clearance flow details (e.g., trajectory and radial extent) of the NASA Rotor 35 with results obtained from a numerical simulation. Previous investigations had focused on capturing the detailed development of the jetlike flow leaking through the clearance gap between the rotating blade tip and the stationary compressor shroud. However, we discovered that the simulation accuracy depends primarily on capturing the detailed development of a wall-bounded shear layer formed by the relative motion between the leakage jet and the shroud.

  17. Linear combinations of biomarkers to improve diagnostic accuracy with three ordinal diagnostic categories

    PubMed Central

    Kang, Le; Xiong, Chengjie; Crane, Paul; Tian, Lili

    2015-01-01

    Many researchers have addressed the problem of finding the optimal linear combination of biomarkers to maximize the area under receiver operating characteristic (ROC) curves for scenarios with binary disease status. In practice, many disease processes, such as Alzheimer's disease, can be naturally classified into three diagnostic categories, such as normal, mild cognitive impairment and Alzheimer's disease (AD), and for such diseases the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. In this article, we propose several parametric and nonparametric approaches to the problem of finding the optimal linear combination that maximizes the VUS. We carried out simulation studies to investigate the performance of the proposed methods. We apply all of the investigated approaches to a real data set from a cohort study in early-stage AD. PMID:22865796
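
    A minimal Python sketch of the underlying quantity: the empirical VUS of a linear combination of two hypothetical biomarkers across three ordered groups, maximized here by a brute-force search over the combination angle. The data are simulated and the search is purely illustrative; it does not reproduce the parametric and nonparametric estimators proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      def empirical_vus(x0, x1, x2):
          """Empirical volume under the ROC surface: P(X0 < X1 < X2)."""
          a = x0[:, None, None] < x1[None, :, None]
          b = x1[None, :, None] < x2[None, None, :]
          return np.mean(a & b)

      # Hypothetical two-biomarker data for three ordinal diagnostic groups
      # (e.g., normal, MCI, AD), 60 subjects per group.
      groups = [rng.normal(loc, 1.0, size=(60, 2))
                for loc in ([0.0, 0.0], [0.7, 0.4], [1.3, 1.1])]

      # Brute-force search over w = (cos t, sin t) for the combination maximizing VUS.
      best_t, best_vus = max(
          ((t, empirical_vus(*[g @ np.array([np.cos(t), np.sin(t)]) for g in groups]))
           for t in np.linspace(0.0, np.pi, 181)),
          key=lambda pair: pair[1])
      print(f"best angle {best_t:.2f} rad, empirical VUS {best_vus:.3f}")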

  18. Improving accuracy using subpixel smoothing for multiband effective-mass Hamiltonians of semiconductor nanostructures

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Ti; Hsieh, Tung-Han; Chang, Shu-Wei

    2016-04-01

    We develop schemes of subpixel smoothing for the multiband Luttinger-Kohn and Burt-Foreman Hamiltonians of semiconductor nanostructures. With proper procedures of parameter averaging at abrupt interfaces, computational errors of envelope functions due to the discontinuity of heterostructures are significantly reduced. Two smoothing approaches are presented. One is based on elimination of the first-order perturbation in energy, and the other is an application of the Hellmann-Feynman theorem. Using the finite-difference method, we find that while the procedure based on perturbation theory seems to be more robust than that based on the Hellmann-Feynman theorem, the errors of both schemes are considerably lower than those without smoothing or with direct but unjustified averages of untransformed parameters. The proposed approaches may enhance numerical accuracy and reduce computational cost for the modeling of nanostructures.

  19. Laser Measurement-Based Volumetric Accuracy Improvement of Multi-axis Systems

    NASA Astrophysics Data System (ADS)

    Vladimir, Sokolov; Konstantin, Basalaev

    The paper describes a newly developed approach to compensating the geometric errors of CNC-controlled multi-axis systems, based on an optimal error correction strategy. Multi-axis CNC-controlled systems - machine tools and CMMs - are the basis of the modern engineering industry. Similar design principles in both technological and measurement equipment allow similar approaches to precision management, and approaches based on geometric error compensation are widely used at present. The paper describes a system for the compensation of geometric errors of multi-axis equipment based on the new approach. The hardware basis of the developed system is a multi-function laser interferometer. The principles of the system's implementation, results of measurements and simulation of the system's functioning are described. The effectiveness of applying the described principles to multi-axis equipment of different sizes and purposes, for different machining directions and zones within the workspace, is presented. The concept of an optimal correction strategy is introduced and dynamic accuracy control is proposed.

  20. Estimation of Missing Daily Temperatures: Can a Weather Categorization Improve Its Accuracy?

    NASA Astrophysics Data System (ADS)

    Huth, Radan; Nemeová, Ivana

    1995-07-01

    A method of estimating missing daily temperatures is proposed. The procedure is based on a weather classification consisting of two steps: principal component analysis and cluster analysis. At each time of observation (0700, 1400, and 2100 local time) the weather is characterized by temperature, relative humidity, wind speed, and cloudiness. The coefficients of regression equations, enabling the missing temperatures to be determined from the known temperatures at nearby stations, are computed within each weather class. The influence of various parameters (input variables, number of weather classes, number of principal components, their rotation, type of regression equation) on the accuracy of estimated temperatures is discussed. The method yields better results than ordinary regression methods that do not utilize a weather classification. An examination of statistical properties of the estimated temperatures confirms the applicability of the completed temperature series in climate studies.
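
    A minimal Python/scikit-learn sketch of the two-step scheme on synthetic data: standardize the weather variables, reduce them with principal component analysis, cluster the days into weather classes, and fit a separate regression from neighbouring-station temperatures within each class. KMeans stands in for the unspecified cluster analysis, and all data and sizes are invented for illustration.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Synthetic stand-in data: daily weather at the target station (temperature,
      # humidity, wind speed, cloudiness) and temperatures at two nearby stations.
      n = 1500
      weather = rng.normal(size=(n, 4))
      neighbours = weather[:, :1] + rng.normal(scale=0.3, size=(n, 2))
      target_temp = 10.0 * weather[:, 0] + 5.0 + rng.normal(scale=1.0, size=n)

      # Step 1: weather classification = PCA followed by clustering.
      pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(weather))
      classes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)

      # Step 2: one regression on neighbour-station temperatures per weather class.
      models = {c: LinearRegression().fit(neighbours[classes == c],
                                          target_temp[classes == c])
                for c in np.unique(classes)}

      # Estimate a "missing" day with the regression of its weather class.
      day = 42
      estimate = models[classes[day]].predict(neighbours[day:day + 1])[0]
      print(f"estimated {estimate:.1f}, observed {target_temp[day]:.1f}")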

  1. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter in assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used for wound area measurement and therapy assessment when it is properly applied, but a common problem is the camera lens orientation while taking a picture. The camera lens axis should be perpendicular to the wound plane, and if it is not, the measured area differs from the true area. Results The current study shows that calibration using 2 rulers placed in parallel below and above the wound increases the precision of area measurement on average 3.8-fold compared with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with two-ruler digital planimetry than with the Visitrak device, the Silhouette Mobile device or the AreaMe software-based method. Conclusion Calibration using 2 rulers in digital planimetry remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
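
    The effect of the second ruler can be illustrated with a few lines of Python: when the camera axis is tilted, the mm-per-pixel scale at the top of the image differs from that at the bottom, and combining the two ruler scales approximates the scale at the wound itself. The numbers and the simple mean-scale correction below are illustrative assumptions, not the exact calibration algorithm of the paper.

      # Illustrative two-ruler calibration for digital planimetry (not the exact
      # algorithm of the paper).  Rulers above and below the wound give two local
      # scales; a single ruler cannot capture the scale gradient of a tilted view.

      def scale_mm_per_px(ruler_length_mm, ruler_length_px):
          return ruler_length_mm / ruler_length_px

      def wound_area_cm2(wound_area_px, scale_top, scale_bottom):
          s = 0.5 * (scale_top + scale_bottom)      # mean scale, wound assumed midway
          return wound_area_px * s ** 2 / 100.0     # mm^2 -> cm^2

      # Hypothetical picture: 100 mm rulers span 820 px (top) and 760 px (bottom),
      # and the traced wound outline contains 25,000 pixels.
      top = scale_mm_per_px(100.0, 820.0)
      bottom = scale_mm_per_px(100.0, 760.0)
      print(f"two-ruler calibration: {wound_area_cm2(25000, top, bottom):.1f} cm^2")
      print(f"top ruler only:        {wound_area_cm2(25000, top, top):.1f} cm^2")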

  2. The Application of Digital Pathology to Improve Accuracy in Glomerular Enumeration in Renal Biopsies

    PubMed Central

    Troost, Jonathan P.; Gasim, Adil; Bagnasco, Serena; Avila-Casado, Carmen; Johnstone, Duncan; Hodgin, Jeffrey B.; Conway, Catherine; Gillespie, Brenda W.; Nast, Cynthia C.; Barisoni, Laura; Hewitt, Stephen M.

    2016-01-01

    Background In renal biopsy reporting, quantitative measurements, such as glomerular number and the percentage of globally sclerotic glomeruli, are central to diagnostic accuracy and prognosis. The aim of this study is to determine the number of glomeruli and the percent globally sclerotic in renal biopsies by means of registration of serial tissue sections and manual enumeration, compared with the numbers in pathology reports from routine light microscopic assessment. Design We reviewed 277 biopsies from the Nephrotic Syndrome Study Network (NEPTUNE) digital pathology repository, enumerating 9,379 glomeruli by means of whole slide imaging (WSI). Glomerular number and the percentage of globally sclerotic glomeruli are values routinely recorded in the official renal biopsy pathology report from the 25 participating centers. Two general trends in reporting were noted: total number per biopsy or average number per level/section. Both of these approaches were assessed for their accuracy in comparison to the analogous numbers of annotated glomeruli on WSI. Results The number of glomeruli annotated was consistently higher than the number reported (p<0.001); this difference was proportional to the number of glomeruli. In contrast, the percent globally sclerotic was similar when calculated on total glomeruli, but greater in FSGS when calculated on the average number of glomeruli (p<0.01). The difference in percent globally sclerotic between annotated glomeruli and those recorded in pathology reports was significant when global sclerosis was greater than 40%. Conclusions Although glass slides were not available for direct comparison to whole slide image annotation, this study indicates that routine manual light microscopy assessment of the number of glomeruli is inaccurate, and the magnitude of this error is proportional to the total number of glomeruli. PMID:27310011

  3. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    ScienceCinema

    Olivi, Alessandro, M.D.

    2010-09-01

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  4. Recent Advances in Image Assisted Neurosurgical Procedures: Improved Navigational Accuracy and Patient Safety

    SciTech Connect

    Olivi, Alessandro, M.D.

    2010-08-28

    Neurosurgical procedures require precise planning and intraoperative support. Recent advances in image guided technology have provided neurosurgeons with improved navigational support for more effective and safer procedures. A number of exemplary cases will be presented.

  5. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-uniformly weighted method for the QUEST (QUaternion ESTimator) algorithm, which uses the inverse of the root sum square of the Cramér-Rao bound and the focal length drift error of each tracked star as its weight, to enhance the pointing accuracy of a star tracker. In this technique, stars that are brighter, at low angular rate, or located towards the center of the star field are given a higher weight in the attitude determination process, and the accuracy is thereby readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, particularly for sky zones with a non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the new centroiding method for the QUEST algorithm, the attitude uncertainty can be further reduced to 44% of its current value with a negligible additional computing load.
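
    The weighting idea can be illustrated with a minimal numpy sketch of a Davenport q-method solution (the eigenvalue formulation that QUEST approximates), in which each tracked star is weighted by the inverse root-sum-square of an assumed Cramér-Rao-bound centroiding error and focal-length drift error. The star vectors and error values below are invented for illustration, and the code does not reproduce the paper's centroiding or calibration steps.

      import numpy as np

      def weighted_q_method(body_vecs, ref_vecs, weights):
          """Quaternion (x, y, z, w) minimizing Wahba's weighted loss (Davenport q-method)."""
          B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          z = sum(w * np.cross(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          K = np.zeros((4, 4))
          K[:3, :3] = B + B.T - np.trace(B) * np.eye(3)
          K[:3, 3] = z
          K[3, :3] = z
          K[3, 3] = np.trace(B)
          _, eigvecs = np.linalg.eigh(K)
          return eigvecs[:, -1]                   # eigenvector of the largest eigenvalue

      # Hypothetical unit vectors of three tracked stars (inertial and body frames).
      ref = [np.array(v, float) / np.linalg.norm(v)
             for v in ([1, 0, 0], [0, 1, 0], [0.5, 0.5, 0.7])]
      body = [b / np.linalg.norm(b)
              for b in (r + np.array([0.001, -0.002, 0.001]) for r in ref)]

      crb = np.array([0.5, 1.5, 0.8])      # assumed per-star centroiding (CRB) errors, arcsec
      drift = np.array([0.3, 0.3, 0.3])    # assumed focal-length drift errors, arcsec
      weights = 1.0 / np.sqrt(crb ** 2 + drift ** 2)

      print(weighted_q_method(body, ref, weights))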

  6. Estimates of achievable potential for electricity efficiency improvements in U.S. residences

    SciTech Connect

    Brown, Richard

    1993-05-01

    This paper investigates the potential for public policies to achieve electricity efficiency improvements in US residences. This estimate of achievable potential builds upon a database of energy-efficient technologies developed for a previous study estimating the technical potential for electricity savings. The savings potential and cost for each efficiency measure in the database is modified to reflect the expected results of policies implemented between 1990 and 2010. Factors included in these modifications are: the market penetration of efficiency measures, the costs of administering policies, and adjustments to the technical potential measures to reflect the actual energy savings and cost experienced in the past. When all adjustment factors are considered, this study estimates that policies can achieve approximately 45% of the technical potential savings during the period from 1990 to 2010. Thus, policies can potentially avoid 18% of the annual frozen-efficiency baseline electricity consumption forecast for the year 2010. This study also investigates the uncertainty in best estimate of achievable potential by estimating two alternative scenarios -- a

  7. Information from later lactations improves accuracy of genomic predictions of fertility-related disorders in Norwegian Red.

    PubMed

    Haugaard, Katrine; Svendsen, Morten; Heringstad, Bjørg

    2015-07-01

    Our aim was to investigate whether including information from later lactations improves the accuracy of genomic breeding values for 4 fertility-related disorders: cystic ovaries, retained placenta, metritis, and silent heat. Data consisted of health records from 6,015,245 lactations of 2,480,976 Norwegian Red cows, recorded from 1979 to 2012. These were daughters of 3,675 artificial insemination bulls. The mean frequency of these disorders for cows in lactations 1 to 5 ranged from 0.6 to 2.4% for cystic ovaries, 1.0 to 1.5% for metritis, 1.9 to 4.1% for retained placenta, and 2.4 to 3.8% for silent heat. Genomic information was available for all sires, and the 312 youngest bulls were used for validation. After standard editing of a 25K/54K single nucleotide polymorphism data set that was imputed both ways, a total of 48,249 single nucleotide polymorphism loci were available for genomic predictions. Genomic breeding values were predicted using univariate genomic BLUP for the first lactation only and for the first 5 lactations; a multivariate genomic BLUP treating the 5 lactations of each disorder as separate traits was also used for genomic predictions. Correlations between estimated breeding values for the 4 traits in 5 lactations and predicted genomic breeding values were compared. Accuracy ranged from 0.47 to 0.51 for cystic ovaries, 0.50 to 0.74 for retained placenta, 0.21 to 0.47 for metritis, and 0.22 to 0.60 for silent heat. Including later lactations in a multitrait genomic BLUP improved the accuracy of genomic estimated breeding values for cystic ovaries, retained placenta, and silent heat, whereas for metritis no obvious advantage in accuracy was found. PMID:25912869
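
    The genomic BLUP machinery itself can be sketched in a few lines of numpy: build a VanRaden genomic relationship matrix from SNP genotypes and solve the single-trait mixed-model equations. Everything below (animal and SNP counts, variance ratio, phenotype simulation) is invented for illustration; the multitrait model and the Norwegian Red data are not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)

      # Simulated data: 500 genotyped bulls, 2000 SNPs, one phenotype per bull.
      n_animals, n_snps = 500, 2000
      M = rng.binomial(2, 0.3, size=(n_animals, n_snps)).astype(float)   # 0/1/2 genotypes
      p = M.mean(axis=0) / 2.0
      Z = M - 2.0 * p                                                    # centred genotypes
      G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))                        # VanRaden G matrix
      G += 1e-3 * np.eye(n_animals)                                      # keep G invertible

      true_u = Z @ rng.normal(scale=0.05, size=n_snps)                   # true genetic values
      y = true_u + rng.normal(scale=1.0, size=n_animals)                 # phenotypes

      # Mixed-model equations for y = 1*mu + u + e, u ~ N(0, G*sigma_u^2),
      # with an assumed variance ratio lambda = sigma_e^2 / sigma_u^2.
      lam = 1.0
      X = np.ones((n_animals, 1))
      lhs = np.block([[X.T @ X, X.T],
                      [X, np.eye(n_animals) + lam * np.linalg.inv(G)]])
      rhs = np.concatenate([X.T @ y, y])
      gebv = np.linalg.solve(lhs, rhs)[1:]                               # genomic breeding values

      # "Accuracy" here: correlation between estimated and true genetic values.
      print(round(np.corrcoef(gebv, true_u)[0, 1], 2))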

  8. Recent improvements in efficiency, accuracy, and convergence for implicit approximate factorization algorithms. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Pulliam, T. H.; Steger, J. L.

    1985-01-01

    In 1977 and 1978, general-purpose, centrally space-differenced implicit finite difference codes in two and three dimensions were introduced. These codes, now called ARC2D and ARC3D, can run in either inviscid or viscous mode for steady or unsteady flow. Since the introduction of the ARC2D and ARC3D codes, overall computational efficiency has been improved through a number of algorithmic changes. These changes relate to the use of a spatially varying time step, the use of a sequence of mesh refinements to establish approximate solutions, the implementation of various ways to reduce inversion work, improved numerical dissipation terms, and more implicit treatment of terms. The objective of the present investigation is to describe these improvements and to quantify their advantages and disadvantages. It is found that, using established and simple procedures, a computer code can be maintained which is competitive with specialized codes.

  9. Fluctuation elimination of fringe pattern to improve the accuracy of phase calculation

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Guo, Tong; Zhang, Sixiang; Hu, Xiaotang

    2011-11-01

    3D fringe projection measurement techniques are increasingly important in production for automation, quality control, reverse engineering, and biomedical engineering because of their advantages of non-contact operation, full-field acquisition and automatic data processing. With the advent of DLP (Digital Light Processing) projectors, digital fringe pattern projection techniques have been widely studied in academia and applied in industry. Experimental data from captured fringe patterns show that the obtained intensity fluctuates, which makes the calculated phase data inaccurate. This paper presents a software method to eliminate the fluctuation between fringe patterns. A four-step phase-shifting algorithm is used to calculate the wrapped phase data, so four fringe pattern images with a pi/2 phase shift between them need to be captured. Because of the intensity fluctuation, the captured fringe patterns have an up or down shift among the four images. By considering the histogram of each fringe pattern, we present a compensation method to eliminate the fluctuation between fringe patterns. Simulated data are tested first by generating fringe patterns with fluctuation. Then experimental data from a 3D imaging system demonstrate the validity of the method for calculating phase and shape information with high accuracy. The results show that the proposed method eliminates the fluctuation between fringe pattern images and gives accurate shape data.
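
    A minimal numpy sketch of the four-step phase-shifting calculation and a simplified brightness compensation: here each frame is simply shifted to a common mean intensity, which stands in for the histogram-based compensation described above, and the fringe parameters and fluctuation values are invented.

      import numpy as np

      rng = np.random.default_rng(4)

      # Simulated four fringe patterns with pi/2 phase steps and a per-frame
      # brightness offset (the up/down intensity fluctuation described above).
      h, w = 240, 320
      true_phase = np.tile(np.linspace(0.0, 8.0 * np.pi, w), (h, 1))
      offsets = [0.0, 6.0, -4.0, 3.0]
      frames = [128.0 + 100.0 * np.cos(true_phase + k * np.pi / 2.0) + off
                + rng.normal(0.0, 1.0, (h, w))
                for k, off in enumerate(offsets)]

      def compensate(frames):
          """Shift each frame to a common mean intensity (simplified compensation)."""
          target = np.mean([f.mean() for f in frames])
          return [f - f.mean() + target for f in frames]

      def four_step_phase(i1, i2, i3, i4):
          """Wrapped phase from four patterns with pi/2 phase shifts."""
          return np.arctan2(i4 - i2, i1 - i3)

      def rms_phase_error(estimate):
          diff = np.angle(np.exp(1j * (estimate - true_phase)))   # wrap to [-pi, pi]
          return np.sqrt(np.mean(diff ** 2))

      print("RMS phase error, raw frames:        ", rms_phase_error(four_step_phase(*frames)))
      print("RMS phase error, compensated frames:", rms_phase_error(four_step_phase(*compensate(frames))))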

  10. Pseudo-inverse linear discriminants for the improvement of overall classification accuracies.

    PubMed

    Daqi, Gao; Ahmed, Dastagir; Lili, Guo; Zejian, Wang; Zhe, Wang

    2016-09-01

    This paper studies the learning and generalization performance of pseudo-inverse linear discriminants (PILDs) based on the minimum sum-of-squared-error (MS2E) processing criterion and the targeted overall classification accuracy (OCA) criterion function. There is little practical significance in proving the equivalence between a PILD with desired outputs in inverse proportion to the number of class samples and an FLD with the totally projected mean thresholds. When the desired outputs of each class are assigned a fixed value, a PILD is partly equal to an FLD. With the customary desired outputs {1, -1}, a practicable threshold is acquired which is related only to the sample sizes. If the desired outputs of each sample are changeable, a PILD has nothing in common with an FLD. The optimal threshold may thus be singled out from multiple empirical ones related to sample sizes and distributed regions. Based on the MS2E criterion and the actual algebraic distances, an iterative learning strategy for the PILD is proposed, the outstanding advantages of which are a limited number of epochs, no learning rate, and no risk of divergence. Extensive experimental results on benchmark datasets have verified that iterative PILDs with optimal thresholds have good learning and generalization performance, and even reach the top OCAs among existing classifiers for some datasets. PMID:27351107
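
    A minimal numpy sketch of the basic pseudo-inverse linear discriminant with desired outputs {1, -1} and a simple empirical threshold search on synthetic two-class data; the paper's iterative learning strategy and its optimal-threshold analysis are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(5)

      # Synthetic two-class data with unequal class sizes.
      x_pos = rng.normal([2.0, 2.0], 1.0, size=(300, 2))
      x_neg = rng.normal([0.0, 0.0], 1.0, size=(900, 2))
      X = np.vstack([x_pos, x_neg])
      t = np.concatenate([np.ones(len(x_pos)), -np.ones(len(x_neg))])   # desired outputs {1, -1}

      # Pseudo-inverse linear discriminant: augment with a bias column and take
      # the minimum sum-of-squared-error solution w = pinv(X_aug) @ t.
      X_aug = np.hstack([X, np.ones((len(X), 1))])
      w = np.linalg.pinv(X_aug) @ t

      # Choose the decision threshold that maximizes overall classification
      # accuracy (OCA) on the training outputs - a simple empirical search.
      scores = X_aug @ w
      best_acc, best_thr = max((np.mean((scores >= c) == (t > 0)), c) for c in np.sort(scores))
      print(f"threshold {best_thr:.3f}, overall classification accuracy {best_acc:.3f}")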

  11. How Can the Accuracy of Neutron Nonelastic Cross Sections be Improved?

    NASA Astrophysics Data System (ADS)

    Dietrich, Frank

    2008-10-01

    The nonelastic cross section for incident neutrons is particularly important for applications because it directly determines the sum of all reaction processes other than elastic scattering, and is closely related to the compound-nucleus formation cross section. Scatter in available measurements of the nonelastic cross section shows that this quantity is not known very accurately (~5-10%). We will show examples of this, together with results from a new technique that shows promise of reducing uncertainties to ~2-3% in the range of a few MeV to a few tens of MeV [1]. Comparison of results using this technique on Fe, Pb, Th, and U with optical model calculations suggests that optical potentials are not reliable for predicting nonelastic cross sections to better than 5%, even when they reproduce total cross sections well (~1%). We will suggest a limited set of high-accuracy measurements of nonelastic cross sections that could be made to guide the further development of optical models that are able to predict nonelastic cross sections reliably. [1] F. S. Dietrich, J. D. Anderson, R. W. Bauer, and S. M. Grimes, Phys. Rev. C68, 064608 (2003).

  12. Analysis and improvement of detection accuracy for a wireless motion sensing system using integrated coil component

    SciTech Connect

    Hashi, S.; Ishiyama, K.; Yabukami, S.; Kanetaka, H.; Arai, K. I.

    2010-05-15

    Integration of the exciting coil and the pick-up coil array for a wireless magnetic motion sensing system has been investigated to overcome the limitations of the system arrangement. A comparison of the integrated type with the sandwich type proposed in our previous study shows that, despite the lower signal-to-noise ratio of the integrated type, a repeatable detection accuracy of around 1 mm is obtained at a distance of 120 mm from the pick-up coil array (sandwich type: up to 140 mm). A different tendency of the detection errors was also observed, and its cause has been clarified: the impedance change of the exciting coil due to the resonance of the LC marker perturbs the strength of the magnetic field used for marker excitation. However, the errors can be compensated to recover the actual positions and orientations of the marker by using the previously established compensation method.

  13. Combination of pulse volume recording (PVR) parameters and ankle-brachial index (ABI) improves diagnostic accuracy for peripheral arterial disease compared with ABI alone.

    PubMed

    Hashimoto, Tomoko; Ichihashi, Shigeo; Iwakoshi, Shinichi; Kichikawa, Kimihiko

    2016-06-01

    The ankle-brachial index (ABI) measurement is widely used as a screening tool to detect peripheral arterial disease (PAD). With the advent of the oscillometric ABI device incorporating a system for the measurement of pulse volume recording (PVR), not only the ABI but also other parameters, such as the percentage of mean arterial pressure (%MAP) and the upstroke time (UT), can be obtained automatically. The purpose of the present study was to compare the diagnostic accuracy for PAD of ABI alone with that of a combination of ABI, %MAP and UT. This study included 108 consecutive patients, on whom 216 limb measurements were performed. The sensitivity, specificity and positive and negative predictive values of ABI, %MAP, UT and their combination were evaluated against CT angiography, which was used as the gold standard for the detection of PAD. The diagnostic accuracy as well as the optimal cutoff values of %MAP and UT were evaluated using receiver operating characteristic (ROC) curve analysis. The combination of ABI, %MAP and UT achieved higher sensitivity, negative predictive value and accuracy than ABI alone, particularly for mild stenosis. The areas under the ROC curve for the detection of 50% stenosis with UT and %MAP were 0.798 and 0.916, respectively. The optimal UT and %MAP cutoffs to detect arteries with ≥50% stenosis were 183 ms and 45%, respectively. The combination of ABI, %MAP and UT improved the diagnostic accuracy for PAD. Consideration of the values of %MAP and UT in addition to ABI may have a significant impact on the detection of early PAD lesions. PMID:26911230
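
    As an illustration of why adding %MAP and UT to ABI can raise discriminative power, the following Python/scikit-learn sketch compares ROC AUCs for ABI alone and for a fitted combination on synthetic limb-level data. The paper itself applies fixed cutoffs (UT 183 ms, %MAP 45%) rather than a logistic model, and all numbers below are invented.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(6)

      # Synthetic limb-level data: PAD limbs tend to have a lower ABI, a higher
      # %MAP (flatter waveform) and a longer upstroke time (UT, in ms).
      n = 216
      pad = rng.random(n) < 0.4
      abi = np.where(pad, rng.normal(0.85, 0.15, n), rng.normal(1.10, 0.10, n))
      map_pct = np.where(pad, rng.normal(48.0, 6.0, n), rng.normal(40.0, 5.0, n))
      ut = np.where(pad, rng.normal(200.0, 40.0, n), rng.normal(150.0, 30.0, n))

      auc_abi = roc_auc_score(pad, -abi)                    # lower ABI -> more likely PAD
      X = np.column_stack([abi, map_pct, ut])
      model = LogisticRegression(max_iter=1000).fit(X, pad)
      auc_combo = roc_auc_score(pad, model.predict_proba(X)[:, 1])
      print(f"AUC, ABI alone: {auc_abi:.2f}   AUC, ABI + %MAP + UT: {auc_combo:.2f}")

      # The reported cutoffs would instead be applied as simple rules, e.g.
      # flag a limb when ABI < 0.9, UT >= 183 ms or %MAP >= 45%.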

  14. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), which consists of fixing the last observed data point on the LS extrapolation curve; the curve customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data of a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of the AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
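
    The core idea of fixing the last observed point can be written as an equality-constrained least-squares fit. The numpy sketch below fits a linear trend plus one annual sinusoid to a synthetic EOP-like series, once by ordinary LS and once constrained to pass through the last observation (via the KKT system); the two-step polar-motion scheme and the ARIMA/AEF refinements are not reproduced.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic daily EOP-like series: linear trend + annual term + noise.
      t = np.arange(1000.0)
      y = (0.05 + 2e-4 * t + 0.03 * np.sin(2.0 * np.pi * t / 365.25 + 0.4)
           + rng.normal(0.0, 0.004, t.size))

      def design(t):
          """Degree-1 polynomial plus one annual sinusoid."""
          w = 2.0 * np.pi / 365.25
          return np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])

      A = design(t)

      # Ordinary LS fit vs. the fit constrained to pass through the last point,
      # i.e. minimize ||A x - y||^2 subject to a_N . x = y_N (KKT system).
      x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
      a_last, y_last = A[-1], y[-1]
      kkt = np.block([[2.0 * A.T @ A, a_last[:, None]],
                      [a_last[None, :], np.zeros((1, 1))]])
      x_fix = np.linalg.solve(kkt, np.concatenate([2.0 * A.T @ y, [y_last]]))[:4]

      t_future = t[-1] + np.arange(1.0, 91.0)               # 90-day extrapolation
      print(f"misfit at last epoch, ordinary LS:  {(A[-1] @ x_ls) - y_last:+.5f}")
      print(f"misfit at last epoch, constrained:  {(A[-1] @ x_fix) - y_last:+.5f}")
      print(f"difference at day +90:              {design(t_future)[-1] @ (x_fix - x_ls):+.5f}")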

  15. Improving the Quality of Nursing Home Care and Medical-Record Accuracy with Direct Observational Technologies

    ERIC Educational Resources Information Center

    Schnelle, John F.; Osterweil, Dan; Simmons, Sandra F.

    2005-01-01

    Nursing home medical-record documentation of daily-care occurrence may be inaccurate, and information is not documented about important quality-of-life domains. The inadequacy of medical record data creates a barrier to improving care quality, because it supports an illusion of care consistent with regulations, which reduces the motivation and…

  16. On improvements of neural network accuracy with fixed number of active neurons

    NASA Astrophysics Data System (ADS)

    Sokolova, Natalia; Nikolaev, Dmitry P.; Polevoy, Dmitry

    2015-02-01

    In this paper the possibility of improving multilayer perceptron based classifiers by using a composite classifier scheme with a predictor function is explored. Recognition of embossed number characters on plastic cards in images taken by a mobile camera was used as a model problem.

  17. Electron Microprobe Analysis of Hf in Zircon: Suggestions for Improved Accuracy of a Difficult Measurement

    NASA Astrophysics Data System (ADS)

    Fournelle, J.; Hanchar, J. M.

    2013-12-01

    It is not commonly recognized as such, but the accurate measurement of Hf in zircon is not a trivial analytical issue. This is important to assess because Hf is often used as an internal standard for trace element analyses of zircon by LA-ICPMS. The issues pertaining to accuracy revolve around: (1) whether the Hf Mα or the Lα line is used; (2) what accelerating voltage is applied if Zr Lα is also measured, and (3) what standard for Hf is used. Weidenbach et al.'s (2004) study of the 91500 zircon demonstrated the spread (in accuracy) of possible EPMA values for six EPMA labs, 2 of which used Hf Mα, 3 used Hf Lα, and one used Hf Lβ, and standards ranged from HfO2, a ZrO2-HfO2 compound, Hf metal, and hafnon. Weidenbach et al. used the ID-TIMS value as the correct value (0.695 wt.% Hf), for which not one of the EPMA labs came close to that value (3 were low and 3 were high). Those data suggest: (1) that there is a systematic underestimation error of the 0.695 wt% Hf (ID-TIMS) value if Hf Mα is used, most likely an issue with the matrix correction, as the analytical lines and absorption edges of Zr Lα, Si Kα and Hf Mα are rather tightly packed in the electromagnetic spectrum. Mass absorption coefficients are easily in error (e.g., Donovan's determination of the MAC of Hf by Si Kα of 5061 differs from the typically used Henke value of 5449 (Donovan et al., 2002)); and (2) for utilization of the Hf Lα line, however, the second-order Zr Kα line interferes with Hf Lα if the accelerating voltage is greater than 17.99 keV. If this higher keV is used and differential-mode PHA is applied, only a portion of the interference is removed (e.g., removal of escape peaks), causing an overestimation of the Hf content. Unfortunately, it is virtually impossible to apply an interference correction in this case, as it is impossible to locate an Hf-free Zr probe standard. We have examined many of the combinations used by those six EPMA labs and concluded that the optimal EPMA is done with Hf

  18. Improved solution accuracy for TDRSS-based TOPEX/Poseidon orbit determination

    NASA Technical Reports Server (NTRS)

    Doll, C. E.; Mistretta, G. D.; Hart, R. C.; Oza, D. H.; Bolvin, D. T.; Cox, C. M.; Nemesure, M.; Niklewski, D. J.; Samii, M. V.

    1994-01-01

    Orbit determination results are obtained by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) using a batch-least-squares estimator available in the Goddard Trajectory Determination System (GTDS) and an extended Kalman filter estimation system to process Tracking and Data Relay Satellite (TDRS) System (TDRSS) measurements. GTDS is the operational orbit determination system used by the FDD in support of the Ocean Topography Experiment (TOPEX)/Poseidon spacecraft navigation and health and safety operations. The extended Kalman filter was implemented in an orbit determination analysis prototype system, closely related to the Real-Time Orbit Determination System/Enhanced (RTOD/E) system. In addition, the Precision Orbit Determination (POD) team within the GSFC Space Geodesy Branch generated an independent set of high-accuracy trajectories to support the TOPEX/Poseidon scientific data. These latter solutions use the geodynamics (GEODYN) orbit determination system with laser ranging and Doppler Orbitography and Radiopositioning integrated by satellite (DORIS) tracking measurements. The TOPEX/Poseidon trajectories were estimated for November 7 through November 11, 1992, the timeframe under study. Independent assessments were made of the consistencies of solutions produced by the batch and sequential methods. The batch-least-squares solutions were assessed based on the solution residuals, while the sequential solutions were assessed based on primarily the estimated covariances. The batch-least-squares and sequential orbit solutions were compared with the definitive POD orbit solutions. The solution differences were generally less than 2 meters for the batch-least-squares and less than 13 meters for the sequential estimation solutions. After the sequential estimation solutions were processed with a smoother algorithm, position differences with POD orbit solutions of less than 7 meters were obtained. The differences among the POD, GTDS, and filter

  19. A predictive nomogram improved diagnostic accuracy and interobserver agreement of perirectal lymph nodes metastases in rectal cancer

    PubMed Central

    Ding, Ying; Tu, Shanshan; Liu, Yi; Qian, Youcun; Xu, Linghui; Tong, Tong; Cai, Sanjun; Peng, Junjie

    2016-01-01

    Objective To develop a predictive nomogram to improve the diagnostic accuracy and interobserver agreement of pre-therapeutic lymph node metastasis assessment in patients with rectal cancer. Materials and Methods An institutional database of 411 patients with rectal cancer was used to develop a nomogram to predict perirectal lymph node metastases. Patients' clinicopathological and MRI-assessed imaging variables were included in the multivariate logistic regression analysis. The model was externally validated and the performance was assessed by the area under the curve (AUC) of the receiver operating characteristic (ROC) curves. Interobserver agreement was measured between two independent radiologists. Results The diagnostic accuracy of the conventional MRI-assessed cN stage was 68%; 14.2% of the patients were over-staged and 17.8% were under-staged. A total of 35.1% of the patients had discordant cN-stage diagnoses between the two radiologists, with a kappa value of 0.295. A nomogram for predicting pathological lymph node metastases was successfully developed, with an AUC of 0.78 on the training data and 0.71 on the validation data. The predictors included in the nomogram were MRI cT stage, CRM involvement, preoperative CEA, tumor grade and lymph node size category. This nomogram yielded better prediction of cN stage than the conventional MRI-based assessment. Conclusions By incorporating clinicopathological and MRI imaging features, we established a nomogram that improved diagnostic accuracy and remarkably minimized interobserver disagreement in predicting lymph node metastases in rectal cancers. PMID:26910373

  20. ANS shell elements with improved transverse shear accuracy. [Assumed Natural Coordinate Strain

    NASA Technical Reports Server (NTRS)

    Jensen, Daniel D.; Park, K. C.

    1992-01-01

    A method of forming assumed natural coordinate strain (ANS) plate and shell elements is presented. The ANS method uses equilibrium based constraints and kinematic constraints to eliminate hierarchical degrees of freedom which results in lower order elements with improved stress recovery and displacement convergence. These techniques make it possible to easily implement the element into the standard finite element software structure, and a modified shape function matrix can be used to create consistent nodal loads.

  1. An Approach to Improve Accuracy of Optical Tracking Systems in Cranial Radiation Therapy

    PubMed Central

    Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris

    2015-01-01

    This work presents a new method for the accurate estimation of soft tissue thickness based on near infrared (NIR) laser measurements. By using this estimation, our goal is to develop an improved non-invasive marker-less optical tracking system for cranial radiation therapy. Results are presented for three subjects and reveal an RMS error of less than 0.34 mm. PMID:26180663

  2. Assimilating aircraft-based measurements to improve forecast accuracy of volcanic ash transport

    NASA Astrophysics Data System (ADS)

    Fu, G.; Lin, H. X.; Heemink, A. W.; Segers, A. J.; Lu, S.; Palsson, T.

    2015-08-01

    The 2010 Eyjafjallajökull volcano eruption had serious consequences for civil aviation. This has prompted substantial research on volcanic ash transport forecasting in recent years. For forecasting volcanic ash transport after eruption onset, a volcanic ash transport and diffusion model (VATDM) needs to be run with Eruption Source Parameters (ESPs), such as plume height and mass eruption rate, as input, and with data assimilation techniques to continuously improve the initial conditions of the forecast. Reliable and accurate ash measurements are crucial for providing successful ash-cloud advice. In this paper, simulated aircraft-based measurements, as one type of volcanic ash measurement, are assimilated into a transport model to identify the potential benefit of this kind of observation in an assimilation system. The results show that assimilating aircraft-based measurements can significantly improve the estimated state of the ash clouds, and thus provide an improved forecast for aviation advice. We also show that, for advice at a given flight level, aircraft-based measurements should preferably be taken at that level to obtain the best performance. Furthermore, it is shown that, in order to provide acceptable advice for aviation decision makers, accurate knowledge of the uncertainties of the ESPs and the measurements is of great importance.
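
    The generic analysis step behind such assimilation systems is the Kalman-type update x_a = x_f + K(y - H x_f). The numpy sketch below applies it to a toy ash-concentration state at five flight levels with a single aircraft observation; the state, covariances and observation are invented, and the sketch is not the particular VATDM assimilation scheme of the paper.

      import numpy as np

      # Toy state: ash concentration (arbitrary units) at five flight levels.
      x_forecast = np.array([0.2, 0.8, 1.5, 0.9, 0.3])
      P = np.diag([0.04, 0.16, 0.36, 0.20, 0.05])    # assumed forecast error covariance

      # One aircraft-based measurement taken at flight level index 2.
      H = np.zeros((1, 5))
      H[0, 2] = 1.0                                  # observation operator
      y = np.array([2.1])                            # measured concentration
      R = np.array([[0.02]])                         # assumed measurement error covariance

      # Kalman analysis step: x_a = x_f + K (y - H x_f).
      S = H @ P @ H.T + R
      K = P @ H.T @ np.linalg.inv(S)
      x_analysis = x_forecast + (K @ (y - H @ x_forecast)).ravel()

      # With a diagonal P only the observed level is corrected, which mirrors the
      # point above that measurements help most at the flight level of interest.
      print("forecast:", x_forecast)
      print("analysis:", np.round(x_analysis, 3))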

  3. Modified surface loading process for achieving improved performance of the quantum dot-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Jin, Zhongxiu; Zhu, Jun; Xu, Yafeng; Zhou, Li; Dai, Songyuan

    2016-06-01

    Achieving high surface coverage of the colloidal quantum dots (QDs) on TiO2 films has been challenging for quantum dot-sensitized solar cells (QDSCs). Herein, a general surface engineering approach was proposed to increase the loading of these QDs. It was found that S2- treatment/QD re-uptake process can significantly improve the attachment of the QDs on TiO2 films. Surface concentration of the QDs was improved by ∼60%, which in turn greatly enhances light absorption and decreases carrier recombination in QDSCs. Ensuing QDSCs with optimized QD loading exhibit a power conversion efficiency of 3.66%, 83% higher than those fabricated with standard procedures.

  4. US objectives generally achieved at broadcasting satellite international conference. Improvements can help in future conferences

    NASA Astrophysics Data System (ADS)

    1984-08-01

    An international conference was held to plan the implementation of broadcasting satellite service for the Western Hemisphere. Broadcasting satellites transmit television programs and other information services from Earth orbit to home or office antennas. At the request of the Senate Appropriations Subcommittee on Commerce, Justice, State and the Judiciary, GAO reviewed conference results against established conference objectives and examined the interagency coordination of U.S. participation in this international conference. The United States basically achieved its two most important conference objectives: adopting a technically and procedurally flexible plan for broadcasting satellite service and obtaining a sufficient allocation of satellite orbit slots and frequencies to meet domestic needs. The United States was unable, however, to obtain agreement on adopting a maximum signal power level for satellites. The Department of State could improve its preparation, internal coordination, and administrative support for future international conferences, and GAO recommends actions to the Secretary of State to improve its international telecommunications activities.

  5. Progress in Improving the Accuracy of Hugoniot Equation-of-State Measurements at the AWE Helen Laser.

    NASA Astrophysics Data System (ADS)

    Rothman, Stephen; Evans, Andrew; Graham, Peter; Horsfield, Colin

    1998-11-01

    For several years we have been conducting a series of equation-of-state (EOS) experiments using the Helen laser at AWE, with the aim of achieving 1% accuracy in shock velocity measurements (A.M. Evans, N.J. Freeman, P. Graham, C.J. Horsfield, S.D. Rothman, B.R. Thomas and A.J. Tyrrell, Laser and Particle Beams, vol. 14, no. 2, pp. 113-123, 1996). Our best results to date are 1.2% in velocity on copper and aluminium double-step targets, which leads to 4% in copper principal Hugoniot pressures. The accuracy in pressure depends not only on the two measured shock velocities but also on the target density and the EOS of Al, which is used here as a standard. In order to quantify sources of error and to improve accuracy, we have measured the preheat-induced expansion of target surfaces using a Michelson interferometer. Analysis of streaks from this has also given reflectivity measurements. We are also investigating the use of a shaped laser pulse designed to give constant pressure for 2.5 ns, which will reduce the fractional errors in both step transit time and height by allowing the use of a thicker step.

  6. An improved multivariate analytical method to assess the accuracy of acoustic sediment classification maps.

    NASA Astrophysics Data System (ADS)

    Biondo, M.; Bartholomä, A.

    2014-12-01

    High-resolution hydroacoustic methods have been successfully employed for the detailed classification of sedimentary habitats. The fine-scale mapping of very heterogeneous, patchy sedimentary facies, and the compound effect of multiple non-linear physical processes on the acoustic signal, cause the classification of backscatter images to be subject to a great level of uncertainty. Standard procedures for assessing the accuracy of acoustic classification maps are not yet established. This study applies different statistical techniques to automatically classified acoustic images with the aim of i) quantifying the ability of backscatter to resolve grain size distributions, ii) understanding complex patterns influenced by factors other than grain size variations, and iii) designing innovative, repeatable statistical procedures to spatially assess classification uncertainties. A high-frequency (450 kHz) sidescan sonar survey, carried out in 2012 in the shallow upper-mesotidal Jade Bay inlet (German North Sea), allowed 100 km2 of surficial sediment to be mapped with a resolution and coverage never before acquired in the area. The backscatter mosaic was ground-truthed using a large dataset of sediment grab sample information (2009-2011). Multivariate procedures were employed to model the relationship between acoustic descriptors and granulometric variables in order to evaluate the correctness of acoustic class allocation and sediment group separation. Complex patterns in the acoustic signal appeared to be controlled by the combined effect of surface roughness, sorting and mean grain size variations. The area is dominated by silt and fine sand in very mixed compositions; in this fine-grained matrix, the percentage of gravel proved to be the prevailing factor affecting backscatter variability. In the absence of coarse material, sorting mostly affected the ability to detect gradual but significant changes in seabed types. Misclassification due to temporal discrepancies

  7. A physical education trial improves adolescents' cognitive performance and academic achievement: the EDUFIT study.

    PubMed

    Ardoy, D N; Fernández-Rodríguez, J M; Jiménez-Pavón, D; Castillo, R; Ruiz, J R; Ortega, F B

    2014-02-01

    To analyze the effects of an intervention focused on increasing the time and intensity of Physical Education (PE), on adolescents' cognitive performance and academic achievement. A 4-month group-randomized controlled trial was conducted in 67 adolescents from South-East Spain, 2007. Three classes were randomly allocated into control group (CG), experimental group 1 (EG1) and experimental group 2 (EG2). CG received usual PE (two sessions/week), EG1 received four PE sessions/week and EG2 received four PE sessions/week of high intensity. Cognitive performance (non-verbal and verbal ability, abstract reasoning, spatial ability, verbal reasoning and numerical ability) was assessed by the Spanish Overall and Factorial Intelligence Test, and academic achievement by school grades. All the cognitive performance variables, except verbal reasoning, increased more in EG2 than in CG (all P < 0.05). Average school grades (e.g., mathematics) increased more in EG2 than in CG. Overall, EG2 improved more than EG1, without differences between EG1 and CG. Increased PE can benefit cognitive performance and academic achievement. This study contributes to the current knowledge by suggesting that the intensity of PE sessions might play a role in the positive effect of physical activity on cognition and academic success. Future studies involving larger sample sizes should confirm or contrast these preliminary findings. PMID:23826633

  8. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2010-01-01

    The poster provides an overview of techniques to improve Mars Global Reference Atmospheric Model (Mars-GRAM) sensitivity studies. During the Mars Science Laboratory (MSL) site selection process, it was discovered that Mars-GRAM, when used for sensitivity studies with TES MapYear = 0 and large optical depth values such as tau = 3, produces less than realistic results. A preliminary fix has been made to Mars-GRAM by adding a density factor value that was determined for tau = 0.3, 1, and 3.

  9. Improved mass resolution and mass accuracy in TOF-SIMS spectra and images using argon gas cluster ion beams.

    PubMed

    Shon, Hyun Kyong; Yoon, Sohee; Moon, Jeong Hee; Lee, Tae Geol

    2016-06-01

    The popularity of argon gas cluster ion beams (Ar-GCIB) as primary ion beams in time-of-flight secondary ion mass spectrometry (TOF-SIMS) has increased because the molecular ions of large organic- and biomolecules can be detected with less damage to the sample surfaces. However, Ar-GCIB is limited by poor mass resolution as well as poor mass accuracy. The inferior quality of the mass resolution in a TOF-SIMS spectrum obtained by using Ar-GCIB compared to the one obtained by a bismuth liquid metal cluster ion beam and others makes it difficult to identify unknown peaks because of the mass interference from the neighboring peaks. However, in this study, the authors demonstrate improved mass resolution in TOF-SIMS using Ar-GCIB through the delayed extraction of secondary ions, a method typically used in TOF mass spectrometry to increase mass resolution. As for poor mass accuracy, although mass calibration using internal peaks with low mass such as hydrogen and carbon is a common approach in TOF-SIMS, it is unsuited to the present study because of the disappearance of the low-mass peaks in the delayed extraction mode. To resolve this issue, external mass calibration, another regularly used method in TOF-MS, was adapted to enhance mass accuracy in the spectrum and image generated by TOF-SIMS using Ar-GCIB in the delayed extraction mode. By producing spectra analyses of a peptide mixture and bovine serum albumin protein digested with trypsin, along with image analyses of rat brain samples, the authors demonstrate for the first time the enhancement of mass resolution and mass accuracy for the purpose of analyzing large biomolecules in TOF-SIMS using Ar-GCIB through the use of delayed extraction and external mass calibration. PMID:26861497

  10. Employee Perceptions of Progress with Implementing a Student-Centered Model of Institutional Improvement: An Achieving the Dream Case Study

    ERIC Educational Resources Information Center

    Cheek, Annesa LeShawn

    2011-01-01

    Achieving the Dream is a national initiative focused on helping more community college students succeed, particularly students of color and low-income students. Achieving the Dream's student-centered model of institutional improvement focuses on eliminating gaps and raising student achievement by helping institutions build a culture of evidence…

  11. Improving GLOBALLAND30 Artificial Type Extraction Accuracy in Low-Density Residents

    NASA Astrophysics Data System (ADS)

    Hou, Lili; Zhu, Ling; Peng, Shu; Xie, Zhenlei; Chen, Xu

    2016-06-01

    GlobalLand30 is the first 30 m resolution land cover product in the world. It covers the area between 80°N and 80°S. There are ten classes: artificial cover, water bodies, woodland, lawn, bare land, cultivated land, wetland, sea area, shrub, and snow. TM imagery from Landsat is the main data source of GlobalLand30. Within the artificial surface type, one source of omission error is low-density residential areas. In TM images, a typical characteristic of low-density residential areas is their scattered, patchy distribution; another is that they are surrounded by large amounts of cultivated land. This causes low-density residential areas to be confused with cultivated land. To solve this problem, a nighttime light remote sensing image is used as reference data and, building on the NDBI, TM band 6 is added to calculate a surface thermal radiation index, TR-NDBI (Thermal Radiation Normalized Difference Building Index), for the purpose of extracting low-density residential areas. The results show that using TR-NDBI together with the nighttime light remote sensing image is a feasible and effective method for extracting low-density residential areas.
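
    The exact form of TR-NDBI is not given in this record. Purely as a rough illustration, the sketch below computes the standard NDBI from Landsat TM band 4 (NIR) and band 5 (SWIR) and then weights it by a normalized TM band 6 (thermal) signal; the weighting scheme, the threshold, and the array names are assumptions, not the authors' published definition.

        import numpy as np

        def ndbi(swir, nir):
            """Standard Normalized Difference Built-up Index from TM band 5 (SWIR) and band 4 (NIR)."""
            return (swir - nir) / (swir + nir + 1e-9)

        def tr_ndbi(swir, nir, thermal):
            """Hypothetical thermal-weighted variant: NDBI scaled by normalized TM band 6 radiance.
            Illustrative stand-in only, not the published TR-NDBI formula."""
            t_norm = (thermal - thermal.min()) / (thermal.max() - thermal.min() + 1e-9)
            return ndbi(swir, nir) * t_norm

        # Toy 3x3 scene: reflectance-like values for bands 4 and 5, radiance-like values for band 6.
        nir = np.array([[0.30, 0.28, 0.35], [0.25, 0.27, 0.33], [0.31, 0.29, 0.26]])
        swir = np.array([[0.35, 0.40, 0.30], [0.42, 0.38, 0.31], [0.36, 0.41, 0.44]])
        tm6 = np.array([[295., 301., 290.], [303., 299., 291.], [296., 302., 305.]])

        index = tr_ndbi(swir, nir, tm6)
        low_density_mask = index > 0.05   # threshold chosen for illustration only
        print(index.round(3))
        print(low_density_mask)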

  12. Improved Energy Bound Accuracy Enhances the Efficiency of Continuous Protein Design

    PubMed Central

    Roberts, Kyle E.; Donald, Bruce R.

    2015-01-01

    Flexibility and dynamics are important for protein function and a protein’s ability to accommodate amino acid substitutions. However, when computational protein design algorithms search over protein structures, the allowed flexibility is often reduced to a relatively small set of discrete side-chain and backbone conformations. While simplifications in scoring functions and protein flexibility are currently necessary to computationally search the vast protein sequence and conformational space, a rigid representation of a protein causes the search to become brittle and miss low-energy structures. Continuous rotamers more closely represent the allowed movement of a side chain within its torsional well and have been successfully incorporated into the protein design framework to design biomedically relevant protein systems. The use of continuous rotamers in protein design enables algorithms to search a larger conformational space than previously possible, but adds additional complexity to the design search. To design large, complex systems with continuous rotamers, new algorithms are needed to increase the efficiency of the search. We present two methods, PartCR and HOT, that greatly increase the speed and efficiency of protein design with continuous rotamers. These methods specifically target the large errors in energetic terms that are used to bound pairwise energies during the design search. By tightening the energy bounds, additional pruning of the conformation space can be achieved, and the number of conformations that must be enumerated to find the global minimum energy conformation is greatly reduced. PMID:25846627

  13. Improving quality and reducing inequities: a challenge in achieving best care

    PubMed Central

    Nicewander, David A.; Qin, Huanying; Ballard, David J.

    2006-01-01

    The health care quality chasm is better described as a gulf for certain segments of the population, such as racial and ethnic minority groups, given the gap between actual care received and ideal or best care quality. The landmark Institute of Medicine report Crossing the Quality Chasm: A New Health System for the 21st Century challenges all health care organizations to pursue six major aims of health care improvement: safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness. “Equity” aims to ensure that quality care is available to all and that the quality of care provided does not differ by race, ethnicity, or other personal characteristics unrelated to a patient's reason for seeking care. Baylor Health Care System is in the unique position of being able to examine the current state of equity in a typical health care delivery system and to lead the way in health equity research. Its organizational vision, “culture of quality,” and involved leadership bode well for achieving equitable best care. However, inequities in access, use, and outcomes of health care must be scrutinized; the moral, ethical, and economic issues they raise and the critical injustice they create must be remedied if this goal is to be achieved. Eliminating any observed inequities in health care must be synergistically integrated with quality improvement. Quality performance indicators currently collected and evaluated indicate that Baylor Health Care System often performs better than the national average. However, there are significant variations in care by age, gender, race/ethnicity, and socioeconomic status that indicate the many remaining challenges in achieving “best care” for all. PMID:16609733

  14. Embedded Analytical Solutions Improve Accuracy in Convolution-Based Particle Tracking Models using Python

    NASA Astrophysics Data System (ADS)

    Starn, J. J.

    2013-12-01

    -flow finite-difference transport simulations (MT3DMS). Results show more accurate simulation of pumping-well BTCs for a given grid cell size when using analytical solutions. The code base is extended to transient flow and BTCs are compared to results from MT3DMS simulations. Results show the particle-based solutions can resolve transient behavior using coarser model grids with far less computational effort than MT3DMS. The effect of simulation accuracy on parameter estimates (porosity) is also investigated. Porosity estimates obtained using the more accurate analytical solutions are less biased than those from synthetic finite-difference transport simulations, which tend to be biased by coarseness of the grid. Eliminating the bias by using a finer grid comes at the expense of much larger computational effort. Finally, the code base was applied to an actual groundwater-flow model of Salt Lake Valley, Utah. Particle simulations using the Python code base compare well with finite-difference simulations, but with less computational effort, and have the added advantage of delineating flow paths, thus explicitly connecting solute source areas with receptors, and producing complete particle-age distributions. Knowledge of source areas and age distribution greatly enhances the analysis of dissolved solids data in Salt Lake Valley.
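
    The record does not include the code base itself; the following is a minimal sketch of the convolution idea behind convolution-based particle tracking, in which a particle-derived travel-time distribution at a receptor is convolved with a time-varying source concentration to produce a breakthrough curve (BTC). The travel-time samples and source history are synthetic placeholders, not data from the Salt Lake Valley model.

        import numpy as np

        # Travel times (years) of particles arriving at a pumping well, e.g. from a particle-tracking run.
        rng = np.random.default_rng(0)
        travel_times = rng.lognormal(mean=2.0, sigma=0.5, size=5000)

        # Bin the travel times into a unit-impulse response (transfer function) on a yearly grid.
        dt = 1.0                                   # years
        edges = np.arange(0.0, 60.0 + dt, dt)
        h, _ = np.histogram(travel_times, bins=edges)
        h = h / (h.sum() * dt)                     # normalize so the response integrates to 1

        # Source concentration history at the water table (step input starting in year 10).
        t = edges[:-1]
        source_conc = np.where(t >= 10.0, 1.0, 0.0)

        # Breakthrough curve at the well = convolution of source history with the travel-time response.
        btc = np.convolve(source_conc, h)[: t.size] * dt
        print(btc.round(3))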

  15. From Guide to Practice: Improving Your After School Science Program to Increase Student Academic Achievement

    NASA Astrophysics Data System (ADS)

    Taylor, J.

    2013-12-01

    Numerous science organizations, such as NASA, offer educational outreach activities geared towards after school. For some programs, the primary goal is to grow students' love of science. For others, the programs are also intended to increase academic achievement. For those programs looking to support student learning in out-of-school time environments, aligning the program with learning during the classroom day can be a challenge. The Institute of Education Sciences, What Works Clearinghouse, put together a 'Practice Guide' for maximizing learning time beyond the regular school day. These practice guides provide concrete recommendations for educators supported by research. While this guide is not specific to any content or subject area, the recommendations provided align very well with science education. After school science is often viewed as a fun, dynamic environment for students. Indeed, one of the recommendations to ensure time is structured according to students' needs is to provide relevant and interesting experiences. Given that our after school programs provide such creative environments for students, what other components are needed to promote increased academic achievement? The recommendations provided to increase academic achievement include: 1. Align Instruction, 2. Maximize Attendance and Participation, 3. Adapt Instruction, 4. Provide Engaging Experiences, and 5. Evaluate Program. In this session we will examine these five recommendations presented in the Practice Guide, discuss how these strategies align with science programs, and examine what questions each program should address in order to provide experiences that lend themselves to maximizing instruction. Roadblocks and solutions for overcoming challenges in each of the five areas will be presented. Jessica Taylor will present this research based on her role as an author on the Practice Guide, 'Improving Academic Achievement in Out-of-School Time' and her experience working in various informal science

  16. Methods for improving low-angle, low-altitude radar tracking accuracy

    NASA Astrophysics Data System (ADS)

    Ozkara, Ali

    1993-09-01

    This thesis studies the problem of low-angle, low-altitude target tracking where the presence of multipath causes large angle errors. The problem is examined for a low-sidelobe monopulse radar over a flat earth. A detailed multipath model is used to simulate the reflecting surface, and the reflected signal is included in the monopulse processing simulation. Using this model the tracking error is obtained, and two multipath error reduction techniques are evaluated. The first method uses frequency agility to measure the angle over a wide frequency range. By averaging the results at many frequencies, the angle estimate can be significantly improved over that of a single frequency. The second method is referred to as difference beam phase toggling. By flipping the difference beam phase by 180 deg on two subsequent pulses, the return from a reflected path can be made to cancel.

  17. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2peak)

    NASA Technical Reports Server (NTRS)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.
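
    The specific in-flight prediction equations are not given in this record. As one common class of submaximal estimate (not necessarily the protocol used for astronauts), the sketch below extrapolates the linear heart-rate/VO2 relationship measured at several submaximal workloads to an age-predicted maximal heart rate; all numbers are illustrative.

        import numpy as np

        # Submaximal measurements from a hypothetical graded exercise test.
        heart_rate = np.array([110.0, 125.0, 140.0, 155.0])     # beats per minute
        vo2 = np.array([1.4, 1.8, 2.2, 2.6])                    # litres of O2 per minute

        # Fit VO2 as a linear function of heart rate.
        slope, intercept = np.polyfit(heart_rate, vo2, deg=1)

        # Extrapolate to an age-predicted maximal heart rate (220 - age is a rough convention).
        age = 45
        hr_max = 220 - age
        vo2_peak_pred = slope * hr_max + intercept
        print(f"Predicted VO2peak: {vo2_peak_pred:.2f} L/min")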

  19. Improving measurement accuracy by optimum data acquisition for Nd:YAG Thomson scattering system.

    PubMed

    Minami, T; Itoh, Y; Yamada, I; Yasuhara, R; Funaba, H; Nakanishi, H; Hatae, T

    2014-11-01

    A new high-speed Nd:YAG Thomson scattering AD converter (HYADC) that can directly convert the detected scattered-light signal into a digital signal is under development. The HYADC is expected to improve the signal-to-noise ratio of the Nd:YAG Thomson scattering measurement. The data storage that would otherwise be required for direct conversion over a whole plasma discharge is drastically reduced by a ring buffer memory and a stop trigger system. Data transfer from the HYADC is performed via SiTCP. The HYADC is easily expandable to a multi-channel system through distributed data processing, and it is very compact and easy to implement as a built-in system in the polychromators. PMID:25430250
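
    Hardware details of the HYADC are not given in the record; the sketch below only illustrates the general ring-buffer/stop-trigger idea in software. Samples are written continuously into a fixed-size circular buffer, and a stop trigger freezes acquisition so that only the window preceding the event is kept. The class and its parameters are illustrative.

        from collections import deque

        class RingBufferAcquisition:
            """Keep only the most recent `size` samples; a stop trigger ends acquisition."""

            def __init__(self, size):
                self.buffer = deque(maxlen=size)   # old samples are discarded automatically
                self.stopped = False

            def push(self, sample):
                if not self.stopped:
                    self.buffer.append(sample)

            def stop_trigger(self):
                """Freeze the buffer, e.g. at the end of the discharge or laser pulse train."""
                self.stopped = True

            def read_out(self):
                return list(self.buffer)

        # Simulated digitizer stream: only the last 8 samples before the trigger are stored.
        acq = RingBufferAcquisition(size=8)
        for i in range(100):
            acq.push(i)
            if i == 57:            # illustrative stop condition
                acq.stop_trigger()
        print(acq.read_out())      # [50, 51, 52, 53, 54, 55, 56, 57]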

  20. Improving accuracy and usability of growth charts: case study in Rwanda

    PubMed Central

    Brown, Suzana; McSharry, Patrick

    2016-01-01

    Objectives We evaluate and compare manually collected paper records against electronic records for monitoring the weights of children under the age of 5. Setting Data were collected by 24 community health workers (CHWs) in 2 Rwandan communities, 1 urban and 1 rural. Participants The same CHWs collected paper and electronic records. Paper data contain weight and age for 320 boys and 380 girls. Electronic data contain weight and age for 922 girls and 886 boys. Electronic data were collected over 9 months; most of the data is cross-sectional, with about 330 children with time-series data. Both data sets are compared with the international standard provided by the WHO growth chart. Primary and secondary outcome measures The plan was to collect 2000 individual records for the electronic data set—we finally collected 1878 records. Paper data were collected by the same CHWs, but most data were fragmented and hard to read. We transcribed data only from children for whom we were able to obtain the date of birth, to determine the exact age at the time of measurement. Results Mean absolute error (MAE) and mean absolute percentage error (MAPE) provide a way to quantify the magnitude of the error in using a given model. Comparing a model, log(weight)=a+b log(age), shows that electronic records provide considerable improvements over paper records, with 40% reduction in both performance metrics. Electronic data improve performance over the WHO model by 10% in MAPE and 7% in MAE. Results are statistically significant using the Kolmogorov-Smirnov test at p<0.01. Conclusions This study demonstrates that using modern electronic tools for health data collection is allowing better tracking of health indicators. We have demonstrated that electronic records facilitate development of a country-specific model that is more accurate than the international standard provided by the WHO growth chart. PMID:26817635
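
    As a sketch of the model comparison described above, the code below fits log(weight) = a + b·log(age) by least squares and reports MAE and MAPE; the toy arrays stand in for the community health worker records and are not the study data.

        import numpy as np

        # Toy records: age in months and weight in kg (stand-ins for the CHW data).
        age = np.array([2, 4, 6, 9, 12, 18, 24, 36, 48, 60], dtype=float)
        weight = np.array([4.9, 6.4, 7.5, 8.6, 9.4, 10.6, 11.8, 13.5, 15.5, 17.5])

        # Fit log(weight) = a + b * log(age).
        b, a = np.polyfit(np.log(age), np.log(weight), deg=1)
        pred = np.exp(a + b * np.log(age))

        # Error metrics used in the study: mean absolute error and mean absolute percentage error.
        mae = np.mean(np.abs(weight - pred))
        mape = np.mean(np.abs((weight - pred) / weight)) * 100.0
        print(f"a={a:.3f}, b={b:.3f}, MAE={mae:.2f} kg, MAPE={mape:.1f}%")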

  1. Improving accuracy of electron density measurement in the presence of metallic implants using orthovoltage computed tomography

    SciTech Connect

    Yang Ming; Virshup, Gary; Mohan, Radhe; Shaw, Chris C.; Zhu, X. Ronald; Dong Lei

    2008-05-15

    The goal of this study was to evaluate the improvement in electron density measurement and metal artifact reduction using orthovoltage computed tomography (OVCT) imaging compared with conventional kilovoltage CT (KVCT). For this study, a bench-top system was constructed with adjustable x-ray tube voltage up to 320 kVp. A commercial tissue-characterization phantom loaded with inserts of various human tissue substitutes was imaged using 125 kVp (KVCT) and 320 kVp (OVCT) x rays. Stoichiometric calibration was performed for both KVCT and OVCT imaging using the Schneider method. The metal inserts--titanium rods and aluminum rods--were used to study the impact of metal artifacts on the electron-density measurements both inside and outside the metal inserts. It was found that the relationships between Hounsfield units and relative electron densities (to water) were more predictable for OVCT than KVCT. Unlike KVCT, the stoichiometric calibration for OVCT was insensitive to the use of tissue substitutes for direct electron density calibration. OVCT was found to significantly reduce metal streak artifacts. Errors in electron-density measurements within uniform tissue substitutes were reduced from 42% (maximum) and 18% (root-mean-square) in KVCT to 12% and 2% in OVCT, respectively. Improvements were also observed inside the metal implants. With detectors optimized for KVCT, the imaging dose for OVCT is almost double that of KVCT for comparable image quality. OVCT may be a good option for high-precision radiotherapy treatment planning, especially for patients with metal implants and for charged particle therapy, such as proton therapy.
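
    The Schneider stoichiometric calibration itself involves fitting tissue-composition parameters; as a simplified stand-in, the sketch below builds a piecewise-linear Hounsfield-unit-to-relative-electron-density lookup from phantom insert measurements and applies it to image values. The calibration numbers are invented for illustration and are not the study's data.

        import numpy as np

        # Measured calibration points from tissue-substitute inserts (illustrative values):
        # Hounsfield units and relative electron density (to water).
        hu_cal = np.array([-1000.0, -500.0, -100.0, 0.0, 300.0, 800.0, 1500.0])
        red_cal = np.array([0.00, 0.50, 0.93, 1.00, 1.16, 1.46, 1.85])

        def hu_to_red(hu):
            """Piecewise-linear interpolation of the HU -> relative electron density curve."""
            return np.interp(hu, hu_cal, red_cal)

        # Convert a small patch of CT numbers to relative electron densities.
        ct_patch = np.array([[-5.0, 42.0, 120.0], [250.0, 900.0, 1400.0]])
        print(hu_to_red(ct_patch).round(3))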

  2. Improving accuracy of rare variant imputation with a two-step imputation approach.

    PubMed

    Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G; Rivadeneira, Fernando; Estrada, Karol

    2015-03-01

    Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning that less frequent and rare variants have not been comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage, together with new reference panels such as the 1000 Genomes panel, are emerging to facilitate imputation of low-frequency single-nucleotide polymorphisms (minor allele frequency (MAF) <5%). In this study, we present a two-step imputation approach that improves the quality of the 1000 Genomes imputation by genotyping only a subset of samples to create a local reference population on a dense array with many low-frequency markers. In this approach, the study sample, genotyped with a first-generation array, is imputed first to the local reference sample genotyped on a dense array and thereafter to the 1000 Genomes reference panel. We show that mean imputation quality, measured by the r(2) using this approach, increases by 28% for variants with a MAF between 1 and 5% as compared with direct imputation to the 1000 Genomes reference. Similarly, the concordance rate between calls of imputed and true genotypes was found to be significantly higher for heterozygotes (P<1e-15) and rare homozygote calls (P<1e-15) in this low-frequency range. The two-step approach in our setting improves imputation quality compared with traditional direct imputation, notably in the low-frequency spectrum, and is a cost-effective strategy in large epidemiological studies. PMID:24939589

  3. Improving accuracy of rare variant imputation with a two-step imputation approach

    PubMed Central

    Kreiner-Møller, Eskil; Medina-Gomez, Carolina; Uitterlinden, André G; Rivadeneira, Fernando; Estrada, Karol

    2015-01-01

    Genotype imputation has been the pillar of the success of genome-wide association studies (GWAS) for identifying common variants associated with common diseases. However, most GWAS have been run using only 60 HapMap samples as reference for imputation, meaning that less frequent and rare variants have not been comprehensively scrutinized. Next-generation arrays ensuring sufficient coverage, together with new reference panels such as the 1000 Genomes panel, are emerging to facilitate imputation of low-frequency single-nucleotide polymorphisms (minor allele frequency (MAF) <5%). In this study, we present a two-step imputation approach that improves the quality of the 1000 Genomes imputation by genotyping only a subset of samples to create a local reference population on a dense array with many low-frequency markers. In this approach, the study sample, genotyped with a first-generation array, is imputed first to the local reference sample genotyped on a dense array and thereafter to the 1000 Genomes reference panel. We show that mean imputation quality, measured by the r2 using this approach, increases by 28% for variants with a MAF between 1 and 5% as compared with direct imputation to the 1000 Genomes reference. Similarly, the concordance rate between calls of imputed and true genotypes was found to be significantly higher for heterozygotes (P<1e-15) and rare homozygote calls (P<1e-15) in this low-frequency range. The two-step approach in our setting improves imputation quality compared with traditional direct imputation, notably in the low-frequency spectrum, and is a cost-effective strategy in large epidemiological studies. PMID:24939589

  4. "Score the Core" Web-based pathologist training tool improves the accuracy of breast cancer IHC4 scoring.

    PubMed

    Engelberg, Jesse A; Retallack, Hanna; Balassanian, Ronald; Dowsett, Mitchell; Zabaglo, Lila; Ram, Arishneel A; Apple, Sophia K; Bishop, John W; Borowsky, Alexander D; Carpenter, Philip M; Chen, Yunn-Yi; Datnow, Brian; Elson, Sarah; Hasteh, Farnaz; Lin, Fritz; Moatamed, Neda A; Zhang, Yanhong; Cardiff, Robert D

    2015-11-01

    Hormone receptor status is an integral component of decision-making in breast cancer management. IHC4 score is an algorithm that combines hormone receptor, HER2, and Ki-67 status to provide a semiquantitative prognostic score for breast cancer. High accuracy and low interobserver variance are important to ensure the score is accurately calculated; however, few previous efforts have been made to measure or decrease interobserver variance. We developed a Web-based training tool, called "Score the Core" (STC) using tissue microarrays to train pathologists to visually score estrogen receptor (using the 300-point H score), progesterone receptor (percent positive), and Ki-67 (percent positive). STC used a reference score calculated from a reproducible manual counting method. Pathologists in the Athena Breast Health Network and pathology residents at associated institutions completed the exercise. By using STC, pathologists improved their estrogen receptor H score and progesterone receptor and Ki-67 proportion assessment and demonstrated a good correlation between pathologist and reference scores. In addition, we collected information about pathologist performance that allowed us to compare individual pathologists and measures of agreement. Pathologists' assessment of the proportion of positive cells was closer to the reference than their assessment of the relative intensity of positive cells. Careful training and assessment should be used to ensure the accuracy of breast biomarkers. This is particularly important as breast cancer diagnostics become increasingly quantitative and reproducible. Our training tool is a novel approach for pathologist training that can serve as an important component of ongoing quality assessment and can improve the accuracy of breast cancer prognostic biomarkers. PMID:26410019
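
    For reference, the 300-point H score mentioned above combines the percentage of cells at each staining intensity; a minimal calculation is sketched below with made-up percentages.

        def h_score(pct_weak, pct_moderate, pct_strong):
            """H score = 1*(% weak) + 2*(% moderate) + 3*(% strong) staining, range 0-300."""
            return 1 * pct_weak + 2 * pct_moderate + 3 * pct_strong

        # Example: 20% weak, 30% moderate, 10% strong staining (the remaining 40% unstained).
        print(h_score(20, 30, 10))   # 140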

  5. A Simple and Efficient Methodology To Improve Geometric Accuracy in Gamma Knife Radiation Surgery: Implementation in Multiple Brain Metastases

    SciTech Connect

    Karaiskos, Pantelis; Moutsatsos, Argyris; Pappas, Eleftherios; Georgiou, Evangelos; Roussakis, Arkadios; Torrens, Michael; Seimenis, Ioannis

    2014-12-01

    Purpose: To propose, verify, and implement a simple and efficient methodology for the improvement of total geometric accuracy in multiple brain metastases gamma knife (GK) radiation surgery. Methods and Materials: The proposed methodology exploits the directional dependence of magnetic resonance imaging (MRI)-related spatial distortions stemming from background field inhomogeneities, also known as sequence-dependent distortions, with respect to the read-gradient polarity during MRI acquisition. First, an extra MRI pulse sequence is acquired with the same imaging parameters as those used for routine patient imaging, aside from a reversal in the read-gradient polarity. Then, “average” image data are compounded from data acquired from the 2 MRI sequences and are used for treatment planning purposes. The method was applied and verified in a polymer gel phantom irradiated with multiple shots in an extended region of the GK stereotactic space. Its clinical impact in dose delivery accuracy was assessed in 15 patients with a total of 96 relatively small (<2 cm) metastases treated with GK radiation surgery. Results: Phantom study results showed that use of average MR images eliminates the effect of sequence-dependent distortions, leading to a total spatial uncertainty of less than 0.3 mm, attributed mainly to gradient nonlinearities. In brain metastases patients, non-eliminated sequence-dependent distortions lead to target localization uncertainties of up to 1.3 mm (mean: 0.51 ± 0.37 mm) with respect to the corresponding target locations in the “average” MRI series. Due to these uncertainties, a considerable underdosage (5%-32% of the prescription dose) was found in 33% of the studied targets. Conclusions: The proposed methodology is simple and straightforward in its implementation. Regarding multiple brain metastases applications, the suggested approach may substantially improve total GK dose delivery accuracy in smaller, outlying targets.

  6. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique, based on advanced image processing, can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images similar to those obtained by the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy. We demonstrated the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that of conventional fluoroscopic images. The tracking errors were decreased by half in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area where pulmonary vessels, bronchi, and ribs showed complex movements. The bone suppression technique improved tracking accuracy without special equipment or implantation of fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is a potential tool for measuring respiratory displacement of the target. This work was presented at RSNA 2013 and was carried out at Kanazawa University, Japan.
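
    As a rough sketch of template-matching-based target tracking on fluoroscopic frames (not the commercial software used in the study), the code below uses OpenCV's normalized cross-correlation to locate a nodule template in each frame. The synthetic frames, template, and bright-square "target" are placeholders.

        import numpy as np
        import cv2

        def track_target(frames, template):
            """Return the (x, y) of the best template match in each frame."""
            positions = []
            for frame in frames:
                result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
                _, _, _, max_loc = cv2.minMaxLoc(result)
                positions.append(max_loc)          # top-left corner of the best match
            return positions

        # Placeholder data: noisy 8-bit "frames" with a bright square inserted as the moving target.
        rng = np.random.default_rng(1)
        frames = []
        for shift in range(5):
            img = (rng.random((128, 128)) * 50).astype(np.uint8)
            img[40 + shift:60 + shift, 50:70] += 150
            frames.append(img)
        template = frames[0][40:60, 50:70].copy()

        print(track_target(frames, template))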

  7. Improving Delivery Accuracy of Stereotactic Body Radiotherapy to a Moving Tumor Using Simplified Volumetric Modulated Arc Therapy

    PubMed Central

    Ko, Young Eun; Cho, Byungchul; Kim, Su Ssan; Song, Si Yeol; Choi, Eun Kyung; Ahn, Seung Do; Yi, Byongyong

    2016-01-01

    Purpose To develop a simplified volumetric modulated arc therapy (VMAT) technique for more accurate dose delivery in thoracic stereotactic body radiation therapy (SBRT). Methods and Materials For each of the 22 lung SBRT cases treated with respiratory-gated VMAT, a dose rate modulated arc therapy (DrMAT) plan was retrospectively generated. A dynamic conformal arc therapy plan with 33 adjoining coplanar arcs was designed and their beam weights were optimized by an inverse planning process. All sub-arc beams were converted into a series of control points with varying MLC segment and dose rates and merged into an arc beam for a DrMAT plan. The plan quality of original VMAT and DrMAT was compared in terms of target coverage, compactness of dose distribution, and dose sparing of organs at risk. To assess the delivery accuracy, the VMAT and DrMAT plans were delivered to a motion phantom programmed with the corresponding patients’ respiratory signal; results were compared using film dosimetry with gamma analysis. Results The plan quality of DrMAT was equivalent to that of VMAT in terms of target coverage, dose compactness, and dose sparing for the normal lung. In dose sparing for other critical organs, DrMAT was less effective than VMAT for the spinal cord, heart, and esophagus while being well within the limits specified by the Radiation Therapy Oncology Group. Delivery accuracy of DrMAT to a moving target was similar to that of VMAT using a gamma criterion of 2%/2mm but was significantly better using a 2%/1mm criterion, implying the superiority of DrMAT over VMAT in SBRT for thoracic/abdominal tumors with respiratory movement. Conclusion We developed a DrMAT technique for SBRT that produces plans of a quality similar to that achieved with VMAT but with better delivery accuracy. This technique is well-suited for small tumors with motion uncertainty. PMID:27333199

  8. Pre-operative Thresholds for Achieving Meaningful Clinical Improvement after Arthroscopic Treatment of Femoroacetabular Impingement

    PubMed Central

    Nwachukwu, Benedict U.; Fields, Kara G.; Nawabi, Danyal H.; Kelly, Bryan T.; Ranawat, Anil S.

    2016-01-01

    sagittal CEA was the only variable maintaining significance (p = 0.032). Conclusion: We used a large prospective hip arthroscopy database to identify pre-operative patient outcome score thresholds predictive of meaningful post-operative outcome improvement after arthroscopic FAI treatment. This is the largest reported hip arthroscopy cohort to define MCID and the first to do so for iHOT-33. The HOS-ADL may have the best predictive ability for achieving MCID after hip arthroscopy. Patients with relatively high pre-operative ADL, quality of life, and functional status appear to have a high chance of achieving MCID up to our defined thresholds. Hip dysplasia is an important outcome modifier. The findings of this study may be useful for managing preoperative expectations for patients undergoing arthroscopic FAI surgery.

  9. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    SciTech Connect

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving mass calibration accuracy of linear delayed extraction matrix assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass data where Δm is the difference between the theoretical mass of calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
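
    A minimal sketch of the quadratic correction described above: fit a parabola to the deviation of the linearly calibrated mass from the theoretical mass for the calibrants, then subtract the fitted deviation from measured masses. The calibrant values are invented for illustration, and the deviation here is taken as measured minus theoretical so that subtraction corrects the measured mass.

        import numpy as np

        # Calibrant peaks: theoretical masses and masses from the linear sqrt(m/z) vs. TOF calibration
        # (values invented for illustration).
        m_theory = np.array([1046.54, 1533.86, 2093.09, 2465.20, 3657.93])
        m_linear = np.array([1046.62, 1533.99, 2093.33, 2465.51, 3658.38])

        # Fit a parabola to the deviation as a function of (linearly calibrated) mass.
        dev = m_linear - m_theory
        coeffs = np.polyfit(m_linear, dev, deg=2)

        def correct_mass(measured):
            """Subtract the parabola-predicted deviation from a linearly calibrated mass."""
            return measured - np.polyval(coeffs, measured)

        unknown_peaks = np.array([1296.7, 2846.5])
        print(correct_mass(unknown_peaks))

        # Residual error on the calibrants after correction (should shrink toward ~0).
        print(correct_mass(m_linear) - m_theory)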

  10. Application of Slepian theory for improving the accuracy of SH-based global ionosphere models in the Arctic region

    NASA Astrophysics Data System (ADS)

    Etemadfard, Hossein; Mashhadi Hossainali, Masoud

    2016-03-01

    Because of their significant energy resources, the polar regions have emerged as strategic parts of the world, and various research efforts have been funded to study these areas in further detail. This research intends to improve the accuracy of spherical harmonic (SH)-based Global Ionospheric Models (GIMs) by reconstructing a new map of the ionosphere in the Arctic region. For this purpose, spatiospectral concentration is applied to optimize the basis functions, using the Slepian theory developed by Simons. The new basis functions and the corresponding coefficients are derived from the SH models for the polar regions. Then, VTEC (vertical total electron content) is reconstructed using the Slepian functions and the new coefficients. The reconstructed VTECs and the VTECs derived from SH models are compared with estimates of this parameter derived directly from dual-frequency GPS measurements at three International Global Navigation Satellite Systems Service stations located in the northern polar region. The adopted GPS data cover days of year 69 to 83 of 2013 (15 successive days). According to the results obtained, application of Slepian theory can, on average, improve the accuracy of the GIM by 1 to 2 total electron content units (TECU) (1 TECU = 10^16 el m^-2) in the Arctic region.

  11. Development of a Haptic Elbow Spasticity Simulator (HESS) for Improving Accuracy and Reliability of Clinical Assessment of Spasticity

    PubMed Central

    Park, Hyung-Soon; Kim, Jonghyun; Damiano, Diane L.

    2013-01-01

    This paper presents the framework for developing a robotic system to improve accuracy and reliability of clinical assessment. Clinical assessment of spasticity tends to have poor reliability because of the nature of the in-person assessment. To improve accuracy and reliability of spasticity assessment, a haptic device, named the HESS (Haptic Elbow Spasticity Simulator) has been designed and constructed to recreate the clinical “feel” of elbow spasticity based on quantitative measurements. A mathematical model representing the spastic elbow joint was proposed based on clinical assessment using the Modified Ashworth Scale (MAS) and quantitative data (position, velocity, and torque) collected on subjects with elbow spasticity. Four haptic models (HMs) were created to represent the haptic feel of MAS 1, 1+, 2, and 3. The four HMs were assessed by experienced clinicians; three clinicians performed both in-person and haptic assessments, and had 100% agreement in MAS scores; and eight clinicians who were experienced with MAS assessed the four HMs without receiving any training prior to the test. Inter-rater reliability among the eight clinicians had substantial agreement (κ = 0.626). The eight clinicians also rated the level of realism (7.63 ± 0.92 out of 10) as compared to their experience with real patients. PMID:22562769

  12. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner base. In the method, the objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces an equivalent system of differential equations to a system in a given model. Second, since its equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model in comparison with the previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation. PMID:22010755

  13. Improving the accuracy of the structure prediction of the third hypervariable loop of the heavy chains of antibodies

    PubMed Central

    Messih, Mario Abdel; Lepore, Rosalba; Marcatili, Paolo; Tramontano, Anna

    2014-01-01

    Motivation: Antibodies are able to recognize a wide range of antigens through their complementary determining regions formed by six hypervariable loops. Predicting the 3D structure of these loops is essential for the analysis and reengineering of novel antibodies with enhanced affinity and specificity. The canonical structure model allows high accuracy prediction for five of the loops. The third loop of the heavy chain, H3, is the hardest to predict because of its diversity in structure, length and sequence composition. Results: We describe a method, based on the Random Forest automatic learning technique, to select structural templates for H3 loops among a dataset of candidates. These can be used to predict the structure of the loop with a higher accuracy than that achieved by any of the presently available methods. The method also has the advantage of being extremely fast and returning a reliable estimate of the model quality. Availability and implementation: The source code is freely available at http://www.biocomputing.it/H3Loopred/ Contact: anna.tramontano@uniroma1.it Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24930144
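
    The published feature set and training data are not reproduced here; purely as an illustration of the Random-Forest-based template selection idea, the sketch below trains a classifier on made-up descriptors of candidate templates and ranks candidates by predicted probability of being a good template. Feature names, labels, and thresholds are assumptions for illustration only.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)

        # Made-up training set: each row describes a (target H3 loop, candidate template) pair.
        # Features: [loop length difference, sequence identity, stem RMSD of the template]
        X_train = rng.random((300, 3)) * np.array([6.0, 1.0, 3.0])
        # Made-up labels: 1 if the candidate turned out to be a good structural template.
        y_train = ((X_train[:, 1] > 0.5) & (X_train[:, 0] < 3.0)).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)

        # Rank candidate templates for a new H3 loop by predicted probability of being good.
        candidates = rng.random((10, 3)) * np.array([6.0, 1.0, 3.0])
        scores = clf.predict_proba(candidates)[:, 1]
        ranking = np.argsort(scores)[::-1]
        print(ranking, scores[ranking].round(2))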

  14. RAPID COMMUNICATION: Improving prediction accuracy of GPS satellite clocks with periodic variation behaviour

    NASA Astrophysics Data System (ADS)

    Heo, Youn Jeong; Cho, Jeongho; Heo, Moon Beom

    2010-07-01

    The broadcast ephemeris and IGS ultra-rapid predicted (IGU-P) products are the main products available for use in real-time GPS applications. The IGU orbit precision has improved remarkably since late 2007, but the clock products have not shown acceptably high-quality prediction performance. One reason is that satellite atomic clocks in space are easily influenced by factors such as temperature and the space environment, which leads to complicated behaviour such as periodic variations that is not sufficiently described by conventional models. A more reliable prediction model is therefore proposed in this paper, aimed particularly at describing the periodic variation behaviour satisfactorily. The proposed prediction model for satellite clocks adds cyclic terms to capture the periodic effects and adopts delay coordinate embedding, which offers the possibility of capturing linear or nonlinear coupling characteristics of the satellite clock behaviour. The simulation results show that the proposed prediction model outperforms the IGU-P solutions at least on a daily basis.
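
    The exact model and embedding parameters are not given in this record; as a simplified illustration of adding cyclic terms to a clock model, the sketch below fits a quadratic polynomial plus a 12-hour harmonic to synthetic clock-bias data by least squares and extrapolates it forward. The signal, noise level, and period are assumed values.

        import numpy as np

        # Synthetic clock bias (seconds) sampled every 15 min over 2 days, with a 12 h periodic term.
        t = np.arange(0.0, 2.0, 15.0 / 1440.0)                      # time in days
        period = 0.5                                                 # 12 hours, in days
        rng = np.random.default_rng(2)
        bias = (3e-6 + 2e-9 * t + 1e-10 * t**2
                + 4e-10 * np.sin(2 * np.pi * t / period)
                + rng.normal(0, 5e-11, t.size))

        # Design matrix: quadratic trend plus one harmonic (sine and cosine) at the 12 h period.
        A = np.column_stack([np.ones_like(t), t, t**2,
                             np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)])
        coeffs, *_ = np.linalg.lstsq(A, bias, rcond=None)

        # Predict the next 6 hours from the fitted trend-plus-harmonic model.
        t_pred = np.arange(2.0, 2.25, 15.0 / 1440.0)
        A_pred = np.column_stack([np.ones_like(t_pred), t_pred, t_pred**2,
                                  np.sin(2 * np.pi * t_pred / period),
                                  np.cos(2 * np.pi * t_pred / period)])
        print((A_pred @ coeffs)[:5])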

  15. What's ahead in glucose monitoring? New techniques hold promise for improved ease and accuracy.

    PubMed

    Bode, B W; Sabbah, H; Davidson, P C

    2001-04-01

    Advances in blood glucose monitoring have made it easier, more comfortable, and more practical for patients to monitor frequently. The new meters for intermittent monitoring are smaller and less dependent on technical aptitude than older models. They require less blood, and many provide downloadable information for glucose analysis. Data systems used with new meters provide valuable information that can dramatically improve glycemic control. Continuous glucose sensing (figure 4) is another major breakthrough in management of diabetes. Current systems allow only retrospective analyses, but real-time readings should be available in the near future. Such technological advances hold promise for preventing both hypoglycemia and hyperglycemia and for reducing the risk of long-term complications associated with diabetes. An artificial, mechanical islet cell may be the big next step toward bringing this disease under control. By combining continuous glucose monitoring data with continuous insulin delivery via an external or an implantable insulin pump, the outlook promises to be much brighter for patients with type 1 diabetes. PMID:11317468

  16. Molecular investigations to improve diagnostic accuracy in patients with ARC syndrome.

    PubMed

    Cullinane, Andrew R; Straatman-Iwanowska, Anna; Seo, Jeong K; Ko, Jae S; Song, Kyung S; Gizewska, Maria; Gruszfeld, Dariusz; Gliwicz, Dorota; Tuysuz, Beyhan; Erdemir, Gulin; Sougrat, Rachid; Wakabayashi, Yoshiyuki; Hinds, Rupert; Barnicoat, Angela; Mandel, Hanna; Chitayat, David; Fischler, Björn; Garcia-Cazorla, Angels; Knisely, A S; Kelly, Deirdre A; Maher, Eamonn R; Gissen, Paul

    2009-02-01

    Arthrogryposis, Renal dysfunction and Cholestasis (ARC) syndrome is a multi-system autosomal recessive disorder caused by germline mutations in VPS33B. The detection of germline VPS33B mutations removes the need for diagnostic organ biopsies (these carry a>50% risk of life-threatening haemorrhage due to platelet dysfunction); however, VPS33B mutations are not detectable in approximately 25% of patients. In order further to define the molecular basis of ARC we performed mutation analysis and mRNA and protein studies in patients with a clinical diagnosis of ARC. Here we report novel mutations in VPS33B in patients from Eastern Europe and South East Asia. One of the mutations was present in 7 unrelated Korean patients. Reduced expression of VPS33B and cellular phenotype was detected in fibroblasts from patients clinically diagnosed with ARC with and without known VPS33B mutations. One mutation-negative patient was found to have normal mRNA and protein levels. This patient's clinical condition improved and he is alive at the age of 2.5 years. Thus we show that all patients with a classical clinical course of ARC had decreased expression of VPS33B whereas normal VPS33B expression was associated with good prognosis despite initial diagnosis of ARC. PMID:18853461

  17. Molecular Investigations to Improve Diagnostic Accuracy in Patients With ARC Syndrome

    PubMed Central

    Cullinane, Andrew R.; Straatman-Iwanowska, Anna; Seo, Jeong K.; Ko, Jae S.; Song, Kyung S.; Gizewska, Maria; Gruszfeld, Dariusz; Gliwicz, Dorota; Tuysuz, Beyhan; Erdemir, Gulin; Sougrat, Rachid; Wakabayashi, Yoshiyuki; Hinds, Rupert; Barnicoat, Angela; Mandel, Hanna; Chitayat, David; Fischler, Björn; Garcia-Cazorla, Angels; Knisely, A. S.; Kelly, Deirdre A.; Maher, Eamonn R.; Gissen, Paul

    2008-01-01

    Arthrogryposis, Renal dysfunction and Cholestasis (ARC) syndrome is a multi-system autosomal recessive disorder caused by germline mutations in VPS33B. The detection of germline VPS33B mutations removes the need for diagnostic organ biopsies (these carry a >50% risk of life-threatening haemorrhage due to platelet dysfunction); however, VPS33B mutations are not detectable in ∼25% of patients. In order further to define the molecular basis of ARC we performed mutation analysis and mRNA and protein studies in patients with a clinical diagnosis of ARC. Here we report novel mutations in VPS33B in patients from Eastern Europe and South East Asia. One of the mutations was present in 7 unrelated Korean patients. Reduced expression of VPS33B and cellular phenotype was detected in fibroblasts from patients clinically diagnosed with ARC with and without known VPS33B mutations. One mutation-negative patient was found to have normal mRNA and protein levels. This patient's clinical condition improved and he is alive at the age of 2.5 years. Thus we show that all patients with a classical clinical course of ARC had decreased expression of VPS33B whereas normal VPS33B expression was associated with good prognosis despite initial diagnosis of ARC. PMID:18853461

  18. Drift removal for improving the accuracy of gait parameters using wearable sensor systems.

    PubMed

    Takeda, Ryo; Lisco, Giulia; Fujisawa, Tadashi; Gastaldi, Laura; Tohyama, Harukazu; Tadano, Shigeru

    2014-01-01

    Accumulated signal noise causes integrated values to drift from the true value when measuring orientation angles with wearable sensors. This work proposes a novel method to reduce the effect of this drift so that human gait can be measured accurately using wearable sensors. Firstly, an infinite impulse response (IIR) digital 4th-order Butterworth filter was implemented to remove noise from the raw gyro sensor data. Secondly, the mode value of the static-state gyro sensor data was subtracted from the measured data to remove offset values. Thirdly, a robust double derivative and integration method was introduced to remove any remaining drift error from the data. Lastly, sensor attachment errors were minimized by establishing the gravitational acceleration vector from the acceleration data in upright standing and sitting postures. The proposed improvements removed the drift effect and yielded average differences of 2.1°, 33.3°, and 15.6° for the hip, knee, and ankle joint flexion/extension angles, respectively, compared with results obtained without them. Kinematic and spatio-temporal gait parameters were also calculated from the heel-contact and toe-off timing of the foot. The data provided in this work show the potential of using wearable sensors in the clinical evaluation of patients with gait-related diseases. PMID:25490587
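
    A minimal sketch of the first two steps described above (not the full pipeline): a 4th-order Butterworth low-pass filter applied to raw gyro data, followed by subtraction of the mode of a static-state segment to remove the offset. The sampling rate, cutoff frequency, and synthetic signal are assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 100.0                      # sampling rate in Hz (assumed)
        t = np.arange(0.0, 10.0, 1.0 / fs)

        # Synthetic gyro signal: 2 s static period, then a 1 Hz swing, plus an offset and noise.
        rng = np.random.default_rng(3)
        true_rate = np.where(t > 2.0, 50.0 * np.sin(2 * np.pi * 1.0 * (t - 2.0)), 0.0)
        raw = true_rate + 1.5 + rng.normal(0.0, 2.0, t.size)        # deg/s, with a 1.5 deg/s offset

        # Step 1: 4th-order Butterworth low-pass filter (applied zero-phase) to suppress sensor noise.
        b, a = butter(N=4, Wn=10.0, btype="low", fs=fs)
        filtered = filtfilt(b, a, raw)

        # Step 2: subtract the mode of the static-state segment (first 2 s) to remove the offset.
        static = np.round(filtered[t < 2.0], 1)                     # coarse rounding so a mode exists
        values, counts = np.unique(static, return_counts=True)
        offset = values[np.argmax(counts)]
        corrected = filtered - offset

        print(f"estimated offset: {offset:.2f} deg/s")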

  19. Improving accuracy and reliability of 186-keV measurements for unattended enrichment monitoring

    SciTech Connect

    Ianakiev, Kiril D; Boyer, Brian D; Swinhoe, Martyn T; Moss, Calvin E; Goda, Joetta M; Favalli, Andrea; Lombardi, Marcie; Paffett, Mark T; Hill, Thomas R; MacArthur, Duncan W; Smith, Morag K

    2010-04-13

    Improving the quality of safeguards measurements at Gas Centrifuge Enrichment Plants (GCEPs), whilst reducing the inspection effort, is an important objective given the number of existing and new plants that need to be safeguarded. A useful tool in many safeguards approaches is the on-line monitoring of enrichment in process pipes. One aspect of this measurement is a simple, reliable and precise passive measurement of the 186-keV line from 235U. (The other information required is the amount of gas in the pipe. This can be obtained by transmission measurements or pressure measurements). In this paper we describe our research efforts towards such a passive measurement system. The system includes redundant measurements of the 186-keV line from the gas and separately from the wall deposits. The design also includes measures to reduce the effect of the potentially important background. Such an approach would practically eliminate false alarms and can maintain the operation of the system even with a hardware malfunction in one of the channels. The work involves Monte Carlo modeling and the construction of a proof-of-principle prototype. We will carry out experimental tests with UF6 gas in pipes with and without deposits in order to demonstrate the deposit correction.

  20. Using standards to improve middle school students' accuracy at evaluating the quality of their recall.

    PubMed

    Lipko, Amanda R; Dunlosky, John; Hartwig, Marissa K; Rawson, Katherine A; Swan, Karen; Cook, Dale

    2009-12-01

    When recalling key term definitions from class materials, students may recall entirely incorrect definitions, yet will often claim that these commission errors are entirely correct; that is, they are overconfident in the quality of their recall responses. We investigated whether this overconfidence could be reduced by providing various standards to middle school students as they evaluated their recall responses. Students studied key term definitions, attempted to recall each one, and then were asked to score the quality of their recall. In Experiment 1, they evaluated their recall responses by rating each response as fully correct, partially correct, or incorrect. Most important, as they evaluated a particular response, it was presented either alone (i.e., without a standard) or with the correct definition present. Providing this full-definition standard reduced overconfidence in commission errors: Students assigned full or partial credit to 73% of their commission errors when they received no standard, whereas they assigned credit to only 44% of these errors when receiving the full-definition standard. In Experiment 2, a new standard was introduced: Idea units from each definition were presented, and students indicated whether each idea unit was in their response. After making these idea-unit judgments, the students then evaluated the quality of their entire response. Idea-unit standards further reduced overconfidence. Thus, although middle school students are overconfident in evaluating the quality of their recall responses, using standards substantially reduces this overconfidence and promises to improve the efficacy of their self-regulated learning. PMID:20025417

  1. Combining double-difference relocation with regional depth-phase modelling to improve hypocentre accuracy

    NASA Astrophysics Data System (ADS)

    Ma, Shutian; Eaton, David W.

    2011-05-01

    Precise and accurate earthquake hypocentres are critical for various fields, such as the study of tectonic processes and seismic-hazard assessment. Double-difference relocation methods are widely used and can dramatically improve the precision of relative event locations. In areas of sparse seismic network coverage, however, a significant trade-off exists between focal depth, epicentral location and the origin time. Regional depth-phase modelling (RDPM) is suitable for sparse networks and can provide focal-depth information that is relatively insensitive to uncertainties in epicentral location and independent of errors in the origin time. Here, we propose a hybrid method in which focal depth is determined using RDPM and then treated as a fixed parameter in subsequent double-difference calculations, thus reducing the size of the system of equations and increasing the precision of the hypocentral solutions. Based on examples using small earthquakes from eastern Canada and the southwestern USA, we show that the application of this technique yields solutions that appear to be more robust and accurate than those obtained by the standard double-difference relocation method alone.

  2. Electronic Prescribing: Improving the Efficiency and Accuracy of Prescribing in the Ambulatory Care Setting

    PubMed Central

    Porterfield, Amber; Engelbert, Kate; Coustasse, Alberto

    2014-01-01

    Electronic prescribing (e-prescribing) is an important part of the nation's push to enhance the safety and quality of the prescribing process. E-prescribing allows providers in the ambulatory care setting to send prescriptions electronically to the pharmacy and can be a stand-alone system or part of an integrated electronic health record system. The methodology for this study followed the basic principles of a systematic review. A total of 47 sources were referenced. Results of this research study suggest that e-prescribing reduces prescribing errors, increases efficiency, and helps to save on healthcare costs. Medication errors have been reduced to as little as a seventh of their previous level, and cost savings due to improved patient outcomes and decreased patient visits are estimated to be between $140 billion and $240 billion over 10 years for practices that implement e-prescribing. However, there have been significant barriers to implementation including cost, lack of provider support, patient privacy, system errors, and legal issues. PMID:24808808

  3. Methods for improving accuracy and extending results beyond periods covered by traditional ground-truth in remote sensing classification of a complex landscape

    NASA Astrophysics Data System (ADS)

    Mueller-Warrant, George W.; Whittaker, Gerald W.; Banowetz, Gary M.; Griffith, Stephen M.; Barnhart, Bradley L.

    2015-06-01

    Successful development of approaches to quantify impacts of diverse landuse and associated agricultural management practices on ecosystem services is frequently limited by lack of historical and contemporary landuse data. We hypothesized that ground truth data from one year could be used to extrapolate previous or future landuse in a complex landscape where cropping systems do not generally change greatly from year to year because the majority of crops are established perennials or the same annual crops grown on the same fields over multiple years. Prior to testing this hypothesis, it was first necessary to classify 57 major landuses in the Willamette Valley of western Oregon from 2005 to 2011 using standard same-year ground-truth, elaborating on previously published work and traditional sources such as Cropland Data Layers (CDL) to more fully include minor crops grown in the region. Available remote sensing data included Landsat, MODIS 16-day composites, and National Aerial Imagery Program (NAIP) imagery, all of which were resampled to a common 30 m resolution. The frequent presence of clouds and Landsat7 scan line gaps forced us to conduct a series of separate classifications in each year, which were then merged by choosing whichever classification used the highest number of cloud- and gap-free bands at any given pixel. Procedures adopted to improve accuracy beyond that achieved by maximum likelihood pixel classification included majority-rule reclassification of pixels within 91,442 Common Land Unit (CLU) polygons, smoothing and aggregation of areas outside the CLU polygons, and majority-rule reclassification over time of forest and urban development areas. Final classifications in all seven years separated annually disturbed agriculture, established perennial crops, forest, and urban development from each other at 90 to 95% overall 4-class validation accuracy. In the most successful use of subsequent year ground-truth data to classify prior year landuse, an
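
    A minimal sketch of majority-rule reclassification within polygons: given a per-pixel classification raster and a raster of polygon IDs, every pixel in a polygon is reassigned to that polygon's most frequent class. The array shapes, IDs, and class codes are illustrative, not the study's CLU data.

        import numpy as np

        def majority_reclassify(class_raster, polygon_ids, background=0):
            """Assign to each polygon the modal class of its pixels; background pixels are left unchanged."""
            out = class_raster.copy()
            for pid in np.unique(polygon_ids):
                if pid == background:
                    continue
                mask = polygon_ids == pid
                classes, counts = np.unique(class_raster[mask], return_counts=True)
                out[mask] = classes[np.argmax(counts)]
            return out

        # Toy 4x6 example: two field polygons (IDs 1 and 2) over a noisy pixel classification.
        polygon_ids = np.array([[1, 1, 1, 2, 2, 2],
                                [1, 1, 1, 2, 2, 2],
                                [1, 1, 1, 2, 2, 2],
                                [0, 0, 0, 0, 0, 0]])
        class_raster = np.array([[5, 5, 7, 9, 9, 9],
                                 [5, 5, 5, 9, 3, 9],
                                 [7, 5, 5, 9, 9, 9],
                                 [2, 2, 2, 2, 2, 2]])
        print(majority_reclassify(class_raster, polygon_ids))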

  4. Analyses to Verify and Improve the Accuracy of the Manufactured Home Energy Audit (MHEA)

    SciTech Connect

    Ternes, Mark P; Gettings, Michael B

    2008-12-01

    A series of analyses was performed to determine the reasons that the Manufactured Home Energy Audit (MHEA) over-predicted space-heating energy savings as measured in a recent field test and to develop appropriate corrections to improve its performance. The study used the Home Energy Rating System (HERS) Building Energy Simulation Test (BESTEST) to verify that MHEA accurately calculates the UA-values of mobile home envelope components and space-heating energy loads as compared with other, well-accepted hourly energy simulation programs. The study also used the Procedures for Verification of RESNET Accredited HERS Software Tools to determine that MHEA accurately calculates space-heating energy consumption for gas furnaces, heat pumps, and electric-resistance furnaces. Even though MHEA's calculations were shown to be correct from an engineering point of view, three modifications to MHEA's algorithms and use of a 0.6 correction factor were incorporated into MHEA to true-up its predicted savings to values measured in a recent field test. A simulated use of the revised version of MHEA in a weatherization program revealed that MHEA would likely still recommend a significant number of cost-effective weatherization measures in mobile homes (including ceiling, floor, and even wall insulation and far fewer storm windows). Based on the findings from this study, it was recommended that a revised version of MHEA with all the changes and modifications outlined in this report be finalized and made available to the weatherization community as soon as possible, preferably in time for use within the 2009 Program Year.

  5. Improving radiotherapy planning, delivery accuracy, and normal tissue sparing using cutting edge technologies

    PubMed Central

    Glide-Hurst, Carri K.

    2014-01-01

    In the United States, more than half of all new invasive cancers diagnosed are non-small cell lung cancer, with a significant number of these cases presenting at locally advanced stages, resulting in about one-third of all cancer deaths. While the advent of stereotactic ablative radiation therapy (SABR, also known as stereotactic body radiotherapy, or SBRT) for early-staged patients has improved local tumor control to >90%, survival results for locally advanced stage lung cancer remain grim. Significant challenges exist in lung cancer radiation therapy including tumor motion, accurate dose calculation in low density media, limiting dose to nearby organs at risk, and changing anatomy over the treatment course. However, many recent technological advancements have been introduced that can meet these challenges, including four-dimensional computed tomography (4DCT) and volumetric cone-beam computed tomography (CBCT) to enable more accurate target definition and precise tumor localization during radiation, respectively. In addition, advances in dose calculation algorithms have allowed for more accurate dosimetry in heterogeneous media, and intensity modulated and arc delivery techniques can help spare organs at risk. New delivery approaches, such as tumor tracking and gating, offer additional potential for further reducing target margins. Image-guided adaptive radiation therapy (IGART) introduces the potential for individualized plan adaptation based on imaging feedback, including bulky residual disease, tumor progression, and physiological changes that occur during the treatment course. This review provides an overview of the current state of the art technology for lung cancer volume definition, treatment planning, localization, and treatment plan adaptation. PMID:24688775

  6. Identifying the procedural gap and improved methods for maintaining accuracy during total hip arthroplasty.

    PubMed

    Gross, Allan; Muir, Jeffrey M

    2016-09-01

    Osteoarthritis is a ubiquitous condition, affecting 26 million Americans each year, with up to 17% of adults over age 75 suffering from one variation of arthritis. The hip is one of the most commonly affected joints and while there are conservative options for treatment, as symptoms progress, many patients eventually turn to surgery to manage their pain and dysfunction. Early surgical options such as osteotomy or arthroscopy are reserved for younger, more active patients with less severe disease and symptoms. Total hip arthroplasty offers a viable solution for patients with severe degenerative changes; however, post-surgical discrepancies in leg length, offset and component malposition are common and cause significant complications. Such discrepancies are associated with consequences such as low back pain, neurological deficits, instability and overall patient dissatisfaction. Current methods for managing leg length and offset during hip arthroplasty are either inaccurate and susceptible to error or are cumbersome, expensive and lengthen surgical time. There is currently no viable option that provides accurate, real-time data to surgeons regarding leg length, offset and cup position in a cost-effective manner. As such, we hypothesize that a procedural gap exists in hip arthroplasty, a gap into which fall a large majority of arthroplasty patients who are at increased risk of complications following surgery. These complications and associated treatments place significant stress on the healthcare system. The costs associated with addressing leg length and offset discrepancies can be minor, requiring only heel lifts and short-term rehabilitation, but can also be substantial, with revision hip arthroplasty costs of up to $54,000 per procedure. The need for a cost-effective, simple to use and unobtrusive technology to address this procedural gap in hip arthroplasty and improve patient outcomes is of increasing importance. Given the aging of the population, the projected

  7. Improvement of Accuracy of Proper Motions of Hipparcos Catalogue Stars Using Optical Latitude Observations

    NASA Astrophysics Data System (ADS)

    Damljanovic, G.

    2009-09-01

    ), Mizusawa (MZL FZT), Tuorla -- Turku (TT VZT), Mizusawa (MZP and MZQ PZT), Mount Stromlo (MS PZT), Ondřejov (OJP PZT), Punta Indio (PIP PZT), Richmond (RCP and RCQ PZT) and Washington (WA, W and WGQ PZT). The task is to improve the proper motions in declination of the observed Hipparcos stars. An original method was developed; it consists of removing from the instantaneous observed latitudes all known effects (polar motion and some local instrumental errors). The corrected latitudes are then used to calculate the corrections of the Hipparcos proper motions in declination (Damljanović 2005). The Least Squares Method (LSM) is used with a linear model. We compared the calculated results with ARIHIP and EOC-2 data, and found good agreement. The newly obtained values of proper motions in declination are substantially more precise than those of the Hipparcos Catalogue. This is because the time interval covered by the latitude observations (tens of years) is much longer than the Hipparcos one (less than four years), and because of the great number of observations made during this interval (Damljanović et al. 2006). Our method is completely different from the one used to compute the EOC-2 catalogue (Vondrák 2004). It also provided an almost independent check of the proper motions of EOC-2. The EOC-2 catalogue is used in this thesis to distinguish the corrections of the two stars of a pair observed by using the Horrebow -- Talcott method. The difference between the two proper motions is constrained by the difference in the EOC-2 and Hipparcos catalogues (Damljanović and Pejović 2006). The main result of the thesis is the catalogue of proper motions in declination of 2347 Hipparcos stars.

  8. Improving accuracy in shallow-landslide susceptibility analyses at regional scale

    NASA Astrophysics Data System (ADS)

    Iovine, Giulio G. R.; Rago, Valeria; Frustaci, Francesco; Bruno, Claudia; Giordano, Stefania; Muto, Francesco; Gariano, Stefano L.; Pellegrino, Annamaria D.; Conforti, Massimo; Pascale, Stefania; Distilo, Daniela; Basile, Vincenzo; Soleri, Sergio; Terranova, Oreste G.

    2015-04-01

    Calabria (southern Italy) is particularly exposed to geo-hydrological risk. In the last decades, slope instabilities, mainly related to rainfall-induced landslides, repeatedly affected its territory. Among these, shallow landslides, characterized by abrupt onset and extremely rapid movements, are some of the most destructive and dangerous phenomena for people and infrastructure. In this study, a susceptibility analysis for shallow landslides has been performed by refining a method recently applied in Costa Viola - central Calabria (Iovine et al., 2014), focusing only on landslide source activations (regardless of their possible evolution as debris flows). A multivariate approach has been applied to estimate the presence/absence of sources, based on linear statistical relationships with a set of causal variables. The different classes of numeric causal variables have been determined by means of a data clustering method, designed to determine the best arrangement. A multi-temporal inventory map of sources, mainly obtained from interpretation of air photographs taken in 1954-1955 and in 2000, has been adopted to select the training and validation sets. Due to the wide extent of the territory, the analysis has been iteratively performed by a step-by-step decreasing cell-size approach, adopting greater spatial resolutions and thematic details (e.g. lithology, land-use, soil, morphometry, rainfall) for highly susceptible sectors. Through a sensitivity analysis, the weight of the considered factors in predisposing shallow landslides has been evaluated. The best set of variables has been identified by iteratively including one variable at a time, and comparing the results in terms of performance. Furthermore, susceptibility evaluations obtained through logistic regression have been compared to those obtained by applying neural networks. Obtained results may be useful to improve land utilization planning, and to select proper mitigation measures in shallow
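
    The logistic-regression step of the susceptibility analysis can be sketched as follows. The sketch assumes a table of causal variables per grid cell and a binary presence/absence label for landslide sources; the synthetic data and variable names are illustrative, not the study's.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      # Synthetic stand-in for the causal variables (e.g. lithology class,
      # land-use, slope, rainfall, ...) and presence/absence of sources.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 5))                                   # causal variables per cell
      y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=1000) > 0).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

      model = LogisticRegression().fit(X_train, y_train)
      susceptibility = model.predict_proba(X_test)[:, 1]               # per-cell susceptibility score
      print("AUC on the validation set:", roc_auc_score(y_test, susceptibility))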

  9. Improvement of classification accuracy in a phase-tagged steady-state visual evoked potential-based brain computer interface using multiclass support vector machine

    PubMed Central

    2013-01-01

    Background Brain computer interface (BCI) is an emerging technology for paralyzed patients to communicate with external environments. Among current BCIs, the steady-state visual evoked potential (SSVEP)-based BCI has drawn great attention due to its characteristics of easy preparation, high information transfer rate (ITR), high accuracy, and low cost. However, electroencephalogram (EEG) signals are electrophysiological responses reflecting the underlying neural activities, which depend on the subject’s physiological states (e.g., emotion, attention, etc.) and usually vary among individuals. The development of classification approaches to account for each individual’s differences in SSVEP is needed but has seldom been reported. Methods This paper presents a multiclass support vector machine (SVM)-based classification approach for gaze-target detection in a phase-tagged SSVEP-based BCI. In the training step, the amplitude and phase features of SSVEP from off-line recordings were used to train a multiclass SVM for each subject. In the on-line application study, effective epochs which contained sufficient SSVEP information about the gaze targets were first determined using the Kolmogorov-Smirnov (K-S) test, and the amplitude and phase features of effective epochs were subsequently input to the multiclass SVM to recognize the user’s gaze targets. Results The on-line performance of the proposed approach achieved high accuracy (89.88 ± 4.76%), fast response time (effective epoch length = 1.13 ± 0.02 s), and an information transfer rate (ITR) of 50.91 ± 8.70 bits/min. Conclusions The multiclass SVM-based classification approach has been successfully implemented to improve the classification accuracy in a phase-tagged SSVEP-based BCI. The present study has shown that the multiclass SVM can be effectively adapted to each subject’s SSVEPs to discriminate the SSVEP phase information elicited by gazing at different targets. PMID:23692974
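
    A minimal sketch of the training step is given below, assuming single-channel epochs, a single 15 Hz stimulation frequency with four phase tags, and amplitude/phase features taken from the FFT bin at the stimulation frequency. The synthetic signals, sampling rate, and feature encoding are assumptions that simplify the paper's pipeline (for example, the K-S epoch selection is omitted).

      import numpy as np
      from sklearn.svm import SVC

      def ssvep_features(epoch, fs, stim_freq):
          """Amplitude and phase of the SSVEP component at the stimulation
          frequency, taken from the FFT of one single-channel EEG epoch.
          Phase is encoded as (cos, sin) to avoid the wrap at +/- pi."""
          spectrum = np.fft.rfft(epoch)
          freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
          k = np.argmin(np.abs(freqs - stim_freq))
          amp, phase = np.abs(spectrum[k]), np.angle(spectrum[k])
          return np.array([amp, np.cos(phase), np.sin(phase)])

      # Synthetic training data: four phase-tagged targets flickering at 15 Hz
      fs, stim_freq, n_per_class = 250, 15.0, 40
      t = np.arange(fs) / fs                       # 1 s epochs
      rng = np.random.default_rng(1)
      X, y = [], []
      for target, phase in enumerate((0.0, np.pi / 2, np.pi, 3 * np.pi / 2)):
          for _ in range(n_per_class):
              epoch = np.sin(2 * np.pi * stim_freq * t + phase) + rng.normal(scale=0.5, size=fs)
              X.append(ssvep_features(epoch, fs, stim_freq))
              y.append(target)

      clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
      print("training accuracy:", clf.score(X, y))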

  10. Preliminary study for improving the VIIRS DNB low light calibration accuracy with ground based active light source

    NASA Astrophysics Data System (ADS)

    Cao, Changyong; Zong, Yuqing; Bai, Yan; Shao, Xi

    2015-09-01

    There is a growing interest in the science and user community in the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) low light detection capabilities at night for quantitative applications such as airglow, geophysical retrievals under lunar illumination, light power estimation, search and rescue, energy use, urban expansion and other human activities. Given the growing interest in the use of the DNB data, a pressing need arises for improving the calibration stability and absolute accuracy of the DNB at low radiances. Currently, the low light calibration accuracy is estimated at a moderate 15%-100%, while the long-term stability has yet to be characterized. This study investigates selected existing night light point sources from Suomi NPP DNB observations and evaluates the feasibility of an SI-traceable night light source at radiance levels near 3 nW·cm-2·sr-1 that could potentially be installed at selected sites for VIIRS DNB calibration/validation. The illumination geometry and surrounding environment, as well as atmospheric effects, are also discussed. The uncertainties of the ground-based light source are estimated. This study will contribute to the understanding of how the Earth's atmosphere and surface variability contribute to the stability of the DNB measured radiances, and how to separate them from instrument calibration stability. It presents the need for SI-traceable active light sources to monitor the calibration stability, radiometric and geolocation accuracy, and point spread functions of the DNB. Finally, the study also aims to address whether active light sources can be used for detecting environmental changes, such as aerosols.

  11. Tailored selection of study individuals to be sequenced in order to improve the accuracy of genotype imputation.

    PubMed

    Peil, Barbara; Kabisch, Maria; Fischer, Christine; Hamann, Ute; Bermejo, Justo Lorenzo

    2015-02-01

    The addition of sequence data from own-study individuals to genotypes from external data repositories, for example, the HapMap, has been shown to improve the accuracy of imputed genotypes. Early approaches for reference panel selection favored individuals who best reflect recombination patterns in the study population. By contrast, a maximization of genetic diversity in the reference panel has been recently proposed. We investigate here a novel strategy to select individuals for sequencing that relies on the characterization of the ancestral kernel of the study population. The simulated study scenarios consisted of several combinations of subpopulations from HapMap. HapMap individuals who did not belong to the study population constituted an external reference panel, which was complemented with the sequences of study individuals selected according to different strategies. In addition to a random choice, individuals with the largest statistical depth according to the first genetic principal components were selected. In all simulated scenarios, the integration of sequences from own-study individuals increased imputation accuracy. The selection of individuals based on statistical depth resulted in the highest imputation accuracy for European and Asian study scenarios, whereas random selection performed best for an African study scenario. Present findings indicate that there is no universal 'best strategy' to select individuals for sequencing. We propose to use the methodology described in the manuscript to assess the advantage of focusing on the ancestral kernel under one's own study characteristics (study size, genetic diversity, availability and properties of external reference panels, frequency of imputed variants…). PMID:25537753
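
    One way to realize the depth-based selection is sketched below, assuming a Mahalanobis-type depth on the first genetic principal components; the abstract does not specify the exact depth notion, so this choice and the synthetic genotype matrix are assumptions made for illustration only.

      import numpy as np
      from sklearn.decomposition import PCA

      def select_by_depth(genotypes, n_components=2, n_select=20):
          """Pick the individuals with the largest statistical depth (here the
          inverse of the Mahalanobis distance) in the space of the first genetic
          principal components, i.e. those closest to the ancestral kernel."""
          pcs = PCA(n_components=n_components).fit_transform(genotypes)
          cov_inv = np.linalg.inv(np.cov(pcs, rowvar=False))
          centered = pcs - pcs.mean(axis=0)
          d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)  # squared Mahalanobis distance
          return np.argsort(d2)[:n_select]        # smallest distance = largest depth

      # Synthetic genotype dosages (individuals x SNPs), values in {0, 1, 2}
      rng = np.random.default_rng(0)
      genos = rng.integers(0, 3, size=(200, 500)).astype(float)
      print(select_by_depth(genos, n_select=10))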

  12. Photoplethysmogram intensity ratio: A potential indicator for improving the accuracy of PTT-based cuffless blood pressure estimation.

    PubMed

    Ding, Xiao-Rong; Zhang, Yuan-Ting

    2015-01-01

    The most commonly used method for cuffless blood pressure (BP) measurement is pulse transit time (PTT), which is based on the Moens-Korteweg (M-K) equation under the assumption that arterial geometries such as the arterial diameter remain unchanged. However, the arterial diameter is dynamic, varying over the cardiac cycle, and it is regulated through the contraction or relaxation of the vascular smooth muscle innervated primarily by the sympathetic nervous system. This may be one of the main reasons that impair the BP estimation accuracy. In this paper, we propose a novel indicator, the photoplethysmogram (PPG) intensity ratio (PIR), to evaluate the arterial diameter change. The deep breathing (DB) maneuver and Valsalva maneuver (VM) were performed on five healthy subjects for assessing parasympathetic and sympathetic nervous activities, respectively. Heart rate (HR), PTT, PIR and BP were measured from the simultaneously recorded electrocardiogram (ECG), PPG, and continuous BP. It was found that PIR increased significantly from inspiration to expiration during DB, whilst BP dipped correspondingly. Nevertheless, PIR changed positively with BP during VM. In addition, the spectral analysis revealed that the dominant frequency components of PIR, HR and SBP shifted significantly from high frequency (HF) to low frequency (LF), whereas this shift was not obvious for PTT. These results demonstrate that PIR can potentially be used to evaluate the smooth muscle tone that modulates arterial BP in the LF range. PTT-based BP measurement that takes the PIR into account could therefore achieve improved estimation accuracy. PMID:26736283
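
    A minimal sketch of computing a per-beat PPG intensity ratio follows, assuming the common definition of PIR as the ratio of the PPG peak intensity to the adjacent valley intensity within one cardiac cycle; the synthetic waveform and the peak-detection settings are illustrative, not the paper's protocol.

      import numpy as np
      from scipy.signal import find_peaks

      def beat_pir(ppg, fs, min_hr_hz=0.7):
          """Photoplethysmogram intensity ratio per beat: peak intensity divided
          by the preceding valley intensity within the same cardiac cycle."""
          min_dist = int(fs / (2.0 * min_hr_hz))       # refractory distance between peaks
          peaks, _ = find_peaks(ppg, distance=min_dist)
          valleys, _ = find_peaks(-ppg, distance=min_dist)
          pir = []
          for p in peaks:
              prior = valleys[valleys < p]
              if prior.size:
                  pir.append(ppg[p] / ppg[prior[-1]])
          return np.array(pir)

      # Synthetic PPG-like signal riding on a DC baseline (arbitrary units)
      fs = 100
      t = np.arange(0, 10, 1 / fs)
      ppg = 2.0 + 0.4 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
      print("mean PIR:", beat_pir(ppg, fs).mean())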

  13. An enhanced Cramér-Rao bound weighted method for attitude accuracy improvement of a star tracker.

    PubMed

    Zhang, Jun; Wang, Jian

    2016-06-01

    This study presents a non-average weighted method for the QUEST (QUaternion ESTimator) algorithm, using the inverse of the root sum square of the Cramér-Rao bound and focal-length drift errors of each tracked star as the weight, to enhance the pointing accuracy of a star tracker. In this technique, stars that are brighter, at a low angular rate, or located towards the center of the star field are given a higher weight in the attitude determination process, and thus the accuracy is readily improved. Simulations and ground test results demonstrate that, compared to the average weighted method, it can reduce the attitude uncertainty by 10%-20%, particularly for sky zones with a non-uniform distribution of stars. Moreover, by using the iteratively weighted center of gravity algorithm as the new centroiding method for the QUEST algorithm, the attitude uncertainty can be further reduced to 44% with a negligible additional computing load. PMID:27370431
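
    The weighting idea can be sketched as follows, with per-star weights formed from the inverse root sum square of Cramér-Rao bound and focal-length drift terms, and the attitude solved by Davenport's q-method (which QUEST approximates). All numerical values are illustrative, not from the study.

      import numpy as np

      def weighted_q_method(body_vecs, ref_vecs, weights):
          """Solve Wahba's problem for the attitude quaternion with per-star
          weights, via Davenport's q-method (QUEST approximates this solution).
          Returns the quaternion in scalar-last convention [qx, qy, qz, qw]."""
          B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
          S = B + B.T
          sigma = np.trace(B)
          z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
          K = np.zeros((4, 4))
          K[:3, :3] = S - sigma * np.eye(3)
          K[:3, 3] = z
          K[3, :3] = z
          K[3, 3] = sigma
          eigvals, eigvecs = np.linalg.eigh(K)
          q = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
          return q / np.linalg.norm(q)

      # Non-uniform weights: brighter / slower / more central stars weigh more
      crb = np.array([0.5, 1.0, 2.0, 0.8])      # per-star Cramer-Rao bound (illustrative)
      drift = np.array([0.2, 0.3, 0.5, 0.2])    # focal-length drift error (illustrative)
      weights = 1.0 / np.sqrt(crb**2 + drift**2)

      ref = [np.array(v, float) / np.linalg.norm(v)
             for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0])]
      body = ref                                # identity attitude for the toy example
      print(weighted_q_method(body, ref, weights))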

  14. NREL Evaluates Thermal Performance of Uninsulated Walls to Improve Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-03-01

    NREL researchers discover ways to increase accuracy in building energy simulation tools to improve predictions of potential energy savings in homes. Uninsulated walls are typical in older U.S. homes where the wall cavities were not insulated during construction or where the insulating material has settled. Researchers at the National Renewable Energy Laboratory (NREL) are investigating ways to more accurately calculate heat transfer through building enclosures to verify the benefit of energy efficiency upgrades that reduce energy use in older homes. In this study, scientists used computational fluid dynamics (CFD) analysis to calculate the energy loss/gain through building walls and visualize different heat transfer regimes within the uninsulated cavities. The effects of ambient outdoor temperature, the radiative properties of building materials, insulation levels, and the temperature dependence of conduction through framing members were considered. The research showed that the temperature dependence of conduction through framing members dominated the differences between this study and previous results - an effect not accounted for in existing building energy simulation tools. The study provides correlations for the resistance of the uninsulated assemblies that can be implemented in building simulation tools to increase the accuracy of energy use estimates in older homes, which are currently over-predicted.

  15. Improvement of the Accuracy of InSAR Image Co-Registration Based On Tie Points – A Review

    PubMed Central

    Zou, Weibao; Li, Yan; Li, Zhilin; Ding, Xiaoli

    2009-01-01

    Interferometric Synthetic Aperture Radar (InSAR) is a new measurement technology, making use of the phase information contained in Synthetic Aperture Radar (SAR) images. InSAR has been recognized as a potential tool for the generation of digital elevation models (DEMs) and the measurement of ground surface deformations. However, many critical factors affect the quality of InSAR data and limit its applications. One of the factors is InSAR data processing, which consists of image co-registration, interferogram generation, phase unwrapping and geocoding. The co-registration of InSAR images is the first step and dramatically influences the accuracy of InSAR products. In this paper, the principle and processing procedures of InSAR techniques are reviewed. One important factor in improving the accuracy of InSAR image co-registration, the tie points, is reviewed in detail, including the interval of tie points, the extraction of feature points, the window size for tie-point matching, and the measure used for the quality of an interferogram. PMID:22399966

  16. A Computerized Decision Support System Improves the Accuracy of Temperature Capture from Nursing Personnel at the Bedside

    PubMed Central

    Kroth, Philip J.; Dexter, Paul R.; Overhage, J. Marc; Knipe, Cynthia; Hui, Siu L.; Belsito, Anne; McDonald, Clement J.

    2006-01-01

    Objective To assess the effect of a computerized decision support system (CDSS) on the accuracy of patient temperature recording at the bedside. Design This is a randomized, controlled trial comparing nurses assigned to an intervention group, who received CDSS reminders whenever they attempted to store a low temperature (≤ 96.4°F), with nurses assigned to a control group, who received no CDSS. Measurements The computer recorded temperatures that would trigger reminders equally in both control and intervention groups. It also logged the reactions of nurses who received reminders and whether they retook the patient’s temperature or chose to store the original low value. Results We analyzed the temperature data over a 10-month period, tracking a total of 44,339 temperatures taken by the control group and 45,823 temperatures taken by the intervention group. We showed a 51% relative reduction in the number of erroneous low temperatures stored by the intervention versus the control group. Conclusion A CDSS is effective in improving the accuracy of temperature capture by nursing personnel at the bedside. PMID:17238380

  17. Achievement for All: improving psychosocial outcomes for students with special educational needs and disabilities.

    PubMed

    Humphrey, Neil; Lendrum, Ann; Barlow, Alexandra; Wigelsworth, Michael; Squires, Garry

    2013-04-01

    Students with special educational needs and disabilities (SEND) are at a greatly increased risk of experiencing poor psychosocial outcomes. Developing effective interventions that address the cause of these outcomes has therefore become a major policy priority in recent years. We report on a national evaluation of the Achievement for All (AfA) programme that was designed to improve outcomes for students with SEND through: (1) academic assessment, tracking and intervention, (2) structured conversations with parents, and (3) developing provision to improve wider outcomes (e.g. positive relationships). Using a quasi-experimental, pre-test-post-test control group design, we assessed the impact of AfA on teacher ratings of the behaviour problems, positive relationships and bullying of students with SEND over an 18-month period. Participants were 4758 students with SEND drawn from 323 schools across England. Our main impact analysis demonstrated that AfA had a significant impact on all three response variables when compared to usual practice. Hierarchical linear modelling of data from the intervention group highlighted a range of school-level contextual factors and implementation activities and student-level individual differences that moderated the impact of AfA on our study outcomes. The implications of our findings are discussed, and study strengths and limitations are noted. PMID:23380579

  18. The Stories Clinicians Tell: Achieving High Reliability and Improving Patient Safety

    PubMed Central

    Cohen, Daniel L; Stewart, Kevin O

    2016-01-01

    The patient safety movement has been deeply affected by the stories patients have shared that have identified numerous opportunities for improvements in safety. These stories have identified system and/or human inefficiencies or dysfunctions, possibly even failures, often resulting in patient harm. Although patients’ stories tell us much, less commonly heard are the stories of clinicians and how their personal observations regarding the environments they work in and the circumstances and pressures under which they work may degrade patient safety and lead to harm. If the health care industry is to function like a high-reliability industry, to improve its processes and achieve the outcomes that patients rightly deserve, then leaders and managers must seek and value input from those on the front lines—both clinicians and patients. Stories from clinicians provided in this article address themes that include incident identification, disclosure and transparency, just culture, the impact of clinical workload pressures, human factors liabilities, clinicians as secondary victims, the impact of disruptive and punitive behaviors, factors affecting professional morale, and personal failings. PMID:26580146

  19. caCORRECT2: Improving the accuracy and reliability of microarray data in the presence of artifacts

    PubMed Central

    2011-01-01

    Background In previous work, we reported the development of caCORRECT, a novel microarray quality control system built to identify and correct spatial artifacts commonly found on Affymetrix arrays. We have made recent improvements to caCORRECT, including the development of a model-based data-replacement strategy and integration with typical microarray workflows via caCORRECT's web portal and caBIG grid services. In this report, we demonstrate that caCORRECT improves the reproducibility and reliability of experimental results across several common Affymetrix microarray platforms. caCORRECT represents an advance over state-of-the-art quality control methods such as Harshlighting, and acts to improve gene expression calculation techniques such as PLIER, RMA and MAS5.0, because it incorporates spatial information into outlier detection as well as outlier information into probe normalization. The ability of caCORRECT to recover accurate gene expressions from low quality probe intensity data is assessed using a combination of real and synthetic artifacts with PCR follow-up confirmation and the affycomp spike-in data. The caCORRECT tool can be accessed at the website: http://cacorrect.bme.gatech.edu. Results We demonstrate that (1) caCORRECT's artifact-aware normalization avoids the undesirable global data warping that happens when any damaged chips are processed without caCORRECT; (2) when used upstream of RMA, PLIER, or MAS5.0, the data imputation of caCORRECT generally improves the accuracy of microarray gene expression in the presence of artifacts more than using Harshlighting or not using any quality control; (3) Biomarkers selected from artifactual microarray data which have undergone the quality control procedures of caCORRECT are more likely to be reliable, as shown by both spike-in and PCR validation experiments. Finally, we present a case study of the use of caCORRECT to reliably identify biomarkers for renal cell carcinoma, yielding two diagnostic biomarkers with

  20. Improving the quantitative accuracy of cerebral oxygen saturation in monitoring the injured brain using atlas based Near Infrared Spectroscopy models.

    PubMed

    Clancy, Michael; Belli, Antonio; Davies, David; Lucas, Samuel J E; Su, Zhangjie; Dehghani, Hamid

    2016-08-01

    The application of Near Infrared Spectroscopy (NIRS) for the monitoring of the cerebral oxygen saturation within the brain is well established, albeit using temporal data that can only measure relative changes of oxygenation state of the brain from a baseline. The focus of this investigation is to demonstrate that hybridisation of existing near infrared probe designs and reconstruction techniques can pave the way to produce a system and methods that can be used to monitor the absolute oxygen saturation in the injured brain. Using registered Atlas models in simulation, a novel method is outlined by which the quantitative accuracy and practicality of NIRS for specific use in monitoring the injured brain, can be improved, with cerebral saturation being recovered to within 10.1 ± 1.8% of the expected values. PMID:27003677

  1. Using radar ground-truth to validate and improve the location accuracy of a lightning direction-finding network

    NASA Technical Reports Server (NTRS)

    Goodman, Steven J.

    1989-01-01

    A technique is described in which isolated radar echoes associated with clusters of lightning strikes are used to validate and improve the location accuracy of a lightning-direction-finding network. Using this technique, site errors of a magnetic direction-finding network for locating lightning strikes to ground were accurately determined. The technique offers advantages over existing techniques in that large sample sizes are readily attainable over a broad area on a regular basis; the technique can also provide additional constraints to redundant data methods such as that described by Orville (1987). Since most lightning strike networks have either partial or full weather radar coverage, the technique is practical for all but a few users.

  2. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water and in fact supersede most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc. PMID:27232117

  3. Method to improve the blade tip-timing accuracy of fiber bundle sensor under varying tip clearance

    NASA Astrophysics Data System (ADS)

    Duan, Fajie; Zhang, Jilong; Jiang, Jiajia; Guo, Haotian; Ye, Dechao

    2016-01-01

    Blade vibration measurement based on the blade tip-timing method has become an industry-standard procedure. Fiber bundle sensors are widely used for tip-timing measurement. However, variation of the clearance between the sensor and the blade introduces a tip-timing error for fiber bundle sensors because of the resulting change in signal amplitude. This article presents software- and hardware-based methods to reduce the error caused by tip clearance changes. The software method utilizes both the rising and falling edges of the tip-timing signal to determine the blade arrival time, and a calibration process suitable for asymmetric tip-timing signals is presented. The hardware method uses an automatic gain control circuit to stabilize the signal amplitude. Experiments were conducted, and the results show that both methods can effectively reduce the impact of tip clearance variation on blade tip-timing and improve measurement accuracy.
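
    A sketch of the two-edge timing idea follows: the arrival time is taken as the midpoint of the rising- and falling-edge threshold crossings, which is largely insensitive to amplitude changes caused by tip clearance variation. The thresholding scheme and the synthetic pulse are assumptions for illustration, not the article's calibration procedure.

      import numpy as np

      def arrival_time(t, signal, frac=0.5):
          """Blade arrival time estimated from both edges of one tip-timing pulse:
          the midpoint of the rising and falling crossings of a fractional
          threshold (illustrative scheme)."""
          thr = frac * signal.max()
          above = signal >= thr
          idx = np.flatnonzero(np.diff(above.astype(int)))   # edges of the pulse
          i_rise, i_fall = idx[0], idx[-1]
          # linear interpolation of each crossing instant (xp must be increasing)
          t_rise = np.interp(thr, signal[i_rise:i_rise + 2], t[i_rise:i_rise + 2])
          t_fall = np.interp(thr, signal[i_fall + 1:i_fall - 1:-1], t[i_fall + 1:i_fall - 1:-1])
          return 0.5 * (t_rise + t_fall)

      # Synthetic Gaussian-shaped pulse whose amplitude varies with tip clearance
      t = np.linspace(0, 1e-4, 1000)
      for amplitude in (1.0, 0.6, 0.3):
          pulse = amplitude * np.exp(-((t - 5e-5) / 8e-6) ** 2)
          print(f"amplitude {amplitude}: arrival time = {arrival_time(t, pulse):.3e} s")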

  4. Unsupervised HLA Peptidome Deconvolution Improves Ligand Prediction Accuracy and Predicts Cooperative Effects in Peptide-HLA Interactions.

    PubMed

    Bassani-Sternberg, Michal; Gfeller, David

    2016-09-15

    Ag presentation on HLA molecules plays a central role in infectious diseases and tumor immunology. To date, large-scale identification of (neo-)Ags from DNA sequencing data has mainly relied on predictions. In parallel, mass spectrometry analysis of HLA peptidome is increasingly performed to directly detect peptides presented on HLA molecules. In this study, we use a novel unsupervised approach to assign mass spectrometry-based HLA peptidomics data to their cognate HLA molecules. We show that incorporation of deconvoluted HLA peptidomics data in ligand prediction algorithms can improve their accuracy for HLA alleles with few ligands in existing databases. The results of our computational analysis of large datasets of naturally processed HLA peptides, together with experimental validation and protein structure analysis, further reveal how HLA-binding motifs change with peptide length and predict new cooperative effects between distant residues in HLA-B07:02 ligands. PMID:27511729

  5. Improving optical fiber current sensor accuracy using artificial neural networks to compensate temperature and minor non-ideal effects

    NASA Astrophysics Data System (ADS)

    Zimmermann, Antonio C.; Besen, Marcio; Encinas, Leonardo S.; Nicolodi, Rosane

    2011-05-01

    This article presents a practical signal processing methodology, based on Artificial Neural Networks (ANN), to process the measurement signals of typical Fiber Optic Current Sensors (FOCS), achieving higher accuracy through temperature and non-linearity compensation. The proposed idea resolves the primary problems of FOCS, mainly when it is difficult to determine all error sources present in the physical phenomenon or when the measurement equation becomes too nonlinear to be applied over a wide measurement range. The great benefit of the ANN is that it yields a transfer function for the measurement system that takes into account all unknowns, even those from unwanted and unknown effects, providing a compensated output after the ANN training session. The ANN training is treated like a black box, based on experimental data, in which the transfer function of the measurement system, its unknowns, and its non-idealities are processed and compensated at once, giving a fast and robust alternative to the theoretical FOCS method. A real FOCS system was built, and the signals acquired from the photo-detectors were processed both by the formulas following Faraday's law and by the ANN method, giving measurement results for both signal processing strategies. The coil temperature measurements are also included in the ANN signal processing. To compare these results, a current measuring instrument standard was used together with a metrological calibration procedure. Preliminary results from a variable-temperature experiment show the higher accuracy of the ANN methodology, with a maximum error better than 0.2%, resulting in a quick and robust way to handle the FOCS difficulties of non-ideality compensation.
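
    A minimal sketch of the ANN compensation idea: map (photo-detector signal, coil temperature) to the reference current with a small multilayer perceptron. The synthetic transfer function, network size, and scaling below are assumptions, not the article's implementation.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      # Synthetic calibration data: the photo-detector signal is a nonlinear,
      # temperature-dependent function of the true current supplied by the
      # reference standard (shapes and coefficients are purely illustrative).
      rng = np.random.default_rng(0)
      true_current = rng.uniform(0.0, 1000.0, 5000)       # A, from the calibration standard
      temperature = rng.uniform(-20.0, 60.0, 5000)        # coil temperature, deg C
      detector = (np.sin(1e-3 * true_current) * (1.0 + 2e-3 * (temperature - 20.0))
                  + rng.normal(scale=1e-3, size=5000))    # non-ideal FOCS output

      X = np.column_stack([detector, temperature])
      X_tr, X_te, y_tr, y_te = train_test_split(X, true_current, random_state=0)

      # The ANN learns the inverse transfer function (signal, temperature) -> current
      ann = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0))
      ann.fit(X_tr, y_tr)
      err = np.abs(ann.predict(X_te) - y_te)
      print("maximum error as % of full scale:", 100.0 * err.max() / 1000.0)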

  6. Control over structure-specific flexibility improves anatomical accuracy for point-based deformable registration in bladder cancer radiotherapy

    SciTech Connect

    Wognum, S.; Chai, X.; Hulshof, M. C. C. M.; Bel, A.; Bondar, L.; Zolnay, A. G.; Hoogeman, M. S.

    2013-02-15

    parameters were determined for the weighted S-TPS-RPM. Results: The weighted S-TPS-RPM registration algorithm with optimal parameters significantly improved the anatomical accuracy as compared to S-TPS-RPM registration of the bladder alone and reduced the range of the anatomical errors by half as compared with the simultaneous nonweighted S-TPS-RPM registration of the bladder and tumor structures. The weighted algorithm reduced the RDE range of lipiodol markers from 0.9-14 mm after rigid bone match to 0.9-4.0 mm, compared to a range of 1.1-9.1 mm with S-TPS-RPM of bladder alone and 0.9-9.4 mm for simultaneous nonweighted registration. All registration methods resulted in good geometric accuracy on the bladder; average error values were all below 1.2 mm. Conclusions: The weighted S-TPS-RPM registration algorithm with additional weight parameter allowed indirect control over structure-specific flexibility in multistructure registrations of bladder and bladder tumor, enabling anatomically coherent registrations. The availability of an anatomically validated deformable registration method opens up the horizon for improvements in IGART for bladder cancer.

  7. Baseline correction of a correlation model for improving the prediction accuracy of infrared marker-based dynamic tumor tracking.

    PubMed

    Akimoto, Mami; Nakamura, Mitsuhiro; Mukumoto, Nobutaka; Yamada, Masahiro; Tanabe, Hiroaki; Ueki, Nami; Kaneko, Shuji; Matsuo, Yukinori; Mizowaki, Takashi; Kokubo, Masaki; Hiraoka, Masahiro

    2015-01-01

    We previously found that the baseline drift of external and internal respiratory motion reduced the prediction accuracy of infrared (IR) marker-based dynamic tumor tracking irradiation (IR Tracking) using the Vero4DRT system. Here, we proposed a baseline correction method, applied immediately before beam delivery, to improve the prediction accuracy of IR Tracking. To perform IR Tracking, a four-dimensional (4D) model was constructed at the beginning of treatment to correlate the internal and external respiratory signals, and the model was expressed using a quadratic function involving the IR marker position (x) and its velocity (v), namely function F(x,v). First, the first 4D model, F1st(x,v), was adjusted for the baseline drift of the IR markers (BDIR) along the x-axis, giving the function F'(x,v). Next, BDdetect, defined as the difference between the target positions indicated by the implanted fiducial markers (Pdetect) and the target positions predicted with F'(x,v) (Ppredict), was determined using orthogonal kV X-ray images at the peaks of Pdetect in the end-inhale and end-exhale phases for 10 s just before irradiation. F'(x,v) was corrected with BDdetect to compensate for the residual error. The final corrected 4D model was expressed as Fcor(x,v) = F1st{(x-BDIR),v}-BDdetect. We retrospectively applied this function to 53 paired log files of the 4D model for 12 lung cancer patients who underwent IR Tracking. The 95th percentile of the absolute differences between Pdetect and Ppredict (|Ep|) was compared between F1st(x,v) and Fcor(x,v). The median 95th percentile of |Ep| (units: mm) was 1.0, 1.7, and 3.5 for F1st(x,v), and 0.6, 1.1, and 2.1 for Fcor(x,v) in the left-right, anterior-posterior, and superior-inferior directions, respectively. Over all treatment sessions, the 95th percentile of |Ep| peaked at 3.2 mm using Fcor(x,v) compared with 8.4 mm using F1st(x,v). Our proposed method improved the prediction accuracy of IR Tracking by correcting the baseline drift
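
    The corrected model quoted above can be written directly as a function of the IR marker position and velocity; the quadratic coefficients and drift values below are illustrative only, not taken from the study's log files.

      import numpy as np

      def f_4d(x, v, coeffs):
          """Quadratic correlation model F(x, v): predicted target position from
          the IR marker position x and velocity v (coefficients from 4D modelling)."""
          a, b, c, d, e, f = coeffs
          return a * x**2 + b * x + c * v**2 + d * v + e * x * v + f

      def f_corrected(x, v, coeffs, bd_ir, bd_detect):
          """Baseline-corrected model Fcor(x, v) = F1st(x - BDIR, v) - BDdetect,
          as given in the abstract."""
          return f_4d(x - bd_ir, v, coeffs) - bd_detect

      # Illustrative numbers only
      coeffs = (0.02, 1.1, 0.0, 0.3, 0.0, -2.0)   # first 4D model F1st
      bd_ir, bd_detect = 1.5, 0.8                 # baseline drifts (mm)
      x, v = np.array([10.0, 12.0, 14.0]), np.array([0.5, 0.0, -0.5])
      print(f_corrected(x, v, coeffs, bd_ir, bd_detect))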

  8. Accuracy improvement of the ice flow rate measurements on Antarctic ice sheet by DInSAR method

    NASA Astrophysics Data System (ADS)

    Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi

    2015-04-01

    to be apparent ones, the average could be a measure of flow rate estimation accuracy by DInSAR. Therefore, it is concluded that the accuracy of the ice flow rate measurement can be improved by using PRISM-DEM. In this presentation, we will show the results of the estimated flow rate of ice streams in the region of interest, and discuss the additional accuracy improvement of this method.

  9. Structuring Out-of-School Time to Improve Academic Achievement. IES Practice Guide. NCEE 2009-012

    ERIC Educational Resources Information Center

    Beckett, Megan; Borman, Geoffrey; Capizzano, Jeffrey; Parsley, Danette; Ross, Steven; Schirm, Allen; Taylor, Jessica

    2009-01-01

    Out-of-school time programs can enhance academic achievement by helping students learn outside the classroom. The purpose of this practice guide is to provide recommendations for organizing and delivering school-based out-of-school time (OST) programs to improve the academic achievement of student participants. The five recommendations in this…

  10. Improving Science Achievement and Attitudes of Students With and Without Learning Disabilities

    NASA Astrophysics Data System (ADS)

    Sanders-White, Pamela

    The primary purpose of this study was to investigate the effect of structured note-taking compared to traditional note-taking on the acquisition of scientific knowledge for students with and without learning disabilities (LD) and students with reading difficulties (RD). An additional purpose was to examine whether the two note-taking methods affected students' attitudes toward science. The sample population consisted of 203 fifth grade students across four public schools in the southern area of the United States. A standardized instrument aligned to Florida's science standards was used to measure the acquisition of scientific knowledge and the Test of Science-Related Attitudes (TOSRA) was used to measure seven distinct science-related attitudes. For meaningful analyses, students with LD and students with RD were collapsed to form a single group due to the small numbers of participants in each of the subgroups; the collapsed group was referred to as "low achievers." A three-way repeated measures ANOVA was conducted to determine the effects of the pretest-posttest Science Interim assessment by group, type of student, and gender. The pretest-posttest Science Interim assessment scores were the within-group factor, while group, type of student, and gender were the between-groups factors. Results revealed that there was a significant interaction between the pretest-posttest Science Interim assessment and group, F(1, 191) = 9.320, p = .003, indicating that scientific knowledge scores increased for the experimental group, but decreased for the control group. Results also indicated that there was a significant three-way interaction between the pretest-posttest Science Interim assessment, group, and gender, F(1, 191) = 5.197, p = .024, showing that all participants in the experimental group improved their scores; while in the control group, female scores decreased and male scores increased. Participants in the experimental and control groups did not show improved attitudes

  11. Improved single-cell culture achieved using micromolding in capillaries technology coupled with poly (HEMA)

    PubMed Central

    Ye, Fang; Jiang, Jin; Chang, Honglong; Xie, Li; Deng, Jinjun; Ma, Zhibo; Yuan, Weizheng

    2015-01-01

    Cell studies at the single-cell level are becoming more and more critical for understanding the complex biological processes. Here, we present an optimization study investigating the positioning of single cells using micromolding in capillaries technology coupled with the cytophobic biomaterial poly (2-hydroxyethyl methacrylate) (poly (HEMA)). As a cytophobic biomaterial, poly (HEMA) was used to inhibit cells, whereas the glass was used as the substrate to provide a cell adhesive background. The poly (HEMA) chemical barrier was obtained using micromolding in capillaries, and the microchannel networks used for capillarity were easily achieved by reversibly bonding the polydimethylsiloxane mold and the glass. Finally, discrete cell adhesion regions were presented on the glass surface. This method is facile and low cost, and the reagents are commercially available. We validated the cytophobic abilities of the poly (HEMA), optimized the channel parameters for higher quality and more stable poly (HEMA) patterns by investigating the effects of changing the aspect ratio and the width of the microchannel on the poly (HEMA) grid pattern, and improved the single-cell occupancy by optimizing the dimensions of the cell adhesion regions. PMID:26339307

  12. NREL Evaluates the Thermal Performance of Uninsulated Walls to Improve the Accuracy of Building Energy Simulation Tools (Fact Sheet)

    SciTech Connect

    Not Available

    2012-01-01

    This technical highlight describes NREL research to develop models of uninsulated wall assemblies that help to improve the accuracy of building energy simulation tools when modeling potential energy savings in older homes. Researchers at the National Renewable Energy Laboratory (NREL) have developed models for evaluating the thermal performance of walls in existing homes that will improve the accuracy of building energy simulation tools when predicting potential energy savings of existing homes. Uninsulated walls are typical in older homes where the wall cavities were not insulated during construction or where the insulating material has settled. Accurate calculation of heat transfer through building enclosures will help determine the benefit of energy efficiency upgrades in order to reduce energy consumption in older American homes. NREL performed detailed computational fluid dynamics (CFD) analysis to quantify the energy loss/gain through the walls and to visualize different airflow regimes within the uninsulated cavities. The effects of ambient outdoor temperature, radiative properties of building materials, and insulation level were investigated. The study showed that multi-dimensional airflows occur in walls with uninsulated cavities and that the thermal resistance is a function of the outdoor temperature - an effect not accounted for in existing building energy simulation tools. The study quantified the difference between CFD prediction and the approach currently used in building energy simulation tools over a wide range of conditions. For example, researchers found that CFD predicted lower heating loads and slightly higher cooling loads. Implementation of CFD results into building energy simulation tools such as DOE2 and EnergyPlus will likely reduce the predicted heating load of homes. Researchers also determined that a small air gap in a partially insulated cavity can lead to a significant reduction in thermal resistance. For instance, a 4-in. tall air gap

  13. Evaluating Landsat 8 Satellite Sensor Data for Improved Vegetation Mapping Accuracy of the New Hampshire Coastal Watershed Area

    NASA Astrophysics Data System (ADS)

    Ledoux, Lindsay

    the previous Landsat sensor (Landsat 7). Once classification had been performed, traditional and area-based accuracy assessments were implemented. Comparison measures were also calculated (i.e. Kappa, Z test statistic). The results indicate that, while Landsat 8 imagery is useful, the additional spectral bands provided by the Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) did not improve vegetation classification accuracy in this study.
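
    The comparison measures mentioned above follow standard formulas; a short sketch for overall accuracy and Cohen's kappa from an error matrix is given below with illustrative counts (not the study's data).

      import numpy as np

      def accuracy_and_kappa(confusion):
          """Overall accuracy and Cohen's kappa from a confusion (error) matrix
          with reference classes in columns and mapped classes in rows."""
          n = confusion.sum()
          p_o = np.trace(confusion) / n                              # observed agreement
          p_e = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
          return p_o, (p_o - p_e) / (1 - p_e)

      # Illustrative 3-class error matrix
      cm = np.array([[50,  4,  1],
                     [ 6, 40,  3],
                     [ 2,  5, 39]])
      acc, kappa = accuracy_and_kappa(cm)
      print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")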

  14. Improvement of the accuracy of the aircraft center of gravity by employing optical fiber Bragg grating technology

    NASA Astrophysics Data System (ADS)

    Zhang, Hongtao; Wang, Pengfei; Fan, LingLing; Guan, Liang; Zhao, Qiming; Cui, Hong-Liang

    2010-04-01

    Safe flight of an aircraft requires that its center of gravity (CG) fall within specified limits established by the manufacturer. However, the aircraft CG depends not only on the structure of the plane, but also on the passengers and their luggage. The current practice of estimating the weight of passengers and luggage from an average weight may result in a violation of this requirement. To reduce the discrepancy between the actual weight and the estimated weight, we propose a method of improving the accuracy of calculating the CG of the plane by weighing the passengers and their personal luggage. This method is realized by a Weigh-In-Motion (WIM) system installed at boarding gates based on optical fiber Bragg grating (FBG) technology. One prototype of the WIM system was fabricated and tested in the lab. The resolution of this system is 2 kg and can be further improved by advanced manufacturing technology. With the accurate weight of passengers and luggage coming from the WIM system and the locations of passengers and luggage obtained from boarding cards, the aircraft CG can be calculated correctly. This method can also be applied in other settings, such as escalators and boarding gates for ferries.
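
    With per-passenger weights from the WIM system and seat or hold locations from boarding cards, the CG follows from the usual moment balance (total moment divided by total weight); the weights and arms below are illustrative values, not data from the paper.

      def center_of_gravity(items):
          """Longitudinal CG from (weight, arm) pairs: total moment / total weight."""
          total_weight = sum(w for w, _ in items)
          total_moment = sum(w * arm for w, arm in items)
          return total_moment / total_weight

      # Illustrative: empty aircraft plus measured passenger/luggage weights (kg)
      # at known seat-row / cargo-hold arms (m aft of the reference datum).
      items = [(42000, 18.0),                        # empty aircraft
               (78, 12.5), (81, 13.3), (95, 21.0),   # passengers weighed at the gate
               (23, 25.0), (18, 25.0)]               # luggage in the aft hold
      print(f"CG = {center_of_gravity(items):.2f} m aft of datum")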

  15. Studying the Effect of Adaptive Momentum in Improving the Accuracy of Gradient Descent Back Propagation Algorithm on Classification Problems

    NASA Astrophysics Data System (ADS)

    Rehman, Muhammad Zubair; Nawi, Nazri Mohd.

    Despite being widely used on practical problems around the world, the gradient descent back-propagation algorithm suffers from slow convergence and convergence to local minima. Previous researchers have suggested modifications to improve convergence in gradient descent back-propagation, such as careful selection of the input weights and biases, learning rate, momentum, network topology, activation function, and the value of the 'gain' in the activation function. This research proposes 'Gradient Descent with Adaptive Momentum (GDAM)', an algorithm for improving the performance of back-propagation in which the gain value is kept fixed during all network trials. The performance of GDAM is compared with 'Gradient Descent with fixed Momentum (GDM)' and 'Gradient Descent Method with Adaptive Gain (GDM-AG)'. The learning rate is fixed at 0.4 and the maximum number of epochs is set to 3000, while the sigmoid activation function is used for the experimentation. The results show that GDAM is a better approach than the previous methods, with an accuracy ratio of 1.0 for classification problems such as Wine Quality, Mushroom and Thyroid disease.
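
    A sketch of gradient descent with an adaptive momentum term follows. The abstract does not state GDAM's exact adaptation rule, so the heuristic used here (raise the momentum while successive gradients agree, lower it otherwise) and the toy objective are assumptions made purely for illustration.

      import numpy as np

      def gd_adaptive_momentum(grad_fn, w, lr=0.05, mu=0.5, epochs=3000,
                               mu_min=0.1, mu_max=0.9, step=0.05):
          """Gradient descent with an adaptive momentum coefficient.  The
          adaptation heuristic (increase momentum while successive gradients
          agree in sign, decrease it otherwise) is illustrative only."""
          velocity = np.zeros_like(w)
          prev_grad = np.zeros_like(w)
          for _ in range(epochs):
              g = grad_fn(w)
              mu = np.clip(mu + step if np.dot(g, prev_grad) > 0 else mu - step,
                           mu_min, mu_max)
              velocity = mu * velocity - lr * g
              w = w + velocity
              prev_grad = g
          return w

      # Toy quadratic objective f(w) = 0.5 * w^T A w with gradient A w
      A = np.diag([1.0, 10.0])
      w_opt = gd_adaptive_momentum(lambda w: A @ w, np.array([5.0, 5.0]))
      print("converged to:", w_opt)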

  16. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

    Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is Epi-illuminated, background fluorescence from excited out-of-focus molecules reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate out-of-focus background and to improve the SNR and the localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscopy that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy is compared between selective-plane illumination and Epi-illumination. As expected, the SNR of DH-PSF microscopy based on selective-plane illumination increases remarkably, so the 3D localization precision of the DH-PSF can be improved significantly. We demonstrate its capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.

  17. Does ultrasonography accurately diagnose acute cholecystitis? Improving diagnostic accuracy based on a review at a regional hospital

    PubMed Central

    Hwang, Hamish; Marsh, Ian; Doyle, Jason

    2014-01-01

    Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607
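
    The reported diagnostic measures follow the standard 2x2-table definitions; a short sketch with illustrative counts (not the study's raw data) is given below.

      def diagnostic_metrics(tp, fp, tn, fn):
          """Sensitivity, specificity, positive and negative predictive value
          from the four cells of a 2x2 diagnostic table."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "PPV": tp / (tp + fp),
              "NPV": tn / (tn + fn),
          }

      # Illustrative counts only
      print(diagnostic_metrics(tp=30, fp=5, tn=22, fn=26))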

  18. Improved Accuracy of Percutaneous Biopsy Using “Cross and Push” Technique for Patients Suspected with Malignant Biliary Strictures

    SciTech Connect

    Patel, Prashant; Rangarajan, Balaji; Mangat, Kamarjit E-mail: kamarjit.mangat@nhs.net

    2015-08-15

    Purpose Various methods have been used to sample biliary strictures, including percutaneous fine-needle aspiration biopsy, intraluminal biliary washings, and cytological analysis of drained bile. However, none of these methods has proven to be particularly sensitive in the diagnosis of biliary tract malignancy. We report improved diagnostic accuracy using a modified technique for percutaneous transluminal biopsy in patients with this disease. Materials and Methods Fifty-two patients with obstructive jaundice due to a biliary stricture underwent transluminal forceps biopsy with a modified “cross and push” technique using a flexible biopsy forceps kit commonly used for cardiac biopsies. The modification entailed crossing the stricture with a 0.038-in. wire leading all the way down into the duodenum. A standard or long sheath was subsequently advanced up to the stricture over the wire. A Cook 5.2-Fr biopsy forceps was introduced alongside the wire and the cup was opened upon exiting the sheath. With the biopsy forceps open within the stricture, the sheath was used to push and advance the biopsy cup into the stricture before the cup was closed and the sample obtained. The data were analysed retrospectively. Results We report the outcomes of this modified technique used on 52 consecutive patients with obstructive jaundice secondary to a biliary stricture. The sensitivity and accuracy were 93.3 and 94.2 %, respectively. There was one procedure-related late complication. Conclusion We propose that the modified “cross and push” technique is a feasible, safe, and more accurate option over the standard technique for sampling strictures of the biliary tree.

  19. The use of film dosimetry of the penumbra region to improve the accuracy of intensity modulated radiotherapy

    SciTech Connect

    Arnfield, Mark R.; Otto, Karl; Aroumougame, Vijayan R.; Alkins, Ryan D.

    2005-01-01

    Accurate measurements of the penumbra region are important for the proper modeling of the radiation beam for linear accelerator-based intensity modulated radiation therapy. The usual data collection technique with a standard ionization chamber artificially broadens the measured beam penumbrae due to volume effects. The larger the chamber, the greater is the spurious increase in penumbra width. This leads to inaccuracies in dose calculations of small fields, including small fields or beam segments used in IMRT. This source of error can be rectified by the use of film dosimetry for penumbra measurements because of its high spatial resolution. The accuracy of IMRT calculations with a pencil beam convolution model in a commercial treatment planning system was examined using commissioning data with and without the benefit of film dosimetry of the beam penumbrae. A set of dose-spread kernels of the pencil beam model was calculated based on commissioning data that included beam profiles gathered with a 0.6-cm-i.d. ionization chamber. A second set of dose-spread kernels was calculated using the same commissioning data with the exception of the penumbrae, which were measured with radiographic film. The average decrease in the measured width of the 80%-20% penumbrae of various square fields of size 3-40 cm, at 5 cm depth in water-equivalent plastic was 0.27 cm. Calculations using the pencil beam model after it was re-commissioned using film dosimetry of the penumbrae gave better agreement with measurements of IMRT fields, including superior reproduction of high dose gradient regions and dose extrema. These results show that accurately measuring the beam penumbrae improves the accuracy of the dose distributions predicted by the treatment planning system and thus is important when commissioning beam models used for IMRT.
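
    The 80%-20% penumbra width quoted above can be computed from a measured edge profile by interpolating the positions of the 80% and 20% crossings; the sketch below uses a synthetic one-sided profile, and the sigmoid edge shape and scan parameters are purely illustrative.

      import numpy as np

      def penumbra_width(positions, dose, lo=0.2, hi=0.8):
          """80%-20% penumbra width of one field edge: distance between the points
          where the profile (normalized to its maximum) crosses 80% and 20%,
          found by linear interpolation.  Assumes dose decreases with position."""
          d = dose / dose.max()
          # np.interp needs increasing x, so interpolate position as a function
          # of the reversed (increasing) normalized dose
          x_hi = np.interp(hi, d[::-1], positions[::-1])
          x_lo = np.interp(lo, d[::-1], positions[::-1])
          return x_lo - x_hi

      # Synthetic one-sided beam profile with a sigmoid-shaped edge (cm, relative dose)
      x = np.linspace(0, 3, 301)
      profile = 1.0 / (1.0 + np.exp((x - 1.5) / 0.12))
      print(f"80%-20% penumbra: {penumbra_width(x, profile):.2f} cm")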

  20. 41 CFR 102-193.25 - What type of records management business process improvements should my agency strive to achieve?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 102-193.25 What type of records management business process improvements should my agency strive to achieve? ... that needed records can be found rapidly to conduct agency business, to ensure that records...