Sample records for improved model accuracy

  1. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.
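
    A minimal sketch of the joint-fitting idea, assuming a spoiled gradient-echo (SPGR) signal model and, for brevity, a standard Tofts model in place of the paper's 2CXM; the sequence and relaxivity constants and all names below are illustrative:

    ```python
    # Joint fit of variable flip angle (VFA) T1 data and a DCE enhancement
    # curve: residuals from both signal models share M0 and precontrast T1.
    import numpy as np
    from scipy.optimize import least_squares

    TR, r1 = 0.005, 4.5  # s, 1/(mM*s); assumed sequence/contrast constants

    def spgr(M0, T1, alpha):  # spoiled gradient-echo steady-state signal
        E1 = np.exp(-TR / T1)
        return M0 * np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

    def tofts_conc(t, Ktrans, ve, aif):  # Tofts tissue concentration C(t)
        dt = t[1] - t[0]
        return Ktrans * dt * np.array(
            [np.sum(aif[:i + 1] * np.exp(-(Ktrans / ve) * (t[i] - t[:i + 1])))
             for i in range(len(t))])

    def residuals(theta, alphas, s_vfa, t, s_dce, alpha_dce, aif):
        M0, T10, Ktrans, ve = theta
        C = tofts_conc(t, Ktrans, ve, aif)
        T1t = 1.0 / (1.0 / T10 + r1 * C)  # contrast agent shortens T1
        return np.concatenate([spgr(M0, T10, alphas) - s_vfa,      # VFA data
                               spgr(M0, T1t, alpha_dce) - s_dce])  # DCE data

    # least_squares(residuals, x0=[1.0, 1.4, 0.2, 0.3], args=(...)) then
    # estimates M0, precontrast T1, Ktrans, and ve in one joint fit, rather
    # than fixing T1 from a separate VFA fit first.
    ```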

  2. Research on navigation of satellite constellation based on an asynchronous observation model using X-ray pulsar

    NASA Astrophysics Data System (ADS)

    Guo, Pengbin; Sun, Jian; Hu, Shuling; Xue, Ju

    2018-02-01

    Pulsar navigation is a promising navigation method for high-altitude orbit space tasks or deep space exploration. At present, an important factor restricting the development of pulsar navigation is that navigation accuracy is low due to the slow update of the measurements. In order to improve the accuracy of pulsar navigation, an asynchronous observation model that can increase the update rate of the measurements is proposed on the basis of a satellite constellation, which has broad room for development because of its visibility and reliability. The simulation results show that the asynchronous observation model improves the positioning accuracy by 31.48% and the velocity accuracy by 24.75% compared with the synchronous observation model. With the new Doppler effect compensation method in the asynchronous observation model proposed in this paper, the positioning accuracy is improved by 32.27% and the velocity accuracy by 34.07% compared with the traditional method. The simulation results also show that neglecting the clock error results in filter divergence.
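
    The core of the asynchronous scheme can be illustrated with a plain Kalman filter that propagates to each pulsar measurement's own time tag instead of batching measurements at a common epoch; this is a conceptual sketch with our own names, not the paper's implementation:

    ```python
    import numpy as np

    def propagate(x, P, F, Q):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, H, z, R):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

    def run_asynchronous(x, P, measurements, make_F, Q, R):
        """measurements: time-sorted (t, H, z) tuples, one per pulsar
        observation from any constellation member; no batching by epoch,
        so the effective measurement update rate is higher."""
        t_prev = measurements[0][0]
        for t, H, z in measurements:
            x, P = propagate(x, P, make_F(t - t_prev), Q)
            x, P = update(x, P, H, z, R)
            t_prev = t
        return x, P
    ```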

  3. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  4. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus, to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases in FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy at these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.

  5. Medium- and Long-term Prediction of LOD Change by the Leap-step Autoregressive Model

    NASA Astrophysics Data System (ADS)

    Wang, Qijie

    2015-08-01

    The accuracy of medium- and long-term prediction of length-of-day (LOD) change based on the combined least-squares and autoregressive (LS+AR) model deteriorates gradually with prediction length. The leap-step autoregressive (LSAR) model can significantly reduce the edge effect of the observation sequence and, in particular, greatly improves the resolution of the signal's low-frequency components, so it can improve prediction performance. In this work, LSAR is used to forecast LOD change. The LOD series from EOP 08 C04 provided by the IERS is modeled by both the LSAR and AR models, and the results of the two models are analyzed and compared. When the prediction length is between 10 and 30 days, the accuracy improvement is less than 10%. When the prediction length exceeds 30 days, the accuracy improves markedly, with a maximum improvement of around 19%. The results show that the LSAR model has higher prediction accuracy and stability in medium- and long-term prediction.
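
    A rough sketch of our reading of the leap-step structure, in which regressors are taken every k-th sample rather than at consecutive lags (the published LSAR formulation may differ in detail):

    ```python
    import numpy as np

    def fit_lsar(x, p, k):
        """Fit x[t] = sum_i a_i * x[t - (i+1)*k] by least squares."""
        span = p * k
        A = np.array([[x[t - (i + 1) * k] for i in range(p)]
                      for t in range(span, len(x))])
        a, *_ = np.linalg.lstsq(A, x[span:], rcond=None)
        return a

    def predict_lsar(x, a, k, steps):
        h = list(x)
        for _ in range(steps):  # recursive multi-step prediction
            h.append(sum(a[i] * h[-(i + 1) * k] for i in range(len(a))))
        return np.array(h[len(x):])
    ```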

  6. Accuracy Analysis of a Box-wing Theoretical SRP Model

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui

    2016-07-01

    For the BeiDou navigation satellite system (BDS), a high-accuracy solar radiation pressure (SRP) model is necessary for high-precision applications, especially as the global BDS constellation is established; the accuracy of the BDS broadcast ephemeris also needs to be improved. A theoretical box-wing SRP model with a detailed structure and a conical shadow factor for the Earth and Moon was therefore established. We verified this SRP model on the GPS Block IIF satellites, computing solutions with data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model has higher accuracy for precise orbit determination (POD) and prediction of GPS IIF satellites than the Bern empirical model, with a 3D RMS of orbit of about 20 centimeters. The POD accuracy of the two models is similar, but the prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions; the longer the prediction arc, the more significant the improvement. The orbit prediction accuracies with the physical SRP model for 1-day, 3-day, and 7-day arcs are 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We applied the same approach to BDS and derived an SRP model for the BeiDou satellites, which we then tested and verified with one month of BeiDou data. Initial results show that the model performs well but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which only estimates forces in the along-track and cross-track directions and a y-bias, but the orbit overlap and SLR evaluations show some improvement, and the remaining empirical force is reduced significantly for the present BeiDou constellation.
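
    The building block of such a box-wing model is the standard flat-plate SRP force, summed over the box faces and solar wings; the plate areas and optical coefficients would be spacecraft-specific, and the names below are ours:

    ```python
    import numpy as np

    PHI_OVER_C = 4.56e-6  # N/m^2, solar radiation pressure at 1 AU

    def plate_srp_force(area, n_hat, s_hat, rho_spec, rho_diff, shadow=1.0):
        """SRP force on one flat plate; s_hat points spacecraft -> Sun,
        n_hat is the plate normal, shadow in [0, 1] is the conical
        Earth/Moon shadow factor mentioned in the abstract."""
        cos_t = float(np.dot(n_hat, s_hat))
        if cos_t <= 0.0:  # plate faces away from the Sun
            return np.zeros(3)
        absorbed = 1.0 - rho_spec - rho_diff
        return -shadow * PHI_OVER_C * area * cos_t * (
            (absorbed + rho_diff) * s_hat
            + (2.0 * rho_spec * cos_t + 2.0 * rho_diff / 3.0) * n_hat)

    # Total SRP acceleration: sum plate forces over all faces and wings,
    # then divide by the spacecraft mass.
    ```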

  7. A Final Approach Trajectory Model for Current Operations

    NASA Technical Reports Server (NTRS)

    Gong, Chester; Sadovsky, Alexander

    2010-01-01

    Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system, which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed: one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study the effects of the key final approach trajectory modeling parameters (wind, aircraft type, and weight class) on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look-ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look-ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
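
    A toy contrast between dead reckoning and a polynomial trajectory model of the kind described; the polynomial degree and the length of the position history are our guesses:

    ```python
    import numpy as np

    def dead_reckoning(t_hist, xy_hist, t_pred):
        """Extrapolate along the last observed velocity vector."""
        v = (xy_hist[-1] - xy_hist[-2]) / (t_hist[-1] - t_hist[-2])
        return xy_hist[-1] + np.outer(t_pred - t_hist[-1], v)

    def polynomial_model(t_hist, xy_hist, t_pred, deg=2):
        """Fit low-order polynomials to the recent track and extrapolate."""
        cx = np.polyfit(t_hist, xy_hist[:, 0], deg)
        cy = np.polyfit(t_hist, xy_hist[:, 1], deg)
        return np.column_stack([np.polyval(cx, t_pred),
                                np.polyval(cy, t_pred)])
    ```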

  8. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier in a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems affect classifier accuracy and are the reason there is no globally optimal classification method, nor any generalized improvement method that can increase the accuracy of any classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon the algorithm complexity. PMID:24883382
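
    The GMC model itself is not reproduced here; as a stand-in, a generic soft-voting ensemble over heterogeneous classifiers illustrates the underlying idea of combining learners so that their individual weaknesses offset:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = load_breast_cancer(return_X_y=True)
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=5000)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("nb", GaussianNB())],
        voting="soft")  # average predicted probabilities across learners
    print(cross_val_score(ensemble, X, y, cv=5).mean())
    ```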

  9. An Improved Interacting Multiple Model Filtering Algorithm Based on the Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Zhu, Wei; Wang, Wei; Yuan, Gannan

    2016-06-01

    In order to improve the tracking accuracy, model estimation accuracy, and responsiveness of multiple-model maneuvering target tracking, the interacting multiple model five-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple model (IMM) framework processes all the models through a Markov chain to enhance the model-conditioned tracking accuracy. A five-degree cubature Kalman filter (5CKF) then evaluates the spherical surface integral with a higher-degree but deterministic odd-order spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple model cubature Kalman filter (IMMCKF), the interacting multiple model unscented Kalman filter (IMMUKF), the 5CKF, and the optimal mode transition matrix IMM (OMTM-IMM).
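
    The IMM mixing step referred to above is standard and can be sketched as follows; the fifth-degree cubature update itself is omitted:

    ```python
    import numpy as np

    def imm_mix(x, P, mu, Pi):
        """x: (M, n) per-model states, P: (M, n, n) covariances,
        mu: (M,) model probabilities, Pi[i, j]: Markov transition
        probability from model i to model j."""
        c = Pi.T @ mu                         # predicted model probabilities
        w = Pi * mu[:, None] / c[None, :]     # mixing weights w[i, j]
        x0 = np.einsum("ij,in->jn", w, x)     # mixed initial states
        P0 = np.zeros_like(P)
        for j in range(len(mu)):
            for i in range(len(mu)):
                d = (x[i] - x0[j])[:, None]
                P0[j] += w[i, j] * (P[i] + d @ d.T)
        return x0, P0, c  # each model's filter restarts from (x0[j], P0[j])
    ```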

  10. Effect of recent popularity on heat-conduction based recommendation models

    NASA Astrophysics Data System (ADS)

    Li, Wen-Jun; Dong, Qiang; Shi, Yang-Bo; Fu, Yan; He, Jia-Lin

    2017-05-01

    Accuracy and diversity are two important measures for evaluating the performance of recommender systems. It has been demonstrated that the recommendation model inspired by the heat conduction process has high diversity yet low accuracy. Many variants have been introduced to improve the accuracy while keeping high diversity, most of which regard the current node degree of an item as its popularity. However, in this way, a few outdated items of large degree may be recommended to an enormous number of users. In this paper, we take the recent popularity (recently increased item degrees) into account in the heat-conduction based methods and accordingly propose improved recommendation models. Experimental results on two benchmark data sets show that the accuracy can be largely improved over the original models while keeping their high diversity.
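
    A compact sketch of heat-conduction scoring with the modification described above, dividing the conduction kernel by the recently increased item degree rather than the total degree; `A_recent` and the windowing are our naming and assumptions:

    ```python
    import numpy as np

    def heat_conduction_scores(A, A_recent, eps=1e-12):
        """A: (users, items) 0/1 interaction matrix; A_recent: the same
        matrix restricted to links created in a recent time window."""
        k_user = A.sum(axis=1) + eps            # user degrees
        k_recent = A_recent.sum(axis=0) + eps   # recent item popularity
        # W[a, b] = (1/k_recent[a]) * sum_u A[u, a] * A[u, b] / k_user[u]
        W = (A / k_user[:, None]).T @ A / k_recent[:, None]
        return A @ W.T   # scores[u, a]; rank each user's unseen items
    ```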

  11. Cohen's Kappa and classification table metrics 2.0: An ArcView 3.x extension for accuracy assessment of spatially explicit models

    Treesearch

    Jeff Jenness; J. Judson Wynne

    2005-01-01

    In the field of spatially explicit modeling, well-developed accuracy assessment methodologies are often poorly applied. Deriving model accuracy metrics has been possible for decades, but these calculations were made by hand or with the use of a spreadsheet application. Accuracy assessments may be useful for: (1) ascertaining the quality of a model; (2) improving model...
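
    For reference, Cohen's kappa, the agreement metric the extension derives from a classification table, is straightforward to compute:

    ```python
    import numpy as np

    def cohens_kappa(cm):
        """cm: square confusion matrix, model classes vs. reference."""
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                  # observed agreement
        pe = (cm.sum(0) @ cm.sum(1)) / n**2    # agreement expected by chance
        return (po - pe) / (1 - pe)

    print(cohens_kappa([[45, 5], [10, 40]]))   # 0.70 for this 2x2 table
    ```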

  12. Utilizing a language model to improve online dynamic data collection in P300 spellers.

    PubMed

    Mainsah, Boyla O; Colwell, Kenneth A; Collins, Leslie M; Throckmorton, Chandra S

    2014-07-01

    P300 spellers provide a means of communication for individuals with severe physical limitations, especially those with locked-in syndrome, such as amyotrophic lateral sclerosis. However, P300 speller use is still limited by relatively low communication rates, because multiple data measurements are required to improve the signal-to-noise ratio of event-related potentials for increased accuracy. The amount of data collection therefore has competing effects on accuracy and spelling speed. Adaptively varying the amount of data collection prior to character selection has been shown to improve spelling accuracy and speed. The goal of this study was to optimize a previously developed dynamic stopping algorithm that uses a Bayesian approach to control data collection, by incorporating a priori knowledge via a language model. Participants (n = 17) completed online spelling tasks using the dynamic stopping algorithm, with and without a language model. The addition of the language model improved participant performance from a mean theoretical bit rate of 46.12 bits/min at 88.89% accuracy to 54.42 bits/min at 90.36% accuracy.
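
    A conceptual sketch of Bayesian dynamic stopping with a language-model prior: the posterior over candidate characters starts from the language model's next-character probabilities, so confident selections need fewer flashes; the threshold and likelihood interface are placeholders, not the authors' code:

    ```python
    import numpy as np

    def spell_character(prior, flash_logliks, threshold=0.95):
        """prior: dict char -> language-model probability; flash_logliks
        yields, per flash, a dict char -> log-likelihood of the EEG
        response under 'this character was the target'."""
        chars = list(prior)
        log_post = np.log(np.array([prior[c] for c in chars]))
        n = 0
        for ll in flash_logliks:
            n += 1
            log_post += np.array([ll[c] for c in chars])
            post = np.exp(log_post - log_post.max())
            post /= post.sum()
            if post.max() >= threshold:   # confident: stop collecting early
                break
        return chars[int(np.argmax(log_post))], n
    ```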

  13. Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven; Lowe, Jonathan; Woodburn, James

    2015-01-01

    Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
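
    The distinction behind the Vasicek result is that a first-order Gauss-Markov process reverts to zero while a Vasicek process reverts to an estimated, generally nonzero, long-run mean; both are Ornstein-Uhlenbeck processes, simulated exactly below (parameter values are illustrative):

    ```python
    import numpy as np

    def ou_path(n, dt, tau, sigma, mean=0.0, x0=0.0, seed=0):
        """Discrete-time exact simulation of dx = -(x - mean)/tau dt
        + sigma dW. mean = 0: first-order Gauss-Markov; estimating a
        nonzero mean as well gives the Vasicek model."""
        rng = np.random.default_rng(seed)
        phi = np.exp(-dt / tau)
        q = sigma * np.sqrt(tau / 2.0 * (1.0 - phi**2))
        x = np.empty(n)
        x[0] = x0
        for k in range(1, n):
            x[k] = mean + phi * (x[k - 1] - mean) + q * rng.standard_normal()
        return x  # e.g. a slowly varying SRP coefficient about its mean
    ```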

  14. Correlation of ground tests and analyses of a dynamically scaled Space Station model configuration

    NASA Technical Reports Server (NTRS)

    Javeed, Mehzad; Edighoffer, Harold H.; Mcgowan, Paul E.

    1993-01-01

    Verification of analytical models through correlation with ground test results of a complex space truss structure is demonstrated. A multi-component, dynamically scaled space station model configuration is the focus structure for this work. Previously established test/analysis correlation procedures are used to develop improved component analytical models. Integrated system analytical models, consisting of updated component analytical models, are compared with modal test results to establish the accuracy of system-level dynamic predictions. Design sensitivity model updating methods are shown to be effective for providing improved component analytical models. Also, the effects of component model accuracy and interface modeling fidelity on the accuracy of integrated model predictions are examined.

  15. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  16. Automation of energy demand forecasting

    NASA Astrophysics Data System (ADS)

    Siddique, Sanzad

    Automation of energy demand forecasting saves time and effort by searching automatically for an appropriate model in a candidate model space without manual intervention. This thesis introduces a search-based approach that improves the performance of the model searching process for econometric models. Further improvements in the accuracy of energy demand forecasting are achieved by integrating nonlinear transformations within the models. This thesis introduces machine learning techniques that are capable of modeling such nonlinearity, along with algorithms for learning domain knowledge from time series data using these machine learning methods. The novel search-based approach and the machine learning models are tested with synthetic data as well as with natural gas and electricity demand signals. Experimental results show that the model searching technique is capable of finding an appropriate forecasting model, and further experiments demonstrate the improved forecasting accuracy achieved by the novel machine learning techniques introduced in this thesis. The thesis also presents an analysis of how the machine learning techniques learn domain knowledge, and the learned domain knowledge is used to improve forecast accuracy.

  17. Accuracies of univariate and multivariate genomic prediction models in African cassava.

    PubMed

    Okeke, Uche Godfrey; Akdemir, Deniz; Rabbi, Ismail; Kulakow, Peter; Jannink, Jean-Luc

    2017-12-04

    Genomic selection (GS) promises to accelerate genetic gain in plant breeding programs, especially for crop species such as cassava that have long breeding cycles. Practically, to implement GS in cassava breeding, it is necessary to evaluate different GS models and to develop suitable models for an optimized breeding pipeline. In this paper, we compared (1) prediction accuracies from a single-trait (uT) and a multi-trait (MT) mixed model for single-environment genetic evaluation (Scenario 1), and (2) accuracies from a compound symmetric multi-environment model (uE), parameterized as a univariate multi-kernel model, and a multivariate (ME) multi-environment mixed model that accounts for genotype-by-environment interaction, for multi-environment genetic evaluation (Scenario 2). For these analyses, we used 16 years of public cassava breeding data for six target cassava traits and a fivefold cross-validation scheme with 10 repeat cycles to assess model prediction accuracies. In Scenario 1, the MT models had higher prediction accuracies than the uT models for all traits and locations analyzed, amounting on average to a 40% improvement in prediction accuracy. For Scenario 2, we observed that the ME model had on average (across all locations and traits) a 12% higher prediction accuracy than the uE model. We recommend the use of multivariate mixed models (MT and ME) for cassava genetic evaluation. These models may be useful for other plant species.

  18. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models is compared for seven events since 2006, based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after the fact provide improved methods for real-time forecasting of future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that assimilating sea level data into the models increases accuracy by approximately 15% for the events examined.
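
    One common way to fold sea-level observations into such a forecast, consistent with the assimilation described above, is to fit weights on precomputed unit-source waveforms by least squares; this is a sketch under that assumption, not the specific operational code:

    ```python
    import numpy as np

    def fit_source_weights(unit_waveforms, observed):
        """unit_waveforms: (n_samples, n_sources) gauge predictions for
        unit slip on each candidate source; observed: (n_samples,) the
        measured gauge record over the same window."""
        w, *_ = np.linalg.lstsq(unit_waveforms, observed, rcond=None)
        return w  # apply the same weights to the coastal unit solutions
    ```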

  19. High-precision radiometric tracking for planetary approach and encounter in the inner solar system

    NASA Technical Reports Server (NTRS)

    Christensen, C. S.; Thurman, S. W.; Davidson, J. M.; Finger, M. H.; Folkner, W. M.

    1989-01-01

    The benefits of improved radiometric tracking data have been studied for planetary approach within the inner Solar System using the Mars Rover Sample Return trajectory as a model. It was found that the benefit of improved data to approach and encounter navigation was highly dependent on the a priori uncertainties assumed for several non-estimated parameters, including those for frame-tie, Earth orientation, troposphere delay, and station locations. With these errors at their current levels, navigational performance was found to be insensitive to enhancements in data accuracy. However, when expected improvements in these errors are modeled, performance with current-accuracy data significantly improves, with substantial further improvements possible with enhancements in data accuracy.

  20. Modeling of Geometric Error in Linear Guide Way to Improve the Vertical Three-axis CNC Milling Machine’s Accuracy

    NASA Astrophysics Data System (ADS)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of vertical three-axis CNC milling machines through mathematical modeling of machine tool geometric errors. Machine inaccuracy can be caused by geometric errors, which are an important factor during the manufacturing process and the assembly phase, and which must be controlled to build high-accuracy machines. The accuracy of a three-axis vertical milling machine can be improved by identifying the geometric errors and the error position parameters of the machine tool and arranging them in a mathematical model. The geometric error of the machine tool consists of twenty-one parameters: nine linear error parameters, nine angular error parameters, and three squareness (perpendicularity) error parameters. The mathematical model relates the alignment and angular errors of the components supporting the machine's motion, the linear guide way and the linear motion stage. The purpose of this modeling approach is to identify geometric errors, which can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position, and angle on the linear guide way of a vertical three-axis milling machine.
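
    Each axis's six error motions enter such a model through a small-angle homogeneous transformation matrix; chaining the nominal motion transforms with these error transforms for X, Y, and Z (plus the three squareness errors) gives the actual tool position, and its difference from the nominal chain is the volumetric error:

    ```python
    import numpy as np

    def axis_error_htm(dx, dy, dz, ea, eb, ec):
        """4x4 homogeneous transform for one axis's error motions:
        translational errors dx, dy, dz and small angular errors
        ea, eb, ec about X, Y, Z (first-order approximation)."""
        return np.array([[1.0, -ec,  eb, dx],
                         [ ec, 1.0, -ea, dy],
                         [-eb,  ea, 1.0, dz],
                         [0.0, 0.0, 0.0, 1.0]])
    ```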

  1. Spatial Pattern Classification for More Accurate Forecasting of Variable Energy Resources

    NASA Astrophysics Data System (ADS)

    Novakovskaia, E.; Hayes, C.; Collier, C.

    2014-12-01

    The accuracy of solar and wind forecasts is becoming increasingly essential as grid operators continue to integrate additional renewable generation onto the electric grid. Forecast errors affect rate payers, grid operators, wind and solar plant maintenance crews, and energy traders through increases in prices, project down time, or lost revenue. While extensive and beneficial efforts have been undertaken in recent years to improve physical weather models for a broad spectrum of applications, these improvements have generally not been sufficient to meet the accuracy demands of system planners. For renewables, these models are often used in conjunction with additional statistical models utilizing both meteorological observations and power generation data. Forecast accuracy can depend on the specific weather regime at a given location, so it is important that the parameterizations used in statistical models change as the regime changes. An automated tool, based on an artificial neural network model, has been developed to identify different weather regimes as they impact power output forecast accuracy at wind or solar farms. In this study, improvements in forecast accuracy were analyzed over varying time horizons for wind farms and utility-scale PV plants located in different geographical regions.

  2. Research on Improved Depth Belief Network-Based Prediction of Cardiovascular Diseases

    PubMed Central

    Zhang, Hongpo

    2018-01-01

    Quantitative analysis and prediction can help to reduce the risk of cardiovascular disease. Quantitative prediction based on traditional models has low accuracy, and the variance of predictions from shallow neural networks is large. In this paper, a cardiovascular disease prediction model based on an improved deep belief network (DBN) is proposed. Using the reconstruction error, the network depth is determined automatically, and unsupervised training is combined with supervised fine-tuning. This ensures the accuracy of model prediction while guaranteeing stability. Thirty experiments were performed independently on the Statlog (Heart) and Heart Disease Database data sets in the UCI database. Experimental results showed that the mean prediction accuracy was 91.26% and 89.78%, respectively, and the variance of prediction accuracy was 5.78 and 4.46, respectively. PMID:29854369

  3. Protein homology model refinement by large-scale energy optimization.

    PubMed

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  4. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms, including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere-ocean general circulation models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  5. Alternative Loglinear Smoothing Models and Their Effect on Equating Function Accuracy. Research Report. ETS RR-09-48

    ERIC Educational Resources Information Center

    Moses, Tim; Holland, Paul

    2009-01-01

    This simulation study evaluated the potential of alternative loglinear smoothing strategies for improving equipercentile equating function accuracy. These alternative strategies use cues from the sample data to make automatable and efficient improvements to model fit, either through the use of indicator functions for fitting large residuals or by…

  6. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method

    PubMed Central

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-01-01

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds. PMID:27270206

  7. Dissolved oxygen content prediction in crab culture using a hybrid intelligent method.

    PubMed

    Yu, Huihui; Chen, Yingyi; Hassan, ShahbazGul; Li, Daoliang

    2016-06-08

    A precise predictive model is needed to obtain a clear understanding of the changing dissolved oxygen content in outdoor crab ponds, to assess how to reduce risk and to optimize water quality management. The uncertainties in the data from multiple sensors are a significant factor when building a dissolved oxygen content prediction model. To increase prediction accuracy, a new hybrid dissolved oxygen content forecasting model based on the radial basis function neural networks (RBFNN) data fusion method and a least squares support vector machine (LSSVM) with an optimal improved particle swarm optimization (IPSO) is developed. In the modelling process, the RBFNN data fusion method is used to improve information accuracy and provide more trustworthy training samples for the IPSO-LSSVM prediction model. The LSSVM is a powerful tool for achieving nonlinear dissolved oxygen content forecasting. In addition, an improved particle swarm optimization algorithm is developed to determine the optimal parameters for the LSSVM with high accuracy and generalizability. In this study, the comparison of the prediction results of different traditional models validates the effectiveness and accuracy of the proposed hybrid RBFNN-IPSO-LSSVM model for dissolved oxygen content prediction in outdoor crab ponds.

  8. Improved Short-Term Clock Prediction Method for Real-Time Positioning.

    PubMed

    Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan

    2017-06-06

    The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products, which must be predicted over a short horizon to compensate for communication delays or data gaps. Unlike orbit corrections, clock corrections are difficult to model and predict. The widely used linear model hardly fits long-periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks, and the periodic terms obtained through FFT are adopted in a sliding-window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns, so the new model can replace IGS ultra-rapid products in real-time PPP applications. A positive correlation is also found between the prediction accuracy and the short-term stability of on-board clocks. Compared with the traditional linear model, the accuracy of static PPP using the new model's 2-h predicted clocks is improved by about 50% in the N, E, and U directions, and the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time stream, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in the data broadcast for real-time clock estimation and can meet the requirements of real-time PPP.
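
    A sketch of the recipe as we read it: fit a linear trend, take the FFT of the residuals, keep a few dominant periodic terms, and extrapolate trend plus periodics; the uniform sampling starting at t = 0 and the number of retained terms are our assumptions:

    ```python
    import numpy as np

    def predict_clock(t, x, t_new, n_terms=3):
        """t: uniform epochs starting at 0; x: clock offsets; t_new:
        prediction epochs on the same time scale."""
        a, b = np.polyfit(t, x, 1)                 # linear trend
        resid = x - (a * t + b)
        spec = np.fft.rfft(resid)
        freqs = np.fft.rfftfreq(len(t), t[1] - t[0])
        keep = np.argsort(np.abs(spec[1:]))[::-1][:n_terms] + 1  # skip DC
        periodic = np.zeros_like(t_new, dtype=float)
        for k in keep:   # rebuild the dominant periodic residual terms
            amp, ph = 2 * np.abs(spec[k]) / len(t), np.angle(spec[k])
            periodic += amp * np.cos(2 * np.pi * freqs[k] * t_new + ph)
        return a * t_new + b + periodic
    ```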

  9. Improved spatial accuracy of functional maps in the rat olfactory bulb using supervised machine learning approach.

    PubMed

    Murphy, Matthew C; Poplawsky, Alexander J; Vazquez, Alberto L; Chan, Kevin C; Kim, Seong-Gi; Fukuda, Mitsuhiro

    2016-08-15

    Functional MRI (fMRI) is a popular and important tool for noninvasive mapping of neural activity. As fMRI measures the hemodynamic response, the resulting activation maps do not perfectly reflect the underlying neural activity. The purpose of this work was to design a data-driven model to improve the spatial accuracy of fMRI maps in the rat olfactory bulb. This system is an ideal choice for this investigation since the bulb circuit is well characterized, allowing for an accurate definition of activity patterns in order to train the model. We generated models for both cerebral blood volume weighted (CBVw) and blood oxygen level dependent (BOLD) fMRI data. The results indicate that the spatial accuracy of the activation maps is either significantly improved or at worst not significantly different when using the learned models compared to a conventional general linear model approach, particularly for BOLD images and activity patterns involving deep layers of the bulb. Furthermore, the activation maps computed by CBVw and BOLD data show increased agreement when using the learned models, lending more confidence to their accuracy. The models presented here could have an immediate impact on studies of the olfactory bulb, but perhaps more importantly, demonstrate the potential for similar flexible, data-driven models to improve the quality of activation maps calculated using fMRI data. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions

    PubMed Central

    Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.

    2013-01-01

    Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843

  11. Adaptive time-variant models for fuzzy-time-series forecasting.

    PubMed

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  12. The value of vital sign trends for detecting clinical deterioration on the wards

    PubMed Central

    Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P

    2016-01-01

    Aim: Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores only utilize a patient’s current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Methods: Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). Results: A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC −0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Conclusion: Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. PMID:26898412
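
    The comparison can be sketched by building one feature set from current values only and one augmented with simple trend features (slope over a lookback window, minimum value), then comparing the AUCs of otherwise identical classifiers; the data shapes and classifier choice are assumptions:

    ```python
    import numpy as np

    def trend_features(series):
        """series: (n_patients, n_timepoints) for one vital sign; returns
        current value plus two trend features per patient."""
        t = np.arange(series.shape[1])
        slopes = np.polyfit(t, series.T, 1)[0]        # per-patient slope
        return np.column_stack([series[:, -1],        # current value
                                slopes,               # trend: slope
                                series.min(axis=1)])  # trend: minimum

    # Fit e.g. sklearn.linear_model.LogisticRegression once on the
    # current-value column alone and once on all three columns, then
    # compare sklearn.metrics.roc_auc_score on held-out patients.
    ```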

  13. The value of vital sign trends for detecting clinical deterioration on the wards.

    PubMed

    Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P

    2016-05-01

    Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores only utilize a patient's current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC -0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Tightly Coupled Integration of GPS Ambiguity Fixed Precise Point Positioning and MEMS-INS through a Troposphere-Constrained Adaptive Kalman Filter

    PubMed Central

    Han, Houzeng; Xu, Tianhe; Wang, Jian

    2016-01-01

    Precise Point Positioning (PPP) makes use of the undifferenced pseudorange and carrier phase measurements with ionospheric-free (IF) combinations to achieve centimeter-level positioning accuracy. Conventionally, the IF ambiguities are estimated as float values. To improve the PPP positioning accuracy and shorten the convergence time, the integer phase clock model with between-satellites single-difference (BSSD) operation is used to recover the integer property. However, the continuity and availability of stand-alone PPP is largely restricted by the observation environment. The positioning performance will be significantly degraded when GPS operates under challenging environments, if less than five satellites are present. A commonly used approach is integrating a low cost inertial sensor to improve the positioning performance and robustness. In this study, a tightly coupled (TC) algorithm is implemented by integrating PPP with inertial navigation system (INS) using an Extended Kalman filter (EKF). The navigation states, inertial sensor errors and GPS error states are estimated together. The troposphere constrained approach, which utilizes external tropospheric delay as virtual observation, is applied to further improve the ambiguity-fixed height positioning accuracy, and an improved adaptive filtering strategy is implemented to improve the covariance modelling considering the realistic noise effect. A field vehicular test with a geodetic GPS receiver and a low cost inertial sensor was conducted to validate the improvement on positioning performance with the proposed approach. The results show that the positioning accuracy has been improved with inertial aiding. Centimeter-level positioning accuracy is achievable during the test, and the PPP/INS TC integration achieves a fast re-convergence after signal outages. For troposphere constrained solutions, a significant improvement for the height component has been obtained. The overall positioning accuracies of the height component are improved by 30.36%, 16.95% and 24.07% for three different convergence times, i.e., 60, 50 and 30 min, respectively. It shows that the ambiguity-fixed horizontal positioning accuracy has been significantly improved. When compared with the conventional PPP solution, it can be seen that position accuracies are improved by 19.51%, 61.11% and 23.53% for the north, east and height components, respectively, after one hour convergence through the troposphere constraint fixed PPP/INS with adaptive covariance model. PMID:27399721

  15. Improving Fermi Orbit Determination and Prediction in an Uncertain Atmospheric Drag Environment

    NASA Technical Reports Server (NTRS)

    Vavrina, Matthew A.; Newman, Clark P.; Slojkowski, Steven E.; Carpenter, J. Russell

    2014-01-01

    Orbit determination and prediction of the Fermi Gamma-ray Space Telescope trajectory is strongly impacted by the unpredictability and variability of atmospheric density and the spacecraft's ballistic coefficient. Operationally, Global Positioning System point solutions are processed with an extended Kalman filter for orbit determination, and predictions are generated for conjunction assessment with secondary objects. When these predictions are compared to Joint Space Operations Center radar-based solutions, the close approach distance between the two predictions can greatly differ ahead of the conjunction. This work explores strategies for improving prediction accuracy and helps to explain the prediction disparities. Namely, a tuning analysis is performed to determine atmospheric drag modeling and filter parameters that can improve orbit determination as well as prediction accuracy. A 45% improvement in three-day prediction accuracy is realized by tuning the ballistic coefficient and atmospheric density stochastic models, measurement frequency, and other modeling and filter parameters.

  16. Evaluating the potential for site-specific modification of LiDAR DEM derivatives to improve environmental planning-scale wetland identification using Random Forest classification

    NASA Astrophysics Data System (ADS)

    O'Neil, Gina L.; Goodall, Jonathan L.; Watson, Layne T.

    2018-04-01

    Wetlands are important ecosystems that provide many ecological benefits, and their quality and presence are protected by federal regulations. These regulations require wetland delineations, which can be costly and time-consuming to perform. Computer models can assist in this process but lack the accuracy necessary for environmental planning-scale wetland identification. This study evaluates the potential for improving wetland identification models by modifying digital elevation model (DEM) derivatives, derived from high-resolution and increasingly available light detection and ranging (LiDAR) data, at a scale suitable for small-scale wetland delineations. A novel flow convergence modelling approach is presented in which the Topographic Wetness Index (TWI), curvature, and Cartographic Depth-to-Water index (DTW) are modified to better distinguish wetland from upland areas, combined with ancillary soil data, and used in a Random Forest classification. This approach is applied to four study sites in Virginia, implemented as an ArcGIS model. The model resulted in significantly improved average wetland accuracy compared to the commonly used National Wetland Inventory (84.9% vs. 32.1%), at the expense of moderately lower average non-wetland accuracy (85.6% vs. 98.0%) and average overall accuracy (85.6% vs. 92.0%). From this, we concluded that modifying TWI, curvature, and DTW provides more robust wetland and non-wetland signatures to the models, improving accuracy rates compared to classifications using the original indices. The resulting ArcGIS model is a general tool that can modify these local LiDAR DEM derivatives based on site characteristics to identify wetlands at high resolution.
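
    The classification step can be sketched with scikit-learn, assuming the modified TWI, curvature, and DTW rasters (plus a soil attribute) have already been computed as aligned arrays:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def classify_wetlands(twi, curvature, dtw, soil, labels, train_mask):
        """Inputs are aligned 2-D rasters; labels/train_mask mark cells
        with known wetland (1) / upland (0) class for training."""
        bands = (twi, curvature, dtw, soil)
        X_train = np.column_stack([b[train_mask] for b in bands])
        rf = RandomForestClassifier(n_estimators=500, oob_score=True)
        rf.fit(X_train, labels[train_mask])
        X_all = np.column_stack([b.ravel() for b in bands])
        return rf.predict(X_all).reshape(twi.shape), rf.oob_score_
    ```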

  17. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning.

    PubMed

    Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li

    2016-06-07

    Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer.

  18. Accounting for speed-accuracy tradeoff in perceptual learning

    PubMed Central

    Liu, Charles C.; Watanabe, Takeo

    2011-01-01

    In the perceptual learning (PL) literature, researchers typically focus on improvements in accuracy, such as d’. In contrast, researchers who investigate the practice of cognitive skills focus on improvements in response times (RT). Here, we argue for the importance of accounting for both accuracy and RT in PL experiments, due to the phenomenon of speed-accuracy tradeoff (SAT): at a given level of discriminability, faster responses tend to produce more errors. A formal model of the decision process, such as the diffusion model, can explain the SAT. In this model, a parameter known as the drift rate represents the perceptual strength of the stimulus, where higher drift rates lead to more accurate and faster responses. We applied the diffusion model to analyze responses from a yes-no coherent motion detection task. The results indicate that observers do not use a fixed threshold for evidence accumulation, so changes in the observed accuracy may not provide the most appropriate estimate of learning. Instead, our results suggest that SAT can be accounted for by a modeling approach, and that drift rates offer a promising index of PL. PMID:21958757
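
    A toy diffusion-model simulation makes the tradeoff concrete: at a fixed drift rate, lowering the decision threshold speeds responses but raises error rates, so observed accuracy alone can mislead; all parameter values here are illustrative:

    ```python
    import numpy as np

    def simulate_ddm(v, a, n_trials=2000, dt=0.001, sigma=1.0, seed=0):
        """Two-boundary drift diffusion: drift v, boundaries at +/- a."""
        rng = np.random.default_rng(seed)
        rts, hits = [], []
        for _ in range(n_trials):
            x, t = 0.0, 0.0
            while abs(x) < a:
                x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
            rts.append(t)
            hits.append(x >= a)  # upper boundary = correct response
        return np.mean(rts), np.mean(hits)

    for a in (0.6, 1.0, 1.4):               # same drift, different caution
        print(a, simulate_ddm(v=1.5, a=a))  # higher a: slower, more accurate
    ```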

  19. Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning

    PubMed Central

    Wang, Jinhua; Yang, Xi; Cai, Hongmin; Tan, Wanchang; Jin, Cangzheng; Li, Li

    2016-01-01

    Microcalcification is an effective indicator of early breast cancer. To improve the diagnostic accuracy of microcalcifications, this study evaluates the performance of deep learning-based models on large datasets for its discrimination. A semi-automated segmentation method was used to characterize all microcalcifications. A discrimination classifier model was constructed to assess the accuracies of microcalcifications and breast masses, either in isolation or combination, for classifying breast lesions. Performances were compared to benchmark models. Our deep learning model achieved a discriminative accuracy of 87.3% if microcalcifications were characterized alone, compared to 85.8% with a support vector machine. The accuracies were 61.3% for both methods with masses alone and improved to 89.7% and 85.8% after the combined analysis with microcalcifications. Image segmentation with our deep learning model yielded 15, 26 and 41 features for the three scenarios, respectively. Overall, deep learning based on large datasets was superior to standard methods for the discrimination of microcalcifications. Accuracy was increased by adopting a combinatorial approach to detect microcalcifications and masses simultaneously. This may have clinical value for early detection and treatment of breast cancer. PMID:27273294

  20. Establishment of a Site-Specific Tropospheric Model Based on Ground Meteorological Parameters over the China Region.

    PubMed

    Zhou, Chongchong; Peng, Bibo; Li, Wei; Zhong, Shiming; Ou, Jikun; Chen, Runjing; Zhao, Xinglong

    2017-07-27

    China is a country of vast territory with complex geographical environments and climate conditions. With the rapid progress of the Chinese BeiDou satellite navigation system (BDS), more accurate tropospheric models must be applied to improve the accuracy of navigation and positioning. Based on the formulas of the Saastamoinen and Callahan models, this study develops two single-site tropospheric models (named the SAAS_S and CH_S models) for the Chinese region using radiosonde data from 2005 to 2012. We assess the two single-site tropospheric models with radiosonde data for 2013 and zenith tropospheric delay (ZTD) data from four International GNSS Service (IGS) stations and compare them to the results of the Saastamoinen and Callahan models. The experimental results show that the mean accuracy of the SAAS_S model (bias: 0.19 cm; RMS: 3.19 cm) at all radiosonde stations is superior to those of the Saastamoinen (bias: 0.62 cm; RMS: 3.62 cm) and CH_S (bias: -0.05 cm; RMS: 3.38 cm) models. In most Chinese regions, the RMS values of the SAAS_S and CH_S models are about 0.51~2.12 cm smaller than those of their corresponding source models. The SAAS_S model exhibits a clear improvement in accuracy over the Saastamoinen model in low-latitude regions. When the SAAS_S model is used in place of the Saastamoinen model in GNSS positioning, the mean accuracy of the vertical direction in the China region can be improved by 1.12~1.55 cm, and the accuracy of the vertical direction in low-latitude areas can be improved by 1.33~7.63 cm. The residuals of the SAAS_S model are closer to a normal distribution compared to those of the Saastamoinen model. Single-site tropospheric models based on a short period of the most recent data (for example, 2 years) can also achieve satisfactory accuracy. The average performance of the SAAS_S model (bias: 0.83 cm; RMS: 3.24 cm) at four IGS stations is superior to that of the Saastamoinen (bias: -0.86 cm; RMS: 3.59 cm) and CH_S (bias: 0.45 cm; RMS: 3.38 cm) models.
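
    For reference, a minimal sketch of the textbook Saastamoinen zenith hydrostatic and wet delay formulas that serve as one source model here (standard constants; the site-specific SAAS_S refinements fitted from radiosonde data are not reproduced):

        import math

        def saastamoinen_zhd(p_hpa, lat_rad, h_km):
            """Zenith hydrostatic delay (m) from surface pressure, latitude, height."""
            return 0.0022768 * p_hpa / (1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.00028 * h_km)

        def saastamoinen_zwd(e_hpa, t_kelvin):
            """Zenith wet delay (m) from water-vapour pressure and temperature."""
            return 0.002277 * (1255.0 / t_kelvin + 0.05) * e_hpa

        # Example: mid-latitude site, 1000 hPa, 290 K, water-vapour pressure 9.6 hPa
        print(saastamoinen_zhd(1000.0, math.radians(40.0), 0.1))  # ~2.28 m
        print(saastamoinen_zwd(9.6, 290.0))                       # ~0.10 m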

  1. Establishment of a Site-Specific Tropospheric Model Based on Ground Meteorological Parameters over the China Region

    PubMed Central

    Zhou, Chongchong; Peng, Bibo; Li, Wei; Zhong, Shiming; Ou, Jikun; Chen, Runjing; Zhao, Xinglong

    2017-01-01

    China is a country of vast territory with complex geographical environments and climate conditions. With the rapid progress of the Chinese BeiDou satellite navigation system (BDS), more accurate tropospheric models must be applied to improve the accuracy of navigation and positioning. Based on the formulas of the Saastamoinen and Callahan models, this study develops two single-site tropospheric models (named the SAAS_S and CH_S models) for the Chinese region using radiosonde data from 2005 to 2012. We assess the two single-site tropospheric models with radiosonde data for 2013 and zenith tropospheric delay (ZTD) data from four International GNSS Service (IGS) stations and compare them to the results of the Saastamoinen and Callahan models. The experimental results show that the mean accuracy of the SAAS_S model (bias: 0.19 cm; RMS: 3.19 cm) at all radiosonde stations is superior to those of the Saastamoinen (bias: 0.62 cm; RMS: 3.62 cm) and CH_S (bias: −0.05 cm; RMS: 3.38 cm) models. In most Chinese regions, the RMS values of the SAAS_S and CH_S models are about 0.51~2.12 cm smaller than those of their corresponding source models. The SAAS_S model exhibits a clear improvement in accuracy over the Saastamoinen model in low-latitude regions. When the SAAS_S model is used in place of the Saastamoinen model in GNSS positioning, the mean accuracy of the vertical direction in the China region can be improved by 1.12~1.55 cm, and the accuracy of the vertical direction in low-latitude areas can be improved by 1.33~7.63 cm. The residuals of the SAAS_S model are closer to a normal distribution compared to those of the Saastamoinen model. Single-site tropospheric models based on a short period of the most recent data (for example, 2 years) can also achieve satisfactory accuracy. The average performance of the SAAS_S model (bias: 0.83 cm; RMS: 3.24 cm) at four IGS stations is superior to that of the Saastamoinen (bias: −0.86 cm; RMS: 3.59 cm) and CH_S (bias: 0.45 cm; RMS: 3.38 cm) models. PMID:28749429

  2. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

    Traditional models of evaporation-process parameters suffer from large prediction errors because the parameters have continuous and cumulative characteristics. On this basis, an adaptive particle swarm-optimized process neural network forecasting method is proposed, with an autoregressive moving average (ARMA) error-correction procedure established to compensate the neural network's predictions and improve prediction accuracy. The model is validated against production data from the evaporation process of an alumina plant; compared with the traditional model, the prediction accuracy of the new model is greatly improved, and it can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
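
    The ARMA error-compensation idea can be sketched as follows: fit an ARMA model to the residuals of the base predictor and add its forecast of the future error to the base prediction. This illustrative example uses synthetic data and a biased stand-in base predictor, not the paper's process neural network:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        t = np.arange(300)
        y = np.sin(t / 20.0) + 0.1 * rng.standard_normal(300)   # measured parameter series
        base_pred = np.sin(t / 20.0) + 0.05                     # stand-in for the NN output (biased)

        resid = y - base_pred                          # systematic model error
        arma = ARIMA(resid[:250], order=(1, 0, 1)).fit()
        resid_fc = arma.forecast(steps=50)             # predicted future error

        compensated = base_pred[250:] + resid_fc       # ARMA-compensated prediction
        print(np.sqrt(np.mean((y[250:] - base_pred[250:]) ** 2)),   # RMSE before
              np.sqrt(np.mean((y[250:] - compensated) ** 2)))       # RMSE after compensation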

  3. A drift line bias estimator: ARMA-based filter or calibration method, and its application in BDS/GPS-based attitude determination

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Yanqing, Hou; Jie, Wu

    2016-12-01

    Multi-antenna synchronized receivers (using a common clock) are widely applied in GNSS-based attitude determination (AD), terrain deformation monitoring, and many other applications, since the high-accuracy single-differenced carrier phase can be used to improve positioning or AD accuracy. Thus, the line bias (LB) parameter (an isolated fractional bias) should be calibrated in the single-differenced phase equations. In past decades, researchers have estimated the LB as a constant parameter in advance and compensated for it in real time. However, the constant-LB assumption is inappropriate in practical applications because of physical length and permittivity changes of the cables, caused by environmental temperature variation and the instability of the receiver's inner-circuit transmitting delay. Considering the LB drift (or colored LB) in practical circumstances, this paper introduces a real-time estimator using either an autoregressive moving average (ARMA)-based prediction/whitening filter model or a moving average (MA)-based constant calibration model. In the ARMA-based filter model, four cases, namely AR(1), ARMA(1, 1), AR(2) and ARMA(2, 1), are applied for the LB prediction. The real-time relative positioning model using the ARMA-predicted LB is derived, and it is theoretically proved that its positioning accuracy is better than that of the traditional double-differenced carrier phase (DDCP) model. The drifting LB is defined by an integral function of the phase temperature changing rate, which is a random walk process if the phase temperature changing rate is white noise; this is validated by analysis of the AR model coefficient. The autocovariance function shows that the LB indeed varies in time and that estimating it as a constant is not safe, which is also demonstrated by analysis of the LB variation of each visible satellite during zero- and short-baseline BDS/GPS experiments. Compared to the DDCP approach, in the zero-baseline experiment, the LB constant calibration (LBCC) and MA approaches improved the positioning accuracy of the vertical component while slightly degrading the accuracy of the horizontal components. The ARMA(1, 0) model, however, improved the positioning accuracy of all three components, with 40% and 50% improvement of the vertical component for BDS and GPS, respectively. In the short-baseline experiment, compared to the DDCP approach, the LBCC approach yielded poor positioning solutions and degraded the AD accuracy; both the MA and ARMA-based filter approaches improved the AD accuracy. Moreover, the ARMA(1, 0) and ARMA(1, 1) models performed relatively better, improving the elevation-angle accuracy by 55% and 48% in the ARMA(1, 1) and MA models for GPS, respectively. Furthermore, the drifting LB variation is found to be continuous and slowly cumulative; the variation magnitudes in units of length are almost identical on carrier phases of different frequencies, so the LB variation does not show an obvious dependence on frequency. Consequently, the wide-lane LB in units of cycles is very stable, while the narrow-lane LB varies largely in time. This reasoning probably also explains the phenomenon that the wide-lane LB originating in the satellites is stable, while the narrow-lane LB varies. The results of the ARMA-based filters are better than those of the MA model, which probably implies that modeling the drifting LB can further improve precise point positioning accuracy.
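
    A minimal sketch of the prediction idea, assuming an AR(1) model around the series mean fitted by least squares (the paper's full ARMA/whitening filter and its integration into the positioning model are not reproduced): the predictor follows the drifting LB rather than averaging over it, unlike a constant calibration.

        import numpy as np

        def ar1_predict(lb_series):
            """One-step-ahead AR(1) prediction of a drifting line bias (illustrative)."""
            x = np.asarray(lb_series, dtype=float)
            x0, x1 = x[:-1] - x.mean(), x[1:] - x.mean()
            phi = (x0 @ x1) / (x0 @ x0)            # least-squares AR(1) coefficient
            return x.mean() + phi * (x[-1] - x.mean())

        rng = np.random.default_rng(2)
        lb = np.cumsum(0.001 * rng.standard_normal(500)) + 0.3   # slowly drifting LB (cycles)
        const_cal = lb[:400].mean()                # constant-calibration estimate
        next_lb = lb[400]
        # Compare absolute one-step prediction errors of the two approaches.
        print(abs(next_lb - const_cal), abs(next_lb - ar1_predict(lb[:400])))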

  4. Further development of the attitude difference method for estimating deflections of the vertical in real time

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Zhou, Zebo; Li, Yong; Rizos, Chris; Wang, Xingshu

    2016-07-01

    An improvement of the attitude difference method (ADM) to estimate deflections of the vertical (DOV) in real time is described in this paper. The ADM without offline processing estimates the DOV with a limited accuracy due to the response delay. The proposed model selection-based self-adaptive delay feedback (SDF) method takes the results of the ADM as the a priori information, then uses fitting and extrapolation to estimate the DOV at the current epoch. The active region selection factor F th is used to take full advantage of the Earth model EGM2008 and the SDF with different DOV exhibitions. The factors which affect the DOV estimation accuracy are analyzed and modeled. An external observation which is specified by the velocity difference between the global navigation satellite system (GNSS) and the inertial navigation system (INS) with DOV compensated is used to select the optimal model. The response delay induced by the weak observability of an integrated INS/GNSS to the violent DOV disturbances in the ADM is compensated. The DOV estimation accuracy of the SDF method is improved by approximately 40% and 50% respectively compared to that of the EGM2008 and the ADM. With an increase in GNSS accuracy, the DOV estimation accuracy could improve further.

  5. Development of a two-fluid drag law for clustered particles using direct numerical simulation and validation through experiments

    NASA Astrophysics Data System (ADS)

    Abbasi Baharanchi, Ahmadreza

    This dissertation focused on the development and utilization of numerical and experimental approaches to improve the CFD modeling of the fluidization flow of cohesive micron-size particles. The specific objectives of this research were: (1) developing a cluster prediction mechanism applicable to Two-Fluid Modeling (TFM) of gas-solid systems; (2) developing more accurate drag models for TFM of gas-solid fluidization flow in the presence of cohesive interparticle forces; (3) using the developed models to explore the improvement in TFM accuracy when simulating fluidization flow of cohesive powders; (4) understanding the causes and influential factors that led to the improvements, and quantifying those improvements; and (5) gathering data from a fast fluidization flow and using these data for benchmark validations. Simulation results with the two developed cluster-aware drag models showed that cluster prediction could effectively influence the results in both the first and second cluster-aware models. The improvement in the accuracy of TFM modeling using three versions of the first hybrid model was significant, and the best improvements were obtained by using the smallest values of the switch parameter, which captured the smallest chance of cluster prediction. In the case of the second hybrid model, the dependence of the critical model parameter on the Reynolds number alone meant that the improvement in accuracy was significant only in the dense section of the fluidized bed. This finding suggests that a more sophisticated particle-resolved DNS model, which can span a wide range of solid volume fractions, could be used in the formulation of the cluster-aware drag model. The results of experiments using high-speed imaging indicated the presence of particle clusters in the fluidization flow of FCC inside the riser of the FIU-CFB facility. In addition, pressure data were successfully captured along the fluidization column of the facility and used as benchmark validation data for the second hybrid model developed in the present dissertation. It was shown that the second hybrid model could predict the pressure data in the dense section of the fluidization column with better accuracy.

  6. Word associations contribute to machine learning in automatic scoring of degree of emotional tones in dream reports.

    PubMed

    Amini, Reza; Sabourin, Catherine; De Koninck, Joseph

    2011-12-01

    Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word search and emotional affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using words' frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictability beyond those of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% (kappa=.485) on the negative emotional tone scale, and for the first time reached an accuracy of 77% (kappa=.520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.
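
    A toy sketch of a co-occurrence-based association score of the kind described (hypothetical mini-dictionary; the study's actual dictionary, encyclopedia, and learning model are not reproduced):

        # Hypothetical mini-"dictionary": each emotion word mapped to its defining words
        definitions = {
            "fear":  "unpleasant emotion caused by threat danger pain",
            "anger": "strong feeling of annoyance displeasure hostility",
        }

        def association_score(report, emotion):
            """Fraction of an emotion word's defining words co-occurring in the report."""
            report_words = set(report.lower().split())
            defining = definitions[emotion].split()
            return sum(w in report_words for w in defining) / len(defining)

        dream = "I felt a sudden danger and pain while running from the threat"
        print(association_score(dream, "fear"), association_score(dream, "anger"))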

  7. Study on Parameter Identification of Assembly Robot based on Screw Theory

    NASA Astrophysics Data System (ADS)

    Yun, Shi; Xiaodong, Zhang

    2017-11-01

    The kinematic model of an assembly robot is one of the most important factors affecting repetitive precision. In order to improve the model's positioning accuracy, this paper first establishes the product-of-exponentials model of the ER16-1600 assembly robot on the basis of screw theory, and then identifies the parameters of the ER16-1600 robot model using an iterative least-squares method. By comparing experiments before and after calibration, it is shown that the method yields an obvious improvement in the positioning accuracy of the assembly robot.

  8. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
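
    The precision-accuracy interplay described here can be sketched with a beta-binomial mortality model (hypothetical counts and prior values, not the study's data): the informative prior shrinks the posterior standard deviation (greater precision) while shifting the mean only modestly.

        import numpy as np

        def beta_posterior(deaths, n, a, b):
            """Posterior mean and sd of a mortality probability under a Beta(a, b) prior."""
            a2, b2 = a + deaths, b + n - deaths
            mean = a2 / (a2 + b2)
            sd = np.sqrt(a2 * b2 / ((a2 + b2) ** 2 * (a2 + b2 + 1)))
            return mean, sd

        deaths, n = 3, 40                             # small sample for one species
        print(beta_posterior(deaths, n, 1.0, 1.0))    # vague flat Beta(1, 1) prior
        print(beta_posterior(deaths, n, 2.0, 48.0))   # informative prior centred near 4%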

  9. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.

  10. Explanation Generation, Not Explanation Expectancy, Improves Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Fukaya, Tatsushi

    2013-01-01

    The ability to monitor the status of one's own understanding is important to accomplish academic tasks proficiently. Previous studies have shown that comprehension monitoring (metacomprehension accuracy) is generally poor, but improves when readers engage in activities that access valid cues reflecting their situation model (activities such as…

  11. Accounting for speed-accuracy tradeoff in perceptual learning.

    PubMed

    Liu, Charles C; Watanabe, Takeo

    2012-05-15

    In the perceptual learning (PL) literature, researchers typically focus on improvements in accuracy, such as d'. In contrast, researchers who investigate the practice of cognitive skills focus on improvements in response times (RT). Here, we argue for the importance of accounting for both accuracy and RT in PL experiments, due to the phenomenon of speed-accuracy tradeoff (SAT): at a given level of discriminability, faster responses tend to produce more errors. A formal model of the decision process, such as the diffusion model, can explain the SAT. In this model, a parameter known as the drift rate represents the perceptual strength of the stimulus, where higher drift rates lead to more accurate and faster responses. We applied the diffusion model to analyze responses from a yes-no coherent motion detection task. The results indicate that observers do not use a fixed threshold for evidence accumulation, so changes in the observed accuracy may not provide the most appropriate estimate of learning. Instead, our results suggest that SAT can be accounted for by a modeling approach, and that drift rates offer a promising index of PL. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Improvement of PM concentration predictability using WRF-CMAQ-DLM coupled system and its applications

    NASA Astrophysics Data System (ADS)

    Lee, Soon Hwan; Kim, Ji Sun; Lee, Kang Yeol; Shon, Keon Tae

    2017-04-01

    Air quality in Korea is worsening due to increasing particulate matter (PM). At present, the PM forecast is issued based on the PM concentration predicted by an air quality numerical prediction model. However, forecast accuracy is not as high as expected due to various uncertainties in PM physical and chemical characteristics. The purpose of this study was to develop a numerical-statistical ensemble model to improve the accuracy of PM10 concentration prediction. The numerical models used in this study are the three-dimensional atmospheric Weather Research and Forecasting (WRF) model and the Community Multiscale Air Quality (CMAQ) model. The target areas for the PM forecast are the Seoul, Busan, Daegu, and Daejeon metropolitan areas in Korea. The data used in the model development are observed PM concentrations and CMAQ predictions over a three-month period (March 1 - May 31, 2014). A dynamic-statistical technique for reducing the systematic error of the CMAQ predictions was applied through a dynamic linear model (DLM) based on the Bayesian Kalman filter. As a result of applying the corrections generated from the dynamic linear model to the forecast PM concentrations, accuracy was improved, especially at high PM concentrations where the damage is relatively large.
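
    A minimal sketch of the bias-correction idea, assuming a scalar local-level DLM (a Kalman filter) that tracks the slowly varying CMAQ forecast error (illustrative numbers, not the study's configuration):

        import numpy as np

        def dlm_bias_filter(errors, q=0.05, r=1.0):
            """Local-level DLM (Kalman filter) tracking the slowly varying forecast bias."""
            level, p = 0.0, 1.0
            for e in errors:                  # e = observed PM10 - CMAQ forecast
                p += q                        # time update (random-walk level)
                k = p / (p + r)               # Kalman gain
                level += k * (e - level)      # measurement update
                p *= (1.0 - k)
            return level                      # estimated current bias

        obs = np.array([55.0, 60.0, 58.0, 63.0, 70.0])    # observed PM10 (ug/m^3)
        cmaq = np.array([48.0, 50.0, 49.0, 52.0, 57.0])   # raw CMAQ forecasts
        bias = dlm_bias_filter(obs - cmaq)
        next_cmaq = 54.0
        print(next_cmaq + bias)               # bias-corrected PM10 forecast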

  13. Automatic anatomy recognition via multiobject oriented active shape models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A

    2010-12-01

    This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm, wherein the first level operates at the pixel level to find optimal oriented boundary segments between successive landmarks, the second level operates at the landmark level to find optimal locations for the landmarks, and the third level operates at the object level to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number, distribution, and size of objects in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a recognition accuracy of > or = 90% yielded a TPVF > or = 95% and FPVF < or = 0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.

  14. Method to improve accuracy of positioning object by eLoran system with applying standard Kalman filter

    NASA Astrophysics Data System (ADS)

    Grunin, A. P.; Kalinov, G. A.; Bolokhovtsev, A. V.; Sai, S. V.

    2018-05-01

    This article reports on a novel method to improve the accuracy of positioning an object with a low-frequency hyperbolic radio navigation system such as eLoran. The method is based on the application of the standard Kalman filter. Investigations of the effect of the filter parameters and the type of movement on the accuracy of the vehicle position estimate are carried out. The accuracy of the method was evaluated by separating data from the semi-empirical movement model into different types of movements.
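
    A minimal sketch of a standard Kalman filter of the kind the method applies, here a 1D constant-velocity model smoothing noisy position fixes (illustrative noise parameters, not the eLoran system's actual measurement model):

        import numpy as np

        def cv_kalman(positions, dt=1.0, q=0.5, r=25.0):
            """Constant-velocity Kalman filter smoothing noisy 1D position fixes."""
            F = np.array([[1.0, dt], [0.0, 1.0]])                     # state transition
            H = np.array([[1.0, 0.0]])                                # we observe position only
            Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
            R = np.array([[r]])
            x = np.array([[positions[0]], [0.0]])                     # [position, velocity]
            P = np.eye(2) * 100.0
            out = []
            for z in positions:
                x = F @ x                                             # predict
                P = F @ P @ F.T + Q
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # gain
                x = x + K @ (np.array([[z]]) - H @ x)                 # update
                P = (np.eye(2) - K @ H) @ P
                out.append(float(x[0, 0]))
            return out

        raw = [0.0, 11.0, 19.0, 32.0, 38.0, 52.0]   # noisy along-track fixes (m)
        print(cv_kalman(raw))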

  15. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well-recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
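
    The situation-dependent blending idea can be sketched as follows, with plain linear regression standing in for the machine-learning blender and synthetic data in which one model's error grows with cloudiness; the blender that sees the situation feature beats a simple model average:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(3)
        n = 1000
        model_a = rng.uniform(0, 800, n)             # model A irradiance forecast (W/m^2)
        model_b = model_a + rng.normal(0, 60, n)     # model B forecast
        cloud = rng.uniform(0, 1, n)                 # situation feature: cloud fraction
        humidity = rng.uniform(0.2, 1.0, n)          # another situation feature
        # Truth: model A's error grows with cloudiness (situation-dependent error)
        truth = model_a - 120 * cloud + rng.normal(0, 20, n)

        X = np.column_stack([model_a, model_b, cloud, humidity])
        blend = LinearRegression().fit(X[:800], truth[:800])
        simple_avg = 0.5 * (model_a[800:] + model_b[800:])
        print(np.sqrt(np.mean((truth[800:] - simple_avg) ** 2)),            # naive ensemble RMSE
              np.sqrt(np.mean((truth[800:] - blend.predict(X[800:])) ** 2)))  # situation-aware RMSE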

  16. Genomic Prediction Accounting for Residual Heteroskedasticity

    PubMed Central

    Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.

    2015-01-01

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950

  17. Prediction of Industrial Electric Energy Consumption in Anhui Province Based on GA-BP Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, Jiajing; Yin, Guodong; Ni, Youcong; Chen, Jinlan

    2018-01-01

    In order to improve the prediction accuracy of industrial electric energy consumption, a prediction model based on a genetic algorithm and a neural network is proposed. The model uses a genetic algorithm to optimize the weights and thresholds of a BP neural network, and it is applied to predict industrial electric energy consumption in Anhui Province. Comparison experiments between the GA-BP prediction model and a plain BP neural network model show that the GA-BP model is more accurate while using a smaller number of neurons in the hidden layer.
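
    A toy sketch of the GA component, evolving the weights and thresholds (biases) of a small one-hidden-layer network on synthetic data; in the paper the GA result would seed subsequent BP (gradient) training, which is omitted here:

        import numpy as np

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, (200, 3))                      # toy inputs
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] - 0.2 * X[:, 2]   # toy target

        H = 5                                                 # hidden neurons
        n_w = 3 * H + H + H + 1                               # W1, b1, W2, b2 flattened

        def mse(w):
            W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:4 * H]
            W2 = w[4 * H:5 * H];          b2 = w[5 * H]
            h = np.tanh(X @ W1 + b1)                          # hidden layer
            return np.mean((h @ W2 + b2 - y) ** 2)

        pop = rng.normal(0, 1, (60, n_w))                     # initial population
        for gen in range(200):
            fitness = np.array([mse(w) for w in pop])
            elite = pop[np.argsort(fitness)[:20]]             # selection
            parents = elite[rng.integers(0, 20, (40, 2))]     # parent pairs
            mask = rng.random((40, n_w)) < 0.5                # uniform crossover
            children = np.where(mask, parents[:, 0], parents[:, 1])
            children = children + rng.normal(0, 0.1, children.shape)  # mutation
            pop = np.vstack([elite, children])

        best = pop[np.argmin([mse(w) for w in pop])]
        print(mse(best))   # GA-found weights; BP training would refine these further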

  18. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-08-31

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.

  19. Assessing Predictive Properties of Genome-Wide Selection in Soybeans

    PubMed Central

    Xavier, Alencar; Muir, William M.; Rainey, Katy Martin

    2016-01-01

    Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios to implement genomic selection for yield components in soybean (Glycine max L. Merr.). We used a nested association panel with cross validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set. PMID:27317786

  20. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

    The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.

  1. A Coupled Surface Nudging Scheme for use in Retrospective ...

    EPA Pesticide Factsheets

    A surface analysis nudging scheme coupling atmospheric and land surface thermodynamic parameters has been implemented into WRF v3.8 (latest version) for use with retrospective weather and climate simulations, as well as for applications in air quality, hydrology, and ecosystem modeling. This scheme is known as the flux-adjusting surface data assimilation system (FASDAS), developed by Alapaty et al. (2008). The scheme provides continuous adjustments for soil moisture and temperature (via indirect nudging) and for surface air temperature and water vapor mixing ratio (via direct nudging). The simultaneous application of indirect and direct nudging maintains greater consistency between the soil temperature–moisture and the atmospheric surface layer mass-field variables. The new method, FASDAS, consistently improved the accuracy of the model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high-resolution regional climate predictions. This new capability has been released in WRF Version 3.8 as option grid_sfdda = 2. It increases the accuracy of atmospheric inputs for use in air quality, hydrology, and ecosystem modeling research, improving the accuracy of the respective end-point research outcomes. IMPACT: A new method, FASDAS, was implemented into the WRF model to consistently improve the accuracy of model simulations at weather prediction scales for different horizontal grid resolutions, as well as for high-resolution regional climate predictions.

  2. Scientific analysis of satellite ranging data

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    1994-01-01

    A network of satellite laser ranging (SLR) tracking systems with continuously improving accuracies is challenging the modelling capabilities of analysts worldwide. Various data analysis techniques have yielded many advances in the development of orbit, instrument and Earth models. The direct measurement of the distance to the satellite provided by the laser ranges has given us a simple metric which links the results obtained by diverse approaches. Different groups have used SLR data, often in combination with observations from other space geodetic techniques, to improve models of the static geopotential, the solid Earth, ocean tides, and atmospheric drag models for low Earth satellites. Radiation pressure models and other non-conservative forces for satellite orbits above the atmosphere have been developed to exploit the full accuracy of the latest SLR instruments. SLR is the baseline tracking system for the altimeter missions TOPEX/Poseidon, and ERS-1 and will play an important role in providing the reference frame for locating the geocentric position of the ocean surface, in providing an unchanging range standard for altimeter calibration, and for improving the geoid models to separate gravitational from ocean circulation signals seen in the sea surface. However, even with the many improvements in the models used to support the orbital analysis of laser observations, there remain systematic effects which limit the full exploitation of SLR accuracy today.

  3. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    PubMed Central

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC. PMID:24169273

  4. Use of temperature to improve West Nile virus forecasts

    PubMed Central

    Schneider, Zachary D.; Caillouet, Kevin A.; Campbell, Scott R.; Damian, Dan; Irwin, Patrick; Jones, Herff M. P.; Townsend, John

    2018-01-01

    Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether inclusion of temperature forcing in a model depicting WNV transmission improves WNV forecast accuracy relative to a baseline model depicting WNV transmission without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that on average increased absolute forecast accuracy 5%, 10%, 12%, and 6%, respectively, over the non-temperature forced baseline model. These results indicate that use of temperature forcing improves WNV forecast accuracy and provide further evidence that temperature influences rates of WNV transmission. The findings provide a foundation for implementation of a statistically rigorous system for real-time forecast of seasonal WNV outbreaks and their use as a quantitative decision support tool for public health officials and mosquito control programs. PMID:29522514

  5. The impact of modeling the dependencies among patient findings on classification accuracy and calibration.

    PubMed Central

    Monti, S.; Cooper, G. F.

    1998-01-01

    We present a new Bayesian classifier for computer-aided diagnosis. The new classifier builds upon the naive-Bayes classifier, and models the dependencies among patient findings in an attempt to improve its performance, both in terms of classification accuracy and in terms of calibration of the estimated probabilities. This work finds motivation in the argument that highly calibrated probabilities are necessary for the clinician to be able to rely on the model's recommendations. Experimental results are presented, supporting the conclusion that modeling the dependencies among findings improves calibration. PMID:9929288

  6. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Wick, Gary A.; Emery, William J.; Castro, Sandra L.; Lindstrom, Eric (Technical Monitor)

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work was performed in two different major areas. The first centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. The second involved a modeling and data analysis effort whereby modeled near-surface temperature profiles were integrated into the retrieval of bulk SST estimates from existing satellite data. Under the first work area, two different seagoing infrared radiometers were designed and fabricated and the first of these was deployed on research ships during two major experiments. Analyses of these data contributed significantly to the Ph.D. thesis of one graduate student and these results are currently being converted into a journal publication. The results of the second portion of work demonstrated that, with presently available models and heat flux estimates, accuracy improvements in SST retrievals associated with better physical treatment of the near-surface layer were partially balanced by uncertainties in the models and extra required input data. While no significant accuracy improvement was observed in this experiment, the results are very encouraging for future applications where improved models and coincident environmental data will be available. These results are included in a manuscript undergoing final review with the Journal of Atmospheric and Oceanic Technology.

  7. Modeling of profilometry with laser focus sensors

    NASA Astrophysics Data System (ADS)

    Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner

    2011-05-01

    Metrology is of paramount importance in submicron patterning. In particular, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate, and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative, enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.

  8. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is used to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.

  9. Real-Time Tropospheric Product Establishment and Accuracy Assessment in China

    NASA Astrophysics Data System (ADS)

    Chen, M.; Guo, J.; Wu, J.; Song, W.; Zhang, D.

    2018-04-01

    Tropospheric delay has always been an important issue in Global Navigation Satellite System (GNSS) processing. Empirical tropospheric delay models have difficulty representing complex and volatile atmospheric conditions, resulting in poor accuracy and difficulty in meeting the demands of precise positioning. In recent years, some scholars have proposed establishing real-time tropospheric products from real-time or near-real-time GNSS observations in a small region, and have achieved good results. This paper uses real-time observation data from 210 Chinese national GNSS reference stations to estimate the tropospheric delay, and establishes a zenith wet delay (ZWD) grid model covering the whole country. In order to analyze the influence of the tropospheric grid product on wide-area real-time PPP, this paper compares the method of taking the ZWD grid product as a constraint with the model correction method. The results show that the ZWD grid product estimated from the national reference stations can improve PPP accuracy and convergence speed. The accuracy in the north (N), east (E), and up (U) directions increases by 31.8%, 15.6%, and 38.3%, respectively. As with the convergence speed, the accuracy of the U direction experiences the most improvement.
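
    A minimal sketch of bilinear interpolation of a ZWD grid product to a user position, one plausible way such a grid would be applied as a PPP constraint (hypothetical grid values and spacing):

        import numpy as np

        def interp_zwd(grid, lat0, lon0, dlat, dlon, lat, lon):
            """Bilinear interpolation of a ZWD grid (metres) to a user position."""
            i = (lat - lat0) / dlat                  # fractional row index
            j = (lon - lon0) / dlon                  # fractional column index
            i0, j0 = int(np.floor(i)), int(np.floor(j))
            wi, wj = i - i0, j - j0
            return ((1 - wi) * (1 - wj) * grid[i0, j0] + (1 - wi) * wj * grid[i0, j0 + 1]
                    + wi * (1 - wj) * grid[i0 + 1, j0] + wi * wj * grid[i0 + 1, j0 + 1])

        zwd = np.array([[0.112, 0.118], [0.121, 0.127]])   # 2x2 grid cell (hypothetical, m)
        print(interp_zwd(zwd, 30.0, 114.0, 1.0, 1.0, 30.4, 114.7))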

  10. Fusion with Language Models Improves Spelling Accuracy for ERP-based Brain Computer Interface Spellers

    PubMed Central

    Orhan, Umut; Erdogmus, Deniz; Roark, Brian; Purwar, Shalini; Hild, Kenneth E.; Oken, Barry; Nezamfar, Hooman; Fried-Oken, Melanie

    2013-01-01

    Event related potentials (ERP) corresponding to a stimulus in electroencephalography (EEG) can be used to detect the intent of a person for brain computer interfaces (BCI). This paradigm is widely utilized to build letter-by-letter text input systems using BCI. Nevertheless using a BCI-typewriter depending only on EEG responses will not be sufficiently accurate for single-trial operation in general, and existing systems utilize many-trial schemes to achieve accuracy at the cost of speed. Hence incorporation of a language model based prior or additional evidence is vital to improve accuracy and speed. In this paper, we study the effects of Bayesian fusion of an n-gram language model with a regularized discriminant analysis ERP detector for EEG-based BCIs. The letter classification accuracies are rigorously evaluated for varying language model orders as well as number of ERP-inducing trials. The results demonstrate that the language models contribute significantly to letter classification accuracy. Specifically, we find that a BCI-speller supported by a 4-gram language model may achieve the same performance using 3-trial ERP classification for the initial letters of the words and using single trial ERP classification for the subsequent ones. Overall, fusion of evidence from EEG and language models yields a significant opportunity to increase the word rate of a BCI based typing system. PMID:22255652
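
    The Bayesian fusion step can be sketched as multiplying the ERP classifier's letter likelihoods by the language-model prior and renormalizing (hypothetical probabilities): given the context "q", the fused posterior corrects the EEG-only decision.

        import numpy as np

        letters = np.array(list("abcdefghijklmnopqrstuvwxyz"))

        # p_eeg[i]: ERP classifier's likelihood that letter i was attended (hypothetical)
        p_eeg = np.full(26, 0.02)
        p_eeg[letters == "q"] = 0.30
        p_eeg[letters == "u"] = 0.22
        p_eeg /= p_eeg.sum()

        # p_lm[i]: n-gram prior for the next letter given context "q" (hypothetical)
        p_lm = np.full(26, 0.005)
        p_lm[letters == "u"] = 0.875
        p_lm /= p_lm.sum()

        posterior = p_eeg * p_lm              # Bayesian fusion (up to normalization)
        posterior /= posterior.sum()
        print(letters[np.argmax(p_eeg)], letters[np.argmax(posterior)])  # 'q' -> 'u'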

  11. Utilisation of three-dimensional printed heart models for operative planning of complex congenital heart defects.

    PubMed

    Olejník, Peter; Nosal, Matej; Havran, Tomas; Furdova, Adriana; Cizmar, Maros; Slabej, Michal; Thurzo, Andrej; Vitovic, Pavol; Klvac, Martin; Acel, Tibor; Masura, Jozef

    2017-01-01

    To evaluate the accuracy of the three-dimensional (3D) printing of cardiovascular structures. To explore whether utilisation of 3D printed heart replicas can improve surgical and catheter interventional planning in patients with complex congenital heart defects. Between December 2014 and November 2015 we fabricated eight cardiovascular models based on computed tomography data in patients with complex spatial anatomical relationships of cardiovascular structures. A Bland-Altman analysis was used to assess the accuracy of 3D printing by comparing dimension measurements at analogous anatomical locations between the printed models and digital imagery data, as well as between printed models and in vivo surgical findings. The contribution of 3D printed heart models for perioperative planning improvement was evaluated in the four most representative patients. Bland-Altman analysis confirmed the high accuracy of 3D cardiovascular printing. Each printed model offered an improved spatial anatomical orientation of cardiovascular structures. Current 3D printers can produce authentic copies of patients' cardiovascular systems from computed tomography data. The use of 3D printed models can facilitate surgical or catheter interventional procedures in patients with complex congenital heart defects due to better preoperative planning and intraoperative orientation.
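
    A minimal sketch of the Bland-Altman computation used for the accuracy assessment (hypothetical paired measurements, not the study's data):

        import numpy as np

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between paired measurements."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)           # limits-of-agreement half-width
            return bias, bias - loa, bias + loa

        printed = [14.2, 8.1, 22.5, 31.0, 11.8]   # printed-model dimensions (mm, hypothetical)
        ct      = [14.0, 8.3, 22.1, 30.6, 12.0]   # CT dimensions at the same landmarks
        print(bland_altman(printed, ct))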

  12. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems.

    PubMed

    Lyu, Weiwei; Cheng, Xianghong

    2017-11-28

    Transfer alignment is always a key technology in a strapdown inertial navigation system (SINS) because of its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then deduced and analyzed in detail. In order to improve transfer alignment accuracy in a SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. The vehicle transfer alignment experiment indicates that by using the adaptive H∞ filtering method with delay compensation, the transfer alignment accuracy and the pure inertial navigation accuracy can be dramatically improved, which demonstrates the superiority of the proposed filtering method.

  13. Research on the error model of airborne celestial/inertial integrated navigation system

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaoqiang; Deng, Xiaoguo; Yang, Xiaoxu; Dong, Qiang

    2015-02-01

    The celestial navigation subsystem of an airborne celestial/inertial integrated navigation system periodically corrects the positioning error and heading drift of the inertial navigation system, by which the inertial navigation system can greatly improve the accuracy of long-endurance navigation. Thus the accuracy of the airborne celestial navigation subsystem directly determines the accuracy of the integrated navigation system when it operates for a long time. By building the mathematical model of the airborne celestial navigation system based on the inertial navigation system and using the method of linear coordinate transformation, we establish the error transfer equation for the positioning algorithm of the airborne celestial system, and on this basis build the positioning error model of the celestial navigation. Then, based on the positioning error model, we analyze and simulate the positioning errors caused by the error of the star tracking platform with the MATLAB software. Finally, the positioning error model is verified using star information obtained from an optical measurement device on a test range, with the device's location known. The analysis and simulation results show that the level accuracy and north accuracy of the tracking platform are important factors that limit airborne celestial navigation systems from improving their positioning accuracy, and that the positioning error has an approximately linear relationship with the level error and north error of the tracking platform. The errors of the verification results are within 1000 m, which shows that the model is correct.

  14. Factoring vs linear modeling in rate estimation: a simulation study of relative accuracy.

    PubMed

    Maldonado, G; Greenland, S

    1998-07-01

    A common strategy for modeling dose-response in epidemiology is to transform ordered exposures and covariates into sets of dichotomous indicator variables (that is, to factor the variables). Factoring tends to increase estimation variance, but it also tends to decrease bias and thus may increase or decrease total accuracy. We conducted a simulation study to examine the impact of factoring on the accuracy of rate estimation. Factored and unfactored Poisson regression models were fit to follow-up study datasets that were randomly generated from 37,500 population model forms that ranged from subadditive to supramultiplicative. In the situations we examined, factoring sometimes substantially improved accuracy relative to fitting the corresponding unfactored model, sometimes substantially decreased accuracy, and sometimes made little difference. The difference in accuracy between factored and unfactored models depended in a complicated fashion on the difference between the true and fitted model forms, the strength of exposure and covariate effects in the population, and the study size. It may be difficult in practice to predict when factoring is increasing or decreasing accuracy. We recommend, therefore, that the strategy of factoring variables be supplemented with other strategies for modeling dose-response.
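
    The contrast between the two strategies can be sketched with a Poisson regression on synthetic data whose dose-response is non-linear: the factored (indicator) model recovers the level-specific rates that the linear-trend model misses (an illustrative simulation, far simpler than the study's 37,500 population forms):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        dose = rng.integers(0, 4, 2000)                       # ordinal exposure, 4 levels
        true_rate = np.array([1.0, 1.2, 3.0, 3.3])[dose]      # non-linear dose-response
        y = rng.poisson(true_rate)

        X_lin = sm.add_constant(dose.astype(float))           # linear-trend design
        X_fac = sm.add_constant(np.eye(4)[dose][:, 1:])       # indicator (factored) design
        for X in (X_lin, X_fac):
            fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
            # Fitted rate at each exposure level, to compare with [1.0, 1.2, 3.0, 3.3]
            print([round(float(fit.predict(X[dose == k][:1])[0]), 2) for k in range(4)])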

  15. Improving the Accuracy of Mapping Urban Vegetation Carbon Density by Combining Shadow Remove, Spectral Unmixing Analysis and Spatial Modeling

    NASA Astrophysics Data System (ADS)

    Qie, G.; Wang, G.; Wang, M.

    2016-12-01

    Mixed pixels and shadows due to buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. Firstly, we applied a linear shadow remove analysis (LSRA) to remotely sensed Landsat 8 images to reduce shadow effects on carbon estimation. Secondly, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR), and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. This methodology was examined in Shenzhen City of Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from the LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that higher accuracies were obtained from the integrated models compared with traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
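
    The unmixing step can be sketched with non-negative least squares over hypothetical endmember spectra (scipy's nnls; the sum-to-one constraint is imposed here by renormalizing):

        import numpy as np
        from scipy.optimize import nnls

        # Endmember spectra (columns): vegetation, impervious, soil (hypothetical, 4 bands)
        E = np.array([[0.05, 0.30, 0.25],
                      [0.08, 0.28, 0.30],
                      [0.45, 0.25, 0.35],
                      [0.30, 0.22, 0.40]])
        pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2] + 0.005  # mixed pixel + noise

        f, _ = nnls(E, pixel)          # non-negative least-squares abundances
        f = f / f.sum()                # impose the sum-to-one constraint
        print(np.round(f, 3))          # ~ [0.6, 0.3, 0.1]; first entry = vegetation fraction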

  16. Study design requirements for RNA sequencing-based breast cancer diagnostics.

    PubMed

    Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias

    2016-02-01

    Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
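
    The subsampling design can be emulated with any classifier: train on progressively larger slices of the training set and track held-out accuracy. A schematic sketch with synthetic placeholder data and a logistic-regression classifier (the paper's RNA-seq features and subtype models are not reproduced):

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      # Placeholder expression matrix X and 4-class subtype labels y
      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 50))
      y = rng.integers(0, 4, size=1000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      for n in (50, 100, 200, 350, 750):
          n = min(n, len(y_tr))   # cap at the available training pool
          clf = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
          print(n, clf.score(X_te, y_te))   # accuracy vs. training sample size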

  17. Massive metrology using fast e-beam technology improves OPC model accuracy by >2x at faster turnaround time

    NASA Astrophysics Data System (ADS)

    Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei

    2018-03-01

    Classical SEM metrology (CD-SEM) uses a low data rate and extensive frame averaging to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper introduces a novel e-beam metrology system based on a high data rate, large probe current, and an ultra-low-noise electron optics design. At the same level of metrology precision, this high-speed e-beam metrology system can significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hour. Moreover, a novel large-field-of-view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area covered by LFOV is >100x larger than that of classical SEM. Superior metrology precision throughout the whole image has been achieved, and high-quality metrology data can be extracted from the full field. This new metrology capability will further improve data collection speed to support the need for large volumes of metrology data in OPC model calibration for next-generation technology. The shrinking EPE (edge placement error) budget places more stringent requirements on OPC model accuracy, which is increasingly limited by metrology errors. In the current flow from metrology data collection and data processing to model calibration, CD-SEM throughput is a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy, especially for 2D pattern prediction. To address the trade-off between metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high-speed e-beam metrology system and a new computational software solution to take full advantage of the large volume of data and significantly reduce both systematic and random metrology errors. The new computational software enables users to generate large quantities of highly accurate EP (edge placement) gauges and significantly improve design pattern coverage, with up to 5x gain in model prediction accuracy on complex 2D patterns. Overall, this work showed a >2x improvement in OPC model accuracy at a faster model turnaround time.

  18. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models is assessed in this paper with both real and simulated GRACE observations. With real GRACE observations, we build four temporal gravity field models: HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical results show that the discrepancies in annual mass variability amplitudes over six river basins between the HUST08a and HUST11a models, and between the HUST04 and HUST05 models, are all smaller than 1 cm, which demonstrates that geophysical model error only slightly affects current GRACE solutions. The impact of geophysical model error on future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission, with a range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main source of stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in a future mission, geophysical model error will become the main source of stripe error and will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source considered at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for future missions.

  19. Research of autonomous celestial navigation based on new measurement model of stellar refraction

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Tian, Hong; Zhang, Hui; Xu, Bo

    2014-09-01

    Autonomous celestial navigation based on stellar refraction has attracted widespread attention for its high accuracy and full autonomy. In this navigation method, establishing an accurate stellar refraction measurement model is the foundation and the key to achieving high-accuracy navigation. However, existing measurement models are limited by the uncertainty of atmospheric parameters. Temperature, pressure and other factors that affect stellar refraction within the height range of the Earth's stratosphere are investigated, and a model of atmospheric variation with altitude is derived on the basis of standard atmospheric data. Furthermore, a novel measurement model of stellar refraction covering a continuous range of altitudes from 20 km to 50 km is produced by modifying the fixed-altitude (25 km) measurement model, the state equation including orbit perturbations is established, and a simulation is performed using an improved extended Kalman filter. The results show that the new model improves navigation accuracy and has practical application value.

  20. Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils

    PubMed Central

    Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng

    2017-01-01

    This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural networks (ANN), and radial basis function- and linear function-based support vector machines (RBF- and LF-SVM), were employed to establish diagnosis models. Model accuracies were evaluated and compared using overall accuracies (OAs). The statistical significance of the difference between models was evaluated using McNemar's test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods improved modeling efficiency and diagnosis accuracy, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF-SVM (OA = 89%) and LF-SVM (OA = 87%) showed no statistically significant differences in diagnosis accuracy (Z < 1.96, p > 0.05). These results indicate that it is feasible to diagnose soil arsenic contamination using reflectance spectroscopy and that an appropriate combination of multivariate methods is important for improving diagnosis accuracy. PMID:28471412
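
    One of the pre-processing and classification combinations compared in the study can be sketched as follows; the spectra and labels below are random placeholders, and the SNV-plus-first-derivative pipeline is just one of the combinations the authors evaluate:

      import numpy as np
      from scipy.signal import savgol_filter
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def snv(spectra):
          # Standard normal variate: centre and scale each spectrum individually
          mu = spectra.mean(axis=1, keepdims=True)
          sd = spectra.std(axis=1, keepdims=True)
          return (spectra - mu) / sd

      # Placeholder reflectance spectra and contamination labels
      rng = np.random.default_rng(1)
      spectra = rng.random((175, 400))
      labels = rng.integers(0, 2, size=175)

      # SNV followed by a Savitzky-Golay first derivative
      X = savgol_filter(snv(spectra), window_length=11, polyorder=2,
                        deriv=1, axis=1)
      rf = RandomForestClassifier(n_estimators=500, random_state=1)
      print("OA:", cross_val_score(rf, X, labels, cv=5).mean())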

  1. GPS/GLONASS Combined Precise Point Positioning with Receiver Clock Modeling

    PubMed Central

    Wang, Fuhong; Chen, Xinghan; Guo, Fei

    2015-01-01

    Research has demonstrated that receiver clock modeling can reduce the correlation coefficients among the parameters of receiver clock bias, station height and zenith tropospheric delay. This paper introduces receiver clock modeling into GPS/GLONASS combined precise point positioning (PPP), aiming to better separate the receiver clock bias from the station coordinates and thereby improve positioning accuracy. First, the basic mathematical models, including the GPS/GLONASS observation equations, the stochastic model, and the receiver clock model, are briefly introduced. Then datasets from several IGS stations equipped with high-stability atomic clocks are used for kinematic PPP tests. To investigate the performance of PPP in terms of positioning accuracy and convergence time, a week (1–7 January 2014) of GPS/GLONASS data retrieved from these IGS stations is processed with different schemes. The results indicate that both positioning accuracy and convergence time benefit from receiver clock modeling; this is particularly pronounced for the vertical component. RMS statistics show that the average improvement in three-dimensional positioning accuracy reaches 30%–40%, and sometimes exceeds 60% for specific stations. Compared with GPS-only PPP, solutions of the combined GPS/GLONASS PPP are much better whether or not the receiver clock offsets are modeled, indicating that positioning accuracy and reliability are significantly improved by the additional GLONASS satellites when the number of GPS satellites is insufficient or the geometry is poor. In addition to receiver clock modeling, the impacts of different inter-system timing bias (ISB) models are investigated. For a sufficient number of satellites with fairly good geometry, PPP performance is not seriously affected by the ISB model, due to the low correlation between the ISB and the other parameters. However, refining the ISB model weakens the correlation between the coordinates and the ISB estimates and thus enhances PPP performance under poor observation conditions. PMID:26134106

  2. Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)

    NASA Astrophysics Data System (ADS)

    Thomas, Roshan K.; Sandhu, Ravi; Bertino, Elisa; Arpinar, Budak; Xu, Shouhuai

    We argue for the design of “Privacy-Preserving Accountable Accuracy Management Systems (PAAMS)”. The designs of such systems recognize from the outset that accuracy, accountability, and privacy management are intertwined. As such, these systems have to dynamically manage the trade-offs between these (often conflicting) objectives. For example, accuracy in such systems can be improved by providing better accountability links between structured and unstructured information. Further, accuracy may be enhanced if access to private information is allowed in controllable and accountable ways. Our proposed approach involves three key elements: first, a model to link unstructured information, such as that found in email, image and document repositories, with structured information, such as that in traditional databases; second, a model for accuracy management and entity disambiguation that proactively prevents, detects and traces errors in information bases; third, a model to provide privacy-governed operation as accountability and accuracy are managed.

  3. Genomic Prediction Accounting for Residual Heteroskedasticity.

    PubMed

    Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M

    2015-11-12

    Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict the genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and on the prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous and heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. The fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in the validation sets of a five-fold cross-validation, was also computed. Heteroskedastic-error WGP models showed improved model fit and enhanced prediction accuracy compared with homoskedastic-error WGP models, although the magnitude of the improvement was small (less than a two-percentage-point net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve the accuracy of selection, especially for individuals of extreme genetic merit. Copyright © 2016 Ou et al.

  4. FOG Random Drift Signal Denoising Based on the Improved AR Model and Modified Sage-Husa Adaptive Kalman Filter.

    PubMed

    Sun, Jin; Xu, Xiaosu; Liu, Yiting; Zhang, Tao; Li, Yao

    2016-07-12

    In order to reduce the influence of fiber optic gyroscope (FOG) random drift error on inertial navigation systems, an improved autoregressive (AR) model is put forward in this paper. First, based on real-time observations at each restart of the gyroscope, the model of FOG random drift can be established online. In the improved AR model, the measured FOG signal is employed instead of a zero-mean signal. Then, the modified Sage-Husa adaptive Kalman filter (SHAKF) is introduced, which can directly carry out real-time filtering of the FOG signals. Finally, static and dynamic experiments are conducted to verify the effectiveness of the approach. The filtering results are analyzed with the Allan variance. The analysis shows that the improved AR model has high fitting accuracy and strong adaptability, with a minimum single-noise fitting accuracy of 93.2%. Based on the improved AR(3) model, the SHAKF denoising method is more effective than traditional methods, improving on them by more than 30%. The random drift error of the FOG is reduced effectively, and the precision of the FOG is improved.
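
    The online AR modeling step can be illustrated with statsmodels; the simulated drift record below is a stand-in for a real FOG signal, and fitting the measured (non-zero-mean) signal directly mirrors the paper's departure from classical zero-mean AR modeling:

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      # Simulated stand-in for a FOG static drift record, not real sensor data
      rng = np.random.default_rng(7)
      e = rng.normal(0, 0.01, 10000)
      drift = np.zeros(10000)
      for k in range(3, 10000):          # an AR(3)-like process (coefficients illustrative)
          drift[k] = 0.6*drift[k-1] - 0.2*drift[k-2] + 0.1*drift[k-3] + e[k]

      fit = AutoReg(drift, lags=3).fit()
      print(fit.params)                  # intercept plus three AR coefficients
      resid_var = fit.resid.var()        # would feed the Kalman filter's noise terms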

  5. The availability of prior ECGs improves paramedic accuracy in recognizing ST-segment elevation myocardial infarction.

    PubMed

    O'Donnell, Daniel; Mancera, Mike; Savory, Eric; Christopher, Shawn; Schaffer, Jason; Roumpf, Steve

    2015-01-01

    Early and accurate identification of ST-elevation myocardial infarction (STEMI) by prehospital providers has been shown to significantly improve door-to-balloon times and patient outcomes. Previous studies have shown that paramedic accuracy in reading 12-lead ECGs can range from 86% to 94%. However, recent studies have demonstrated that accuracy diminishes for the more uncommon STEMI presentations (e.g., lateral). Unlike hospital physicians, paramedics rarely have the ability to review previous ECGs for comparison, and whether a prior ECG can improve paramedic accuracy is not known. We hypothesized that the availability of prior ECGs improves paramedic accuracy in ECG interpretation. A total of 130 paramedics were given a single clinical scenario and then randomly assigned 12 computerized prehospital ECGs, 6 with and 6 without an accompanying prior ECG. All ECGs were obtained from a local STEMI registry. For each ECG, paramedics were asked to determine whether or not there was a STEMI and to rate their confidence in their interpretation. To determine whether the prior ECGs improved accuracy, we used a mixed-effects logistic regression model to calculate p-values between the control and intervention. The addition of a previous ECG improved the accuracy of identifying STEMIs from 75.5% to 80.5% (p=0.015). A previous ECG also increased paramedic confidence in their interpretation (p=0.011). The availability of previous ECGs improves paramedic accuracy and enhances their confidence in interpreting STEMIs. Further studies are needed to evaluate this impact in a clinical setting. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Investigation for improving Global Positioning System (GPS) orbits using a discrete sequential estimator and stochastic models of selected physical processes

    NASA Technical Reports Server (NTRS)

    Goad, Clyde C.; Chadwell, C. David

    1993-01-01

    GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several-meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm, such as a sequential filter/smoother, is suitable. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor that uses GEODYNII as a type of trajectory preprocessor. Our proposed processor is now complete. It contains a correlated double-differenced range processing capability, first-order Gauss-Markov models for the solar radiation pressure scale coefficient and the y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double-differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. The observational data are edited in the preprocessor and passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor, and the GEODYNII measurement partial (FTN90) and variational (FTN80, V-matrix) files are generated. These two files, along with a control statement file and a satellite identification and mass file, are passed to the filter/smoother to estimate time-varying parameter states at each epoch, improved satellite initial elements, and improved estimates of constant parameters.
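
    The first-order Gauss-Markov models referred to here have a standard discrete-time form. A small sketch of that process (parameter values are illustrative, not those used for the GPS solar-pressure or y-bias states):

      import numpy as np

      def gauss_markov(tau, sigma, dt, n, seed=0):
          # Simulate a first-order Gauss-Markov process x' = -x/tau + w:
          # exponential autocorrelation with time constant tau and
          # steady-state standard deviation sigma.
          rng = np.random.default_rng(seed)
          phi = np.exp(-dt / tau)              # state transition factor
          q = sigma**2 * (1.0 - phi**2)        # discrete-time noise variance
          x = np.zeros(n)
          for k in range(1, n):
              x[k] = phi * x[k-1] + rng.normal(0.0, np.sqrt(q))
          return x

      # e.g. a 12 h correlation time sampled every 5 min (values illustrative)
      series = gauss_markov(tau=12*3600, sigma=1e-2, dt=300, n=2000)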

  7. Double-Windows-Based Motion Recognition in Multi-Floor Buildings Assisted by a Built-In Barometer.

    PubMed

    Liu, Maolin; Li, Huaiyu; Wang, Yuan; Li, Fei; Chen, Xiuwan

    2018-04-01

    Accelerometers, gyroscopes and magnetometers in smartphones are often used to recognize human motions. Since it is difficult to distinguish between vertical and horizontal motions in the data provided by these built-in sensors, vertical motion recognition accuracy is relatively low. The emergence of a built-in barometer in smartphones improves the accuracy of motion recognition in the vertical direction. However, there is a lack of quantitative analysis and modelling of barometer signals, which is the basis of the barometer's application to motion recognition, and a problem of imbalanced data also exists. This work focuses on using the barometers inside smartphones for vertical motion recognition in multi-floor buildings, through modelling and feature extraction of pressure signals. A novel double-window pressure feature extraction method, which adopts two sliding time windows of different lengths, is proposed to balance recognition accuracy and response time. A random forest classifier correlation rule is further designed to weaken the impact of imbalanced data on recognition accuracy. The results demonstrate that recognition accuracy can reach 95.05% when the pressure features and the improved random forest classifier are adopted. In particular, the recognition accuracy of the stair and elevator motions is significantly improved, with enhanced response time. The proposed approach proves effective and accurate, providing a robust strategy for increasing the accuracy of vertical motion recognition.
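
    The double-window idea, under the assumption that one short and one long sliding window are combined (the window lengths below are illustrative, not the paper's), can be sketched as:

      import numpy as np

      def double_window_features(pressure, short=8, long=64):
          # Pressure-change features from two sliding windows: the short
          # window reacts quickly, the long window gives a stable trend.
          feats = []
          for k in range(long, len(pressure)):
              d_short = pressure[k] - pressure[k - short]   # fast response
              d_long = pressure[k] - pressure[k - long]     # robust trend
              slope = np.polyfit(np.arange(long), pressure[k-long:k], 1)[0]
              feats.append((d_short, d_long, slope))
          return np.array(feats)   # rows would feed the random-forest classifier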

  8. Improving the performance of the mass transfer-based reference evapotranspiration estimation approaches through a coupled wavelet-random forest methodology

    NASA Astrophysics Data System (ADS)

    Shiri, Jalal

    2018-06-01

    Among the different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. However, the empirical equations proposed in this context generally produce weak simulations unless a local calibration is used to improve their performance, which can be a crucial drawback when local data for the calibration procedure are scarce. Application of heuristic methods can therefore be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. Given that wind speed records usually have larger variation magnitudes than the other meteorological parameters, applying a wavelet transform in coupling with heuristic models becomes necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology is proposed for the first time to improve the performance accuracy of mass transfer-based ETo estimation, using cross-validation data management scenarios at both local and cross-station scales. The results reveal that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improves the performance accuracy of the single RF models as well as the empirical equations to a great extent.
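
    A common way to couple a wavelet transform with a machine-learning regressor, which is one plausible reading of the WRF coupling described here, is to decompose the noisy input into sub-bands and feed the reconstructed sub-band series to the regressor. A sketch with PyWavelets and scikit-learn on synthetic stand-in data:

      import numpy as np
      import pywt
      from sklearn.ensemble import RandomForestRegressor

      def wavelet_features(x, wavelet="db4", level=2):
          # Decompose a signal and reconstruct each sub-band at full length,
          # so every time step gets one feature per sub-band.
          coeffs = pywt.wavedec(x, wavelet, level=level)
          bands = []
          for i in range(len(coeffs)):
              keep = [c if j == i else np.zeros_like(c)
                      for j, c in enumerate(coeffs)]
              bands.append(pywt.waverec(keep, wavelet)[:len(x)])
          return np.column_stack(bands)

      # Synthetic stand-ins for wind speed, temperature and the ETo target
      rng = np.random.default_rng(3)
      wind = rng.gamma(2.0, 1.5, 730)
      temp = 20 + 8*np.sin(np.linspace(0, 4*np.pi, 730)) + rng.normal(0, 1, 730)
      eto = 0.3*temp + 0.8*wind + rng.normal(0, 0.5, 730)

      X = np.column_stack([wavelet_features(wind), temp])
      model = RandomForestRegressor(n_estimators=300, random_state=3).fit(X, eto)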

  9. Effects of using the developing nurses' thinking model on nursing students' diagnostic accuracy.

    PubMed

    Tesoro, Mary Gay

    2012-08-01

    This quasi-experimental study tested the effectiveness of an educational model, Developing Nurses' Thinking (DNT), on nursing students' clinical reasoning to achieve patient safety. Teaching nursing students to develop effective thinking habits that promote positive patient outcomes and patient safety is a challenging endeavor. Positive patient outcomes and safety are achieved when nurses accurately interpret data and subsequently implement appropriate plans of care. This study's pretest-posttest design determined whether use of the DNT model during 2 weeks of clinical postconferences improved nursing students' (N = 83) diagnostic accuracy. The DNT model helps students to integrate four constructs-patient safety, domain knowledge, critical thinking processes, and repeated practice-to guide their thinking when interpreting patient data and developing effective plans of care. The posttest scores of students from the intervention group showed statistically significant improvement in accuracy. Copyright 2012, SLACK Incorporated.

  10. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    PubMed

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, stacked autoencoder Levenberg-Marquardt model, which is a type of deep architecture of neural network approach aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  11. Modeling of the Mode S tracking system in support of aircraft safety research

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Goka, T.

    1982-01-01

    This report collects, documents, and models data relating to the expected accuracies of tracking variables to be obtained from the FAA's Mode S Secondary Surveillance Radar system. The data include measured range and azimuth to the tracked aircraft, plus the encoded altitude transmitted via the Mode S data link. A brief summary is given of the Mode S system status and its potential applications for aircraft safety improvement, including accident analysis. FAA flight test results are presented that demonstrate Mode S range and azimuth accuracy and error characteristics and compare Mode S with the current ATCRBS radar tracking system. Data are also presented that describe the expected accuracy and error characteristics of encoded altitude. These data are used to formulate mathematical error models of the Mode S variables and encoded altitude. A brief analytical assessment is made of the real-time tracking accuracy available from Mode S and how it could be improved with down-linked velocity.

  12. A GIS Tool for evaluating and improving NEXRAD and its application in distributed hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Srinivasan, R.

    2008-12-01

    In this study, a user-friendly GIS tool was developed for evaluating and improving NEXRAD precipitation estimates using raingauge data. This GIS tool can automatically read in raingauge and NEXRAD data, evaluate the accuracy of NEXRAD for each time unit, implement several geostatistical methods to improve the accuracy of NEXRAD using the raingauge data, and output spatial precipitation maps for distributed hydrologic models. The geostatistical methods incorporated in this tool include simple kriging with varying local means, kriging with external drift, regression kriging, co-kriging, and a method newly developed by Li et al. (2008). The tool was applied in two test watersheds at hourly and daily temporal scales. Preliminary cross-validation results show that incorporating raingauge data to calibrate NEXRAD can markedly change the spatial pattern of NEXRAD and improve its accuracy. Using different geostatistical methods, the GIS tool was applied to produce long-term precipitation input for a distributed hydrologic model, the Soil and Water Assessment Tool (SWAT). Animated video was generated to vividly illustrate the effect of different precipitation inputs on distributed hydrologic modeling. Currently, this GIS tool is implemented as an extension of SWAT, which is used as a water quantity and quality modeling tool by the USDA and EPA. The flexible, module-based design of the tool also makes it easy to adapt to other hydrologic models for hydrological modeling and water resources management.

  13. Assessing the accuracy of predictive models for numerical data: Not r nor r2, why not? Then what?

    PubMed Central

    2017-01-01

    Assessing the accuracy of predictive models is critical because predictive models are increasingly used across various disciplines and predictive accuracy determines the quality of the resultant predictions. The Pearson product-moment correlation coefficient (r) and the coefficient of determination (r2) are among the most widely used measures for assessing predictive models for numerical data, although they have been argued to be biased, insufficient and misleading. In this study, geometrical graphs were used to illustrate what is used in the calculation of r and r2, and simulations were used to demonstrate the behaviour of r and r2 and to compare three accuracy measures under various scenarios. Relevant confusions about r and r2 have been clarified. The calculation of r and r2 is not based on the differences between the predicted and observed values. The existing error measures suffer various limitations and are unable to indicate accuracy. Variance explained by predictive models based on cross-validation (VEcv) is free of these limitations and is a reliable accuracy measure. Legates and McCabe's efficiency (E1) is also an alternative accuracy measure. The r and r2 do not measure accuracy and are incorrect accuracy measures. VEcv and E1 are recommended for assessing accuracy. The application of these accuracy measures would encourage the development of accuracy-improved predictive models that generate predictions for evidence-informed decision-making. PMID:28837692
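
    Under the usual definitions of these measures (stated here as assumptions, since the abstract gives no formulas), VEcv is the cross-validated variance explained expressed as a percentage and E1 is an absolute-error analogue:

      import numpy as np

      def vecv(obs, pred):
          # Variance explained based on cross-validation predictions (%)
          obs, pred = np.asarray(obs), np.asarray(pred)
          return (1.0 - np.sum((obs - pred)**2)
                  / np.sum((obs - obs.mean())**2)) * 100.0

      def e1(obs, pred):
          # Legates and McCabe's efficiency (absolute-error form)
          obs, pred = np.asarray(obs), np.asarray(pred)
          return 1.0 - (np.sum(np.abs(obs - pred))
                        / np.sum(np.abs(obs - obs.mean())))

      # Note: pred must come from cross-validation, not from a model
      # refitted on all of the data, or the measure loses its meaning.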

  14. Improved Accuracy Using Recursive Bayesian Estimation Based Language Model Fusion in ERP-Based BCI Typing Systems

    PubMed Central

    Orhan, U.; Erdogmus, D.; Roark, B.; Oken, B.; Purwar, S.; Hild, K. E.; Fowler, A.; Fried-Oken, M.

    2013-01-01

    RSVP Keyboard™ is an electroencephalography (EEG) based brain-computer interface (BCI) typing system, designed as an assistive technology for the communication needs of people with locked-in syndrome (LIS). It relies on rapid serial visual presentation (RSVP) and does not require precise eye gaze control. Existing BCI typing systems that use event-related potentials (ERP) in EEG suffer from low accuracy due to a low signal-to-noise ratio. RSVP Keyboard™ therefore utilizes context-based decision making, incorporating a language model to improve the accuracy of letter decisions. To further improve the contribution of the language model, we propose recursive Bayesian estimation, which relies on non-committing string decisions, and conduct an offline analysis comparing it with the existing naïve Bayesian fusion approach. The results indicate the superiority of the recursive Bayesian fusion, and in the next generation of RSVP Keyboard™ we plan to incorporate this new approach. PMID:23366432
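
    The fusion step can be sketched as a recursive Bayesian update over the symbol alphabet, in which the posterior is carried forward rather than committing to a letter after each presentation; the probabilities below are toy numbers, not the system's calibrated models:

      import numpy as np

      def fuse(prior, erp_likelihoods):
          # One Bayesian update: posterior proportional to
          # language-model prior times EEG (ERP) likelihood.
          post = prior * erp_likelihoods
          return post / post.sum()

      # Toy 4-symbol alphabet
      prior = np.array([0.4, 0.3, 0.2, 0.1])       # from a language model
      evidence1 = np.array([0.2, 0.5, 0.2, 0.1])   # ERP scores, presentation 1
      evidence2 = np.array([0.1, 0.6, 0.2, 0.1])   # presentation 2

      post = fuse(prior, evidence1)
      post = fuse(post, evidence2)   # non-committing: keep updating
      print(post)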

  15. Adjusting the Stems Regional Forest Growth Model to Improve Local Predictions

    Treesearch

    W. Brad Smith

    1983-01-01

    A simple procedure using double sampling is described for adjusting growth in the STEMS regional forest growth model to compensate for subregional variations. Predictive accuracy of the STEMS model (a distance-independent, individual-tree growth model for Lake States forests) was improved by using this procedure.

  16. Apollo oxygen tank stratification analysis, volume 2

    NASA Technical Reports Server (NTRS)

    Barton, J. E.; Patterson, H. W.

    1972-01-01

    An analysis of flight performance of the Apollo 15 cryogenic oxygen tanks was conducted with the variable grid stratification math model developed earlier in the program. Flight conditions investigated were the CMP-EVA and one passive thermal control period which exhibited heater temperature characteristics not previously observed. Heater temperatures for these periods were simulated with the math model using flight acceleration data. Simulation results (heater temperature and tank pressure) compared favorably with the Apollo 15 flight data, and it was concluded that tank performance was nominal. Math model modifications were also made to improve the simulation accuracy. The modifications included the addition of the effects of the tank wall thermal mass and an improved system flow distribution model. The modifications improved the accuracy of simulated pressure response based on comparisons with flight data.

  17. Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin

    2018-04-01

    The ZY-3 satellite, launched in 2012, is the first civilian high-resolution stereo mapping satellite of China. This paper analyzes the positioning errors of ZY-3 satellite imagery and compensates for them to improve geo-positioning accuracy using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results reveal that systematic errors exist in the ZY-3 attitude observations and that positioning accuracy can be improved by attitude correction with the aid of ground control. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvements than the linear correction model when only limited ground control points were available for a single scene.

  18. Improving CNN Performance Accuracies With Min-Max Objective.

    PubMed

    Shi, Weiwei; Gong, Yihong; Tao, Xiaoyu; Wang, Jinjun; Zheng, Nanning

    2017-06-09

    We propose a novel method for improving the performance accuracy of convolutional neural networks (CNNs) without increasing network complexity. We accomplish this by applying the proposed Min-Max objective to a layer below the output layer of a CNN model during training. The Min-Max objective explicitly ensures that the feature maps learned by a CNN model have the minimum within-manifold distance for each object manifold and the maximum between-manifold distances among different object manifolds. The Min-Max objective is general and can be applied to different CNNs with insignificant increases in computation cost. Moreover, an incremental minibatch training procedure is proposed in conjunction with the Min-Max objective to enable the handling of large-scale training data. Comprehensive experimental evaluations on several benchmark data sets, with both image classification and face verification tasks, reveal that employing the proposed Min-Max objective in the training process can remarkably improve the performance accuracy of a CNN model compared with the same model trained without this objective.

  19. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.

    PubMed

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-09-09

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers from cyclic ambiguity in its angle estimates because, according to the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. First, the target motion model and the radar measurement model are built. Second, the fused estimate from each radar is fed to an extended Kalman filter (EKF) to complete the first filtering stage. Third, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering stage, achieving a highly accurate and stable trajectory with relatively low computational complexity. The azimuth filtering accuracy is thereby improved dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.

  20. Risk-adjusted capitation based on the Diagnostic Cost Group Model: an empirical evaluation with health survey information.

    PubMed Central

    Lamers, L M

    1999-01-01

    OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness funds provide compulsory health insurance coverage for the 60 percent of the population in the lowest income brackets. STUDY DESIGN: A demographic model and DCG capitation models are estimated by means of ordinary least squares, with an individual's annual healthcare expenditures in 1994 as the dependent variable. For subgroups based on health survey information, costs predicted by the models are compared with actual costs. Using stepwise regression procedures a subset of relevant survey variables that could improve the predictive accuracy of the three-year DCG model was identified. Capitation models were extended with these variables. DATA COLLECTION/EXTRACTION METHODS: For the empirical analysis, panel data of sickness fund members were used that contained demographic information, annual healthcare expenditures, and diagnostic information from hospitalizations for each member. In 1993, a mailed health survey was conducted among a random sample of 15,000 persons in the panel data set, with a 70 percent response rate. PRINCIPAL FINDINGS: The predictive accuracy of the demographic model improves when it is extended with diagnostic information from prior hospitalizations (DCGs). A subset of survey variables further improves the predictive accuracy of the DCG capitation models. The predictable profits and losses based on survey information for the DCG models are smaller than for the demographic model. Most persons with predictable losses based on health survey information were not hospitalized in the preceding year. CONCLUSIONS: The use of diagnostic information from prior hospitalizations is a promising option for improving the demographic capitation payment formula. This study suggests that diagnostic information from outpatient utilization is complementary to DCGs in predicting future costs. PMID:10029506

  1. The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1981-01-01

    Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of its use are given. Engineering judgment, aided by such analytical tools, is the final arbiter of accuracy estimation.
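
    For reference, the Cramer-Rao bound in its standard form states that the covariance of an unbiased estimate is bounded below by the inverse of the Fisher information matrix:

      \operatorname{cov}(\hat{\theta}) \;\ge\; M^{-1}, \qquad
      M \;=\; E\!\left[ \left( \frac{\partial \ln L}{\partial \theta} \right)
                        \left( \frac{\partial \ln L}{\partial \theta} \right)^{\!\top} \right]

    where L is the likelihood of the flight data given the parameter vector θ; the improved computations discussed in this report adjust the evaluation of this bound to account for colored noise and modeling error.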

  2. Effect of tropospheric models on derived precipitable water vapor over Southeast Asia

    NASA Astrophysics Data System (ADS)

    Rahimi, Zhoobin; Mohd Shafri, Helmi Zulhaidi; Othman, Faridah; Norman, Masayu

    2017-05-01

    An interesting subject in the field of GPS technology is estimating the variation of precipitable water vapor (PWV). This estimation can be used as a data source to assess and monitor rapid changes in meteorological conditions. Numerous GPS stations are now distributed across the world, and the number of GPS networks is increasing. Despite these developments, a challenging aspect of estimating PWV through GPS networks is the need for tropospheric parameters such as temperature, pressure, and relative humidity (Liu et al., 2015). To estimate the tropospheric parameters, the global pressure temperature (GPT) model developed by Boehm et al. (2007) is widely used in geodetic analysis of GPS observations. To improve the accuracy, Lagler et al. (2013) introduced the GPT2 model, adding annual and semi-annual variation effects to the GPT model. Furthermore, Boehm et al. (2015) proposed the GPT2 wet (GPT2w) model, which uses water vapor pressure to improve the calculations. The global accuracy of the GPT2 and GPT2w models has been evaluated in previous research (Fund et al., 2011; Munekane and Boehm, 2010); however, investigations of the accuracy of global tropospheric models in tropical regions such as Southeast Asia are not sufficient. This study tests and examines the accuracy of GPT2w, one of the most recent tropospheric models (Boehm et al., 2015). We developed a new regional model called the Malaysian Pressure Temperature (MPT) model and compared it with the GPT2w model. Comparison results at one International GNSS Service (IGS) station located in the south of Peninsular Malaysia show that the MPT model performs better than the GPT2w model in producing PWV during the monsoon season. According to the results, MPT improves the accuracy of estimated pressure and temperature by 30% and 10%, respectively, in comparison with the GPT2w model. These results indicate that the MPT model can be a good alternative tool in the absence of meteorological sensors at GPS stations in Peninsular Malaysia. Therefore, for GPS-based studies, we recommend the MPT model as a complementary tool for the Malaysia Real-Time Kinematic Network to develop a real-time PWV monitoring system.

  3. The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.

    PubMed

    Debry, Ronald W

    2005-06-01

    The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.

  4. Improvement of shallow landslide prediction accuracy using soil parameterisation for a granite area in South Korea

    NASA Astrophysics Data System (ADS)

    Kim, M. S.; Onda, Y.; Kim, J. K.

    2015-01-01

    The SHALSTAB model was applied to rainfall-induced shallow landslides in a granite area of the Jinbu region, Republic of Korea, to evaluate soil properties related to the effect of soil depth. Soil depths measured by a knocking pole test, two soil parameters from direct shear tests (a and b), and one soil parameter from a triaxial compression test (c) were collected to determine the input parameters for the model. Experimental soil data were used for the first simulation (Case I), while soil data representing the effect of the measured soil depth and of the average soil depth derived from the Case I data were used in the second (Case II) and third (Case III) simulations, respectively. All simulations were analysed using receiver operating characteristic (ROC) analysis to determine prediction accuracy. The ROC results for the first simulation showed low values, under 0.75, which may be due to the internal friction angle and particularly the cohesion value. Soil parameters calculated from a stochastic hydro-geomorphological model were then applied to the SHALSTAB model. The ROC accuracies of Case II and Case III were higher than that of the first simulation. Our results clearly demonstrate that the accuracy of shallow landslide prediction can be improved when the soil parameters represent the effect of soil thickness.

  5. Modeling tropospheric wet delays with national GNSS reference network in China for BeiDou precise point positioning

    NASA Astrophysics Data System (ADS)

    Zheng, Fu; Lou, Yidong; Gu, Shengfeng; Gong, Xiaopeng; Shi, Chuang

    2017-10-01

    During the past decades, precise point positioning (PPP) has proven to be a well-known positioning technique for centimeter- or decimeter-level accuracy. However, it needs a long convergence time to reach high-accuracy positioning, which limits the prospects of PPP, especially in real-time applications. The PPP convergence time can be reduced by introducing high-quality external information, such as ionospheric or tropospheric corrections. In this study, several methods for modeling tropospheric wet delays over wide areas are investigated, and a new, improved model applicable to real-time applications in China is developed. Based on the GPT2w model, a modified parameter describing the exponential decay of the zenith wet delay with height is introduced into the real-time tropospheric delay modeling. The accuracy of this tropospheric model and of the GPT2w model in different seasons is evaluated with cross-validation; the root mean square error of the zenith troposphere delay (ZTD) is 1.2 and 3.6 cm on average, respectively. The new model also proves better than tropospheric modeling based on the water-vapor scale height and can accurately express tropospheric delays up to 10 km altitude, which potentially benefits many real-time applications. With the high-accuracy ZTD model, the augmented PPP convergence performance for the BeiDou navigation satellite system (BDS) and GPS is evaluated. The contribution of the high-quality ZTD model to PPP convergence depends on the constellation geometry. As the BDS constellation geometry is poorer than that of GPS, the improvement for BDS PPP is more significant than for GPS PPP. Compared with standard real-time PPP, the convergence time is reduced by 2-7% and 20-50% for the augmented BDS PPP, while GPS PPP improves only by about 6% and 18% on average, in the horizontal and vertical directions, respectively. When GPS and BDS are combined, the geometry is greatly improved and is good enough to obtain a reliable PPP solution, so the augmented PPP improves only marginally compared with standard PPP.
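
    The modified parameter is an exponential height decay of the zenith wet delay. Under that assumption (the functional form below is the standard one; the scale-height value is illustrative, not the regionally estimated parameter), extrapolating a ZWD estimate in height looks like:

      import numpy as np

      def zwd_at_height(zwd0, h0, h, scale_height):
          # Zenith wet delay extrapolated in height by exponential decay.
          # zwd0 [m] at reference height h0 [m]; scale_height [m] is the
          # decay parameter the model estimates regionally.
          return zwd0 * np.exp(-(h - h0) / scale_height)

      print(zwd_at_height(zwd0=0.25, h0=50.0, h=2000.0, scale_height=2000.0))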

  6. Gastric precancerous diseases classification using CNN with a concise model.

    PubMed

    Zhang, Xu; Hu, Weiling; Chen, Fei; Liu, Jiquan; Yang, Yuanhang; Wang, Liangjing; Duan, Huilong; Si, Jianmin

    2017-01-01

    Gastric precancerous diseases (GPD) may deteriorate into early gastric cancer if misdiagnosed, so it is important to help doctors recognize GPD accurately and quickly. In this paper, we address the classification of three GPD classes, namely polyp, erosion, and ulcer, using convolutional neural networks (CNN) with a concise model called the Gastric Precancerous Disease Network (GPDNet). GPDNet introduces fire modules from SqueezeNet to reduce the model size and number of parameters by about a factor of 10 while improving speed for quick classification. To maintain classification accuracy with fewer parameters, we propose an innovative method called iterative reinforced learning (IRL). After training GPDNet from scratch, we apply IRL to fine-tune the parameters whose values are close to 0 and then take the modified model as a pretrained model for the next round of training. The results show that IRL can improve accuracy by about 9% after 6 iterations. The final classification accuracy of our GPDNet was 88.90%, which is promising for clinical GPD recognition.
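
    The fire module borrowed from SqueezeNet squeezes channels with 1x1 convolutions and then expands with parallel 1x1 and 3x3 convolutions. A minimal PyTorch sketch (channel sizes are illustrative; GPDNet's exact configuration is not given in the abstract):

      import torch
      import torch.nn as nn

      class Fire(nn.Module):
          # SqueezeNet-style fire module: 1x1 'squeeze' followed by
          # parallel 1x1 and 3x3 'expand' convolutions.
          def __init__(self, c_in, c_squeeze, c_expand):
              super().__init__()
              self.squeeze = nn.Conv2d(c_in, c_squeeze, kernel_size=1)
              self.expand1 = nn.Conv2d(c_squeeze, c_expand, kernel_size=1)
              self.expand3 = nn.Conv2d(c_squeeze, c_expand, kernel_size=3,
                                       padding=1)
              self.act = nn.ReLU(inplace=True)

          def forward(self, x):
              s = self.act(self.squeeze(x))
              return torch.cat([self.act(self.expand1(s)),
                                self.act(self.expand3(s))], dim=1)

      x = torch.randn(1, 96, 56, 56)
      print(Fire(96, 16, 64)(x).shape)   # -> torch.Size([1, 128, 56, 56])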

  7. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    PubMed

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared the tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) using multiple genes improves phylogenetic accuracy, even when their one-gene topologies conflict with each other; (2) the impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not the number of represented taxa, are the most important criteria for a dataset's inclusion in concatenated analyses; (3) as an example, we tested the three classification models of the genus Paramecium with a multi-gene approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. The Fukushima-137Cs deposition case study: properties of the multi-model ensemble.

    PubMed

    Solazzo, E; Galmarini, S

    2015-01-01

    In this paper we analyse the properties of an eighteen-member ensemble generated by the combination of five atmospheric dispersion modelling systems and six meteorological data sets. The models were applied to the total deposition of (137)Cs following the accident at the Fukushima nuclear power plant in March 2011. The analysis is carried out with the aim of determining whether the ensemble is reliable and sufficiently diverse, and whether its accuracy and precision can be improved. Although ensemble practice is becoming more and more popular in many geophysical applications, good-practice guidelines are missing as to how models should be combined for the ensemble to offer an improvement over single-model realisations. We show that the ensemble members share large portions of bias and variance, and we use several techniques to show that subsets of models can explain the same amount of variance as the full ensemble mean, with the advantage of being poorly correlated, which saves computational resources and reduces noise (and thus improves accuracy). We further propose and discuss two methods for selecting subsets of skilful and diverse members, and show that, in the present analysis, their mean outscores the full ensemble mean in terms of both accuracy (error) and precision (variance). Copyright © 2014. Published by Elsevier Ltd.

  9. Effects of field plot size on prediction accuracy of aboveground biomass in airborne laser scanning-assisted inventories in tropical rain forests of Tanzania.

    PubMed

    Mauya, Ernest William; Hansen, Endre Hofstad; Gobakken, Terje; Bollandsås, Ole Martin; Malimbwi, Rogers Ernest; Næsset, Erik

    2015-12-01

    Airborne laser scanning (ALS) has recently emerged as a promising tool for acquiring auxiliary information to improve aboveground biomass (AGB) estimation in sample-based forest inventories. Under design-based and model-assisted inferential frameworks, the estimation relies on a model that relates the auxiliary ALS metrics to AGB estimated on ground plots. The size of the field plots has been identified as one source of model uncertainty because of so-called boundary effects, which increase with decreasing plot size. Recent research in tropical forests has aimed to quantify the boundary effects on model prediction accuracy, but evidence of the consequences for the final AGB estimates is lacking. In this study we analyzed the effect of field plot size on model prediction accuracy and its implications when used in a model-assisted inferential framework. The results showed that the prediction accuracy of the model improved as plot size increased: the adjusted R² increased from 0.35 to 0.74, while the relative root mean square error decreased from 63.6% to 29.2%. Indicators of boundary effects were identified and confirmed to have significant effects on the model residuals. Variance estimates of model-assisted mean AGB relative to the corresponding variance estimates of purely field-based AGB decreased with increasing plot size in the range from 200 to 3000 m². The ratio of the field-based variance estimates to the model-assisted variance ranged from 1.7 to 7.7. This study showed that the relative improvement in precision of AGB estimation with increasing field-plot size was greater for an ALS-assisted inventory than for a purely field-based inventory.

  10. Change in Weather Research and Forecasting (WRF) Model Accuracy with Age of Input Data from the Global Forecast System (GFS)

    DTIC Science & Technology

    2016-09-01

    This report examines how WRF model accuracy changes with the age of the GFS data used to initialize it. As expected, accuracy generally tended to decline as the large-scale data aged, though slight improvements were observed in some cases; minimum and maximum mean RMDs were tabulated for each WRF time (GFS data age) category. Only a fragmentary snippet of this report is available.

  11. An Innovative Approach to Improving the Accuracy of Delirium Assessments Using the Confusion Assessment Method for the Intensive Care Unit.

    PubMed

    DiLibero, Justin; O'Donoghue, Sharon C; DeSanto-Madeya, Susan; Felix, Janice; Ninobla, Annalyn; Woods, Allison

    2016-01-01

    Delirium occurs in up to 80% of intensive care unit (ICU) patients. Despite its prevalence in this population, delirium assessments continue to be inaccurate. In the absence of accurate delirium assessments, delirium in critically ill ICU patients will remain unrecognized and will lead to negative clinical and organizational outcomes. The goal of this quality improvement project was to facilitate sustained improvement in the accuracy of delirium assessments among all ICU patients, including those who were sedated or agitated. A pretest-posttest design was used to evaluate the effectiveness of a program to improve the accuracy of delirium screenings among patients admitted to a medical ICU or coronary care unit. Two hundred thirty-six delirium assessment audits were completed during the baseline period and 535 during the postintervention period. Compliance with performing at least 1 delirium assessment every shift was 85% at baseline and improved to 99% during the postintervention period. Baseline assessment accuracy was 70.31% among all patients and 53.49% among sedated and agitated patients. Postintervention assessment accuracy improved to 95.51% for all patients and 89.23% among sedated and agitated patients. The results of this project suggest the effectiveness of the program in improving assessment accuracy among difficult-to-assess patients. Further research is needed to demonstrate the effectiveness of this model across other critical care units, patient populations, and organizations.

  12. Efficiency improvement by navigated safety inspection involving visual clutter based on the random search model.

    PubMed

    Sun, Xinlu; Chong, Heap-Yih; Liao, Pin-Chao

    2018-06-25

    Navigated inspection seeks to improve hazard identification (HI) accuracy. Given tight inspection schedules, HI must also be efficient. However, because HI efficiency has not been quantified, navigated inspection strategies cannot be comprehensively assessed. This work aims to determine inspection efficiency in navigated safety inspection while controlling for HI accuracy. Based on a cognitive method, the random search model (RSM), an experiment was conducted to observe HI efficiency during navigation under a variety of visual clutter (VC) scenarios, using eye-tracking devices to record the search process and analyze search performance. The results show that the RSM is an appropriate instrument and that VC serves as a hazard classifier for navigated inspection, improving inspection efficiency. This suggests a new and effective solution for addressing the low accuracy and efficiency of manual inspection through navigated inspection involving VC and the RSM. It also provides insights into inspectors' safety inspection ability.
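
    For illustration, the random search model referenced above predicts that, under random (memoryless) search, the time to detect a single target is approximately exponentially distributed, so the cumulative detection probability is P(t) = 1 - exp(-t/t_mean). A minimal sketch, with synthetic detection times standing in for the eye-tracking data:

```python
# Fit the RSM parameter (mean detection time) and compare the model's
# cumulative detection probability with the empirical one.
import numpy as np

rng = np.random.default_rng(6)
detect_times = rng.exponential(scale=4.0, size=60)   # seconds (synthetic)

t_mean = detect_times.mean()                          # MLE of the RSM parameter
t = np.linspace(0.0, 20.0, 5)
p_model = 1.0 - np.exp(-t / t_mean)
p_empirical = np.array([(detect_times <= ti).mean() for ti in t])
print(np.round(p_model, 2), np.round(p_empirical, 2))
```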

  13. A Novel Adaptive H∞ Filtering Method with Delay Compensation for the Transfer Alignment of Strapdown Inertial Navigation Systems

    PubMed Central

    Lyu, Weiwei

    2017-01-01

    Transfer alignment is a key technology in strapdown inertial navigation systems (SINS) because of the demands on its rapidity and accuracy. In this paper a transfer alignment model is established, which contains the SINS error model and the measurement model. The time delay in the process of transfer alignment is analyzed, and an H∞ filtering method with delay compensation is presented. The H∞ filtering theory and the robust mechanism of the H∞ filter are then derived and analyzed in detail. In order to improve the transfer alignment accuracy in SINS with time delay, an adaptive H∞ filtering method with delay compensation is proposed. Since the robustness factor plays an important role in the filtering process and affects the filtering accuracy, the adaptive H∞ filter with delay compensation adjusts the value of the robustness factor adaptively according to the dynamic external environment. A vehicle transfer alignment experiment indicates that the adaptive H∞ filtering method with delay compensation dramatically improves both the transfer alignment accuracy and the pure inertial navigation accuracy, demonstrating the superiority of the proposed filtering method. PMID:29182592

  14. Improving IMES Localization Accuracy by Integrating Dead Reckoning Information

    PubMed Central

    Fujii, Kenjiro; Arie, Hiroaki; Wang, Wei; Kaneko, Yuto; Sakamoto, Yoshihiro; Schmitz, Alexander; Sugano, Shigeki

    2016-01-01

    Indoor positioning remains an open problem, because it is difficult to achieve satisfactory accuracy within an indoor environment using current radio-based localization technology. In this study, we investigate the use of Indoor Messaging System (IMES) radio for high-accuracy indoor positioning. A hybrid positioning method combining IMES radio strength information and pedestrian dead reckoning information is proposed in order to improve IMES localization accuracy. To characterize the carrier-to-noise ratio versus distance relation for IMES radio, the signal propagation of IMES radio is modeled and identified. Trilateration and extended Kalman filtering methods using the radio propagation model are then developed for position estimation. These methods are evaluated through robot localization and pedestrian localization experiments. The experimental results show that the proposed hybrid positioning method achieved average estimation errors of 217 and 1846 mm in robot localization and pedestrian localization, respectively. In addition, to examine why the positioning accuracy of pedestrian localization is much lower than that of robot localization, the influence of the human body on radio propagation is experimentally evaluated. The result suggests that the influence of the human body can be modeled. PMID:26828492
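
    A minimal sketch of the trilateration step described above: ranges are inverted from received signal strength with a log-distance propagation model, and the position is solved by nonlinear least squares. The transmitter power, path-loss exponent, and all measurement values are illustrative assumptions, not the identified IMES propagation parameters.

```python
# RSSI-based ranging plus least-squares trilateration (2-D, synthetic).
import numpy as np
from scipy.optimize import least_squares

def rssi_to_range(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Invert the log-distance model: RSSI = P0 - 10*n*log10(d)."""
    return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, ranges, x0=None):
    """Solve for the position minimizing range residuals."""
    anchors = np.asarray(anchors, dtype=float)
    if x0 is None:
        x0 = anchors.mean(axis=0)
    def residuals(p):
        return np.linalg.norm(anchors - p, axis=1) - ranges
    return least_squares(residuals, x0).x

anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]   # transmitter positions (m)
rssi = np.array([-52.0, -60.0, -58.0])            # measured RSSI (dBm)
print(trilaterate(anchors, rssi_to_range(rssi)))
```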

  15. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    A recently developed analytically linearized model for helicopter flight response, including rotor blade dynamics and dynamic inflow, was studied with the objective of increasing the understanding, ease of use, and accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and the flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors, in rotor force resolution, trim force and moment calculation, and rotor inertia terms, were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  16. Improving risk prediction accuracy for new soldiers in the U.S. Army by adding self-report survey data to administrative data.

    PubMed

    Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C

    2018-04-03

    High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
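
    The discrete-time survival setup described above can be sketched as a logistic model for the per-period hazard on person-period data, comparing the fit of the administrative-data risk score alone with the fit after adding the survey-based score. Column names and the synthetic data are hypothetical.

```python
# Discrete-time survival via logistic hazard: one row per soldier per
# period at risk; a likelihood-ratio test measures the improvement from
# adding the survey-based score.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "event": rng.binomial(1, 0.01, 5000),        # first occurrence this period
    "admin_score": rng.normal(size=5000),        # risk score from admin data
    "survey_score": rng.normal(size=5000),       # risk score from survey data
})

base = sm.Logit(df["event"], sm.add_constant(df[["admin_score"]])).fit(disp=0)
full = sm.Logit(df["event"],
                sm.add_constant(df[["admin_score", "survey_score"]])).fit(disp=0)

lr = 2 * (full.llf - base.llf)                   # improvement from survey data
print(f"LR statistic: {lr:.2f}, AIC base/full: {base.aic:.1f}/{full.aic:.1f}")
```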

  17. Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase

    PubMed Central

    Lu, Kelin; Zhou, Rui

    2016-01-01

    A sensor fusion methodology based on Gaussian mixture models is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, duplicate information is removed by accounting for the first-order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves tracking accuracy in re-entry-phase ballistic target tracking applications. PMID:27537883

  18. Sensor Fusion of Gaussian Mixtures for Ballistic Target Tracking in the Re-Entry Phase.

    PubMed

    Lu, Kelin; Zhou, Rui

    2016-08-15

    A sensor fusion methodology based on Gaussian mixture models is proposed for ballistic target tracking with unknown ballistic coefficients. To improve the estimation accuracy, a track-to-track fusion architecture is proposed to fuse tracks provided by the local interacting multiple model filters. During the fusion process, duplicate information is removed by accounting for the first-order redundant information between the local tracks. With extensive simulations, we show that the proposed algorithm improves tracking accuracy in re-entry-phase ballistic target tracking applications.
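
    The paper's fusion rule removes first-order redundant information between local tracks; as a hedged stand-in, the sketch below uses covariance intersection, a related conservative track-to-track fusion rule for estimates with unknown cross-correlation. It is illustrative only, not the authors' algorithm.

```python
# Covariance intersection: fuse two track estimates, choosing the weight
# that minimizes the trace of the fused covariance.
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=51):
    best = None
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    for w in np.linspace(0.01, 0.99, n_grid):
        P = np.linalg.inv(w * P1i + (1 - w) * P2i)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

x1, P1 = np.array([100.0, 9.5]), np.diag([25.0, 1.0])   # track 1: pos, vel
x2, P2 = np.array([104.0, 10.2]), np.diag([16.0, 2.0])  # track 2
x, P = covariance_intersection(x1, P1, x2, P2)
print(x, np.diag(P))
```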

  19. Improved water-level forecasting for the Northwest European Shelf and North Sea through direct modelling of tide, surge and non-linear interaction

    NASA Astrophysics Data System (ADS)

    Zijl, Firmijn; Verlaan, Martin; Gerritsen, Herman

    2013-07-01

    In real-time operational coastal forecasting systems for the northwest European shelf, the representation accuracy of tide-surge models commonly suffers from insufficiently accurate tidal representation, especially in shallow near-shore areas with complex bathymetry and geometry. Therefore, in conventional operational systems, the surge component from numerical model simulations is used, while the harmonically predicted tide, accurately known from harmonic analysis of tide gauge measurements, is added to forecast the full water-level signal at tide gauge locations. Although there are errors associated with this so-called astronomical correction (e.g., because of the assumption of linearity of tide and surge), for current operational models, astronomical correction has nevertheless been shown to increase the representation accuracy of the full water-level signal. The simulated modulation of the surge through non-linear tide-surge interaction is affected by the poor representation of the tide signal in the tide-surge model, which astronomical correction does not improve. Furthermore, astronomical correction can only be applied to locations where the astronomic tide is known through a harmonic analysis of in situ measurements at tide gauge stations. This provides a strong motivation to improve both tide and surge representation of numerical models used in forecasting. In the present paper, we propose a new-generation tide-surge model for the northwest European Shelf (DCSMv6). This is the first application on this scale in which the tidal representation is such that astronomical correction no longer improves the accuracy of the total water-level representation and where, consequently, the straightforward direct model forecasting of total water levels is better. The methodology applied to improve both tide and surge representation of the model is discussed, with emphasis on the use of satellite altimeter data and data assimilation techniques for reducing parameter uncertainty. Historic DCSMv6 model simulations are compared against shelf-wide observations for a full calendar year. For a selection of stations, these results are compared to those with astronomical correction, which confirms that the tide representation in coastal regions has sufficient accuracy, and that forecasting total water levels directly yields superior results.
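
    For reference, the harmonically predicted tide that conventional systems add to the modeled surge is a sum of cosine constituents with station-specific amplitudes and phases. A minimal sketch (constituent amplitudes and phases are illustrative, not those of any DCSMv6 station):

```python
# Harmonic tide prediction from a small set of constituents.
import numpy as np

def harmonic_tide(t_hours, constituents):
    """constituents: iterable of (amplitude_m, speed_deg_per_hr, phase_deg)."""
    t = np.asarray(t_hours, dtype=float)
    return sum(A * np.cos(np.deg2rad(w * t - g)) for A, w, g in constituents)

# M2 (28.984 deg/hr) and S2 (30.000 deg/hr) only
t = np.arange(0.0, 48.0, 0.5)
tide = harmonic_tide(t, [(1.8, 28.984, 120.0), (0.6, 30.000, 160.0)])
# conventional forecast: total water level = harmonic tide + modeled surge;
# the DCSMv6 approach instead forecasts the total water level directly
```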

  20. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over single-frequency approaches. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while exploiting signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  1. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    PubMed Central

    Juric, Matjaz B.

    2018-01-01

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over single-frequency approaches. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while exploiting signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage. PMID:29587352
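
    A minimal sketch of the model-based, multi-frequency idea: predict RSSI on a grid of candidate positions from a propagation model with per-wall attenuation, then pick the cell that best matches the observations across both frequencies. The multi-wall model and all constants are illustrative assumptions, not MFAM's calibrated model.

```python
# Grid-based, model-driven localization combining 2.4 GHz and 868 MHz
# observations (synthetic, single-room toy example).
import numpy as np

def predicted_rssi(d, f_mhz, n_walls, p0=-30.0, n_exp=2.2, wall_db=5.0):
    """Log-distance path loss with per-wall attenuation and a crude
    frequency offset relative to 2.4 GHz (all constants illustrative)."""
    f_offset = 20.0 * np.log10(f_mhz / 2400.0)
    return p0 - f_offset - 10.0 * n_exp * np.log10(np.maximum(d, 0.1)) \
           - wall_db * n_walls

def localize(grid_xy, transmitters, rssi_obs):
    """Pick the grid cell whose predicted RSSIs best match observations."""
    cost = np.zeros(len(grid_xy))
    for (tx_xy, f_mhz, walls_fn), z in zip(transmitters, rssi_obs):
        d = np.linalg.norm(grid_xy - np.asarray(tx_xy), axis=1)
        walls = np.array([walls_fn(p) for p in grid_xy])
        cost += (predicted_rssi(d, f_mhz, walls) - z) ** 2
    return grid_xy[np.argmin(cost)]

# 10 m x 10 m room on a 0.5 m grid; no interior walls in this toy example
g = np.stack(np.meshgrid(np.arange(0, 10, 0.5),
                         np.arange(0, 10, 0.5)), -1).reshape(-1, 2)
no_walls = lambda p: 0
txs = [((0, 0), 2400.0, no_walls), ((9, 0), 2400.0, no_walls),
       ((0, 9), 868.0, no_walls)]
print(localize(g, txs, rssi_obs=[-55.0, -62.0, -48.0]))
```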

  2. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    NASA Astrophysics Data System (ADS)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep-water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. In this paper we introduce the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of spectral methods. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC basis exhibits exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the SEM delivers accurate results, and the modeling accuracy improves markedly with increasing SEM order. We further compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
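
    The exponential convergence claimed for GLC polynomials can be demonstrated in a few lines: interpolate a smooth function at the Gauss-Lobatto-Chebyshev nodes x_j = cos(pi*j/N) and measure the maximum error as the order grows. This sketch uses NumPy's Chebyshev utilities and is independent of the electromagnetic modeling itself.

```python
# Spectral (exponential) convergence of interpolation at GLC nodes.
import numpy as np

def glc_nodes(n):
    """n+1 Gauss-Lobatto-Chebyshev points on [-1, 1]."""
    return np.cos(np.pi * np.arange(n + 1) / n)

f = lambda x: np.exp(x) * np.sin(5 * x)   # smooth test function
xe = np.linspace(-1, 1, 1001)
for n in (4, 8, 16, 32):
    xn = glc_nodes(n)
    c = np.polynomial.chebyshev.chebfit(xn, f(xn), n)   # interpolant
    err = np.max(np.abs(np.polynomial.chebyshev.chebval(xe, c) - f(xe)))
    print(f"order {n:3d}: max interpolation error {err:.2e}")
```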

  3. A dental vision system for accurate 3D tooth modeling.

    PubMed

    Zhang, Li; Alemzadeh, K

    2006-01-01

    This paper describes an active-vision-system-based reverse engineering approach to extract three-dimensional (3D) geometric information from dental teeth and transfer this information into Computer-Aided Design/Computer-Aided Manufacture (CAD/CAM) systems, improving the accuracy of 3D tooth models and, at the same time, the quality of the construction units, to support patient care. The vision system involves the development of a dental vision rig, edge detection, boundary tracing, and fast and accurate 3D modeling from a sequence of sliced silhouettes of physical models. The rig is designed using engineering design methods such as a concept selection matrix and a weighted objectives evaluation chart. Reconstruction results and an accuracy evaluation are presented for digitizing different teeth models.

  4. Electroencephalography Based Fusion Two-Dimensional (2D)-Convolution Neural Networks (CNN) Model for Emotion Recognition System.

    PubMed

    Kwon, Yea-Hoon; Shin, Sae-Byuk; Kim, Shin-Dug

    2018-04-30

    The purpose of this study is to improve human emotion classification accuracy using a convolutional neural network (CNN) model and to suggest an overall method to classify emotion based on multimodal data. We improved classification performance by combining electroencephalogram (EEG) and galvanic skin response (GSR) signals. GSR signals are preprocessed using the zero-crossing rate. Sufficient EEG feature extraction can be obtained through the CNN; we therefore propose a suitable CNN model for feature extraction by tuning the hyperparameters of the convolution filters. The EEG signal is preprocessed prior to convolution by a wavelet transform, considering time and frequency simultaneously. We use the Database for Emotion Analysis Using Physiological Signals (DEAP) open dataset to verify the proposed process, achieving 73.4% accuracy and showing significant performance improvement over the current best-practice models.
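
    A minimal sketch of the zero-crossing-rate preprocessing step mentioned above for GSR signals; the frame length and hop size are illustrative assumptions.

```python
# Frame-wise zero-crossing rate (ZCR) of a 1-D signal.
import numpy as np

def zero_crossing_rate(x, frame_len=128, hop=64):
    """Fraction of sign changes per frame."""
    x = np.asarray(x, dtype=float)
    frames = [x[i:i + frame_len]
              for i in range(0, len(x) - frame_len + 1, hop)]
    return np.array([np.mean(np.abs(np.diff(np.sign(f))) > 0) for f in frames])

gsr = np.random.randn(1024)          # placeholder GSR trace
print(zero_crossing_rate(gsr)[:5])
```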

  5. Remote sensing-based predictors improve distribution models of rare, early successional and broadleaf tree species in Utah

    USGS Publications Warehouse

    Zimmermann, N.E.; Edwards, T.C.; Moisen, Gretchen G.; Frescino, T.S.; Blackard, J.A.

    2007-01-01

    1. Compared to bioclimatic variables, remote sensing predictors are rarely used for predictive species modelling. When used, the predictors typically represent habitat classifications or filters rather than gradual spectral, surface or biophysical properties. Consequently, the full potential of remotely sensed predictors for modelling the spatial distribution of species remains unexplored. Here we analysed the partial contributions of remotely sensed and climatic predictor sets to explain and predict the distribution of 19 tree species in Utah. We also tested how these partial contributions were related to characteristics such as successional types or species traits. 2. We developed two spatial predictor sets of remotely sensed and topo-climatic variables to explain the distribution of tree species. We used variation partitioning techniques applied to generalized linear models to explore the combined and partial predictive powers of the two predictor sets. Non-parametric tests were used to explore the relationships between the partial model contributions of both predictor sets and species characteristics. 3. More than 60% of the variation explained by the models represented contributions by one of the two partial predictor sets alone, with topo-climatic variables outperforming the remotely sensed predictors. However, the partial models derived from only remotely sensed predictors still provided high model accuracies, indicating a significant correlation between climate and remote sensing variables. The overall accuracy of the models was high, but small sample sizes had a strong effect on cross-validated accuracies for rare species. 4. Models of early successional and broadleaf species benefited significantly more from adding remotely sensed predictors than did late-seral and needleleaf species. The core-satellite species types differed significantly with respect to overall model accuracies. Models of satellite and urban species, both with low prevalence, benefited more from the use of remotely sensed predictors than did the more frequent core species. 5. Synthesis and applications. If carefully prepared, remotely sensed variables are useful additional predictors for the spatial distribution of trees. Major improvements resulted for deciduous, early successional, satellite and rare species. The ability to improve model accuracy for species having markedly different life history strategies is a crucial step for assessing effects of global change. © 2007 The Authors.

  6. Remote sensing-based predictors improve distribution models of rare, early successional and broadleaf tree species in Utah

    PubMed Central

    ZIMMERMANN, N E; EDWARDS, T C; MOISEN, G G; FRESCINO, T S; BLACKARD, J A

    2007-01-01

    Compared to bioclimatic variables, remote sensing predictors are rarely used for predictive species modelling. When used, the predictors typically represent habitat classifications or filters rather than gradual spectral, surface or biophysical properties. Consequently, the full potential of remotely sensed predictors for modelling the spatial distribution of species remains unexplored. Here we analysed the partial contributions of remotely sensed and climatic predictor sets to explain and predict the distribution of 19 tree species in Utah. We also tested how these partial contributions were related to characteristics such as successional types or species traits. We developed two spatial predictor sets of remotely sensed and topo-climatic variables to explain the distribution of tree species. We used variation partitioning techniques applied to generalized linear models to explore the combined and partial predictive powers of the two predictor sets. Non-parametric tests were used to explore the relationships between the partial model contributions of both predictor sets and species characteristics. More than 60% of the variation explained by the models represented contributions by one of the two partial predictor sets alone, with topo-climatic variables outperforming the remotely sensed predictors. However, the partial models derived from only remotely sensed predictors still provided high model accuracies, indicating a significant correlation between climate and remote sensing variables. The overall accuracy of the models was high, but small sample sizes had a strong effect on cross-validated accuracies for rare species. Models of early successional and broadleaf species benefited significantly more from adding remotely sensed predictors than did late-seral and needleleaf species. The core-satellite species types differed significantly with respect to overall model accuracies. Models of satellite and urban species, both with low prevalence, benefited more from the use of remotely sensed predictors than did the more frequent core species. Synthesis and applications. If carefully prepared, remotely sensed variables are useful additional predictors for the spatial distribution of trees. Major improvements resulted for deciduous, early successional, satellite and rare species. The ability to improve model accuracy for species having markedly different life history strategies is a crucial step for assessing effects of global change. PMID:18642470
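
    The variation partitioning procedure described above can be sketched with deviance-based pseudo-R² values from three GLM fits: each predictor set alone and both together. The binomial family, the single stand-in predictor per set, and the synthetic data are simplifying assumptions.

```python
# Variation partitioning between two predictor sets using GLM deviances.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def pseudo_r2(fit):
    """Deviance-based pseudo-R2 for a fitted GLM."""
    return 1.0 - fit.deviance / fit.null_deviance

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "presence": rng.binomial(1, 0.3, 500),
    "climate": rng.normal(size=500),   # stands in for the topo-climatic set
    "rs": rng.normal(size=500),        # stands in for the remotely sensed set
})

fam = sm.families.Binomial()
r2_c = pseudo_r2(smf.glm("presence ~ climate", df, family=fam).fit())
r2_r = pseudo_r2(smf.glm("presence ~ rs", df, family=fam).fit())
r2_cr = pseudo_r2(smf.glm("presence ~ climate + rs", df, family=fam).fit())

print("unique climate :", r2_cr - r2_r)
print("unique remote  :", r2_cr - r2_c)
print("shared fraction:", r2_c + r2_r - r2_cr)
```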

  7. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    NASA Astrophysics Data System (ADS)

    Hersch, Roger David; Crété, Frédérique

    2004-12-01

    Dot gain differs depending on whether dots are printed alone, in superposition with one ink, or in superposition with two inks. In addition, the dot gain may also differ depending on the solid ink over which the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and the reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta and yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. In the case of offset prints, the mean difference between predictions and measurements expressed as CIE-LAB CIE-94 ΔE94 values is reduced at 100 lpi from 1.54 to 0.90 (accuracy improvement factor: 1.7) and at 150 lpi from 1.87 to 1.00 (accuracy improvement factor: 1.8). Similar improvements have been observed for a thermal transfer printer at 600 dpi, at lineatures of 50 and 75 lpi. In the case of an ink-jet printer at 600 dpi, the mean ΔE94 value is reduced at 75 lpi from 3.03 to 0.90 (accuracy improvement factor: 3.4) and at 100 lpi from 3.08 to 0.91 (accuracy improvement factor: 3.4).

  8. Improving the Yule-Nielsen modified Neugebauer model by dot surface coverages depending on the ink superposition conditions

    NASA Astrophysics Data System (ADS)

    Hersch, Roger David; Crete, Frederique

    2005-01-01

    Dot gain differs depending on whether dots are printed alone, in superposition with one ink, or in superposition with two inks. In addition, the dot gain may also differ depending on the solid ink over which the considered halftone layer is superposed. In a previous research project, we developed a model for computing the effective surface coverage of a dot according to its superposition conditions. In the present contribution, we improve the Yule-Nielsen modified Neugebauer model by integrating into it our effective dot surface coverage computation model. Calibration of the reproduction curves mapping nominal to effective surface coverages in every superposition condition is carried out by fitting effective dot surfaces which minimize the sum of square differences between the measured reflection density spectra and the reflection density spectra predicted according to the Yule-Nielsen modified Neugebauer model. In order to predict the reflection spectrum of a patch, its known nominal surface coverage values are converted into effective coverage values by weighting the contributions from different reproduction curves according to the weights of the contributing superposition conditions. We analyze the colorimetric prediction improvement brought by our extended dot surface coverage model for clustered-dot offset prints, thermal transfer prints and ink-jet prints. The color differences induced by the differences between measured reflection spectra and reflection spectra predicted according to the new dot surface estimation model are quantified on 729 different cyan, magenta and yellow patches covering the full color gamut. As a reference, these differences are also computed for the classical Yule-Nielsen modified spectral Neugebauer model incorporating a single halftone reproduction curve for each ink. Taking into account dot surface coverages according to different superposition conditions considerably improves the predictions of the Yule-Nielsen modified Neugebauer model. In the case of offset prints, the mean difference between predictions and measurements expressed as CIE-LAB CIE-94 ΔE94 values is reduced at 100 lpi from 1.54 to 0.90 (accuracy improvement factor: 1.7) and at 150 lpi from 1.87 to 1.00 (accuracy improvement factor: 1.8). Similar improvements have been observed for a thermal transfer printer at 600 dpi, at lineatures of 50 and 75 lpi. In the case of an ink-jet printer at 600 dpi, the mean ΔE94 value is reduced at 75 lpi from 3.03 to 0.90 (accuracy improvement factor: 3.4) and at 100 lpi from 3.08 to 0.91 (accuracy improvement factor: 3.4).
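
    For reference, the Yule-Nielsen modified spectral Neugebauer model predicts patch reflectance as R(λ) = (Σᵢ aᵢ Rᵢ(λ)^(1/n))ⁿ, where aᵢ are the (effective) fractional area coverages of the Neugebauer primaries and n is the Yule-Nielsen factor. A minimal single-ink sketch with illustrative spectra:

```python
# Yule-Nielsen modified spectral Neugebauer prediction (single ink).
import numpy as np

def ynsn_reflectance(coverages, primaries, n=2.0):
    """coverages: area weights a_i (sum to 1); primaries: (k, n_lambda)
    reflectance spectra of the Neugebauer primaries; n: Yule-Nielsen factor."""
    a = np.asarray(coverages, dtype=float)[:, None]
    return (a * np.asarray(primaries) ** (1.0 / n)).sum(axis=0) ** n

# cyan ink at 40% effective coverage over paper white, 31 bands (400-700 nm)
paper = np.full(31, 0.85)            # flat illustrative paper spectrum
cyan = np.linspace(0.6, 0.05, 31)    # illustrative solid-cyan spectrum
r = ynsn_reflectance([0.6, 0.4], [paper, cyan], n=2.0)
print(r[:5])
```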

  9. Use of Temperature to Improve West Nile Virus Forecasts

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; DeFelice, N.; Schneider, Z.; Little, E.; Barker, C.; Caillouet, K.; Campbell, S.; Damian, D.; Irwin, P.; Jones, H.; Townsend, J.

    2017-12-01

    Ecological and laboratory studies have demonstrated that temperature modulates West Nile virus (WNV) transmission dynamics and spillover infection to humans. Here we explore whether including temperature forcing in a model of WNV transmission improves forecast accuracy relative to a baseline model without temperature forcing. Both models are optimized using a data assimilation method and two observed data streams: mosquito infection rates and reported human WNV cases. Each coupled model-inference framework is then used to generate retrospective ensemble forecasts of WNV for 110 outbreak years from among 12 geographically diverse United States counties. The temperature-forced model improves forecast accuracy for much of the outbreak season. From the end of July until the beginning of October, a timespan during which 70% of human cases are reported, the temperature-forced model generated forecasts of the total number of human cases over the next 3 weeks, the total number of human cases over the season, the week with the highest percentage of infectious mosquitoes, and the peak percentage of infectious mosquitoes that were on average 5%, 10%, 12%, and 6% more accurate, respectively, than the baseline model. These results indicate that temperature forcing improves WNV forecast accuracy and provide further evidence that temperature influences rates of WNV transmission. The findings help build a foundation for a statistically rigorous system for real-time forecasting of seasonal WNV outbreaks and for its use as a quantitative decision support tool for public health officials and mosquito control programs.

  10. The Geoid Slope Validation Survey 2014 and GRAV-D airborne gravity enhanced geoid comparison results in Iowa

    NASA Astrophysics Data System (ADS)

    Wang, Y. M.; Becker, C.; Mader, G.; Martin, D.; Li, X.; Jiang, T.; Breidenbach, S.; Geoghegan, C.; Winester, D.; Guillaume, S.; Bürki, B.

    2017-10-01

    Three Geoid Slope Validation Surveys were planned by the National Geodetic Survey for validating geoid improvement gained by incorporating airborne gravity data collected by the "Gravity for the Redefinition of the American Vertical Datum" (GRAV-D) project in flat, medium and rough topographic areas, respectively. The first survey GSVS11 over a flat topographic area in Texas confirmed that a 1-cm differential accuracy geoid over baseline lengths between 0.4 and 320 km is achievable with GRAV-D data included (Smith et al. in J Geod 87:885-907, 2013). The second survey, Geoid Slope Validation Survey 2014 (GSVS14) took place in Iowa in an area with moderate topography but significant gravity variation. Two sets of geoidal heights were computed from GPS/leveling data and observed astrogeodetic deflections of the vertical at 204 GSVS14 official marks. They agree with each other at a ±1.2 cm level, which attests to the high quality of the GSVS14 data. In total, four geoid models were computed. Three models combined the GOCO03/5S satellite gravity model with terrestrial and GRAV-D gravity with different strategies. The fourth model, called xGEOID15A, had no airborne gravity data and served as the benchmark to quantify the contribution of GRAV-D to the geoid improvement. The comparisons show that each model agrees with the GPS/leveling geoid height by 1.5 cm in mark-by-mark comparisons. In differential comparisons, all geoid models have a predicted accuracy of 1-2 cm at baseline lengths from 1.6 to 247 km. The contribution of GRAV-D is not apparent due to a 9-cm slope in the western 50-km section of the traverse for all gravimetric geoid models, and it was determined that the slopes have been caused by a 5 mGal bias in the terrestrial gravity data. If that western 50-km section of the testing line is excluded in the comparisons, then the improvement with GRAV-D is clearly evident. In that case, 1-cm differential accuracy on baselines of any length is achieved with the GRAV-D-enhanced geoid models and exhibits a clear improvement over the geoid models without GRAV-D data. GSVS14 confirmed that the geoid differential accuracies are in the 1-2 cm range at various baseline lengths. The accuracy increases to 1 cm with GRAV-D gravity when the west 50 km line is not included. The data collected by the surveys have high accuracy and have the potential to be used for validation of other geodetic techniques, e.g., the chronometric leveling. To reach the 1-cm height differences of the GSVS data, a clock with frequency accuracy of 10⁻¹⁸ is required. Using the GSVS data, the accuracy of ellipsoidal height differences can also be estimated.

  11. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as space environment conditions and resident space objects' (RSOs') body characteristics, current orbit predictions that are grounded solely on physics-based models may fail to achieve the accuracy required for collision avoidance, and have already led to satellite collisions. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than current methods. Inspired by machine learning (ML) theory, in which models are learned from large amounts of observed data and prediction is conducted without explicitly modeling space objects and the space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on one RSO can be applied to other RSOs that share some common features.
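
    A minimal sketch of the hybrid idea: train a supervised model on past (physics prediction, observation) pairs to learn the systematic prediction error, then subtract the learned error from new physics-based predictions. The features, the synthetic error structure, and the choice of gradient boosting are illustrative assumptions, not the paper's learning setup.

```python
# Learn and correct the residual error of a physics-based predictor.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
n = 2000
features = rng.normal(size=(n, 4))      # e.g., epoch offset, geometry terms
phys_err = 0.5 * features[:, 0] ** 2 + np.sin(features[:, 1]) \
           + rng.normal(0, 0.05, n)     # systematic physics-model error

model = GradientBoostingRegressor().fit(features[:1500], phys_err[:1500])
corrected = phys_err[1500:] - model.predict(features[1500:])
print("RMS before:", np.sqrt(np.mean(phys_err[1500:] ** 2)))
print("RMS after :", np.sqrt(np.mean(corrected ** 2)))
```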

  12. An Interactive Image Segmentation Method in Hand Gesture Recognition

    PubMed Central

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., graph cut, random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iteration of the Expectation-Maximization algorithm learns the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and the sparse representation algorithm is used, demonstrating that the segmentation of hand gesture images helps to improve recognition accuracy. PMID:28134818

  13. A Global Optimization Methodology for Rocket Propulsion Applications

    NASA Technical Reports Server (NTRS)

    2001-01-01

    While the response surface (RS) method is an effective method in engineering optimization, its accuracy is often affected by the limited number of data points used for model construction. In this chapter, the issues related to the accuracy of RS approximations and possible ways of improving the RS model using appropriate treatments, including the iteratively re-weighted least squares (IRLS) technique and radial-basis neural networks, are investigated. A main interest is to identify ways to give the RS method the added capability to at least selectively improve accuracy in regions of importance. An example is to target the high-efficiency region of a fluid machinery design space so that the predictive power of the RS can be maximized when it matters most. Analytical models based on polynomials, with controlled levels of noise, are used to assess the performance of these techniques.
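
    A minimal IRLS sketch with Huber weights, illustrating the re-weighting treatment mentioned above for robust response-surface fitting; the tuning constant and the data are illustrative.

```python
# Iteratively re-weighted least squares (Huber weights) for a quadratic
# response surface with one corrupted design point.
import numpy as np

def irls_huber(X, y, c=1.345, iters=30):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale
        w = np.minimum(1.0, c * scale / (np.abs(r) + 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(5)
x = np.linspace(-1, 1, 15)
X = np.column_stack([np.ones_like(x), x, x ** 2])
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + rng.normal(0, 0.05, 15)
y[3] += 2.0                      # outlier
print(irls_huber(X, y))          # close to [1, 2, -3] despite the outlier
```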

  14. The use of absolute gravity data for the validation of Global Geopotential Models and for improving quasigeoid heights determined from satellite-only Global Geopotential Models

    NASA Astrophysics Data System (ADS)

    Godah, Walyeldeen; Krynski, Jan; Szelachowska, Malgorzata

    2018-05-01

    The objective of this paper is to demonstrate the usefulness of absolute gravity data for the validation of Global Geopotential Models (GGMs). It also aims to improve quasigeoid heights determined from satellite-only GGMs using absolute gravity data. The area of Poland, uniquely covered with a homogeneously distributed set of absolute gravity data, was selected as the study area. The gravity anomalies obtained from GGMs were validated using the corresponding anomalies determined from absolute gravity data. The spectral enhancement method was implemented to overcome the spectral inconsistency in the data being validated. The quasigeoid heights obtained from the satellite-only GGM, as well as from the satellite-only GGM in combination with absolute gravity data, were evaluated against high-accuracy GNSS/levelling data. The estimated accuracy of gravity anomalies obtained from the GGMs investigated is 1.7 mGal. Accounting for the omitted gravity signal, e.g. from degree and order 101 to 2190, satellite-only GGMs can be validated at the 1 mGal accuracy level using absolute gravity data. An improvement of up to 59% in the accuracy of quasigeoid heights obtained from the satellite-only GGM can be observed when combining the satellite-only GGM with absolute gravity data.

  15. Analysis of variance to assess statistical significance of Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes.

    PubMed

    Makeyev, Oleksandr; Joe, Cody; Lee, Colin; Besio, Walter G

    2017-07-01

    Concentric ring electrodes have shown promise in non-invasive electrophysiological measurement, demonstrating superiority to conventional disc electrodes, in particular in the accuracy of Laplacian estimation. Recently, we proposed novel variable inter-ring distances concentric ring electrodes. Analytic and finite element method modeling results for linearly increasing distances electrode configurations suggested they may decrease the truncation error, resulting in more accurate Laplacian estimates compared to currently used constant inter-ring distances configurations. This study assesses the statistical significance of the Laplacian estimation accuracy improvement due to novel variable inter-ring distances concentric ring electrodes. A full factorial analysis of variance was used with one categorical and two numerical factors: the inter-ring distances, the electrode diameter, and the number of concentric rings in the electrode. The response variables were the Relative Error and the Maximum Error of Laplacian estimation, computed using a finite element method model for each combination of levels of the three factors. Effects of the main factors and their interactions on Relative Error and Maximum Error were assessed, and the obtained results suggest that all three factors have statistically significant effects in the model, confirming the potential of using inter-ring distances as a means of improving the accuracy of Laplacian estimation.

  16. Does probabilistic modelling of linkage disequilibrium evolution improve the accuracy of QTL location in animal pedigree?

    PubMed

    Cierco-Ayrolles, Christine; Dejean, Sébastien; Legarra, Andrés; Gilbert, Hélène; Druet, Tom; Ytournel, Florence; Estivals, Delphine; Oumouhou, Naïma; Mangin, Brigitte

    2010-10-22

    Since 2001, the use of ever denser maps has made researchers aware that combining linkage and linkage disequilibrium enhances the feasibility of fine-mapping genes of interest. Various types of methods have therefore been derived to include concepts of population genetics in the analyses. One major drawback of many of these methods is their computational cost, which is very significant when many markers are considered. Recent advances in technology, such as SNP genotyping, have made it possible to deal with huge amounts of data. Thus the remaining challenge is to find accurate and efficient methods that are not too time consuming. The study reported here focuses specifically on the half-sib family animal design. Our objective was to determine whether modelling linkage disequilibrium evolution improved the mapping accuracy of a quantitative trait locus (QTL) of agricultural interest in these populations. We compared two fine-mapping methods. The first was an association analysis in which linkage disequilibrium evolution was not modelled: linkage disequilibrium was treated as a deterministic process, complete at time 0 and remaining complete in the following generations. In the second method, the modelling of the evolution of population allele frequencies was derived from a Wright-Fisher model. We simulated a wide range of scenarios adapted to animal populations and compared the two methods for each scenario. Our results indicated that the improvement produced by probabilistic modelling of linkage disequilibrium evolution was not significant. Both methods led to similar results concerning the location accuracy of quantitative trait loci, which appeared to be improved mainly by using four flanking markers instead of two. Therefore, in animal half-sib designs, modelling linkage disequilibrium evolution using a Wright-Fisher model does not significantly improve the accuracy of the QTL location compared to a simpler method assuming complete and constant linkage between the QTL and the marker alleles. Finally, given the high marker density available nowadays, the simpler method should be preferred as it gives accurate results in a reasonable computing time.
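
    The Wright-Fisher model underlying the probabilistic method is easy to sketch: each generation, allele frequencies are resampled binomially among 2N gametes, so drift accumulates at a rate set by the effective population size (parameters here are illustrative):

```python
# Wright-Fisher allele-frequency drift by binomial resampling.
import numpy as np

def wright_fisher(p0, n_diploid, generations, rng):
    p, traj = p0, [p0]
    for _ in range(generations):
        p = rng.binomial(2 * n_diploid, p) / (2 * n_diploid)
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(3)
traj = wright_fisher(p0=0.5, n_diploid=100, generations=50, rng=rng)
print(traj[-1])   # drifted allele frequency after 50 generations
```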

  17. Statewide Quality Improvement Initiative to Reduce Early Elective Deliveries and Improve Birth Registry Accuracy.

    PubMed

    Kaplan, Heather C; King, Eileen; White, Beth E; Ford, Susan E; Fuller, Sandra; Krew, Michael A; Marcotte, Michael P; Iams, Jay D; Bailit, Jennifer L; Bouchard, Jo M; Friar, Kelly; Lannon, Carole M

    2018-04-01

    To evaluate the success of a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data accuracy rapidly and at scale in Ohio. Between February 2013 and March 2014, participating hospitals were involved in a quality improvement initiative to reduce early elective deliveries at less than 39 weeks of gestation and improve birth registry data. This initiative was designed as a learning collaborative model (group webinars and a single face-to-face meeting) and included individual quality improvement coaching. It was implemented using a stepped wedge design with hospitals divided into three balanced groups (waves) participating in the initiative sequentially. Birth registry data were used to assess hospital rates of nonmedically indicated inductions at less than 39 weeks of gestation. Comparisons were made between groups participating and those not participating in the initiative at two time points. To measure birth registry accuracy, hospitals conducted monthly audits comparing birth registry data with the medical record. Associations were assessed using generalized linear repeated measures models accounting for time effects. Seventy of 72 (97%) eligible hospitals participated. Based on birth registry data, nonmedically indicated inductions at less than 39 weeks of gestation declined in all groups with implementation (wave 1: 6.2-3.2%, P<.001; wave 2: 4.2-2.5%, P=.04; wave 3: 6.8-3.7%, P=.002). When waves 1 and 2 were participating in the initiative, they saw significant decreases in rates of early elective deliveries as compared with wave 3 (control; P=.018). All waves had significant improvement in birth registry accuracy (wave 1: 80-90%, P=.017; wave 2: 80-100%, P=.002; wave 3: 75-100%, P<.001). A quality improvement initiative enabled statewide spread of change strategies to decrease early elective deliveries and improve birth registry accuracy over 14 months and could be used for rapid dissemination of other evidence-based obstetric care practices across states or hospital systems.

  18. Development of 3D Oxide Fuel Mechanics Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, B. W.; Casagranda, A.; Pitts, S. A.

    This report documents recent work to improve the accuracy and robustness of the mechanical constitutive models used in the BISON fuel performance code. These developments include migration of the fuel mechanics models to be based on the MOOSE Tensor Mechanics module, improving the robustness of the smeared cracking model, implementing a capability to limit the time step size based on material model response, and improving the robustness of the return mapping iterations used in creep and plasticity models.

  19. A study of ionospheric grid modification technique for BDS/GPS receiver

    NASA Astrophysics Data System (ADS)

    Liu, Xuelin; Li, Meina; Zhang, Lei

    2017-07-01

    For a single-frequency GPS receiver, ionospheric delay is an important factor affecting positioning performance. There are many kinds of ionospheric correction methods; common models include the Bent model, the IRI model, the Klobuchar model, and the NeQuick model. The US Global Positioning System (GPS) uses the Klobuchar coefficients transmitted in the satellite signal to correct the ionospheric delay error for a single-frequency GPS receiver, but this model can only reduce the ionospheric error by about 50% in the mid-latitudes. In the BeiDou system, the accuracy of the broadcast ionospheric correction is higher. Therefore, this paper proposes a method that uses the BeiDou grid ionospheric information to correct the GPS ionospheric delay, improving the correction for a BDS/GPS compatible positioning receiver. The principle of the ionospheric grid algorithm is introduced in detail, and the positioning accuracy of the GPS-only system and the BDS/GPS compatible positioning system is compared and analyzed using real measured data. The results show that the method can effectively improve the positioning accuracy of the receiver in a concise way.
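
    A minimal sketch of the grid correction step: the vertical ionospheric delay at the user's ionospheric pierce point is bilinearly interpolated from the four surrounding grid ionospheric points, in the style of SBAS/BDS grid corrections. The grid spacing and delay values are illustrative.

```python
# Bilinear interpolation of grid ionospheric vertical delays (GIVDs).
import numpy as np

def grid_iono_delay(lat, lon, grid_lats, grid_lons, givd):
    """Vertical delay (m) at (lat, lon) from the 4 surrounding grid points."""
    i = np.searchsorted(grid_lats, lat) - 1
    j = np.searchsorted(grid_lons, lon) - 1
    x = (lon - grid_lons[j]) / (grid_lons[j + 1] - grid_lons[j])
    y = (lat - grid_lats[i]) / (grid_lats[i + 1] - grid_lats[i])
    return ((1 - x) * (1 - y) * givd[i, j] + x * (1 - y) * givd[i, j + 1]
            + (1 - x) * y * givd[i + 1, j] + x * y * givd[i + 1, j + 1])

grid_lats = np.array([30.0, 35.0])
grid_lons = np.array([110.0, 115.0])
givd = np.array([[2.1, 2.4], [1.9, 2.2]])   # delays (m), rows indexed by lat
print(grid_iono_delay(32.5, 112.0, grid_lats, grid_lons, givd))
```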

  20. Utilizing multiple scale models to improve predictions of extra-axial hemorrhage in the immature piglet.

    PubMed

    Scott, Gregory G; Margulies, Susan S; Coats, Brittany

    2016-10-01

    Traumatic brain injury (TBI) is a leading cause of death and disability in the USA. To help understand and better predict TBI, researchers have developed complex finite element (FE) models of the head which incorporate many biological structures such as scalp, skull, meninges, brain (with gray/white matter differentiation), and vasculature. However, most models drastically simplify the membranes and substructures between the pia and arachnoid membranes. We hypothesize that substructures in the pia-arachnoid complex (PAC) contribute substantially to brain deformation following head rotation, and that including them in FE models improves the accuracy of extra-axial hemorrhage prediction. To test these hypotheses, microscale FE models of the PAC were developed to span the variability of PAC substructure anatomy and regional density. The constitutive responses of these models were then integrated into an existing macroscale FE model of the immature piglet brain to identify changes in cortical stress distribution and predictions of extra-axial hemorrhage (EAH). Incorporating regional variability of PAC substructures substantially altered the distribution of principal stress on the cortical surface of the brain compared to a uniform representation of the PAC. Simulations of 24 non-impact rapid head rotations in an immature piglet animal model resulted in improved accuracy of EAH prediction (to 94% sensitivity, 100% specificity), as well as high accuracy in regional hemorrhage prediction (82-100% sensitivity, 100% specificity). We conclude that including biofidelic PAC substructure variability in FE models of the head is essential for improved predictions of hemorrhage at the brain/skull interface.

  1. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model

    PubMed Central

    Li, Xiaoqing; Wang, Yu

    2018-01-01

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). First, the raw deformation data is pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model achieves superior prediction accuracy as it captures partial nonlinear characteristics (heteroscedasticity); the mean absolute error of the five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for early warning in bridge health monitoring systems based on sensor data. PMID:29351254

  2. Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model.

    PubMed

    Xin, Jingzhou; Zhou, Jianting; Yang, Simon X; Li, Xiaoqing; Wang, Yu

    2018-01-19

    Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average model (ARIMA), and generalized autoregressive conditional heteroskedasticity (GARCH). First, the raw deformation data is pre-processed using the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrated that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step increases; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model achieves superior prediction accuracy as it captures partial nonlinear characteristics (heteroscedasticity); the mean absolute error of the five-step prediction using the proposed model is improved by 10.12%. This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for early warning in bridge health monitoring systems based on sensor data.
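
    A hedged sketch of the three-stage pipeline described above: a scalar random-walk Kalman filter for denoising, an ARIMA fit on the filtered series, and a GARCH(1,1) fit on the ARIMA residuals. Model orders, noise parameters, and the synthetic deformation series are illustrative; the `arch` package is assumed available alongside statsmodels.

```python
# Kalman denoising -> ARIMA mean model -> GARCH residual-variance model.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from arch import arch_model

def kalman_1d(z, q=1e-4, r=1e-2):
    """Random-walk Kalman filter for a noisy scalar series."""
    x, p, out = z[0], 1.0, []
    for zk in z:
        p += q                              # predict
        k = p / (p + r)                     # gain
        x += k * (zk - x); p *= (1 - k)     # update
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(8)
deform = np.cumsum(rng.normal(0, 0.1, 500)) + rng.normal(0, 0.5, 500)

smooth = kalman_1d(deform)
arima = ARIMA(smooth, order=(1, 1, 1)).fit()
garch = arch_model(arima.resid, p=1, q=1).fit(disp="off")
forecast = arima.forecast(steps=5)          # mean; GARCH models its variance
print(forecast)
```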

  3. Observations of TOPEX/Poseidon Orbit Errors Due to Gravitational and Tidal Modeling Errors Using the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Haines, B.; Christensen, E.; Guinn, J.; Norman, R.; Marshall, J.

    1995-01-01

    Satellite altimetry must measure variations in ocean topography with cm-level accuracy. The TOPEX/Poseidon mission is designed to do this by measuring the radial component of the orbit with an accuracy of 13 cm or better RMS. Recent advances, however, have improved this accuracy by about an order of magnitude.

  4. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  5. Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion

    DOE PAGES

    He, Xu; Tuo, Rui; Jeff Wu, C. F.

    2017-01-31

    Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.

  6. Proposal for a New Predictive Model of Short-Term Mortality After Living Donor Liver Transplantation due to Acute Liver Failure.

    PubMed

    Chung, Hyun Sik; Lee, Yu Jung; Jo, Yun Sung

    2017-02-21

    BACKGROUND Acute liver failure (ALF) is known to be a rapidly progressive and fatal disease. Various models which could help to estimate the post-transplant outcome for ALF have been developed; however, none has been proved to be a definitively accurate predictive model. We propose a new predictive model and investigate which model has the highest predictive accuracy for the short-term outcome in patients who underwent living donor liver transplantation (LDLT) due to ALF. MATERIAL AND METHODS Data from a total of 88 patients were collected retrospectively. King's College Hospital criteria (KCH), Child-Turcotte-Pugh (CTP) classification, and model for end-stage liver disease (MELD) score were calculated. Univariate analysis was performed, and then multivariate statistical adjustment for preoperative variables of ALF prognosis was performed. A new predictive model was developed, called the MELD conjugated serum phosphorus model (MELD-p). The individual diagnostic accuracy and cut-off value of models in predicting 3-month post-transplant mortality were evaluated using the area under the receiver operating characteristic curve (AUC). The difference in AUC between MELD-p and the other models was analyzed. The diagnostic improvement in MELD-p was assessed using the net reclassification improvement (NRI) and integrated discrimination improvement (IDI). RESULTS The MELD-p and MELD scores had high predictive accuracy (AUC >0.9). KCH and serum phosphorus had an acceptable predictive ability (AUC >0.7). The CTP classification failed to show discriminative accuracy in predicting 3-month post-transplant mortality. The difference in AUC between MELD-p and the other models was statistically significant for CTP and KCH. The cut-off value of MELD-p was 3.98 for predicting 3-month post-transplant mortality. The NRI was 9.9% and the IDI was 2.9%. CONCLUSIONS The MELD-p score can predict 3-month post-transplant mortality better than other scoring systems after LDLT due to ALF. The recommended cut-off value of MELD-p is 3.98.
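
    As an illustration of the evaluation step described above, the sketch below computes an AUC and a Youden-index cut-off for a composite score with scikit-learn. The score combination shown (a weighted MELD term plus log serum phosphorus) and all data are hypothetical stand-ins; the abstract does not give the exact MELD-p formula.

```python
# Hedged sketch: AUC and Youden-index cut-off for a composite score.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(1)
n = 88
meld = rng.normal(25, 8, n)
phosphorus = rng.normal(4.0, 1.5, n)            # mg/dL, hypothetical values
died_3mo = (rng.random(n) < 1 / (1 + np.exp(-(meld - 28) / 4))).astype(int)

w = 0.1                                          # hypothetical weight, not the paper's
meld_p = w * meld + np.log(phosphorus)           # illustrative composite only

auc = roc_auc_score(died_3mo, meld_p)
fpr, tpr, thresholds = roc_curve(died_3mo, meld_p)
cutoff = thresholds[np.argmax(tpr - fpr)]        # Youden's J statistic
print(f"AUC = {auc:.2f}, cut-off = {cutoff:.2f}")
```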

  7. Prediction of Driver’s Intention of Lane Change by Augmenting Sensor Information Using Machine Learning Techniques

    PubMed Central

    Kim, Il-Hwan; Bong, Jae-Hwan; Park, Jooyoung; Park, Shinsuk

    2017-01-01

    Driver assistance systems have become a major safety feature of modern passenger vehicles. The advanced driver assistance system (ADAS) is one of the active safety systems to improve the vehicle control performance and, thus, the safety of the driver and the passengers. To use the ADAS for lane change control, rapid and correct detection of the driver’s intention is essential. This study proposes a novel preprocessing algorithm for the ADAS to improve the accuracy in classifying the driver’s intention for lane change by augmenting basic measurements from conventional on-board sensors. The information on the vehicle states and the road surface condition is augmented by using artificial neural network (ANN) models, and the augmented information is fed to a support vector machine (SVM) to detect the driver’s intention with high accuracy. The feasibility of the developed algorithm was tested through driving simulator experiments. The results show that the classification accuracy for the driver’s intention can be improved by providing an SVM model with sufficient driving information augmented by using ANN models of vehicle dynamics. PMID:28604582
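
    A hedged sketch of the two-stage idea above: an ANN augments raw sensor features with an estimate of an unmeasured quantity (here a road-friction proxy), and an SVM classifies lane-change intention from the augmented feature vector. All data, the friction target, and the network sizes are synthetic stand-ins, not the study's simulator data.

```python
# Hedged sketch: ANN feature augmentation feeding an SVM classifier.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X_raw = rng.normal(size=(2000, 6))              # e.g., steering, yaw rate, speed, ...
friction = X_raw @ rng.normal(size=6) + rng.normal(0, 0.1, 2000)
intent = (X_raw[:, 0] + 0.5 * friction + rng.normal(0, 0.5, 2000) > 0).astype(int)

X_tr, X_te, f_tr, f_te, y_tr, y_te = train_test_split(
    X_raw, friction, intent, test_size=0.3, random_state=0)

# Stage 1: ANN learns to estimate the unmeasured quantity from raw sensors.
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, f_tr)

# Stage 2: append the ANN estimate as an extra feature column for the SVM.
X_tr_aug = np.column_stack([X_tr, ann.predict(X_tr)])
X_te_aug = np.column_stack([X_te, ann.predict(X_te)])

svm = SVC(kernel="rbf", C=1.0).fit(X_tr_aug, y_tr)
print("accuracy:", svm.score(X_te_aug, y_te))
```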

  8. An improved model for the calculation of CO2 solubility in aqueous solutions containing Na+, K+, Ca2+, Mg2+, Cl-, and SO42-

    USGS Publications Warehouse

    Duan, Zhenhao; Sun, R.; Zhu, Chen; Chou, I.-Ming

    2006-01-01

    An improved model is presented for the calculation of the solubility of carbon dioxide in aqueous solutions containing Na+, K+, Ca2+, Mg2+, Cl-, and SO42- in a wide temperature-pressure-ionic strength range (from 273 to 533 K, from 0 to 2000 bar, and from 0 to 4.5 molality of salts) with experimental accuracy. The improvements over the previous model [Duan, Z. and Sun, R., 2003. An improved model calculating CO2 solubility in pure water and aqueous NaCl solutions from 273 to 533 K and from 0 to 2000 bar. Chemical Geology, 193: 257-271] include: (1) by developing a non-iterative equation to replace the original equation of state in the calculation of CO2 fugacity coefficients, the new model is at least twenty times computationally faster and can be easily adapted to numerical reaction-flow simulators for such applications as CO2 sequestration; and (2) by fitting to new solubility data, the new model reduces the uncertainty below 288 K from about 6% to about 3%, while retaining the high accuracy of the original model above 288 K. We comprehensively evaluate all experimental CO2 solubility data. Compared with these data, this model not only reproduces all the reliable data used for the parameterization but also predicts the data that were not used in the parameterization. In order to facilitate the application to CO2 sequestration, we also predicted CO2 solubility in seawater at two-phase coexistence (vapor-liquid or liquid-liquid) and at three-phase coexistence (CO2 hydrate-liquid water-vapor CO2 [or liquid CO2]). The improved model is programmed and can be downloaded from the website http://www.geochem-model.org/programs.htm. © 2005 Elsevier B.V. All rights reserved.

  9. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    PubMed Central

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-01-01

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates because, per the spatial Nyquist sampling theorem, the large sparse array is undersampled. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are directly used for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fused result of each radar’s estimation is fed to the extended Kalman filter (EKF) to perform the first filtering. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy is improved dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058

  10. Vascular input function correction of inflow enhancement for improved pharmacokinetic modeling of liver DCE-MRI.

    PubMed

    Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B

    2018-06-01

    To propose a simple method to correct vascular input function (VIF) due to inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve accuracy of VIF estimation and pharmacokinetic fitting. In animal study results, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with changes expected after PVE, whereas the perfusion estimates derived by uncorrected VIFs showed no significant changes. The proposed correction method improves accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.

  11. Improving default risk prediction using Bayesian model uncertainty techniques.

    PubMed

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
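
    To make the Bayesian idea above concrete, the sketch below reduces it to a Beta-Binomial update: rating-agency estimates are weighted by their historical accuracy to form a prior on the default probability, which is then updated with observed defaults. This is a simplification for intuition only, not the paper's full framework; all numbers are hypothetical.

```python
# Hedged illustration: accuracy-weighted prior + Beta-Binomial update.
import numpy as np

agency_pd = np.array([0.020, 0.035])      # agencies' probability-of-default estimates
accuracy_weight = np.array([0.7, 0.3])    # from past performance (hypothetical)

prior_mean = float(agency_pd @ accuracy_weight)
prior_strength = 200.0                    # pseudo-observations encoding prior confidence
alpha0 = prior_mean * prior_strength
beta0 = (1 - prior_mean) * prior_strength

defaults, exposures = 6, 180              # observed portfolio outcomes (hypothetical)
alpha, beta = alpha0 + defaults, beta0 + (exposures - defaults)

posterior_mean = alpha / (alpha + beta)   # updated default-probability estimate
print(f"posterior default probability: {posterior_mean:.4f}")
```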

  12. Surface accuracy measurement sensor test on a 50-meter antenna surface model

    NASA Technical Reports Server (NTRS)

    Spiers, R. B.; Burcher, E. E.; Stump, C. W.; Saunders, C. G.; Brooks, G. F.

    1984-01-01

    The Surface Accuracy Measurement Sensor (SAMS) is a telescope with a focal-plane photoelectric detector that senses the lateral position of light source targets in its field of view. After extensive laboratory testing, the engineering breadboard sensor system was installed and tested on a 30 degree segment of a 50-meter diameter, mesh surface, antenna model. Test results correlated well with the laboratory tests and indicated accuracies of approximately 0.59 arc seconds at a range of 21 meters. Test results are presented and recommendations given for sensor improvements.

  13. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low evaluation accuracy when assessing the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on the compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: once the compatibility matrix analysis achieves the consistency requirements, if differences remain between the subjective and objective weights, both proportions are moderately adjusted; on this basis, the fuzzy evaluation matrix is then used for performance evaluation. The simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.

  14. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model is key to replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique used to solve GCSI problems, especially GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
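
    The surrogate-model idea above lends itself to a compact example: fit a cheap regression model to input/output samples of an expensive simulator, with hyperparameter optimization, and then query the surrogate instead of the simulator. The sketch below uses scikit-learn's SVR on a toy stand-in function; the KELM variant and the real DNAPL transport simulator are not reproduced here.

```python
# Hedged sketch: SVR surrogate for an expensive simulation model.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(200, 4))           # e.g., source location / strength inputs
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)  # toy "simulator"

# Parameter optimization matters for surrogate accuracy, per the study.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [0.1, 1, 10], "epsilon": [0.01, 0.1]},
    cv=5,
)
grid.fit(X, y)
surrogate = grid.best_estimator_

X_new = rng.uniform(0, 1, size=(5, 4))
print(surrogate.predict(X_new))                # cheap stand-in for simulator runs
```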

  15. Importance of Personalized Health-Care Models: A Case Study in Activity Recognition.

    PubMed

    Zdravevski, Eftim; Lameski, Petre; Trajkovik, Vladimir; Pombo, Nuno; Garcia, Nuno

    2018-01-01

    Novel information and communication technologies create possibilities to change the future of health care. Ambient Assisted Living (AAL) is seen as a promising supplement to the current care models. The main goal of AAL solutions is to apply ambient intelligence technologies to enable elderly people to continue to live in their preferred environments. Applying trained models from health data is challenging because the personalized environments could differ significantly from the ones that provided the training data. This paper investigates the effects of personalized models on activity recognition accuracy using a single accelerometer, compared to models built on the general population. In addition, we propose a collaborative filtering based approach which provides a balance between fully personalized models and generic models. The results show that the accuracy could be improved to 95% with fully personalized models, and up to 91.6% with collaborative filtering based models, which is significantly better than common models that exhibit an accuracy of 85.1%. The collaborative filtering approach seems to provide highly personalized models with substantial accuracy, while overcoming the cold start problem that is common for fully personalized models.

  16. Use of in Vitro HTS-Derived Concentration–Response Data as Biological Descriptors Improves the Accuracy of QSAR Models of in Vivo Toxicity

    PubMed Central

    Sedykh, Alexander; Zhu, Hao; Tang, Hao; Zhang, Liying; Richard, Ann; Rusyn, Ivan; Tropsha, Alexander

    2011-01-01

    Background Quantitative high-throughput screening (qHTS) assays are increasingly being used to inform chemical hazard identification. Hundreds of chemicals have been tested in dozens of cell lines across extensive concentration ranges by the National Toxicology Program in collaboration with the National Institutes of Health Chemical Genomics Center. Objectives Our goal was to test a hypothesis that dose–response data points of the qHTS assays can serve as biological descriptors of assayed chemicals and, when combined with conventional chemical descriptors, improve the accuracy of quantitative structure–activity relationship (QSAR) models applied to prediction of in vivo toxicity end points. Methods We obtained cell viability qHTS concentration–response data for 1,408 substances assayed in 13 cell lines from PubChem; for a subset of these compounds, rodent acute toxicity half-maximal lethal dose (LD50) data were also available. We used the k nearest neighbor classification and random forest QSAR methods to model LD50 data using chemical descriptors either alone (conventional models) or combined with biological descriptors derived from the concentration–response qHTS data (hybrid models). Critical to our approach was the use of a novel noise-filtering algorithm to treat qHTS data. Results Both the external classification accuracy and coverage (i.e., fraction of compounds in the external set that fall within the applicability domain) of the hybrid QSAR models were superior to conventional models. Conclusions Concentration–response qHTS data may serve as informative biological descriptors of molecules that, when combined with conventional chemical descriptors, may considerably improve the accuracy and utility of computational approaches for predicting in vivo animal toxicity end points. PMID:20980217

  17. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds.

    PubMed

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M; Bloom, Peter H; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classifications, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.
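
    The KNN approach in the record above can be sketched with a simple feature pipeline: summarize tri-axial accelerometry in fixed windows, then classify each window. The features, window length, value of k, and synthetic data below are illustrative assumptions; the study's labeled eagle data is not reproduced here.

```python
# Hedged sketch: KNN classification of windowed accelerometry features.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc, fs=140, win_s=2.0):
    """Per-window mean, std, and a dynamic-body-acceleration-like statistic."""
    n = int(fs * win_s)
    wins = acc[: len(acc) // n * n].reshape(-1, n, 3)
    return np.column_stack([
        wins.mean(axis=1),                       # static (postural) component per axis
        wins.std(axis=1),                        # variability per axis
        np.abs(wins - wins.mean(axis=1, keepdims=True)).mean(axis=(1, 2)),
    ])

rng = np.random.default_rng(4)
acc = rng.normal(0, 1, size=(140 * 600, 3))      # synthetic 10-min record at 140 Hz
X = window_features(acc)
y = rng.integers(0, 3, size=len(X))              # flap / soar / sit (synthetic labels)

knn = KNeighborsClassifier(n_neighbors=5)
print(cross_val_score(knn, X, y, cv=5).mean())   # chance-level here; real labels needed
```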

  18. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    PubMed Central

    Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classifications, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data. PMID:28403159

  19. Improved supervised classification of accelerometry data to distinguish behaviors of soaring birds

    USGS Publications Warehouse

    Sur, Maitreyi; Suffredini, Tony; Wessells, Stephen M.; Bloom, Peter H.; Lanzone, Michael J.; Blackshire, Sheldon; Sridhar, Srisarguru; Katzner, Todd

    2017-01-01

    Soaring birds can balance the energetic costs of movement by switching between flapping, soaring and gliding flight. Accelerometers can allow quantification of flight behavior and thus a context to interpret these energetic costs. However, models to interpret accelerometry data are still being developed, rarely trained with supervised datasets, and difficult to apply. We collected accelerometry data at 140 Hz from a trained golden eagle (Aquila chrysaetos) whose flight we recorded with video that we used to characterize behavior. We applied two forms of supervised classifications, random forest (RF) models and K-nearest neighbor (KNN) models. The KNN model was substantially easier to implement than the RF approach but both were highly accurate in classifying basic behaviors such as flapping (85.5% and 83.6% accurate, respectively), soaring (92.8% and 87.6%) and sitting (84.1% and 88.9%) with overall accuracies of 86.6% and 92.3% respectively. More detailed classification schemes, with specific behaviors such as banking and straight flights, were well classified only by the KNN model (91.24% accurate; RF = 61.64% accurate). The RF model maintained its classification accuracy for basic behaviors at sampling frequencies as low as 10 Hz, the KNN model at sampling frequencies as low as 20 Hz. Classification of accelerometer data collected from free-ranging birds demonstrated a strong dependence of predicted behavior on the type of classification model used. Our analyses demonstrate the consequence of different approaches to classification of accelerometry data, the potential to optimize classification algorithms with validated flight behaviors to improve classification accuracy, ideal sampling frequencies for different classification algorithms, and a number of ways to improve commonly used analytical techniques and best practices for classification of accelerometry data.

  20. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; and (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear models was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
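
    A bias-corrected, transformed-linear rating curve of the kind evaluated above can be sketched in a few lines: fit log(C) = a + b log(Q) by least squares, then correct the retransformation bias. Duan's smearing estimator is used below as one common bias-correction choice; the report may evaluate a different correction, and the data here are synthetic.

```python
# Hedged sketch: log-log rating curve with smearing bias correction.
import numpy as np

rng = np.random.default_rng(5)
Q = rng.lognormal(3, 1, 300)                                  # discharge
C = np.exp(0.5 + 1.2 * np.log(Q) + rng.normal(0, 0.4, 300))   # sediment concentration

b, a = np.polyfit(np.log(Q), np.log(C), 1)   # transformed-linear least squares
resid = np.log(C) - (a + b * np.log(Q))
smear = np.exp(resid).mean()                 # Duan smearing factor (bias correction)

def predict_conc(q):
    """Bias-corrected back-transform of the log-log rating curve."""
    return smear * np.exp(a + b * np.log(q))

print(predict_conc(np.array([10.0, 100.0, 1000.0])))
```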

  1. EUV Irradiance Inputs to Thermospheric Density Models: Open Issues and Path Forward

    NASA Astrophysics Data System (ADS)

    Vourlidas, A.; Bruinsma, S.

    2018-01-01

    One of the objectives of the NASA Living With a Star Institute on "Nowcasting of Atmospheric Drag for low Earth orbit (LEO) Spacecraft" was to investigate whether and how to increase the accuracy of atmospheric drag models by improving the quality of the solar forcing inputs, namely, extreme ultraviolet (EUV) irradiance information. In this focused review, we examine the status of and issues with EUV measurements and proxies, discuss recent promising developments, and suggest a number of ways to improve the reliability, availability, and forecast accuracy of EUV measurements in the next solar cycle.

  2. Improved Range Estimation Model for Three-Dimensional (3D) Range Gated Reconstruction

    PubMed Central

    Chua, Sing Yee; Guo, Ningqun; Tan, Ching Seong; Wang, Xin

    2017-01-01

    Accuracy is an important measure of system performance and remains a challenge in 3D range gated reconstruction despite the advancement in laser and sensor technology. The weighted average model that is commonly used for range estimation is heavily influenced by the intensity variation due to various factors. Accuracy improvement in terms of range estimation is therefore important to fully optimise the system performance. In this paper, a 3D range gated reconstruction model is derived based on the operating principles of range gated imaging and time slicing reconstruction, the fundamentals of radiant energy, Laser Detection And Ranging (LADAR), and the Bidirectional Reflection Distribution Function (BRDF). Accordingly, a new range estimation model is proposed to alleviate the effects induced by distance, target reflection, and range distortion. From the experimental results, the proposed model outperforms the conventional weighted average model, improving range estimation for better 3D reconstruction. The outcome demonstrated is of interest to various laser ranging applications and can be a reference for future works. PMID:28872589

  3. Dispersal and extrapolation on the accuracy of temporal predictions from distribution models for the Darwin's frog.

    PubMed

    Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio

    2017-07-01

    Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and True Skill Statistics (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes we compared the predictive accuracy of dispersal-constrained models with SDMs without dispersal limits; and to assess the effects of model extrapolation on the predictive accuracy of SDMs, we compared this between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, which is the most used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in the predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transference of SDMs and reduce the uncertainties of extinction risk assessments from global change; and (2) as geographical areas subjected to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, dispersal processes should be explicitly incorporated, and environmental extrapolation explicitly reported, to reduce uncertainties in temporal predictions of SDMs. Doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.

  4. Modeling Soil Organic Carbon at Regional Scale by Combining Multi-Spectral Images with Laboratory Spectra.

    PubMed

    Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog

    2015-01-01

    There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l'Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the 'upland model' was able to more accurately predict SOC compared with the 'upland & wetland model'. However, the separately calibrated 'upland and wetland model' did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis-NIR spectroscopy adds critical information that significantly improves the prediction accuracy of SOC compared to using RS data alone. We recommend the incorporation of laboratory spectra with RS data and other environmental data to improve soil spatial modeling and digital soil mapping (DSM).

  5. Improving satellite-driven PM2.5 models with Moderate Resolution Imaging Spectroradiometer fire counts in the southeastern U.S.

    PubMed

    Hu, Xuefei; Waller, Lance A; Lyapustin, Alexei; Wang, Yujie; Liu, Yang

    2014-10-16

    Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R2 of 0.69, a mean prediction error of 2.75 µg/m3, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m3, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m3, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events.
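
    The buffer-zone predictor described above is straightforward to compute: count fire detections within a 75 km radius of each monitor using the haversine great-circle distance. The sketch below uses synthetic coordinates; the study's actual MODIS fire product and monitor locations are not reproduced here.

```python
# Hedged sketch: fire counts within a 75 km buffer of each PM2.5 monitor.
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

rng = np.random.default_rng(6)
monitors = rng.uniform([30, -85], [35, -80], size=(20, 2))   # (lat, lon), synthetic
fires = rng.uniform([30, -85], [35, -80], size=(500, 2))     # fire detections, synthetic

fire_counts = np.array([
    (haversine_km(m_lat, m_lon, fires[:, 0], fires[:, 1]) <= 75.0).sum()
    for m_lat, m_lon in monitors
])
print(fire_counts)   # one predictor value per monitoring site
```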

  6. Improving satellite-driven PM2.5 models with Moderate Resolution Imaging Spectroradiometer fire counts in the southeastern U.S

    PubMed Central

    Hu, Xuefei; Waller, Lance A.; Lyapustin, Alexei; Wang, Yujie; Liu, Yang

    2017-01-01

    Multiple studies have developed surface PM2.5 (particle size less than 2.5 µm in aerodynamic diameter) prediction models using satellite-derived aerosol optical depth as the primary predictor and meteorological and land use variables as secondary variables. To our knowledge, satellite-retrieved fire information has not been used for PM2.5 concentration prediction in statistical models. Fire data could be a useful predictor since fires are significant contributors of PM2.5. In this paper, we examined whether remotely sensed fire count data could improve PM2.5 prediction accuracy in the southeastern U.S. in a spatial statistical model setting. A sensitivity analysis showed that when the radius of the buffer zone centered at each PM2.5 monitoring site reached 75 km, fire count data generally have the greatest predictive power of PM2.5 across the models considered. Cross validation (CV) generated an R2 of 0.69, a mean prediction error of 2.75 µg/m3, and root-mean-square prediction errors (RMSPEs) of 4.29 µg/m3, indicating a good fit between the dependent and predictor variables. A comparison showed that the prediction accuracy was improved more substantially from the nonfire model to the fire model at sites with higher fire counts. With increasing fire counts, CV RMSPE decreased by values up to 1.5 µg/m3, exhibiting a maximum improvement of 13.4% in prediction accuracy. Fire count data were shown to have better performance in southern Georgia and in the spring season due to higher fire occurrence. Our findings indicate that fire count data provide a measurable improvement in PM2.5 concentration estimation, especially in areas and seasons prone to fire events. PMID:28967648

  7. Calibration of PMIS pavement performance prediction models.

    DOT National Transportation Integrated Search

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patterns...

  8. Vibration Noise Modeling for Measurement While Drilling System Based on FOGs

    PubMed Central

    Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu

    2017-01-01

    Aiming to improve the long-term survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors’ noise as the system noise model. The system noise is conventionally modeled as white Gaussian noise. However, because of the vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both white noise and colored noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%. PMID:29039815
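
    The dynamic Allan variance named above is, in essence, the ordinary Allan variance computed in a sliding window over the gyro record, so the noise description can change over time. The sketch below implements a simple non-overlapping version; window and averaging parameters are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: dynamic Allan variance (DAVAR) via sliding windows.
import numpy as np

def allan_var(y, m):
    """Non-overlapping Allan variance for cluster size m samples."""
    n_clusters = len(y) // m
    means = y[: n_clusters * m].reshape(n_clusters, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

def davar(y, window, step, m):
    """Allan variance of cluster size m, evaluated in sliding windows."""
    return np.array([allan_var(y[s : s + window], m)
                     for s in range(0, len(y) - window + 1, step)])

rng = np.random.default_rng(7)
gyro = rng.normal(0, 1e-3, 200_000)
gyro[100_000:] += rng.normal(0, 5e-3, 100_000)   # vibration onset: noise level grows

print(davar(gyro, window=20_000, step=5_000, m=100))  # rises after the onset
```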

  9. Vibration Noise Modeling for Measurement While Drilling System Based on FOGs.

    PubMed

    Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu

    2017-10-17

    Aiming to improve the long-term survey accuracy of Measurement While Drilling (MWD) based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs to model the inertial sensors' noise as the system noise model. The system noise is conventionally modeled as white Gaussian noise. However, because of the vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both white noise and colored noise. With this new noise model, the KF for the MWD was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%.

  10. LASE measurements of water vapor, aerosol, and cloud distribution in hurricane environments and their role in hurricane development

    NASA Technical Reports Server (NTRS)

    Mahoney, M. J.; Ismail, S.; Browell, E. V.; Ferrare, R. A.; Kooi, S. A.; Brasseur, L.; Notari, A.; Petway, L.; Brackett, V.; Clayton, M.; et al.

    2002-01-01

    LASE measures high-resolution moisture, aerosol, and cloud distributions not available from conventional observations. LASE water vapor measurements were compared with dropsondes to evaluate their accuracy. LASE water vapor measurements were also used, with Florida State University models, to assess the capability of hurricane models to improve their track accuracy by 100 km on 3-day forecasts.

  11. EVALUATING RISK-PREDICTION MODELS USING DATA FROM ELECTRONIC HEALTH RECORDS.

    PubMed

    Wang, L E; Shaw, Pamela A; Mathelier, Hansie M; Kimmel, Stephen E; French, Benjamin

    2016-03-01

    The availability of data from electronic health records facilitates the development and evaluation of risk-prediction models, but estimation of prediction accuracy could be limited by outcome misclassification, which can arise if events are not captured. We evaluate the robustness of prediction accuracy summaries, obtained from receiver operating characteristic curves and risk-reclassification methods, if events are not captured (i.e., "false negatives"). We derive estimators for sensitivity and specificity if misclassification is independent of marker values. In simulation studies, we quantify the potential for bias in prediction accuracy summaries if misclassification depends on marker values. We compare the accuracy of alternative prognostic models for 30-day all-cause hospital readmission among 4548 patients discharged from the University of Pennsylvania Health System with a primary diagnosis of heart failure. Simulation studies indicate that if misclassification depends on marker values, then the estimated accuracy improvement is also biased, but the direction of the bias depends on the direction of the association between markers and the probability of misclassification. In our application, 29% of the 1143 readmitted patients were readmitted to a hospital elsewhere in Pennsylvania, which reduced prediction accuracy. Outcome misclassification can result in erroneous conclusions regarding the accuracy of risk-prediction models.

  12. A New Deep Learning Model for Fault Diagnosis with Good Anti-Noise and Domain Adaptation Ability on Raw Vibration Signals

    PubMed Central

    Zhang, Wei; Peng, Gaoliang; Li, Chuanhao; Chen, Yuanhang; Zhang, Zhujun

    2017-01-01

    Intelligent fault diagnosis techniques have replaced time-consuming and unreliable human analysis, increasing the efficiency of fault diagnosis. Deep learning models can improve the accuracy of intelligent fault diagnosis with the help of their multilayer nonlinear mapping ability. This paper proposes a novel method named Deep Convolutional Neural Networks with Wide First-layer Kernels (WDCNN). The proposed method uses raw vibration signals as input (data augmentation is used to generate more inputs), and uses wide kernels in the first convolutional layer for extracting features and suppressing high frequency noise. Small convolutional kernels in the succeeding layers are used for multilayer nonlinear mapping. AdaBN is implemented to improve the domain adaptation ability of the model. The proposed model addresses the problem that the accuracy of CNNs applied to fault diagnosis is currently not very high. WDCNN can not only achieve 100% classification accuracy on normal signals, but also outperform the state-of-the-art DNN model which is based on frequency features under different working load and noisy environment conditions. PMID:28241451
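
    The wide-first-kernel idea above can be sketched in PyTorch: a wide, strided first 1-D convolution suppresses high-frequency noise, followed by small-kernel layers for the nonlinear mapping. The layer sizes below are illustrative assumptions, not the paper's exact architecture, and AdaBN is omitted.

```python
# Hedged sketch: a WDCNN-style 1-D CNN with a wide first-layer kernel.
import torch
import torch.nn as nn

class WDCNNSketch(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=16, padding=24),  # wide first kernel
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),              # small kernels
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(4),
        )
        self.classifier = nn.Linear(32 * 4, n_classes)

    def forward(self, x):                        # x: (batch, 1, signal_length)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = WDCNNSketch()
out = model(torch.randn(8, 1, 2048))             # raw vibration segments
print(out.shape)                                 # torch.Size([8, 10])
```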

  13. A reduced-order nonlinear sliding mode observer for vehicle slip angle and tyre forces

    NASA Astrophysics Data System (ADS)

    Chen, Yuhang; Ji, Yunfeng; Guo, Konghui

    2014-12-01

    In this paper, a reduced-order sliding mode observer (RO-SMO) is developed for vehicle state estimation. Several improvements are achieved in this paper. First, the reference model accuracy is improved by considering vehicle load transfers and using a precise nonlinear tyre model 'UniTire'. Second, without the reference model accuracy degraded, the computing burden of the state observer is decreased by a reduced-order approach. Third, nonlinear system damping is integrated into the SMO to speed convergence and reduce chattering. The proposed RO-SMO is evaluated through simulation and experiments based on an in-wheel motor electric vehicle. The results show that the proposed observer accurately predicts the vehicle states.

  14. Artificial dissipation and central difference schemes for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1987-01-01

    An artificial dissipation model, including boundary treatment, that is employed in many central difference schemes for solving the Euler and Navier-Stokes equations is discussed. Modifications of this model such as the eigenvalue scaling suggested by upwind differencing are examined. Multistage time stepping schemes with and without a multigrid method are used to investigate the effects of changes in the dissipation model on accuracy and convergence. Improved accuracy for inviscid and viscous airfoil flow is obtained with the modified eigenvalue scaling. Slower convergence rates are experienced with the multigrid method using such scaling. The rate of convergence is improved by applying a dissipation scaling function that depends on mesh cell aspect ratio.

  15. Region-confined restoration method for motion-blurred star image of the star sensor under dynamic conditions.

    PubMed

    Ma, Liheng; Bernelli-Zazzera, Franco; Jiang, Guangwen; Wang, Xingshu; Huang, Zongsheng; Qin, Shiqiao

    2016-06-10

    Under dynamic conditions, the centroiding accuracy of the motion-blurred star image decreases and the number of identified stars reduces, which leads to the degradation of the attitude accuracy of the star sensor. To improve the attitude accuracy, a region-confined restoration method, which concentrates on the noise removal and signal to noise ratio (SNR) improvement of the motion-blurred star images, is proposed for the star sensor under dynamic conditions. A multi-seed-region growing technique with the kinematic recursive model for star image motion is given to find the star image regions and to remove the noise. Subsequently, a restoration strategy is employed in the extracted regions, taking the time consumption and SNR improvement into consideration simultaneously. Simulation results indicate that the region-confined restoration method is effective in removing noise and improving the centroiding accuracy. The identification rate and the average number of identified stars in the experiments verify the advantages of the region-confined restoration method.

  16. Calibration of Reduced Dynamic Models of Power Systems using Phasor Measurement Unit (PMU) Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Lu, Shuai; Singh, Ruchi

    2011-09-23

    Accuracy of a power system dynamic model is essential to the secure and efficient operation of the system. Lower confidence on model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, identification algorithms have been developed to calibrate parameters of individual components using measurement data from staged tests. To facilitate online dynamic studies for large power system interconnections, this paper proposes a model reduction and calibration approach using phasor measurement unit (PMU) data. First, a model reduction method is used to reduce the number of dynamic components. Then, a calibration algorithm is developed to estimate parameters of the reduced model. This approach will help to maintain an accurate dynamic model suitable for online dynamic studies. The performance of the proposed method is verified through simulation studies.

  17. Application of genetic algorithm in modeling on-wafer inductors for up to 110 GHz

    NASA Astrophysics Data System (ADS)

    Liu, Nianhong; Fu, Jun; Liu, Hui; Cui, Wenpu; Liu, Zhihong; Liu, Linlin; Zhou, Wei; Wang, Quan; Guo, Ao

    2018-05-01

    In this work, the genetic algorithm has been introduced into parameter extraction for on-wafer inductors for up to 110 GHz millimeter-wave operations, and nine independent parameters of the equivalent circuit model are optimized together. With the genetic algorithm, the model with the optimized parameters gives a better fitting accuracy than the preliminary parameters without optimization. In particular, the fitting accuracy of the Q value improves significantly after the optimization.

  18. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes with a predicted accuracy of 5 × 10⁻⁶ °/h or better will come into use in the near future. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  19. Distribution system model calibration with big data from AMI and PV inverters

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...

    2016-03-03

    Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.

  20. Distribution system model calibration with big data from AMI and PV inverters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.

    Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.

  1. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. Similar to the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187

  2. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-03-31

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, such as its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional system (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We further suggest that using higher-accuracy TSCs in the new MDCS should improve the accuracy of downstream photogrammetric products.

  3. Improving Odometric Accuracy for an Autonomous Electric Cart.

    PubMed

    Toledo, Jonay; Piñeiro, Jose D; Arnay, Rafael; Acosta, Daniel; Acosta, Leopoldo

    2018-01-12

    In this paper, a study of the odometric system for the autonomous cart Verdino, which is an electric vehicle based on a golf cart, is presented. A mathematical model of the odometric system is derived from cart movement equations, and is used to compute the vehicle position and orientation. The inputs of the system are the odometry encoders, and the model uses the wheels diameter and distance between wheels as parameters. With this model, a least square minimization is made in order to get the nominal best parameters. This model is updated, including a real time wheel diameter measurement improving the accuracy of the results. A neural network model is used in order to learn the odometric model from data. Tests are made using this neural network in several configurations and the results are compared to the mathematical model, showing that the neural network can outperform the first proposed model.

  4. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time varying with operating condition variation and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator built on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
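
    The decoupled identification half of such a scheme is straightforward to sketch. Below is a minimal recursive-least-squares update with a forgetting factor on a synthetic two-parameter regression; the regressor structure and all numbers are illustrative assumptions, not the paper's battery model, and the EKF state estimator that would consume the identified parameters is omitted.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            """One RLS step: adapt parameters theta from regressor phi and
            measured output y, with forgetting factor lam."""
            Pphi = P @ phi
            k = Pphi / (lam + phi @ Pphi)          # gain vector
            theta = theta + k * (y - phi @ theta)  # parameter correction
            P = (P - np.outer(k, Pphi)) / lam      # covariance update
            return theta, P

        # Toy regression y_k = phi_k . theta: identify two parameters
        # from noisy synthetic data.
        rng = np.random.default_rng(1)
        theta_true = np.array([0.7, 0.05])
        theta, P = np.zeros(2), np.eye(2) * 100.0
        for _ in range(200):
            phi = rng.normal(size=2)
            y = phi @ theta_true + rng.normal(scale=0.01)
            theta, P = rls_update(theta, P, phi, y)
        print(theta)  # close to theta_true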

  5. A geospatial framework for improving the vertical accuracy of elevation models in Florida's coastal Everglades

    NASA Astrophysics Data System (ADS)

    Cooper, H.; Zhang, C.; Sirianni, M.

    2016-12-01

    South Florida relies upon the health of the Everglades, the largest subtropical wetland in North America, as a vital source of water. Since the late 1800s, this imperiled ecosystem has been highly engineered to meet human needs of flood control and water use. The Comprehensive Everglades Restoration Plan (CERP) was initiated in 2000 to restore original water flows to the Everglades and improve overall ecosystem health, while also aiming to achieve balance with human water usage. Due to subtle changes in the Everglades terrain, elevation data with better vertical accuracy are needed to model the groundwater and surface water levels that are integral to monitoring the effects of restoration under impacts such as sea-level rise. The current best available elevation datasets for the coastal Everglades include High Accuracy Elevation Data (HAED) and Florida Department of Emergency Management (FDEM) Light Detection and Ranging (LiDAR). However, the horizontal resolution of the HAED data is too coarse (approximately 400 m) for fine-scale mapping, and the LiDAR data lack an accuracy assessment for the coastal Everglades' vegetation communities. The purpose of this study is to develop a framework for generating Digital Elevation Models with better vertical accuracy and horizontal resolution in the Flamingo District of Everglades National Park. In the framework, field work is conducted to collect RTK GPS and total station elevation measurements for mangrove swamp, coastal prairies, and freshwater marsh, and the proposed accuracy assessment and elevation modeling methodology is integrated with a Geographical Information System (GIS). It is anticipated that this study will provide more accurate models of the soil substrate elevation that can be used by restoration planners to better predict the future state of the Everglades ecosystem.

  6. Size at emergence improves accuracy of age estimates in forensically-useful beetle Creophilus maxillosus L. (Staphylinidae).

    PubMed

    Matuszewski, Szymon; Frątczak-Łagiewska, Katarzyna

    2018-02-05

    Insects colonizing human or animal cadavers may be used to estimate the post-mortem interval (PMI), usually by aging larvae or pupae sampled at a crime scene. The accuracy of insect age estimates in a forensic context is reduced by large intraspecific variation in insect development time. Here we test the concept that insect size at emergence may be used to predict insect physiological age and, accordingly, to improve the accuracy of age estimates in forensic entomology. Using results of a laboratory study on the development of the forensically useful beetle Creophilus maxillosus (Linnaeus, 1758) (Staphylinidae), we demonstrate that its physiological age at emergence [i.e. the thermal summation value (K) needed for emergence] falls with increasing beetle size. In a validation study, K estimated from adult insect size was significantly closer to the true K than K from the general thermal summation model. Using beetle length at emergence as a predictor variable and male- or female-specific models regressing K against beetle length gave the most accurate predictions of age. These results demonstrate that the size of C. maxillosus at emergence improves the accuracy of age estimates in a forensic context.
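
    As a rough illustration of the size correction, one can regress K on body length from calibration data and use the fitted line to individualize K for a new specimen. All numbers below are invented for the sketch; the study's actual models are sex-specific.

        import numpy as np

        # Hypothetical calibration data: body length at emergence (mm) and
        # thermal summation K (accumulated degree-days above the
        # developmental threshold). Values are illustrative only.
        length = np.array([18.0, 19.5, 21.0, 22.5, 24.0])
        K      = np.array([620., 600., 585., 565., 550.])

        # K falls roughly linearly with size, so fit K = a*length + b.
        a, b = np.polyfit(length, K, 1)

        def predict_K(body_length_mm):
            """Size-adjusted thermal summation value for a specimen."""
            return a * body_length_mm + b

        # A 23 mm specimen gets an individualized K instead of the
        # population mean, which is the reported source of improvement.
        print(f"predicted K: {predict_K(23.0):.0f} degree-days")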

  7. How social information can improve estimation accuracy in human groups.

    PubMed

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
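
    The distributional claim is easy to explore numerically. The sketch below draws synthetic log-estimates from a Cauchy law and fits it back with scipy; the location and scale values are arbitrary stand-ins, not the experimental data.

        import numpy as np
        from scipy import stats

        # Synthetic log10-estimates of a poorly known quantity: the paper
        # reports such log-estimates are close to Cauchy-distributed.
        log_est = stats.cauchy.rvs(loc=4.0, scale=0.3, size=2000,
                                   random_state=2)

        loc, scale = stats.cauchy.fit(log_est)
        print(f"center 10^{loc:.2f}, spread {scale:.2f} decades")

        # The median of log-estimates is a robust collective estimate; the
        # mean is not, since the Cauchy distribution has no finite moments.
        print("median estimate:", 10 ** np.median(log_est))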

  8. How social information can improve estimation accuracy in human groups

    PubMed Central

    Jayles, Bertrand; Kim, Hye-rin; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-01-01

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects’ sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. PMID:29118142

  9. Ensemble Data Assimilation for Streamflow Forecasting: Experiments with Ensemble Kalman Filter and Particle Filter

    NASA Astrophysics Data System (ADS)

    Hirpa, F. A.; Gebremichael, M.; Hopson, T. M.; Wojick, R.

    2011-12-01

    We present results of assimilating ground discharge observations and remotely sensed soil moisture observations into the Sacramento Soil Moisture Accounting (SACSMA) model in a small watershed (1593 km2) in Minnesota, the United States. Specifically, we perform assimilation experiments with the Ensemble Kalman Filter (EnKF) and the Particle Filter (PF) in order to improve streamflow forecast accuracy at a six-hour time step. The EnKF updates the soil moisture states in the SACSMA from the relative errors of the model and observations, while the PF adjusts the weights of the state ensemble members based on the likelihood of the forecast. Results of the improvements of each filter over the reference model (without data assimilation) will be presented. Finally, the EnKF and PF are coupled together to further improve streamflow forecast accuracy.
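
    A minimal stochastic EnKF analysis step, of the kind used to update soil moisture states from an observed discharge, can be written in a few lines. The toy state dimension, observation operator, and variances below are assumptions for illustration only.

        import numpy as np

        def enkf_update(X, y_obs, H, obs_var, rng):
            """Stochastic EnKF analysis step: X is an (n_state, n_ens)
            ensemble, y_obs the observation, H the observation operator."""
            n_ens = X.shape[1]
            Y = H @ X                                   # predicted observations
            X_a = X - X.mean(axis=1, keepdims=True)
            Y_a = Y - Y.mean(axis=1, keepdims=True)
            Pxy = X_a @ Y_a.T / (n_ens - 1)
            Pyy = Y_a @ Y_a.T / (n_ens - 1) + obs_var * np.eye(Y.shape[0])
            K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
            y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var),
                                        size=(Y.shape[0], n_ens))
            return X + K @ (y_pert - Y)                 # updated ensemble

        rng = np.random.default_rng(3)
        X = rng.normal(50.0, 5.0, size=(2, 40))   # toy 2-state, 40 members
        H = np.array([[0.5, 0.5]])                # observation = mean storage
        X_new = enkf_update(X, np.array([60.0]), H, obs_var=4.0, rng=rng)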

  10. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving the sub-grid scale accuracy of flow to conduits, wells, rivers, or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton-Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are part of the finite-difference connectivity. Proof-of-concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  11. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture.

    PubMed

    Vallejo, Roger L; Leeds, Timothy D; Gao, Guangtu; Parsons, James E; Martin, Kyle E; Evenhuis, Jason P; Fragomeni, Breno O; Wiens, Gregory D; Palti, Yniv

    2017-02-01

    Previously, we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative that enables exploitation of within-family genetic variation. We compared three GS models [single-step genomic best linear unbiased prediction (ssGBLUP), weighted ssGBLUP (wssGBLUP), and BayesB] to predict genomic-enabled breeding values (GEBV) for BCWD resistance in a commercial rainbow trout population, and compared the accuracy of GEBV to traditional estimates of breeding values (EBV) from a pedigree-based BLUP (P-BLUP) model. We also assessed the impact of sampling design on the accuracy of GEBV predictions. For these comparisons, we used BCWD survival phenotypes recorded on 7893 fish from 102 families, of which 1473 fish from 50 families had genotypes [57K single nucleotide polymorphism (SNP) array]. Naïve siblings of the training fish (n = 930 testing fish) were genotyped to predict their GEBV and mated to produce 138 progeny testing families. In the following generation, 9968 progeny were phenotyped to empirically assess the accuracy of GEBV predictions made on their non-phenotyped parents. The accuracy of GEBV from all tested GS models was substantially higher than that of the P-BLUP model EBV. The highest increase in accuracy relative to the P-BLUP model was achieved with BayesB (97.2 to 108.8%), followed by wssGBLUP at iterations 2 (94.4 to 97.1%) and 3 (88.9 to 91.2%), and ssGBLUP (83.3 to 85.3%). Reducing the training sample size to n = ~1000 had no negative impact on the accuracy (0.67 to 0.72), but with n = ~500 the accuracy dropped to 0.53 to 0.61 if the training and testing fish were full-sibs, and substantially lower, to 0.22 to 0.25, when they were not. Using progeny performance data, we showed that the accuracy of genomic predictions is substantially higher than estimates obtained from the traditional pedigree-based BLUP model for BCWD resistance. Overall, we found that even with a much smaller training sample size than in similar livestock studies, GS can substantially improve selection accuracy and genetic gains for this trait in a commercial rainbow trout breeding population.
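
    The validation metric used in studies of this kind, the correlation between validation phenotypes and predicted breeding values scaled by the square root of heritability, is compact enough to sketch. The data below are synthetic placeholders, not the study's trout records.

        import numpy as np

        def predictive_accuracy(phenotype, gebv, h2):
            """Accuracy of genomic breeding values: correlation between
            validation phenotypes and predicted GEBV, divided by the
            square root of the trait heritability."""
            r = np.corrcoef(phenotype, gebv)[0, 1]
            return r / np.sqrt(h2)

        # Synthetic validation set (illustrative numbers only).
        rng = np.random.default_rng(4)
        gebv = rng.normal(size=300)
        phen = 0.5 * gebv + rng.normal(scale=1.0, size=300)
        print(f"accuracy: {predictive_accuracy(phen, gebv, h2=0.35):.2f}")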

  12. 2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Waters, Jiajia

    Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the physical models and the robustness of the software. We also continue to improve the physical modeling methods, developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.

  13. Positional and Dimensional Accuracy Assessment of Drone Images Geo-referenced with Three Different GPSs

    NASA Astrophysics Data System (ADS)

    Cao, C.; Lee, X.; Xu, J.

    2017-12-01

    Unmanned Aerial Vehicles (UAVs) or drones have been widely used in environmental, ecological, and engineering applications in recent years. These applications require assessment of positional and dimensional accuracy. In this study, positional accuracy refers to the accuracy of the latitudinal and longitudinal coordinates of locations on the mosaicked image in reference to the coordinates of the same locations measured by a Global Positioning System (GPS) in a ground survey, and dimensional accuracy refers to the length and height of a ground target. Here, we investigate the effects of the number of Ground Control Points (GCPs) and the accuracy of the GPS used to measure the GCPs on the positional and dimensional accuracy of a drone 3D model. Results show that using the on-board GPS and a hand-held GPS produces a positional accuracy on the order of 2-9 meters. In comparison, using a differential GPS with high accuracy (30 cm) improves the positional accuracy of the drone model by about 40%. Increasing the number of GCPs can compensate for the uncertainty introduced by low-accuracy GPS equipment. In terms of the dimensional accuracy of the drone model, even with the use of a low-resolution GPS on board the vehicle, the mean absolute errors are only 0.04 m for height and 0.10 m for length, which are well suited for some applications in precision agriculture and in land survey studies.
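
    A sketch of the positional-accuracy computation: the horizontal RMS distance between checkpoint coordinates read off the mosaicked model and the same points surveyed by GPS. The coordinates below are invented projected (easting, northing) pairs, purely for illustration.

        import numpy as np

        def horizontal_rmse(model_xy, gps_xy):
            """Positional accuracy of a mosaicked drone product: RMS of the
            horizontal distance between checkpoints on the model and the
            same points surveyed with GPS (both in metres, e.g. in UTM)."""
            d = np.linalg.norm(model_xy - gps_xy, axis=1)
            return np.sqrt(np.mean(d ** 2))

        # Toy checkpoints (easting, northing) in metres; values illustrative.
        model = np.array([[500010.2, 4100020.5], [500110.9, 4100140.1]])
        gps   = np.array([[500008.0, 4100022.0], [500112.5, 4100138.0]])
        print(f"positional RMSE: {horizontal_rmse(model, gps):.2f} m")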

  14. Incorporating Neutrophil-to-lymphocyte Ratio and Platelet-to-lymphocyte Ratio in Place of Neutrophil Count and Platelet Count Improves Prognostic Accuracy of the International Metastatic Renal Cell Carcinoma Database Consortium Model

    PubMed Central

    Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary

    2018-01-01

    Purpose The study investigated whether a replacement of neutrophil count and platelet count by neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. Materials and Methods This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Results Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with NLR value of 3.6 and PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Conclusion Incorporation of NLR and PLR in place of neutrophil count and platelet count improved prognostic accuracy of the IMDC model. These findings require external validation before introducing into clinical practice. PMID:28253564

  15. Incorporating Neutrophil-to-lymphocyte Ratio and Platelet-to-lymphocyte Ratio in Place of Neutrophil Count and Platelet Count Improves Prognostic Accuracy of the International Metastatic Renal Cell Carcinoma Database Consortium Model.

    PubMed

    Chrom, Pawel; Stec, Rafal; Bodnar, Lubomir; Szczylik, Cezary

    2018-01-01

    The study investigated whether a replacement of neutrophil count and platelet count by the neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) within the International Metastatic Renal Cell Carcinoma Database Consortium (IMDC) model would improve its prognostic accuracy. This retrospective analysis included consecutive patients with metastatic renal cell carcinoma treated with first-line tyrosine kinase inhibitors. The IMDC and modified-IMDC models were compared using: concordance index (CI), bias-corrected concordance index (BCCI), calibration plots, the Grønnesby and Borgan test, Bayesian Information Criterion (BIC), generalized R2, Integrated Discrimination Improvement (IDI), and continuous Net Reclassification Index (cNRI) for individual risk factors and the three risk groups. Three hundred and twenty-one patients were eligible for analyses. The modified-IMDC model with an NLR value of 3.6 and a PLR value of 157 was selected for comparison with the IMDC model. Both models were well calibrated. All other measures favoured the modified-IMDC model over the IMDC model (CI, 0.706 vs. 0.677; BCCI, 0.699 vs. 0.671; BIC, 2,176.2 vs. 2,190.7; generalized R2, 0.238 vs. 0.202; IDI, 0.044; cNRI, 0.279 for individual risk factors; and CI, 0.669 vs. 0.641; BCCI, 0.669 vs. 0.641; BIC, 2,183.2 vs. 2,198.1; generalized R2, 0.163 vs. 0.123; IDI, 0.045; cNRI, 0.165 for the three risk groups). Incorporation of NLR and PLR in place of neutrophil count and platelet count improved the prognostic accuracy of the IMDC model. These findings require external validation before introduction into clinical practice.
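
    Two ingredients of the comparison, the derived blood ratios and a concordance index for ranking models, can be sketched as follows. The 3.6 and 157 cut-offs come from the abstract; the concordance function is a simplified Harrell's C that ignores ties and censoring subtleties, and the patient data are fabricated for illustration.

        import numpy as np

        def nlr_plr(neutrophils, lymphocytes, platelets):
            """Blood-count ratios used by the modified-IMDC model; with the
            abstract's cut-offs, NLR > 3.6 or PLR > 157 marks a risk factor."""
            return neutrophils / lymphocytes, platelets / lymphocytes

        def concordance_index(time, event, risk):
            """Simplified Harrell's C: among usable pairs, the fraction where
            the higher-risk patient has the earlier observed event."""
            concordant = pairs = 0
            for i in range(len(time)):
                for j in range(len(time)):
                    if event[i] and time[i] < time[j]:
                        pairs += 1
                        concordant += risk[i] > risk[j]
            return concordant / pairs

        print(nlr_plr(4.8, 1.1, 210.0))   # both ratios above the cut-offs

        # Toy data: months of follow-up, event indicator, and a risk score.
        time  = np.array([6.0, 18.0, 30.0, 42.0])
        event = np.array([True, True, False, True])
        risk  = np.array([3.0, 2.0, 1.0, 1.5])
        print(concordance_index(time, event, risk))   # 1.0 for this ordering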

  16. The Effects of Q-Matrix Design on Classification Accuracy in the Log-Linear Cognitive Diagnosis Model.

    PubMed

    Madison, Matthew J; Bradshaw, Laine P

    2015-06-01

    Diagnostic classification models are psychometric models that aim to classify examinees according to their mastery or non-mastery of specified latent characteristics. These models are well-suited for providing diagnostic feedback on educational assessments because of their practical efficiency and increased reliability when compared with other multidimensional measurement models. A priori specifications of which latent characteristics or attributes are measured by each item are a core element of the diagnostic assessment design. This item-attribute alignment, expressed in a Q-matrix, precedes and supports any inference resulting from the application of the diagnostic classification model. This study investigates the effects of Q-matrix design on classification accuracy for the log-linear cognitive diagnosis model. Results indicate that classification accuracy, reliability, and convergence rates improve when the Q-matrix contains isolated information from each measured attribute.

  17. Spatially distributed modeling of soil organic carbon across China with improved accuracy

    NASA Astrophysics Data System (ADS)

    Li, Qi-quan; Zhang, Hao; Jiang, Xin-ye; Luo, Youlin; Wang, Chang-quan; Yue, Tian-xiang; Li, Bing; Gao, Xue-song

    2017-06-01

    There is a need for more detailed spatial information on soil organic carbon (SOC) for the accurate estimation of SOC stock and earth system models. As it is effective to use environmental factors as auxiliary variables to improve the prediction accuracy of spatially distributed modeling, a combined method (HASM_EF) was developed to predict the spatial pattern of SOC across China using high accuracy surface modeling (HASM), artificial neural network (ANN), and principal component analysis (PCA) to introduce land uses, soil types, climatic factors, topographic attributes, and vegetation cover as predictors. The performance of HASM_EF was compared with ordinary kriging (OK), OK, and HASM combined, respectively, with land uses and soil types (OK_LS and HASM_LS), and regression kriging combined with land uses and soil types (RK_LS). Results showed that HASM_EF obtained the lowest prediction errors and the ratio of performance to deviation (RPD) presented the relative improvements of 89.91%, 63.77%, 55.86%, and 42.14%, respectively, compared to the other four methods. Furthermore, HASM_EF generated more details and more realistic spatial information on SOC. The improved performance of HASM_EF can be attributed to the introduction of more environmental factors, to explicit consideration of the multicollinearity of selected factors and the spatial nonstationarity and nonlinearity of relationships between SOC and selected factors, and to the performance of HASM and ANN. This method may play a useful tool in providing more precise spatial information on soil parameters for global modeling across large areas.

  18. Identification of Long Bone Fractures in Radiology Reports Using Natural Language Processing to support Healthcare Quality Improvement.

    PubMed

    Grundmeier, Robert W; Masino, Aaron J; Casper, T Charles; Dean, Jonathan M; Bell, Jamie; Enriquez, Rene; Deakyne, Sara; Chamberlain, James M; Alpern, Elizabeth R

    2016-11-09

    Important information to support healthcare quality improvement is often recorded in free text documents such as radiology reports. Natural language processing (NLP) methods may help extract this information, but these methods have rarely been applied outside the research laboratories where they were developed. To implement and validate NLP tools to identify long bone fractures for pediatric emergency medicine quality improvement. Using freely available statistical software packages, we implemented NLP methods to identify long bone fractures from radiology reports. A sample of 1,000 radiology reports was used to construct three candidate classification models. A test set of 500 reports was used to validate the model performance. Blinded manual review of radiology reports by two independent physicians provided the reference standard. Each radiology report was segmented and word stem and bigram features were constructed. Common English "stop words" and rare features were excluded. We used 10-fold cross-validation to select optimal configuration parameters for each model. Accuracy, recall, precision and the F1 score were calculated. The final model was compared to the use of diagnosis codes for the identification of patients with long bone fractures. There were 329 unique word stems and 344 bigrams in the training documents. A support vector machine classifier with Gaussian kernel performed best on the test set with accuracy=0.958, recall=0.969, precision=0.940, and F1 score=0.954. Optimal parameters for this model were cost=4 and gamma=0.005. The three classification models that we tested all performed better than diagnosis codes in terms of accuracy, precision, and F1 score (diagnosis code accuracy=0.932, recall=0.960, precision=0.896, and F1 score=0.927). NLP methods using a corpus of 1,000 training documents accurately identified acute long bone fractures from radiology reports. Strategic use of straightforward NLP methods, implemented with freely available software, offers quality improvement teams new opportunities to extract information from narrative documents.
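
    A minimal scikit-learn analogue of the described classifier: unigram plus bigram counts with English stop words removed, fed to an RBF ("Gaussian") kernel SVM with the reported cost=4 and gamma=0.005. Word stemming is omitted here for brevity, and the four-document corpus is an obvious stand-in for the 1,000 training reports.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # Tiny stand-in corpus; 1 = long bone fracture present.
        reports = ["spiral fracture of the left femur",
                   "radius and ulna intact no acute fracture",
                   "transverse fracture distal tibia",
                   "normal alignment no fracture identified"]
        labels = [1, 0, 1, 0]

        # Unigram+bigram counts with stop-word removal, then an RBF-kernel
        # SVM with the paper's selected parameters.
        clf = make_pipeline(
            CountVectorizer(ngram_range=(1, 2), stop_words="english"),
            SVC(kernel="rbf", C=4, gamma=0.005),
        )
        scores = cross_val_score(clf, reports, labels, cv=2)  # paper: 10-fold
        print(scores.mean())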

  19. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity, and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.

  20. Evaluation of Gravitational Field Models Based on the Laser Range Observation of Low Earth Orbit Satellites

    NASA Astrophysics Data System (ADS)

    Hong-bo, Wang; Chang-yin, Zhao; Wei, Zhang; Jin-wei, Zhan; Sheng-xian, Yu

    2016-07-01

    The Earth gravitational field model is one of the most important dynamic models in satellite orbit computation. Several space gravity missions have achieved great success in recent years, prompting the publication of several gravitational field models. In this paper, two classical (JGM3, EGM96) and four recent (EIGEN-CHAMP05S, GGM03S, GOCE02S, EGM2008) models are evaluated by employing them in precision orbit determination (POD) and prediction. These calculations are performed based on the laser ranging observations of four Low Earth Orbit (LEO) satellites: CHAMP, GFZ-1, GRACE-A, and SWARM-A. The residual error of the observations in POD is adopted to describe the accuracy of the six gravitational field models. The main results are as follows. (1) For the POD of LEOs, the accuracies of the four recent models are at the same level, and better than those of the two classical models. (2) Taking JGM3 as reference, the EGM96 model's accuracy is better in most situations, and the accuracies of the four recent models are improved by 12%-47% in POD and 63% in prediction, respectively. We also confirm that a model's accuracy in POD is enhanced with increasing degree and order up to 70; beyond 70 the accuracy remains constant, implying that truncating the model to degree and order 70 is sufficient to meet the requirements of LEO computation at centimeter precision.

  1. Baseline and target values for regional and point PV power forecasts: Toward improved solar forecasting

    DOE PAGES

    Zhang, Jie; Hodge, Bri -Mathias; Lu, Siyuan; ...

    2015-11-10

    Accurate solar photovoltaic (PV) power forecasting allows utilities to reliably utilize solar resources on their systems. However, to truly measure the improvements that any new solar forecasting methods provide, it is important to develop a methodology for determining baseline and target values for the accuracy of solar forecasting at different spatial and temporal scales. This paper aims at developing a framework to derive baseline and target values for a suite of generally applicable, value-based, and custom-designed solar forecasting metrics. The work was informed by close collaboration with utility and independent system operator partners. The baseline values are established based on state-of-the-art numerical weather prediction models and persistence models in combination with a radiative transfer model. The target values are determined based on the reduction in the amount of reserves that must be held to accommodate the uncertainty of PV power output. The proposed reserve-based methodology is a reasonable and practical approach that can be used to assess the economic benefits gained from improvements in the accuracy of solar forecasting. Lastly, the financial baseline and targets can be translated back to forecasting accuracy metrics and requirements, which will guide research on solar forecasting improvements toward the areas that are most beneficial to power systems operations.
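
    A persistence baseline of the kind mentioned is nearly a one-liner: the day-ahead forecast simply repeats the value observed 24 hours earlier, and any candidate method must beat its error. The toy series below is illustrative, not real PV data.

        import numpy as np

        # Toy hourly PV output over three days: a clear-sky sine shape with
        # random cloud dips (all values illustrative).
        rng = np.random.default_rng(5)
        t = np.arange(72)
        clear = np.clip(np.sin((t % 24 - 6) / 12 * np.pi), 0, None)
        pv = clear * (1 - 0.3 * rng.random(72))

        h = 24  # day-ahead horizon, in hours
        persistence = pv[:-h]            # forecast: repeat yesterday's value
        mae = np.mean(np.abs(pv[h:] - persistence))
        print(f"24 h persistence MAE: {mae:.3f} (baseline to beat)")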

  2. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Its standard assumption is ‘stationarity’, and therefore, several research efforts have been recently proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces searching space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve the reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, the experimental evaluation of our proposed method on both synthetic and real biological data demonstrates more stably high prediction accuracy and significantly improved computation efficiency, even with no prior knowledge and parameter settings.

  3. Accurate disulfide-bonding network predictions improve ab initio structure prediction of cysteine-rich proteins

    PubMed Central

    Yang, Jing; He, Bao-Ji; Jang, Richard; Zhang, Yang; Shen, Hong-Bin

    2015-01-01

    Motivation: Cysteine-rich proteins cover many important families in nature, but there are currently no methods specifically designed for modeling the structure of these proteins. The accuracy of disulfide connectivity pattern prediction, particularly for proteins with higher-order connections, e.g. >3 bonds, is too low to effectively assist structure assembly simulations. Results: We propose a new hierarchical order reduction protocol called Cyscon for disulfide-bonding prediction. The most confident disulfide bonds are first identified, and bonding prediction is then focused on the remaining cysteine residues based on SVR training. Compared with purely machine learning-based approaches, Cyscon improved the average accuracy of connectivity pattern prediction by 21.9%. For proteins with more than 5 disulfide bonds, Cyscon improved the accuracy by 585% on the benchmark set of PDBCYS. When applied to 158 non-redundant cysteine-rich proteins, Cyscon predictions helped increase (or decrease) the TM-score (or RMSD) of the ab initio QUARK modeling by 12.1% (or 14.4%). This result demonstrates a new avenue to improve the ab initio structure modeling of cysteine-rich proteins. Availability and implementation: http://www.csbio.sjtu.edu.cn/bioinf/Cyscon/ Contact: zhng@umich.edu or hbshen@sjtu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26254435

  4. Improving inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2014-10-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead-time is considered within the day-ahead (Elspot) market of the Nordic exchange market. We present here a new approach for issuing hourly reservoir inflow forecasts that aims to improve on existing forecasting models that are in place operationally, without needing to modify the pre-existing approach, but instead formulating an additive or complementary model that is independent and captures the structure the existing model may be missing. Besides improving forecast skills of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that entrain suitable information for reducing uncertainty in the decision-making processes in hydropower systems operation. The procedure presented comprises an error model added on top of an un-alterable constant parameter conceptual model, the models being demonstrated with reference to the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead-times up to 17 h. Season based evaluations indicated that the improvement in inflow forecasts varies across seasons and inflow forecasts in autumn and spring are less successful with the 95% prediction interval bracketing less than 95% of the observations for lead-times beyond 17 h.
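
    The complementary-model idea, fitting an error model on the base model's residuals and adding its decaying correction to new forecasts, can be sketched with a first-order autoregressive error model. The AR(1) structure and all numbers are assumptions for illustration; the paper derives its error model from the actual attributes of the residual series.

        import numpy as np

        # Residual series from the un-alterable base model at past steps
        # (observed minus forecast inflow, m^3/s; synthetic values).
        rng = np.random.default_rng(6)
        resid = np.zeros(200)
        for t in range(1, 200):                  # generate AR(1)-like errors
            resid[t] = 0.8 * resid[t - 1] + rng.normal(scale=2.0)

        # Fit the complementary error model: resid_t ~ phi * resid_{t-1}.
        phi = np.polyfit(resid[:-1], resid[1:], 1)[0]

        def corrected_forecast(base_forecast, last_residual, lead):
            """Add the error model's decaying correction to the base model."""
            return base_forecast + (phi ** lead) * last_residual

        print(corrected_forecast(base_forecast=120.0, last_residual=5.0,
                                 lead=3))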

  5. Multi-object model-based multi-atlas segmentation for rodent brains using dense discrete correspondences

    NASA Astrophysics Data System (ADS)

    Lee, Joohwi; Kim, Sun Hyung; Styner, Martin

    2016-03-01

    The delineation of rodent brain structures is challenging due to low-contrast multiple cortical and subcortical organs that closely interface with each other. Atlas-based segmentation has been widely employed due to its ability to delineate multiple organs at the same time via image registration. The use of multiple atlases and subsequent label fusion techniques has further improved the robustness and accuracy of atlas-based segmentation. However, the accuracy of atlas-based segmentation is still prone to registration errors; for example, the segmentation of in vivo MR images can be less accurate and robust against image artifacts than the segmentation of post mortem images. In order to improve the accuracy and robustness of atlas-based segmentation, we propose a multi-object, model-based, multi-atlas segmentation method. We first establish spatial correspondences across atlases using a set of dense pseudo-landmark particles. We build a multi-object point distribution model using those particles in order to capture inter- and intra-subject variation among brain structures. The segmentation is obtained by fitting the model to a subject image, followed by a label fusion process. Our results show that the proposed method achieves greater accuracy than comparable segmentation methods, including the widely used ANTs registration tool.
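
    The label fusion step that follows model fitting can be illustrated with its simplest variant, per-voxel majority voting across registered atlas segmentations; the paper's actual fusion scheme may differ.

        import numpy as np

        def majority_vote_fusion(candidate_labels):
            """Fuse per-atlas label maps (n_atlases, *image_shape) by taking
            the most frequent label at each voxel; a minimal stand-in for
            more sophisticated label fusion schemes."""
            stack = np.asarray(candidate_labels)
            n_labels = stack.max() + 1
            votes = np.stack([(stack == k).sum(axis=0)
                              for k in range(n_labels)])
            return votes.argmax(axis=0)

        # Three toy 2x3 "registered atlas" segmentations (0 = background).
        atlases = [np.array([[0, 1, 1], [2, 2, 0]]),
                   np.array([[0, 1, 2], [2, 2, 0]]),
                   np.array([[0, 1, 1], [2, 0, 0]])]
        print(majority_vote_fusion(atlases))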

  6. An Efficient Neural-Network-Based Microseismic Monitoring Platform for Hydraulic Fracture on an Edge Computing Architecture.

    PubMed

    Zhang, Xiaopu; Lin, Jun; Chen, Zubin; Sun, Feng; Zhu, Xi; Fang, Gengfa

    2018-06-05

    Microseismic monitoring is one of the most critical technologies for hydraulic fracturing in oil and gas production. To detect events in an accurate and efficient way, there are two major challenges. One challenge is how to achieve high accuracy despite a poor signal-to-noise ratio (SNR). The other concerns real-time data transmission. Taking these challenges into consideration, an edge-computing-based platform, namely Edge-to-Center LearnReduce, is presented in this work. The platform consists of a data center with many edge components. At the data center, a neural network model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is designed, and this model is trained using previously obtained data. Once the model is fully trained, it is sent to the edge components for event detection and data reduction. At each edge component, a probabilistic inference is added to the neural network model to improve its accuracy. Finally, the reduced data are delivered to the data center. Based on experimental results, a high detection accuracy (over 96%) was achieved on a microseismic monitoring system while reducing the volume of transmitted data by about 90%. These results show that the platform can simultaneously improve the accuracy and efficiency of microseismic monitoring.
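
    A compact CNN+LSTM event detector in the spirit of the described model can be sketched in Keras. The window length, layer sizes, and training setup below are illustrative assumptions, not the paper's architecture, and the edge-side probabilistic inference is omitted.

        import tensorflow as tf

        # Convolutions pick up local waveform shape, the LSTM captures
        # temporal context, and a sigmoid head flags "event" vs "noise".
        # Window of 512 samples, 1 channel (assumed, for illustration).
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(512, 1)),
            tf.keras.layers.Conv1D(16, 7, activation="relu", padding="same"),
            tf.keras.layers.MaxPooling1D(4),
            tf.keras.layers.Conv1D(32, 5, activation="relu", padding="same"),
            tf.keras.layers.MaxPooling1D(4),
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        model.summary()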

  7. Satellite SAR geocoding with refined RPC model

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Balz, Timo; Liao, Mingsheng

    2012-04-01

    Recent studies have proved that the Rational Polynomial Camera (RPC) model can act as a reliable replacement for the rigorous Range-Doppler (RD) model in the geometric processing of satellite SAR datasets. But its capability for absolute geolocation of SAR images has not been evaluated quantitatively. Therefore, in this article the problems of error analysis and refinement of the SAR RPC model are investigated to improve the absolute accuracy of SAR geolocation. Range propagation delay and azimuth timing error are identified as the two major error sources for SAR geolocation. An approach based on SAR image simulation and real-to-simulated image matching is developed to estimate and correct these two errors. Afterwards, a refined RPC model can be built from the error-corrected RD model and then used in satellite SAR geocoding. Three experiments with different settings are designed and conducted to comprehensively evaluate the accuracy of SAR geolocation with both the ordinary and refined RPC models. All the experimental results demonstrate that with RPC model refinement the absolute location accuracy of geocoded SAR images can be improved significantly, particularly in the easting direction. In another experiment, the computational efficiencies of SAR geocoding with the RD and RPC models are compared quantitatively. The results show that using the RPC model improves efficiency by at least a factor of 16. In addition, the problem of DEM data selection for SAR image simulation in RPC model refinement is studied in a comparative experiment. The results reveal that the best choice is a DEM dataset with spatial resolution comparable to that of the SAR images.

  8. MuSE: accounting for tumor heterogeneity using a sample-specific error model improves sensitivity and specificity in mutation calling from sequencing data.

    PubMed

    Fan, Yu; Xi, Liu; Hughes, Daniel S T; Zhang, Jianjun; Zhang, Jianhua; Futreal, P Andrew; Wheeler, David A; Wang, Wenyi

    2016-08-24

    Subclonal mutations reveal important features of the genetic architecture of tumors. However, accurate detection of mutations in genetically heterogeneous tumor cell populations using next-generation sequencing remains challenging. We develop MuSE ( http://bioinformatics.mdanderson.org/main/MuSE ), Mutation calling using a Markov Substitution model for Evolution, a novel approach for modeling the evolution of the allelic composition of the tumor and normal tissue at each reference base. MuSE adopts a sample-specific error model that reflects the underlying tumor heterogeneity to greatly improve the overall accuracy. We demonstrate the accuracy of MuSE in calling subclonal mutations in the context of large-scale tumor sequencing projects using whole exome and whole genome sequencing.

  9. Assessing the accuracy of improved force-matched water models derived from Ab initio molecular dynamics simulations.

    PubMed

    Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D

    2016-07-15

    The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact superseding most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.

  10. Impact of fitting dominance and additive effects on accuracy of genomic prediction of breeding values in layers.

    PubMed

    Heidaritabar, M; Wolc, A; Arango, J; Zeng, J; Settar, P; Fulton, J E; O'Sullivan, N P; Bastiaansen, J W M; Fernando, R L; Garrick, D J; Dekkers, J C M

    2016-10-01

    Most genomic prediction studies fit only additive effects in models to estimate genomic breeding values (GEBV). However, if dominance genetic effects are an important source of variation for complex traits, accounting for them may improve the accuracy of GEBV. We investigated the effect of fitting dominance and additive effects on the accuracy of GEBV for eight egg production and quality traits in a purebred line of brown layers using pedigree or genomic information (42K single-nucleotide polymorphism (SNP) panel). Phenotypes were corrected for the effect of hatch date. Additive and dominance genetic variances were estimated using genomic-based [genomic best linear unbiased prediction (GBLUP)-REML and BayesC] and pedigree-based (PBLUP-REML) methods. Breeding values were predicted using a model that included both additive and dominance effects and a model that included only additive effects. The reference population consisted of approximately 1800 animals hatched between 2004 and 2009, while approximately 300 young animals hatched in 2010 were used for validation. Accuracy of prediction was computed as the correlation between phenotypes and estimated breeding values of the validation animals divided by the square root of the estimate of heritability in the whole population. The proportion of dominance variance to total phenotypic variance ranged from 0.03 to 0.22 with PBLUP-REML across traits, from 0 to 0.03 with GBLUP-REML and from 0.01 to 0.05 with BayesC. Accuracies of GEBV ranged from 0.28 to 0.60 across traits. Inclusion of dominance effects did not improve the accuracy of GEBV, and differences in their accuracies between genomic-based methods were small (0.01-0.05), with GBLUP-REML yielding higher prediction accuracies than BayesC for egg production, egg colour and yolk weight, while BayesC yielded higher accuracies than GBLUP-REML for the other traits. In conclusion, fitting dominance effects did not impact accuracy of genomic prediction of breeding values in this population. © 2016 Blackwell Verlag GmbH.

  11. Conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews depend on the analytic approach for comparing reported information to reference information.

    PubMed

    Baxter, Suzanne Domel; Smith, Albert F; Hardin, James W; Nichols, Michele D

    2007-04-01

    Validation study data are used to illustrate that conclusions about children's reporting accuracy for energy and macronutrients over multiple interviews (i.e., time) depend on the analytic approach for comparing reported and reference information: conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Children were observed eating school meals on 1 day (n=12), or 2 (n=13) or 3 (n=79) nonconsecutive days separated by ≥25 days, and were interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (i.e., protein, carbohydrate, and fat) and compared. Outcome measures for energy and each macronutrient were report rates (reported/reference), correspondence rates (genuine accuracy measures), and inflation ratios (error measures), analyzed with mixed models. Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (all four P values >0.61). Using the reporting-error-sensitive approach, correspondence rates increased over interviews (all four P values <0.04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. When analyzed using the reporting-error-sensitive approach, children's dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients.
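
    The difference between the two analytic approaches reduces to which numerator is allowed. A tiny sketch with invented kcal numbers:

        def report_rate(reported_total, reference_total):
            """Conventional measure: reported amount over reference amount,
            ignoring whether the reported items were actually eaten."""
            return reported_total / reference_total

        def correspondence_rate(corresponding_total, reference_total):
            """Reporting-error-sensitive measure: only amounts from matched
            (actually eaten) items that correspond to observation count."""
            return corresponding_total / reference_total

        # Toy example: a child reports 600 kcal against an observed 620 kcal,
        # but only 450 kcal of the report matches foods actually eaten.
        print(report_rate(600, 620))          # looks accurate (~0.97)
        print(correspondence_rate(450, 620))  # genuine accuracy (~0.73)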

  12. Conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews depend on the analytic approach for comparing reported information to reference information

    PubMed Central

    Baxter, Suzanne Domel; Smith, Albert F.; Hardin, James W.; Nichols, Michele D.

    2008-01-01

    Objective Validation-study data are used to illustrate that conclusions about children’s reporting accuracy for energy and macronutrients over multiple interviews (ie, time) depend on the analytic approach for comparing reported and reference information—conventional, which disregards accuracy of reported items and amounts, or reporting-error-sensitive, which classifies reported items as matches (eaten) or intrusions (not eaten), and amounts as corresponding or overreported. Subjects and design Children were observed eating school meals on one day (n = 12), or two (n = 13) or three (n = 79) nonconsecutive days separated by ≥25 days, and interviewed in the morning after each observation day about intake the previous day. Reference (observed) and reported information were transformed to energy and macronutrients (protein, carbohydrate, fat), and compared. Main outcome measures For energy and each macronutrient: report rates (reported/reference), correspondence rates (genuine accuracy measures), inflation ratios (error measures). Statistical analyses Mixed-model analyses. Results Using the conventional approach for analyzing energy and macronutrients, report rates did not vary systematically over interviews (Ps > .61). Using the reporting-error-sensitive approach for analyzing energy and macronutrients, correspondence rates increased over interviews (Ps < .04), indicating that reporting accuracy improved over time; inflation ratios decreased, although not significantly, over interviews, also suggesting that reporting accuracy improved over time. Correspondence rates were lower than report rates, indicating that reporting accuracy was worse than implied by conventional measures. Conclusions When analyzed using the reporting-error-sensitive approach, children’s dietary reporting accuracy for energy and macronutrients improved over time, but the conventional approach masked improvements and overestimated accuracy. Applications The reporting-error-sensitive approach is recommended when analyzing data from validation studies of dietary reporting accuracy for energy and macronutrients. PMID:17383265

  13. Geoid undulation accuracy

    NASA Technical Reports Server (NTRS)

    Rapp, Richard H.

    1993-01-01

    The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences implying information on the circulation patterns of the oceans. For use in oceanographic applications the geoid is ideally needed to a high accuracy and a high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360, based on a combination of data types. This paper discusses the accuracy changes that have taken place in the past 12 years in the determination of geoid undulations.

  14. Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks.

    PubMed

    Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R; Nguyen, Tuan N; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T

    2017-01-01

    This paper presents an improvement in classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method that combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level, which prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. The classification results show that using the AR feature extractor and DBN classifier, the system achieves an improved classification performance with a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN (sensitivity 80.8%, specificity 77.8%, accuracy 79.3%, AUROC 0.83) and BNN classifiers (sensitivity 84.3%, specificity 83%, accuracy 83.6%, AUROC 0.87). Using the sparse-DBN classifier, the classification performance improved further, with a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1%, with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
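
    The AR feature extraction step can be sketched via the Yule-Walker equations; the epoch length and model order below are illustrative assumptions, and the sparse-DBN classifier that would consume the features is omitted.

        import numpy as np

        def ar_features(signal, order=8):
            """Autoregressive coefficients of an EEG epoch via the
            Yule-Walker equations; AR coefficients serve as features."""
            x = signal - signal.mean()
            r = np.array([np.dot(x[:len(x) - k], x[k:])
                          for k in range(order + 1)]) / len(x)
            R = np.array([[r[abs(i - j)] for j in range(order)]
                          for i in range(order)])
            return np.linalg.solve(R, r[1:])

        rng = np.random.default_rng(7)
        epoch = rng.normal(size=512)           # stand-in for one EEG epoch
        print(ar_features(epoch, order=8))     # 8 features per epoch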

  15. Improving EEG-Based Driver Fatigue Classification Using Sparse-Deep Belief Networks

    PubMed Central

    Chai, Rifai; Ling, Sai Ho; San, Phyo Phyo; Naik, Ganesh R.; Nguyen, Tuan N.; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T.

    2017-01-01

    This paper presents an improvement of classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. In contrast to other classifiers, sparse-DBN is a semi-supervised learning method which combines unsupervised learning for modeling features in the pre-training layer and supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviations of the expected activation of hidden units from a fixed low level; this prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. With the AR feature extractor and the DBN classifier, the system achieves a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating curve (AUROC) of 0.94, compared to the ANN classifier (sensitivity 80.8%, specificity 77.8%, accuracy 79.3%, AUROC 0.83) and the BNN classifier (sensitivity 84.3%, specificity 83%, accuracy 83.6%, AUROC 0.87). Using the sparse-DBN classifier, performance improves further to a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1%, with an AUROC of 0.96. Overall, the sparse-DBN classifier improved accuracy by 13.8%, 9.5%, and 2.5% over the ANN, BNN, and DBN classifiers, respectively. PMID:28326009
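    As a concrete illustration of the feature-extraction step shared by the two records above, the sketch below estimates AR coefficients for a single EEG epoch via the Yule-Walker equations; the synthetic epoch, the model order, and the plain-NumPy implementation are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    def ar_features(epoch, order=8):
        """Estimate AR(order) coefficients of one EEG epoch via the
        Yule-Walker equations (a Levinson recursion would also work)."""
        x = np.asarray(epoch, dtype=float)
        x = x - x.mean()
        n = len(x)
        # Biased autocorrelation estimates r[0..order]
        r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
        # Toeplitz system R a = r[1:]
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        return np.linalg.solve(R, r[1:])

    # Hypothetical usage: one feature vector per channel, concatenated per trial
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal(512)   # stand-in for one EEG channel epoch
    print(ar_features(epoch, order=8))
    ```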

  16. An accuracy assessment of Magellan Very Long Baseline Interferometry (VLBI)

    NASA Technical Reports Server (NTRS)

    Engelhardt, D. B.; Kronschnabl, G. R.; Border, J. S.

    1990-01-01

    Very Long Baseline Interferometry (VLBI) measurements of the Magellan spacecraft's angular position and velocity were made from July through September 1989, during the spacecraft's heliocentric flight to Venus. The purpose of this data acquisition and reduction was to verify this data type for operational use before Magellan's insertion into Venus orbit in August 1990. The accuracy of these measurements is shown to be within 20 nanoradians in angular position, and within 5 picoradians/sec in angular velocity. The media effects and their calibrations are quantified; the wet fluctuating troposphere is the dominant source of measurement error for angular velocity. The charged particle effect is completely calibrated with S- and X-band dual-frequency calibrations. Increasing the accuracy of the Earth platform model parameters, by using VLBI-derived tracking station locations consistent with the planetary ephemeris frame and by including high-frequency Earth tidal terms in the Earth rotation model, added a few nanoradians of improvement to the angular position measurements. Angular velocity measurements were insensitive to these Earth platform modelling improvements.

  17. The Impact of Learning Curve Model Selection and Criteria for Cost Estimation Accuracy in the DoD

    DTIC Science & Technology

    2016-04-30

    Candice Honious and Brandon Johnson, Students, Air Force Institute of Technology. Presented in Panel 21, Methods for Improving Cost Estimates for Defense Acquisition Projects, Thursday, May 5, 2016.

  18. VO2 estimation using 6-axis motion sensor with sports activity classification.

    PubMed

    Nagata, Takashi; Nakamura, Naoteru; Miyatake, Masato; Yuuki, Akira; Yomo, Hiroyuki; Kawabata, Takashi; Hara, Shinsuke

    2016-08-01

    In this paper, we focus on oxygen consumption (VO2) estimation using a 6-axis motion sensor (3-axis accelerometer and 3-axis gyroscope) for people playing sports with diverse intensities. The VO2 estimated with a small motion sensor can be used to calculate energy expenditure; however, its accuracy depends on the intensities of the various types of activities. In order to achieve high accuracy over a wide range of intensities, we employ an estimation framework that first classifies activities with a simple machine-learning based classification algorithm. We prepare different coefficients of a linear regression model for different types of activities, which are determined with training data obtained by experiments. The best-suited model is then used for each type of activity when VO2 is estimated. The accuracy of the employed framework depends on the trade-off between the degradation due to classification errors and the improvement brought by applying a separate, optimum model to VO2 estimation. Taking this trade-off into account, we evaluate the accuracy of the employed estimation framework using a set of experimental data consisting of VO2 and motion data of people exercising over a wide range of intensities, measured by a VO2 meter and a motion sensor, respectively. Our numerical results show that the employed framework can improve the estimation accuracy in comparison to a reference method that uses a common regression model for all types of activities.
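    The two-stage framework described here (classify the activity, then apply that activity's regression model) can be sketched as follows; the feature definitions, the nearest-centroid classifier, and all coefficient values are hypothetical placeholders for the trained models the paper derives from experimental data.

    ```python
    import numpy as np

    # Hypothetical per-activity linear models: VO2 ~ w . features + b,
    # with coefficients assumed to come from training data as in the paper.
    MODELS = {
        "walk":   (np.array([0.8, 0.1]), 3.5),
        "run":    (np.array([1.6, 0.3]), 5.0),
        "racket": (np.array([1.2, 0.5]), 4.2),
    }
    CENTROIDS = {  # assumed class centroids in feature space for a minimal classifier
        "walk":   np.array([1.0, 0.5]),
        "run":    np.array([3.0, 1.5]),
        "racket": np.array([2.0, 2.5]),
    }

    def estimate_vo2(features):
        """Classify the activity, then apply that activity's regression model."""
        label = min(CENTROIDS, key=lambda k: np.linalg.norm(features - CENTROIDS[k]))
        w, b = MODELS[label]
        return label, float(w @ features + b)

    # features: e.g. accelerometer/gyroscope magnitude statistics over a window
    print(estimate_vo2(np.array([2.9, 1.4])))   # -> ('run', ...)
    ```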

  19. The uncertainty of crop yield projections is reduced by improved temperature response functions

    USDA-ARS?s Scientific Manuscript database

    Increasing the accuracy of crop productivity estimates is a key element in planning adaptation strategies to ensure global food security under climate change. Process-based crop models are effective means to project climate impact on cr...

  20. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of the round grating dividing error and the rolling wheel's eccentricity and surface shape errors, this paper provides an amendment method based on the rolling wheel to obtain a composite error model that includes all of the influence factors above, and then corrects the non-circular angle measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error amendment method can improve the diameter measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurements requiring accuracy better than 5 μm/m.

  1. Modeling Soil Organic Carbon at Regional Scale by Combining Multi-Spectral Images with Laboratory Spectra

    PubMed Central

    Peng, Yi; Xiong, Xiong; Adhikari, Kabindra; Knadel, Maria; Grunwald, Sabine; Greve, Mogens Humlekrog

    2015-01-01

    There is a great challenge in combining soil proximal spectra and remote sensing spectra to improve the accuracy of soil organic carbon (SOC) models. This is primarily because mixing of spectral data from different sources and technologies to improve soil models is still in its infancy. The first objective of this study was to integrate information of SOC derived from visible near-infrared reflectance (Vis-NIR) spectra in the laboratory with remote sensing (RS) images to improve predictions of topsoil SOC in the Skjern river catchment, Denmark. The second objective was to improve SOC prediction results by separately modeling uplands and wetlands. A total of 328 topsoil samples were collected and analyzed for SOC. Satellite Pour l’Observation de la Terre (SPOT5), Landsat Data Continuity Mission (Landsat 8) images, laboratory Vis-NIR and other ancillary environmental data including terrain parameters and soil maps were compiled to predict topsoil SOC using Cubist regression and Bayesian kriging. The results showed that the model developed from RS data, ancillary environmental data and laboratory spectral data yielded a lower root mean square error (RMSE) (2.8%) and higher R2 (0.59) than the model developed from only RS data and ancillary environmental data (RMSE: 3.6%, R2: 0.46). Plant-available water (PAW) was the most important predictor for all the models because of its close relationship with soil organic matter content. Moreover, vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index (EVI), were very important predictors in SOC spatial models. Furthermore, the ‘upland model’ was able to more accurately predict SOC compared with the ‘upland & wetland model’. However, the separately calibrated ‘upland and wetland model’ did not improve the prediction accuracy for wetland sites, since it was not possible to adequately discriminate the vegetation in the RS summer images. We conclude that laboratory Vis-NIR spectroscopy adds critical information that significantly improves the prediction accuracy of SOC compared to using RS data alone. We recommend the incorporation of laboratory spectra with RS data and other environmental data to improve soil spatial modeling and digital soil mapping (DSM). PMID:26555071

  2. Brute force meets Bruno force in parameter optimisation: introduction of novel constraints for parameter accuracy improvement by symbolic computation.

    PubMed

    Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F

    2011-09-01

    Recent remarkable advances in computer performance have enabled us to estimate parameter values by the huge power of numerical computation, the so-called 'Brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who found the Gröbner basis. In the method, objective functions combining the symbolic computation techniques are formulated. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces a system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, the system is further simplified by another symbolic computation. The performance of the authors' method for parameter accuracy improvement is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.

  3. Improving clinical models based on knowledge extracted from current datasets: a new approach.

    PubMed

    Mendes, D; Paredes, S; Rocha, T; Carvalho, P; Henriques, J; Morais, J

    2016-08-01

    Cardiovascular diseases (CVD) are the leading cause of death in the world, and prevention is recognized as a key intervention able to counter this reality. In this context, although there are several models and scores currently used in clinical practice to assess the risk of a new cardiovascular event, they present some limitations. The goal of this paper is to improve CVD risk prediction taking into account the current models as well as information extracted from real and recent datasets. This approach is based on a decision tree scheme in order to assure the clinical interpretability of the model. An innovative optimization strategy is developed in order to adjust the decision tree thresholds (the rule structure is fixed) based on recent clinical datasets. A real dataset collected in the ambit of the National Registry on Acute Coronary Syndromes, Portuguese Society of Cardiology, is applied to validate this work. In order to assess the performance of the new approach, the metrics sensitivity, specificity, and accuracy are used. The new approach achieves sensitivity, specificity, and accuracy values of 80.52%, 74.19%, and 77.27%, respectively, which represents an improvement of about 26% in relation to the accuracy of the original score.
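    A minimal sketch of the core idea (rule structure fixed, thresholds re-tuned against a recent dataset) might look like the following; the two-variable tree, the grid search, and the synthetic data are illustrative assumptions, as the paper's actual score and optimization strategy are more elaborate.

    ```python
    import numpy as np

    def tree_predict(X, t_age, t_bp):
        """Fixed two-rule tree (structure frozen, thresholds free), a stand-in
        for a clinical risk score: high risk if age > t_age and systolic BP > t_bp."""
        return (X[:, 0] > t_age) & (X[:, 1] > t_bp)

    def fit_thresholds(X, y, age_grid, bp_grid):
        """Grid-search the two thresholds for maximum accuracy; this only
        illustrates the idea behind the paper's optimization strategy."""
        best = (None, None, -1.0)
        for ta in age_grid:
            for tb in bp_grid:
                acc = np.mean(tree_predict(X, ta, tb) == y)
                if acc > best[2]:
                    best = (ta, tb, acc)
        return best

    # Synthetic stand-in for a registry dataset: columns [age, systolic BP]
    rng = np.random.default_rng(1)
    X = np.column_stack([rng.uniform(30, 90, 500), rng.uniform(90, 200, 500)])
    y = (X[:, 0] > 62) & (X[:, 1] > 140)        # hidden "true" rule
    print(fit_thresholds(X, y, np.arange(40, 80), np.arange(100, 180, 2)))
    ```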

  4. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
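    To illustrate why accounting for noise in the system matrix matters, the sketch below contrasts ordinary least squares with plain (unweighted) total least squares via SVD on a noisy system matrix; the paper's WTLS estimator additionally weights the noise, which this minimal example omits, and the synthetic system is not the 2CXM itself.

    ```python
    import numpy as np

    def tls(A, b):
        """Total least squares for A x ~ b via SVD, allowing for noise in A
        itself (the bias source in the LLS fit of the linearized model)."""
        C = np.hstack([A, b.reshape(-1, 1)])
        _, _, Vt = np.linalg.svd(C)
        v = Vt[-1]                   # right singular vector of the smallest singular value
        return -v[:-1] / v[-1]

    rng = np.random.default_rng(2)
    x_true = np.array([1.5, -0.7])
    A_clean = rng.standard_normal((200, 2))
    b = A_clean @ x_true
    A_noisy = A_clean + 0.05 * rng.standard_normal(A_clean.shape)  # noisy system matrix
    print("LLS:", np.linalg.lstsq(A_noisy, b, rcond=None)[0])      # biased
    print("TLS:", tls(A_noisy, b))                                 # closer to x_true
    ```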

  5. Regional Seismic Travel-Time Prediction, Uncertainty, and Location Improvement in Western Eurasia

    NASA Astrophysics Data System (ADS)

    Flanagan, M. P.; Myers, S. C.

    2004-12-01

    We investigate our ability to improve regional travel-time prediction and seismic event location using an a priori, three-dimensional velocity model of Western Eurasia and North Africa: WENA1.0 [Pasyanos et al., 2004]. Our objective is to improve the accuracy of seismic location estimates and calculate representative location uncertainty estimates. As we focus on the geographic region of Western Eurasia, the Middle East, and North Africa, we develop, test, and validate 3D model-based travel-time prediction models for 30 stations in the study region. Three principal results are presented. First, the 3D WENA1.0 velocity model improves travel-time prediction over the iasp91 model, as measured by variance reduction, for regional Pg, Pn, and P phases recorded at the 30 stations. Second, a distance-dependent uncertainty model is developed and tested for the WENA1.0 model. Third, an end-to-end validation test based on 500 event relocations demonstrates improved location performance over the one-dimensional iasp91 model. Validation of the 3D model is based on a comparison of approximately 11,000 Pg, Pn, and P travel-time predictions and empirical observations from ground truth (GT) events. Ray coverage for the validation dataset is chosen to provide representative, regional-distance sampling across Eurasia and North Africa. The WENA1.0 model markedly improves travel-time predictions for most stations, with an average variance reduction of 25% for all ray paths. We find that improvement is station dependent, with some stations benefiting greatly from WENA1.0 predictions (52% at APA, 33% at BKR, and 32% at NIL), some showing moderate improvement (12% at KEV, 14% at BOM, and 12% at TAM), some benefiting only slightly (6% at MOX and 4% at SVE), and some being degraded (-6% at MLR and -18% at QUE). We further test WENA1.0 by comparing location accuracy with results obtained using the iasp91 model. Again, relocation of these events is dependent on ray paths that evenly sample WENA1.0 and therefore provides an unbiased assessment of location performance. A statistically significant sample is achieved by generating 500 location realizations based on 5 events with location accuracy between 1 km and 5 km. Each realization is a randomly selected event with location determined by randomly selecting 5 stations from the available network. In 340 cases (68% of the instances), locations are improved, and average mislocation is reduced from 31 km to 26 km. Preliminary tests of uncertainty estimates suggest that our uncertainty model produces location uncertainty ellipses that are representative of location accuracy. These results highlight the importance of accurate GT datasets in assessing regional travel-time models and demonstrate that an a priori 3D model can markedly improve our ability to locate small-magnitude events in a regional monitoring context. This work was performed under the auspices of the U.S. Department of Energy by the University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48, Contribution UCRL-CONF-206386.

  6. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their project. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development- effort-estimation techniques are used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performances of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.

  7. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated, including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system, exercised to model a desired output information layer as a function of input layers of raster-format collateral and image data base layers.
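    A minimal sketch of approach (1), maximum likelihood classification with prior probabilities supplied by collateral data, is given below; the two-band class statistics and the prior values are invented for illustration.

    ```python
    import numpy as np

    def map_classify(x, classes):
        """Maximum a posteriori pixel classification: multivariate normal
        likelihood per class, weighted by a prior from collateral data."""
        best, best_score = None, -np.inf
        for name, (mu, cov, prior) in classes.items():
            d = x - mu
            _, logdet = np.linalg.slogdet(cov)
            loglik = -0.5 * (d @ np.linalg.solve(cov, d) + logdet)
            score = loglik + np.log(prior)    # the prior shifts the decision boundary
            if score > best_score:
                best, best_score = name, score
        return best

    # Hypothetical two-band statistics; priors could come from a terrain layer
    classes = {
        "forest": (np.array([40., 30.]), np.eye(2) * 25., 0.7),
        "brush":  (np.array([50., 35.]), np.eye(2) * 25., 0.3),
    }
    print(map_classify(np.array([46., 33.]), classes))
    ```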

  8. A new fault diagnosis algorithm for AUV cooperative localization system

    NASA Astrophysics Data System (ADS)

    Shi, Hongyang; Miao, Zhiyong; Zhang, Yi

    2017-10-01

    Multiple-AUV cooperative localization, as a new kind of underwater positioning technology, not only improves positioning accuracy but also has many advantages that a single AUV does not have. It is necessary to detect and isolate faults to increase the reliability and availability of the AUV cooperative localization system. In this paper, the Extended Multiple Model Adaptive Cubature Kalman Filter (EMMACKF) method is presented to detect faults. Sensor failures are simulated based on off-line experimental data. Experimental results have shown that faulty apparatus can be diagnosed effectively using the proposed method. Compared with the Multiple Model Adaptive Extended Kalman Filter and the Multi-Model Adaptive Unscented Kalman Filter, both accuracy and timeliness have been improved to some extent.

  9. An improved model of the Earth's gravitational field: GEM-T1

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Lerch, F. J.; Christodoulidis, D. C.; Putney, B. H.; Felsentreger, T. L.; Sanchez, B. V.; Smith, D. E.; Klosko, S. M.; Martin, T. V.; Pavlis, E. C.

    1987-01-01

    Goddard Earth Model T1 (GEM-T1), which was developed from an analysis of direct satellite tracking observations, is the first in a new series of such models. GEM-T1 is complete to degree and order 36. It was developed using consistent reference parameters and extensive earth and ocean tidal models. The solution simultaneously estimated gravitational and tidal terms, earth orientation parameters, and the orbital parameters of 580 individual satellite arcs. It used only satellite tracking data acquired on 17 different satellites and is predominantly based upon the precise laser data taken by third-generation systems. In all, 800,000 observations were used. A major improvement in field accuracy was obtained. For marine geodetic applications, long-wavelength geoidal modeling is twice as good as in earlier satellite-only GEM models. Orbit determination accuracy has also been substantially advanced over a wide range of satellites that have been tested.

  10. Mammographic density, breast cancer risk and risk prediction

    PubMed Central

    Vachon, Celine M; van Gils, Carla H; Sellers, Thomas A; Ghosh, Karthik; Pruthi, Sandhya; Brandt, Kathleen R; Pankratz, V Shane

    2007-01-01

    In this review, we examine the evidence for mammographic density as an independent risk factor for breast cancer, describe the risk prediction models that have incorporated density, and discuss the current and future implications of using mammographic density in clinical practice. Mammographic density is a consistent and strong risk factor for breast cancer in several populations and across age at mammogram. Recently, this risk factor has been added to existing breast cancer risk prediction models, increasing the discriminatory accuracy with its inclusion, albeit slightly. With validation, these models may replace the existing Gail model for clinical risk assessment. However, absolute risk estimates resulting from these improved models are still limited in their ability to characterize an individual's probability of developing cancer. Promising new measures of mammographic density, including volumetric density, which can be standardized using full-field digital mammography, will likely result in a stronger risk factor and improve accuracy of risk prediction models. PMID:18190724

  11. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm makes use of a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching speed when handling maneuver models, compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF), and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF). PMID:28608843

  12. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-06-13

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm makes use of a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm which utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching speed when handling maneuver models, compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF), and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF).
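    The Markov switching machinery common to all the IMM variants compared above can be sketched independently of the particular Kalman filter run for each model; the two-model transition matrix and the likelihood values below are illustrative assumptions.

    ```python
    import numpy as np

    def imm_mix(mu, PI):
        """IMM interaction step: predicted model probabilities and the mixing
        weights used to blend each filter's prior state estimate."""
        c = PI.T @ mu                        # predicted model probabilities
        W = (PI * mu[:, None]) / c[None, :]  # W[i, j]: weight of model i into model j
        return c, W

    def imm_update(c, lik):
        """Model-probability update from each filter's measurement likelihood."""
        mu = c * lik
        return mu / mu.sum()

    PI = np.array([[0.95, 0.05],             # Markov model-switching matrix
                   [0.05, 0.95]])
    mu = np.array([0.9, 0.1])                # current model probabilities
    c, W = imm_mix(mu, PI)
    print(imm_update(c, lik=np.array([0.2, 1.4])))  # maneuver model gains weight
    ```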

  13. Gravity model development for TOPEX/POSEIDON: Joint gravity models 1 and 2

    NASA Technical Reports Server (NTRS)

    Nerem, R. S.; Lerch, F. J.; Marshall, J. A.; Pavlis, E. C.; Putney, B. H.; Tapley, B. D.; Eanes, R. J.; Ries, J. C.; Schutz, B. E.; Shum, C. K.

    1994-01-01

    The TOPEX/POSEIDON (T/P) prelaunch Joint Gravity Model-1 (JGM-1) and the postlaunch JGM-2 Earth gravitational models have been developed to support precision orbit determination for T/P. Each of these models is complete to degree 70 in spherical harmonics and was computed from a combination of satellite tracking data, satellite altimetry, and surface gravimetry. While improved orbit determination accuracies for T/P have driven the improvements in the models, the models are general in application and also provide an improved geoid for oceanographic computations. The postlaunch model, JGM-2, which includes T/P satellite laser ranging (SLR) and Doppler orbitography and radiopositioning integrated by satellite (DORIS) tracking data, introduces radial orbit errors for T/P that are only 2 cm RMS with the commission errors of the marine geoid for terms to degree 70 being +/- 25 cm. Errors in modeling the nonconservative forces acting on T/P increase the total radial errors to only 3-4 cm root mean square (RMS), a result much better than premission goals. While the orbit accuracy goal for T/P has been far surpassed geoid errors still prevent the absolute determination of the ocean dynamic topography for wavelengths shorter than about 2500 km. Only a dedicated gravitational field satellite mission will likely provide the necessary improvement in the geoid.

  14. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2015-09-01

    The Earth rotation reflects the coupling process among the solid Earth, atmosphere, oceans, mantle, and core of the Earth on multiple spatial and temporal scales. The Earth rotation can be described by the Earth's orientation parameters, which are abbreviated as EOP (mainly including two polar motion components PM_X and PM_Y, and variation in the length of day ΔLOD). The EOP is crucial in the transformation between the terrestrial and celestial reference systems, and has important applications in many areas such as deep space exploration, satellite precise orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies generally have a delay of several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis is composed of the following three aspects, for the purpose of improving the EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision forecasts of EOP can be realized by appropriate selection of the basic data series length according to the required time span of EOP prediction: for short-term prediction, the basic data series should be shorter, while for long-term prediction, the series should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients, respectively, which are used to improve/re-evaluate the AR model. Compared to the single AR model, the AR+Kalman method performs better in the prediction of UT1-UTC and ΔLOD, and the improvement in the prediction of polar motion is significant. (3) Following the successful Earth Orientation Parameter Prediction Comparison Campaign (EOP PCC), the Earth Orientation Parameter Combination of Prediction Pilot Project (EOPC PPP) was sponsored in 2010. As one of the participants from China, we update and submit short- and medium-term (1 to 90 days) EOP predictions every day. From the current comparative statistics, our prediction accuracy is at a medium international level. We will carry out more innovative research to improve the EOP forecast accuracy and enhance our level in EOP forecasting.
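    A minimal sketch of the LS+AR combination described in point (1) is given below: a least-squares fit of trend plus periodic terms, with a Yule-Walker AR model extrapolating the LS residuals. The chosen periods (annual and Chandler-like), the AR order, and the synthetic series are assumptions for illustration.

    ```python
    import numpy as np

    def ls_ar_predict(t, y, t_new, periods=(365.24, 435.0), ar_order=4):
        """LS+AR sketch: LS trend + periodic terms, then AR extrapolation
        of the residuals via the Yule-Walker equations."""
        cols = [np.ones_like(t), t]
        for P in periods:
            cols += [np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        res = y - A @ coef
        # AR fit on residuals (Yule-Walker)
        n = len(res)
        r = np.array([np.dot(res[:n - k], res[k:]) / n for k in range(ar_order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(ar_order)] for i in range(ar_order)])
        a = np.linalg.solve(R, r[1:ar_order + 1])
        # Extrapolate residuals recursively, then add back the LS part
        hist, preds = list(res[-ar_order:]), []
        for tn in t_new:
            r_hat = float(np.dot(a, hist[::-1]))
            hist = hist[1:] + [r_hat]
            base = [1.0, tn] + [f(2 * np.pi * tn / P) for P in periods for f in (np.sin, np.cos)]
            preds.append(float(np.dot(coef, base)) + r_hat)
        return np.array(preds)

    rng = np.random.default_rng(5)
    t = np.arange(3000.0)   # days; synthetic stand-in for a polar motion component
    y = 0.1 * np.sin(2 * np.pi * t / 365.24) + 0.05 * np.sin(2 * np.pi * t / 435.0) \
        + 0.01 * rng.standard_normal(t.size)
    print(ls_ar_predict(t, y, t[-1] + np.arange(1.0, 6.0)))
    ```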

  15. Feasibility Analysis of DEM Differential Method on Tree Height Assessment with Terra-SAR/TanDEM-X Data

    NASA Astrophysics Data System (ADS)

    Zhang, Wangfei; Chen, Erxue; Li, Zengyuan; Feng, Qi; Zhao, Lei

    2016-08-01

    The DEM differential method is an effective and efficient way to assess forest tree height with polarimetric and interferometric technology; however, its assessment accuracy depends on the accuracy of the interferometric results and the DEM. Terra-SAR/TanDEM-X, which established the first spaceborne bistatic interferometer, can provide highly accurate cross-track interferometric images across the whole globe without inherent accuracy limitations like temporal decorrelation and atmospheric disturbance. These characteristics of Terra-SAR/TanDEM-X give great potential for global or regional tree height assessment, which has been constrained by the temporal decorrelation in traditional repeat-pass interferometry. Currently, in China, it would be costly to collect highly accurate DEMs with lidar. At the same time, it is also difficult to get truly representative ground survey samples to test and verify the assessment results. In this paper, we analyzed the feasibility of using Terra-SAR/TanDEM-X data to assess forest tree height with currently free DEM data such as ASTER-GDEM and archived ground in-situ data such as forest management inventory (FMI) data. First, the accuracy of ASTER-GDEM and the forest management inventory data was assessed against the DEM and canopy height model (CHM) extracted from lidar data. The results show the average elevation RMSE between ASTER-GDEM and the lidar DEM is about 13 meters, but the two are highly correlated, with a correlation coefficient of 0.96. With a linear regression model, we can compensate ASTER-GDEM and improve its accuracy to nearly that of the lidar DEM at the same scale. The correlation coefficient between FMI and CHM is 0.40; its accuracy can be improved by a linear regression model within confidence intervals of 95%. After compensation of ASTER-GDEM and FMI, we calculated the tree height at the Mengla test site with the DEM differential method. The results showed that the corrected ASTER-GDEM can effectively improve the assessment accuracy. The average assessment accuracy before and after correction is 0.73 and 0.76, and the RMSE is 5.5 and 4.4, respectively.
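    The linear compensation step reported above can be sketched as follows; the elevation values, bias, and noise levels are synthetic stand-ins, and the real workflow differences a TanDEM-X interferometric (canopy-following) DEM against the corrected ground DEM rather than the toy surface used here.

    ```python
    import numpy as np

    def linear_compensate(coarse, reference):
        """Fit reference ~ a*coarse + b on co-located samples, as with the
        ASTER-GDEM vs. lidar-DEM regression reported above."""
        a, b = np.polyfit(coarse, reference, 1)
        return lambda dem: a * dem + b

    rng = np.random.default_rng(3)
    lidar = rng.uniform(500, 1500, 1000)                  # stand-in ground elevations, m
    aster = 0.97 * lidar + 30 + rng.normal(0, 13, 1000)   # biased, noisy GDEM
    corr = linear_compensate(aster, lidar)

    insar_dem = lidar + 18.0                  # toy canopy-following InSAR surface
    tree_height = insar_dem - corr(aster)     # DEM differential estimate
    print(tree_height.mean(), tree_height.std())
    ```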

  16. Template-Directed Instrumentation Reduces Cost and Improves Efficiency for Total Knee Arthroplasty: An Economic Decision Analysis and Pilot Study.

    PubMed

    McLawhorn, Alexander S; Carroll, Kaitlin M; Blevins, Jason L; DeNegre, Scott T; Mayman, David J; Jerabek, Seth A

    2015-10-01

    Template-directed instrumentation (TDI) for total knee arthroplasty (TKA) may streamline operating room (OR) workflow and reduce costs by preselecting implants and minimizing instrument tray burden. A decision model simulated the economics of TDI. Sensitivity analyses determined thresholds for model variables to ensure TDI success. A clinical pilot was reviewed. The accuracy of preoperative templates was validated, and 20 consecutive primary TKAs were performed using TDI. The model determined that preoperative component size estimation should be accurate to ±1 implant size for 50% of TKAs to implement TDI. The pilot showed that preoperative template accuracy exceeded 97%. There were statistically significant improvements in OR turnover time and in-room time for TDI compared to an historical cohort of TKAs. TDI reduces costs and improves OR efficiency. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Improved Surgery Planning Using 3-D Printing: a Case Study.

    PubMed

    Singhal, A J; Shetty, V; Bhagavan, K R; Ragothaman, Ananthan; Shetty, V; Koneru, Ganesh; Agarwala, M

    2016-04-01

    The role of 3-D printing in improved patient-specific surgery planning is presented. Key benefits are time saved and surgery outcome. Two hard-tissue surgery models were 3-D printed, for orthopedic pelvic surgery and craniofacial surgery. We discuss software data conversion of computed tomography (CT)/magnetic resonance (MR) medical images for 3-D printing. 3-D printed models save time in surgery planning and help visualize complex pre-operative anatomy. Time saved in surgery planning can be as much as two thirds. In addition to improved surgery accuracy, 3-D printing presents opportunities in materials research. Other hard-tissue and soft-tissue cases in maxillofacial, abdominal, thoracic, cardiac, orthodontic, and neurosurgical applications are considered. We recommend using 3-D printing as a standard protocol for surgery planning and for teaching surgery practices. A quick turnaround time for a 3-D printed surgery model, with improved accuracy in surgery planning, is helpful for the surgery team. It is recommended that these costs be within 20% of the total surgery budget.

  18. Bayesian model aggregation for ensemble-based estimates of protein pKa values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gosink, Luke J.; Hogan, Emilie A.; Pulsipher, Trenton C.

    2014-03-01

    This paper investigates an ensemble-based technique called Bayesian Model Averaging (BMA) to improve the performance of protein amino acid pKa predictions. Structure-based pKa calculations play an important role in the mechanistic interpretation of protein structure and are also used to determine a wide range of protein properties. A diverse set of methods currently exist for pKa prediction, ranging from empirical statistical models to ab initio quantum mechanical approaches. However, each of these methods is based on a set of assumptions that have inherent bias and sensitivities that can affect a model's accuracy and generalizability for pKa prediction in complicated biomolecular systems. We use BMA to combine eleven diverse prediction methods that each estimate pKa values of amino acids in staphylococcal nuclease. These methods are based on work conducted for the pKa Cooperative, and the pKa measurements are based on experimental work conducted by the García-Moreno lab. Our study demonstrates that the aggregated estimate obtained from BMA outperforms all individual prediction methods in our cross-validation study, with improvements of 40-70% over other method classes. This work illustrates a new possible mechanism for improving the accuracy of pKa prediction and lays the foundation for future work on aggregate models that balance computational cost with prediction accuracy.
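    A minimal sketch of the aggregation idea, with model weights derived from each method's fit to measured pKa values, is shown below; the paper's BMA uses EM-fitted weights and a fuller probabilistic treatment, so the simple Gaussian likelihood weighting and synthetic predictions here are only illustrative.

    ```python
    import numpy as np

    def bma_combine(preds_train, y_train, preds_test, sigma=1.0):
        """Weight each predictor by its Gaussian data likelihood on measured
        values, then form the weighted average of test predictions. Note the
        weights concentrate on the best method as the evidence grows."""
        loglik = np.array([
            -0.5 * np.sum((p - y_train) ** 2) / sigma**2 for p in preds_train
        ])
        w = np.exp(loglik - loglik.max())
        w /= w.sum()
        return w, np.tensordot(w, preds_test, axes=1)

    rng = np.random.default_rng(4)
    y = rng.uniform(2, 10, 20)                              # "measured" pKa values
    models_train = [y + rng.normal(0, s, 20) for s in (0.3, 0.8, 1.5)]  # 3 methods
    models_test = np.array([m[:5] for m in models_train])
    w, combined = bma_combine(models_train, y, models_test)
    print(w, combined[:3])
    ```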

  19. Response of data-driven artificial neural network-based TEC models to neutral wind for different locations, seasons, and solar activity levels from the Indian longitude sector

    NASA Astrophysics Data System (ADS)

    Sur, D.; Haldar, S.; Ray, S.; Paul, A.

    2017-07-01

    The perturbations imposed on transionospheric signals by the ionosphere are a major concern for navigation. The dynamic nature of the ionosphere in the low-latitude equatorial region and the Indian longitude sector has some specific characteristics such as sharp temporal and latitudinal variation of total electron content (TEC). TEC in the Indian longitude sector also undergoes seasonal variations. The large magnitude and sharp variation of TEC cause large and variable range errors for satellite-based navigation system such as Global Positioning System (GPS) throughout the day. For accurate navigation using satellite-based augmentation systems, proper prediction of TEC under certain geophysical conditions is necessary in the equatorial region. It has been reported in the literature that prediction accuracy of TEC has been improved using measured data-driven artificial neural network (ANN)-based vertical TEC (VTEC) models, compared to standard ionospheric models. A set of observations carried out in the Indian longitude sector have been reported in this paper in order to find the amount of improvement in performance accuracy of an ANN-based VTEC model after incorporation of neutral wind as model input. The variations of this improvement in prediction accuracy with respect to latitude, longitude, season, and solar activity have also been reported in this paper.

  20. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  1. Mapping irrigated lands at 250-m scale by merging MODIS data and National Agricultural Statistics

    USGS Publications Warehouse

    Pervez, Md Shahriar; Brown, Jesslyn F.

    2010-01-01

    Accurate geospatial information on the extent of irrigated land improves our understanding of agricultural water use, local land surface processes, conservation or depletion of water resources, and components of the hydrologic budget. We have developed a method in a geospatial modeling framework that assimilates irrigation statistics with remotely sensed parameters describing vegetation growth conditions in areas with agricultural land cover to spatially identify irrigated lands at 250-m cell size across the conterminous United States for 2002. The geospatial model result, known as the Moderate Resolution Imaging Spectroradiometer (MODIS) Irrigated Agriculture Dataset (MIrAD-US), identified irrigated lands with reasonable accuracy in California and semiarid Great Plains states with overall accuracies of 92% and 75% and kappa statistics of 0.75 and 0.51, respectively. A quantitative accuracy assessment of MIrAD-US for the eastern region has not yet been conducted, and qualitative assessment shows that model improvements are needed for the humid eastern regions where the distinction in annual peak NDVI between irrigated and non-irrigated crops is minimal and county sizes are relatively small. This modeling approach enables consistent mapping of irrigated lands based upon USDA irrigation statistics and should lead to better understanding of spatial trends in irrigated lands across the conterminous United States. An improved version of the model with revised datasets is planned and will employ 2007 USDA irrigation statistics.

  2. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE PAGES

    Rosewater, David; Ferreira, Summer; Schoenwald, David; ...

    2018-01-25

    Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.

  3. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosewater, David; Ferreira, Summer; Schoenwald, David

    Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
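    A minimal sketch of the kind of SoC forecasting model whose parameters would be optimized is shown below: an efficiency-corrected energy balance clipped to the feasible range. The efficiency values and schedule are assumed, and the study's reformulated models are more detailed than this.

    ```python
    import numpy as np

    def forecast_soc(soc0, power_kw, dt_h, e_cap_kwh, eff_chg=0.93, eff_dis=0.93):
        """Energy-balance SoC forecast: +p charges, -p discharges (measured at
        the grid side), with assumed one-way efficiencies and [0, 1] clipping."""
        soc = [soc0]
        for p in power_kw:
            e = p * dt_h * (eff_chg if p > 0 else 1.0 / eff_dis)  # kWh into the cells
            soc.append(float(np.clip(soc[-1] + e / e_cap_kwh, 0.0, 1.0)))
        return np.array(soc)

    schedule = np.array([50, 50, -30, -30, -30, 20])   # kW, hourly schedule
    print(forecast_soc(0.5, schedule, dt_h=1.0, e_cap_kwh=500))
    ```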

  4. Researches on the Orbit Determination and Positioning of the Chinese Lunar Exploration Program

    NASA Astrophysics Data System (ADS)

    Li, P. J.

    2015-07-01

    This dissertation studies the precise orbit determination (POD) and positioning of the Chinese lunar exploration spacecraft, emphasizing the variety of VLBI (very long baseline interferometry) technologies applied to deep-space exploration and their contributions to the methods and accuracies of precise orbit determination and positioning. In summary, the main contents are as follows: In this work, using the real-time data measured by the CE-2 (Chang'E-2) probe, the accuracy of orbit determination is analyzed for a domestic lunar probe under present conditions, and the role played by the VLBI tracking data is particularly reassessed through precise orbit determination experiments for CE-2. The short-arc orbit determination experiments for the lunar probe show that the combination of ranging and VLBI data with an arc of 15 minutes is able to improve the accuracy by 1-1.5 orders of magnitude, compared to cases using only the ranging data with an arc of 3 hours. The orbital accuracy is assessed through orbital overlapping analysis, and the results show that the VLBI data is able to contribute to CE-2's long-arc POD, especially in the along-track and orbital normal directions. For CE-2's 100 km × 100 km lunar orbit, the position errors are better than 30 meters, and for CE-2's 15 km × 100 km orbit, the position errors are better than 45 meters. The observational data with delta differential one-way ranging (ΔDOR) from CE-2's X-band monitoring and control system experiment are analyzed. It is concluded that the accuracy of the ΔDOR delay is dramatically improved, with a noise level better than 0.1 ns, and the systematic errors are well calibrated. Although it is unable to support the development of an independent lunar gravity model, the tracking data of CE-2 provided evaluations of different lunar gravity models through POD, and the accuracies are examined in terms of orbit-to-orbit solution differences for several gravity models. It is found that for the 100 km × 100 km lunar orbit, with a degree and order expansion up to 165, JPL's gravity model LP165P does not show noticeable improvement over Japan's SGM series models (100 × 100), but for the 15 km × 100 km lunar orbit, a higher degree-order model can significantly improve the orbit accuracy. After accomplishing its nominal mission, CE-2 carried out extended missions, which involved the L2 mission and the 4179 Toutatis mission. During the flight of the extended missions, the regime offers very little dynamics and thus requires an extensive amount of time and tracking data in order to attain a solution. The overlap errors are computed, and it is indicated that the use of VLBI measurements is able to increase the accuracy and reduce the total amount of tracking time. An orbit determination method based on polynomial fitting is proposed for CE-3's planned lunar soft landing mission. In this method, the spacecraft's dynamic modeling is not necessary, and its noise reduction is expected to be better than that of the point positioning method by making full use of all-arc observational data. The simulation experiments and real data processing showed that the optimal description of CE-1's free-fall landing trajectory is a set of fifth-order polynomial functions for each of the position components as well as the velocity components in J2000.0.
The combination of the VLBI delay, the delay rate data, and the USB (unified S-band) ranging data significantly improved the accuracy over the use of USB data alone. In order to determine the position of CE-3's lunar lander, a kinematic statistical method is proposed. This method uses both ranging and VLBI measurements to the lander for a continuous arc, combined with precise knowledge about the motion of the moon as provided by a planetary ephemeris, to estimate the lander's position on the lunar surface with high accuracy. Application of a lunar digital elevation model (DEM) as a constraint in the lander positioning is helpful. The positioning method for the traverse of the lunar rover is also investigated. The integrated delay-rate method is able to achieve more precise positioning results than the point positioning method. This method provides a wide application of the VLBI data. The automated sample-return mission involves lunar orbit rendezvous and docking. Precise orbit determination using same-beam VLBI (SBI) measurements of two spacecraft at the same time is analyzed. The simulation results showed that the SBI data is able to improve the absolute and relative orbit accuracy of the two targets by 1-2 orders of magnitude. In order to verify the simulation results and test the two-target POD software developed by SHAO (Shanghai Astronomical Observatory), real SBI data from SELENE (Selenological and Engineering Explorer) are processed. The POD results for Rstar and Vstar showed that the combination of SBI data could significantly improve the accuracy for the two spacecraft, especially for Vstar with less ranging data, for which the POD accuracy is improved by approximately one order of magnitude relative to the POD accuracy of Rstar.
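    The dynamics-free, fifth-order polynomial description of the descent trajectory can be sketched as follows; the toy kinematics and sampling are invented, and the real method estimates the polynomial coefficients from VLBI and ranging observations rather than from known positions as done here.

    ```python
    import numpy as np

    def fit_trajectory(t, pos, order=5):
        """Fit each J2000.0 position component with a degree-5 polynomial;
        the velocity model is the analytic derivative of each fit."""
        polys = [np.polynomial.Polynomial.fit(t, pos[:, k], order) for k in range(3)]
        vels = [p.deriv() for p in polys]
        return polys, vels

    t = np.linspace(0, 720, 200)                     # s, descent arc
    pos = np.column_stack([1e3 * t, -0.5 * 1.62 * t**2, 5.0 * t])  # toy kinematics, m
    polys, vels = fit_trajectory(t, pos)
    print([float(v(360.0)) for v in vels])           # fitted velocity mid-arc, m/s
    ```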

  5. Accuracy Assessment of Recent Global Ocean Tide Models around Antarctica

    NASA Astrophysics Data System (ADS)

    Lei, J.; Li, F.; Zhang, S.; Ke, H.; Zhang, Q.; Li, W.

    2017-09-01

    Due to the coverage limitation of T/P-series altimeters, the lack of bathymetric data under large ice shelves, and inaccurate definitions of coastlines and grounding lines, the accuracy of ocean tide models around Antarctica is poorer than in the deep oceans. Using tidal measurements from tide gauges, gravimetric data, and GPS records, the accuracy of seven state-of-the-art global ocean tide models (DTU10, EOT11a, GOT4.8, FES2012, FES2014, HAMTIDE12, TPXO8) is assessed, as well as the most widely used conventional model, FES2004. Four regions (the Antarctic Peninsula region, the Amery ice shelf region, the Filchner-Ronne ice shelf region, and the Ross ice shelf region) are reported separately. The standard deviations of the eight main constituents between the selected models are large in polar regions, especially under the big ice shelves, suggesting that the uncertainty in these regions remains large. Comparisons with in situ tidal measurements show that the most accurate model is TPXO8, and all models show the worst performance in the Weddell Sea and Filchner-Ronne ice shelf regions. The accuracy of tidal predictions around Antarctica is gradually improving.

  6. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.

  7. An Evaluation of Explicit Receptor Flexibility in Molecular Docking Using Molecular Dynamics and Torsion Angle Molecular Dynamics.

    PubMed

    Armen, Roger S; Chen, Jianhan; Brooks, Charles L

    2009-10-13

    Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and "noise" that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included in the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high-affinity compound identification without significantly increasing the number of false positives from low-affinity compounds.

  8. An Evaluation of Explicit Receptor Flexibility in Molecular Docking Using Molecular Dynamics and Torsion Angle Molecular Dynamics

    PubMed Central

    Armen, Roger S.; Chen, Jianhan; Brooks, Charles L.

    2009-01-01

    Incorporating receptor flexibility into molecular docking should improve results for flexible proteins. However, the incorporation of explicit all-atom flexibility with molecular dynamics for the entire protein chain may also introduce significant error and “noise” that could decrease docking accuracy and deteriorate the ability of a scoring function to rank native-like poses. We address this apparent paradox by comparing the success of several flexible receptor models in cross-docking and multiple receptor ensemble docking for p38α mitogen-activated protein (MAP) kinase. Explicit all-atom receptor flexibility has been incorporated into a CHARMM-based molecular docking method (CDOCKER) using both molecular dynamics (MD) and torsion angle molecular dynamics (TAMD) for the refinement of predicted protein-ligand binding geometries. These flexible receptor models have been evaluated, and the accuracy and efficiency of TAMD sampling is directly compared to MD sampling. Several flexible receptor models are compared, encompassing flexible side chains, flexible loops, multiple flexible backbone segments, and treatment of the entire chain as flexible. We find that although including side chain and some backbone flexibility is required for improved docking accuracy as expected, docking accuracy also diminishes as additional and unnecessary receptor flexibility is included in the conformational search space. Ensemble docking results demonstrate that including protein flexibility leads to improved agreement with binding data for 227 active compounds. This comparison also demonstrates that a flexible receptor model enriches high-affinity compound identification without significantly increasing the number of false positives from low-affinity compounds. PMID:20160879

  9. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for the average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method offers two distinct merits: higher-order accuracy and an ASA mechanism, so it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, compared to the APA technique, the proposed method increases the computing speed by more than a hundredfold at the same error level. In computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude for the same number of amplifying sections. The proposed method is also capable of rapidly and effectively computing the model equations of fiber Raman amplifiers and semiconductor lasers.
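
    The core of the ASA idea, adapting the integration step to a local error estimate so that fine steps are spent only where the power evolves rapidly, can be illustrated with a generic step-doubling scheme. The sketch below is a minimal Python illustration assuming a hypothetical saturable gain model g(P); it is not the paper's EDFA model equations or its specific higher-order construction.

```python
# Adaptive step-size integration of a generic amplifier power-evolution
# ODE dP/dz = g(P) * P, in the spirit of an automatic step adjustment:
# compare one full RK4 step against two half steps and adapt the step.
import numpy as np

def g(P, g0=0.5, P_sat=10.0):
    # hypothetical saturable gain coefficient [1/m] (placeholder model)
    return g0 / (1.0 + P / P_sat)

def rk4_step(P, h):
    k1 = g(P) * P
    k2 = g(P + 0.5 * h * k1) * (P + 0.5 * h * k1)
    k3 = g(P + 0.5 * h * k2) * (P + 0.5 * h * k2)
    k4 = g(P + h * k3) * (P + h * k3)
    return P + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(P0, L, h0=0.1, tol=1e-9):
    z, P, h = 0.0, P0, h0
    while z < L:
        h = min(h, L - z)
        full = rk4_step(P, h)                      # one full step
        half = rk4_step(rk4_step(P, h / 2), h / 2) # two half steps
        if abs(half - full) < tol:                 # accept and grow step
            z, P = z + h, half
            h *= 1.5
        else:                                      # reject and shrink step
            h *= 0.5
    return P

print(integrate_adaptive(P0=1e-3, L=10.0))         # output power [W]
```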

  10. Modeling and Simulation of High Resolution Optical Remote Sensing Satellite Geometric Chain

    NASA Astrophysics Data System (ADS)

    Xia, Z.; Cheng, S.; Huang, Q.; Tian, G.

    2018-04-01

    High resolution satellites with longer focal lengths and larger apertures have been widely used for georeferencing observed scenes in recent years. A consistent end-to-end model of the high resolution remote sensing satellite geometric chain is presented, which consists of the scene, the three-line-array camera, the platform (including attitude and position information), the time system, and the processing algorithm. The integrated design of the camera and the star tracker is considered, and a simulation method for geolocation accuracy is put forward by introducing a new index: the angle between the camera and the star tracker. The model is validated by rigorous geolocation accuracy simulation following the test method used for ZY-3 satellite imagery. The simulation results show that the geolocation accuracy is within 25 m, which is highly consistent with the test results. The geolocation accuracy can be improved by about 7 m through the integrated design. The model, combined with the simulation method, is applicable to geolocation accuracy estimation before satellite launch.

  11. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images

    PubMed Central

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-01-01

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray-level information and texture features between docked ships and their connected dock regions are indistinguishable, and most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy with an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimensional scanning (OITDS) strategy is designed to rapidly extract candidate regions from land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships among candidate objects. Specifically, to improve robustness to the diversity of ships, a deformable part model (DPM) is employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236

  12. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images.

    PubMed

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-06-22

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex, gray-level information and texture features between docked ships and their connected dock regions are indistinguishable, and most popular detection methods are limited in their computational efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy with an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimensional scanning (OITDS) strategy is designed to rapidly extract candidate regions from land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships among candidate objects. Specifically, to improve robustness to the diversity of ships, a deformable part model (DPM) is employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.

  13. Mutants of Cre recombinase with improved accuracy

    PubMed Central

    Eroshenko, Nikolai; Church, George M.

    2013-01-01

    Despite rapid advances in genome engineering technologies, inserting genes into precise locations in the human genome remains an outstanding problem. It has been suggested that site-specific recombinases can be adapted towards use as transgene delivery vectors. The specificity of recombinases can be altered either with directed evolution or via fusions to modular DNA-binding domains. Unfortunately, both wildtype and altered variants often have detectable activities at off-target sites. Here we use bacterial selections to identify mutations in the dimerization surface of Cre recombinase (R32V, R32M, and 303GVSdup) that improve the accuracy of recombination. The mutants are functional in bacteria, in human cells, and in vitro (except for 303GVSdup, which we did not purify), and have improved selectivity against both model off-target sites and the entire E. coli genome. We propose that destabilizing binding cooperativity may be a general strategy for improving the accuracy of dimeric DNA-binding proteins. PMID:24056590

  14. Improved parametrization of the growth index for dark energy and DGP models

    NASA Astrophysics Data System (ADS)

    Jing, Jiliang; Chen, Songbai

    2010-03-01

    We propose two improved parameterized forms for the growth index of linear matter perturbations: (I) γ(z) = γ0 + (γ∞ − γ0) z/(1+z) and (II) γ(z) = γ0 + γ1 z/(1+z) + (γ∞ − γ1 − γ0) [z/(1+z)]^α. With these forms of γ(z), we analyze the accuracy of the approximation of the growth factor f by Ωm^γ(z) for both the wCDM model and the DGP model. For the first improved parameterized form, we find that the approximation accuracy is enhanced at high redshifts for both kinds of models, but not at low redshifts. For the second improved parameterized form, it is found that Ωm^γ(z) approximates the growth factor f very well at all redshifts. For chosen α, the relative error is below 0.003% for the ΛCDM model and 0.028% for the DGP model when Ωm = 0.27. Thus, the second improved parameterized form of γ(z) should be useful for high precision constraints on the growth index of different models with observational data. Moreover, we also show that α depends on the equation of state w and the fractional energy density of matter Ωm0, which may help us learn more about dark energy and DGP models.
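
    For readability, the two parameterized forms and the approximation they serve can be restated in display form. This is a reconstruction from the abstract: the exponent α on the last term of form (II) is inferred from the limiting behaviour γ(0) = γ0 and γ(z→∞) = γ∞.

```latex
% Reconstructed parameterizations (I) and (II) and the growth-factor
% approximation they are designed for:
\gamma_{\rm I}(z)  \;=\; \gamma_0 + \left(\gamma_\infty-\gamma_0\right)\frac{z}{1+z},
\qquad
\gamma_{\rm II}(z) \;=\; \gamma_0 + \gamma_1\,\frac{z}{1+z}
   + \left(\gamma_\infty-\gamma_1-\gamma_0\right)\!\left(\frac{z}{1+z}\right)^{\!\alpha},
\qquad
f \;\equiv\; \frac{d\ln\delta}{d\ln a} \;\approx\; \Omega_m(z)^{\gamma(z)}
```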

  15. Improving the accuracy of macromolecular structure refinement at 7 Å resolution.

    PubMed

    Brunger, Axel T; Adams, Paul D; Fromme, Petra; Fromme, Raimund; Levitt, Michael; Schröder, Gunnar F

    2012-06-06

    In X-ray crystallography, molecular replacement and subsequent refinement is challenging at low resolution. We compared refinement methods using synchrotron diffraction data of photosystem I at 7.4 Å resolution, starting from different initial models with increasing deviations from the known high-resolution structure. Standard refinement spoiled the initial models, moving them further away from the true structure and leading to high R(free)-values. In contrast, DEN refinement improved even the most distant starting model as judged by R(free), atomic root-mean-square differences to the true structure, significance of features not included in the initial model, and connectivity of electron density. The best protocol was DEN refinement with initial segmented rigid-body refinement. For the most distant initial model, the fraction of atoms within 2 Å of the true structure improved from 24% to 60%. We also found a significant correlation between R(free) values and the accuracy of the model, suggesting that R(free) is useful even at low resolution. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Global and Regional 3D Tomography for Improved Seismic Event Location and Uncertainty in Explosion Monitoring

    NASA Astrophysics Data System (ADS)

    Downey, N.; Begnaud, M. L.; Hipp, J. R.; Ballard, S.; Young, C. S.; Encarnacao, A. V.

    2017-12-01

    The SALSA3D global 3D velocity model of the Earth was developed to improve the accuracy and precision of seismic travel time predictions for a wide suite of regional and teleseismic phases. Recently, the global SALSA3D model was updated to include additional body wave phases, including mantle phases, core phases, reflections off the core-mantle boundary, and underside reflections off the surface of the Earth. We show that this update improves travel time predictions and leads directly to significant improvements in the accuracy and precision of seismic event locations as compared to locations computed using standard 1D velocity models like ak135, or 2½D models like RSTT. A key feature of our inversions is that path-specific model uncertainty of travel time predictions is calculated using the full 3D model covariance matrix computed during tomography, which results in more realistic uncertainty ellipses that directly reflect tomographic data coverage. Application of this method can also be done at a regional scale: we present a velocity model with uncertainty obtained using data from the University of Utah Seismograph Stations. These results show a reduction in travel-time residuals for re-located events compared with those obtained using previously published models.

  17. Daily water level forecasting using wavelet decomposition and artificial intelligence techniques

    NASA Astrophysics Data System (ADS)

    Seo, Youngmin; Kim, Sungwon; Kisi, Ozgur; Singh, Vijay P.

    2015-01-01

    Reliable water level forecasting for reservoir inflow is essential for reservoir operation. The objective of this paper is to develop and apply two hybrid models for daily water level forecasting and investigate their accuracy. These two hybrid models are the wavelet-based artificial neural network (WANN) and the wavelet-based adaptive neuro-fuzzy inference system (WANFIS). Wavelet decomposition is employed to decompose an input time series into approximation and detail components; the decomposed time series are used as inputs to artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) for the WANN and WANFIS models, respectively (a sketch of this pipeline follows the abstract). Based on statistical performance indexes, the WANN and WANFIS models are found to be more efficient than the ANN and ANFIS models, with WANFIS7-sym10 yielding the best performance among all models. It is found that wavelet decomposition improves the accuracy of ANN and ANFIS. This study evaluates the accuracy of the WANN and WANFIS models for different mother wavelets, including Daubechies, Symmlet and Coiflet wavelets. It is found that the model performance depends on the input sets and mother wavelets, and that wavelet decomposition using the mother wavelet db10 can further improve the efficiency of the ANN and ANFIS models. Results obtained from this study indicate that the conjunction of wavelet decomposition and artificial intelligence models can be a useful tool for accurately forecasting daily water levels and can yield better efficiency than conventional forecasting models.
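
    A minimal sketch of the WANN idea follows: decompose the series into approximation and detail subseries with a discrete wavelet transform and feed them to a neural network. It assumes the PyWavelets and scikit-learn libraries and synthetic data; the lag, level, and network size are illustrative, and a production forecaster would decompose only past data to avoid look-ahead leakage.

```python
# WANN-style pipeline sketch: DWT decomposition -> per-band subseries ->
# ANN regression predicting the next-day level.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
series = rng.standard_normal(600).cumsum()          # synthetic "water level"

wavelet, nlev = "db10", 3
coeffs = pywt.wavedec(series, wavelet, level=nlev)

# Reconstruct one subseries per coefficient band (approximation + details)
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, wavelet)[:len(series)])
X_parts = np.column_stack(subseries)

X, y = X_parts[:-1], series[1:]                     # predict one step ahead
split = 500
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000, random_state=0)
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```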

  18. The prediction of zenith range refraction from surface measurements of meteorological parameters. [mathematical models of atmospheric refraction used to improve spacecraft tracking space navigation

    NASA Technical Reports Server (NTRS)

    Berman, A. L.

    1976-01-01

    In the last two decades, increasingly sophisticated deep space missions have placed correspondingly stringent requirements on navigational accuracy. As part of the effort to increase navigational accuracy, and hence the quality of radiometric data, much effort has been expended in an attempt to understand and compute the tropospheric effect on range (and hence range rate) data. The general approach adopted has been that of computing a zenith range refraction, and then mapping this refraction to any arbitrary elevation angle via an empirically derived function of elevation. The prediction of zenith range refraction derived from surface measurements of meteorological parameters is presented. Refractivity is separated into wet (water vapor pressure) and dry (atmospheric pressure) components. The integration of dry refractivity is shown to be exact. Attempts to integrate wet refractivity directly prove ineffective; however, several empirical models developed by the author and other researchers at JPL are discussed. The best current wet refraction model is here considered to be a separate day/night model, which is proportional to surface water vapor pressure and inversely proportional to surface temperature. Methods are suggested that might improve the accuracy of the wet range refraction model.
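
    The dry/wet split described above can be sketched numerically. The dry term below uses the widely known Saastamoinen hydrostatic zenith delay; the wet term follows the abstract's stated form, proportional to surface water vapor pressure and inversely proportional to surface temperature, with a purely hypothetical constant k rather than JPL's calibrated day/night model.

```python
# Zenith range refraction sketch: exact-form dry delay plus a simple
# empirical wet delay from surface meteorological measurements.
import numpy as np

def zenith_dry_delay(P_hpa, lat_rad, h_km):
    # Saastamoinen hydrostatic zenith delay [m]
    return 0.0022768 * P_hpa / (1 - 0.00266 * np.cos(2 * lat_rad) - 0.00028 * h_km)

def zenith_wet_delay(e_hpa, T_kelvin, k=4.9):
    # wet term proportional to e and inversely proportional to T, as the
    # abstract describes; k [m K / hPa] is an illustrative placeholder
    return k * e_hpa / T_kelvin

zhd = zenith_dry_delay(P_hpa=1013.25, lat_rad=np.radians(35.0), h_km=0.1)
zwd = zenith_wet_delay(e_hpa=12.0, T_kelvin=293.0)
print(f"dry: {zhd:.3f} m, wet: {zwd:.3f} m")        # ~2.3 m dry, ~0.2 m wet
```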

  19. Research on High Accuracy Detection of Red Tide Hyperspectral Based on Deep Learning CNN

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Ma, Y.; An, J.

    2018-04-01

    Increasing frequency of red tide outbreaks has been reported around the world. This is of great concern due not only to their adverse effects on human health and marine organisms, but also to their impacts on the economy of the affected areas. This paper puts forward a high accuracy detection method based on a fully-connected deep CNN detection model with 8 layers to monitor red tide in hyperspectral remote sensing images, and then discusses a glint suppression method for improving the accuracy of red tide detection. The results show that the proposed CNN hyperspectral detection model can detect red tide accurately and effectively. The red tide detection accuracy of the proposed CNN model based on the original image and the filtered image is 95.58% and 97.45%, respectively; compared with the SVM method, the CNN detection accuracy is increased by 7.52% and 2.25%. Compared with the SVM method based on the original image, the red tide CNN detection accuracy based on the filtered image is increased by 8.62% and 6.37%. This also indicates that image glint seriously affects the accuracy of red tide detection.

  20. The Use of Ambient Humidity Conditions to Improve Influenza Forecast

    NASA Astrophysics Data System (ADS)

    Shaman, J. L.; Kandula, S.; Yang, W.; Karspeck, A. R.

    2017-12-01

    Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing. These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that use of climatological humidity forcing is warranted for current operational influenza forecasting and provide further evidence that humidity modulates rates of influenza transmission.
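
    A minimal sketch of humidity forcing in a transmission model is given below: a discrete-time SIRS model whose basic reproduction number is modulated daily by specific humidity q(t). The exponential form and all constants are illustrative assumptions, not the calibrated values behind the published forecasts.

```python
# Humidity-forced SIRS sketch: R0 falls exponentially with specific
# humidity, so transmission peaks in the dry (winter) season.
import numpy as np

def simulate(q, R0_min=1.2, R0_max=2.2, a=180.0, D=4.0, L=1460.0, N=1e5):
    # D: infectious period [days]; L: duration of immunity [days]
    S, I = N - 1.0, 1.0
    incidence = []
    for qt in q:                                    # daily Euler steps
        R0 = R0_min + (R0_max - R0_min) * np.exp(-a * qt)
        beta = R0 / D
        new_inf = beta * S * I / N
        S += (N - S - I) / L - new_inf
        I += new_inf - I / D
        incidence.append(new_inf)
    return np.asarray(incidence)

days = np.arange(365)
q = 0.008 + 0.006 * np.sin(2 * np.pi * (days - 120) / 365)  # seasonal humidity
print("simulated peak day:", simulate(q).argmax())
```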

  1. Usefulness of Wave Data Assimilation to the WAVE WATCH III Modeling System

    NASA Astrophysics Data System (ADS)

    Choi, J. K.; Dykes, J. D.; Yaremchuk, M.; Wittmann, P.

    2017-12-01

    In-situ and remote-sensed wave data are more abundant now than in years past, with excellent accuracy at global scales. Forecast skill of the WAVE WATCH III model is improved by assimilation of these measurements, and they are also useful for model validation and calibration. It is known that the impact of assimilation in wind-sea conditions is not large, but when spectra that result in large swell with long-term propagation are identified and assimilated, the improved accuracy of the initial conditions improves the long-term forecasts. The Navy's assimilation effort started with the simple Optimal Interpolation (OI) method. Operationally, Fleet Numerical Meteorology and Oceanography Center uses the sequential 2DVar scheme, but a new approach has been tested based on an adjoint-free method of variational assimilation in WAVE WATCH III. We will present the status of wave data assimilation into the WAVE WATCH III numerical model and the upcoming development of this new adjoint-free variational approach.

  2. Effects of additional data on Bayesian clustering.

    PubMed

    Yamazaki, Keisuke

    2017-10-01

    Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper provides a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Reynolds, Daniel R.

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.

  4. Filters for Improvement of Multiscale Data from Atomistic Simulations

    DOE PAGES

    Gardner, David J.; Reynolds, Daniel R.

    2017-01-05

    Multiscale computational models strive to produce accurate and efficient numerical simulations of systems involving interactions across multiple spatial and temporal scales that typically differ by several orders of magnitude. Some such models utilize a hybrid continuum-atomistic approach combining continuum approximations with first-principles-based atomistic models to capture multiscale behavior. By following the heterogeneous multiscale method framework for developing multiscale computational models, unknown continuum scale data can be computed from an atomistic model. Concurrently coupling the two models requires performing numerous atomistic simulations, which can dominate the computational cost of the method. Furthermore, when the resulting continuum data is noisy due to sampling error, stochasticity in the model, or randomness in the initial conditions, filtering can result in significant accuracy gains in the computed multiscale data without increasing the size or duration of the atomistic simulations. In this work, we demonstrate the effectiveness of spectral filtering for increasing the accuracy of noisy multiscale data obtained from atomistic simulations. Moreover, we present a robust and automatic method for closely approximating the optimum level of filtering in the case of additive white noise. Improving the accuracy of the filtered simulation data leads to dramatic computational savings by allowing shorter and smaller atomistic simulations to achieve the same desired multiscale simulation precision.
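
    The spectral filtering idea can be sketched as follows: under the additive-white-noise assumption, the flat high-frequency tail of the power spectrum estimates the noise floor, and Fourier modes that do not rise above it are zeroed. The threshold rule below is a simple heuristic stand-in for the paper's automatic optimum-filter selection.

```python
# Spectral low-pass filtering of noisy data with a white-noise-based
# automatic threshold on the Fourier power spectrum.
import numpy as np

def spectral_filter(y):
    Y = np.fft.rfft(y)
    power = np.abs(Y) ** 2
    noise_floor = np.median(power[len(power) // 2:])   # white-noise estimate
    Y[power < 3.0 * noise_floor] = 0.0                 # heuristic threshold
    return np.fft.irfft(Y, n=len(y))

t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)
filtered = spectral_filter(noisy)
print(np.std(noisy - clean), np.std(filtered - clean))  # error shrinks
```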

  5. Artificial neural network prediction of ischemic tissue fate in acute stroke imaging

    PubMed Central

    Huang, Shiliang; Shen, Qiang; Duong, Timothy Q

    2010-01-01

    Multimodal magnetic resonance imaging of acute stroke provides predictive value that can be used to guide stroke therapy. A flexible artificial neural network (ANN) algorithm was developed and applied to predict ischemic tissue fate on three stroke groups: 30-, 60-minute, and permanent middle cerebral artery occlusion in rats. Cerebral blood flow (CBF), apparent diffusion coefficient (ADC), and spin–spin relaxation time constant (T2) were acquired during the acute phase up to 3 hours and again at 24 hours followed by histology. Infarct was predicted on a pixel-by-pixel basis using only acute (30-minute) stroke data. In addition, neighboring pixel information and infarction incidence were also incorporated into the ANN model to improve prediction accuracy. Receiver-operating characteristic analysis was used to quantify prediction accuracy. The major findings were the following: (1) CBF alone poorly predicted the final infarct across three experimental groups; (2) ADC alone adequately predicted the infarct; (3) CBF+ADC improved the prediction accuracy; (4) inclusion of neighboring pixel information and infarction incidence further improved the prediction accuracy; and (5) prediction was more accurate for permanent occlusion, followed by 60- and 30-minute occlusion. The ANN predictive model could thus provide a flexible and objective framework for clinicians to evaluate stroke treatment options on an individual patient basis. PMID:20424631
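
    A minimal sketch of the pixel-wise prediction setup, with neighborhood means added as extra features as the abstract describes, might look as follows, assuming scikit-learn and SciPy and synthetic stand-ins for the CBF/ADC maps and the histology-derived labels.

```python
# Pixel-wise infarct prediction from acute CBF and ADC maps with an ANN;
# 3x3 neighborhood means provide the spatial-context features.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
cbf = rng.normal(60, 15, (64, 64))                  # synthetic CBF map
adc = rng.normal(0.8, 0.15, (64, 64))               # synthetic ADC map
infarct = ((cbf < 50) & (adc < 0.75)).astype(int)   # toy "histology" labels

feats = [cbf, adc, uniform_filter(cbf, 3), uniform_filter(adc, 3)]
X = np.column_stack([f.ravel() for f in feats])
y = infarct.ravel()

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[::2], y[::2])                             # train on half the pixels
auc = roc_auc_score(y[1::2], clf.predict_proba(X[1::2])[:, 1])
print(f"ROC AUC on held-out pixels: {auc:.3f}")
```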

  6. Efficient use of unlabeled data for protein sequence classification: a comparative study.

    PubMed

    Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir

    2009-04-29

    Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if this data is supplemented with protein sequences that lack any class tags, the so-called unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by using only the sequence regions that are more likely to be biologically relevant, for better prediction accuracy. As overly represented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve the performance of the resulting classifiers. Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits a significant reduction in running time. The unlabeled sequences used under the semi-supervised setting resemble unpolished gemstones: used as-is, they may carry unnecessary features and hence compromise classification accuracy, but once cut and polished, they improve the accuracy of the classifiers considerably.
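
    As a concrete illustration of the string-kernel machinery underlying such classifiers, a basic k-mer spectrum kernel can be computed as follows. It is one common member of the family; the paper's state-of-the-art kernels and its semi-supervised region selection are more elaborate.

```python
# Minimal k-mer spectrum kernel: K(s1, s2) sums products of shared k-mer
# occurrence counts. Sequences below are toy examples.
from collections import Counter

def spectrum(seq, k=3):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s1, s2, k=3):
    f1, f2 = spectrum(s1, k), spectrum(s2, k)
    return sum(f1[w] * f2[w] for w in f1.keys() & f2.keys())

a = "MKVLAAGIVGLVLLAA"
b = "MKVLSAGIVGLALLAG"
print(spectrum_kernel(a, b))        # larger values = more shared 3-mers
```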

  7. Evaluation of Empirical Tropospheric Models Using Satellite-Tracking Tropospheric Wet Delays with Water Vapor Radiometer at Tongji, China

    PubMed Central

    Wang, Miaomiao; Li, Bofeng

    2016-01-01

    An empirical tropospheric delay model, together with a mapping function, is commonly used to correct tropospheric errors in global navigation satellite system (GNSS) processing. As is well known, the accuracy of tropospheric delay models relies mainly on the correction efficiency for tropospheric wet delays. In this paper, we evaluate the accuracy of three tropospheric delay models, together with five mapping functions, in wet delay calculation. The evaluations are conducted by comparing their slant wet delays with those measured by a water vapor radiometer based on its satellite-tracking function (data collected with a large liquid water path are removed). For all 15 combinations of the three tropospheric models and five mapping functions, their accuracies as a function of elevation are statistically analyzed using nine days of data in two scenarios, with and without meteorological data. The results show that (1) with or without meteorological data, there is no practical difference among the mapping functions, i.e., Chao, Ifadis, Vienna Mapping Function 1 (VMF1), Niell Mapping Function (NMF), and MTT Mapping Function (MTT); (2) without meteorological data, the UNB3 model is much better than the Saastamoinen and Hopfield models, while the Saastamoinen model performs slightly better than the Hopfield model; (3) with meteorological data, the accuracies of all three tropospheric delay models are improved and become comparable, especially at lower elevations. In addition, kinematic precise point positioning, in which no parameter is set up for tropospheric delay modification, is conducted to further evaluate the performance of the tropospheric delay models in positioning accuracy. It is shown that the UNB3 model is best and can achieve about 10 cm accuracy for the N and E coordinate components and 20 cm accuracy for the U coordinate component whether or not meteorological data are available. This accuracy can be obtained with the Saastamoinen model only when meteorological data are available, and it degrades to 46 cm for the U component when they are not. PMID:26848662

  8. A general model to calculate the spin-lattice (T1) relaxation time of blood, accounting for haematocrit, oxygen saturation and magnetic field strength.

    PubMed

    Hales, Patrick W; Kirkham, Fenella J; Clark, Christopher A

    2016-02-01

    Many MRI techniques require prior knowledge of the T1-relaxation time of blood (T1bl). An assumed/fixed value is often used; however, T1bl is sensitive to magnetic field strength (B0), haematocrit (Hct), and oxygen saturation (Y). We aimed to combine data from previous in vitro measurements into a mathematical model to estimate T1bl as a function of B0, Hct, and Y. The model was shown to predict T1bl from in vivo studies with good accuracy (±87 ms). This model allows for improved estimation of T1bl between 1.5 and 7.0 T while accounting for variations in Hct and Y, leading to improved accuracy of MRI-derived perfusion measurements. © The Author(s) 2015.

  9. Diffusion-like recommendation with enhanced similarity of objects

    NASA Astrophysics Data System (ADS)

    An, Ya-Hui; Dong, Qiang; Sun, Chong-Jing; Nie, Da-Cheng; Fu, Yan

    2016-11-01

    In the last decade, diversity and accuracy have been regarded as two important measures in evaluating a recommendation model. However, a clear concern is that a model focusing excessively on one measure puts the other at risk; thus it is not easy to greatly improve diversity and accuracy simultaneously. In this paper, we propose to enhance the Resource-Allocation (RA) similarity in the resource transfer equations of diffusion-like models by giving a tunable exponent to the RA similarity and traversing the value of this exponent to achieve optimal recommendation results. In this way, we can increase the recommendation scores (allocated resource) of many unpopular objects. Experiments on three benchmark data sets, MovieLens, Netflix and RateYourMusic, show that the modified models can yield remarkable performance improvement compared with the original ones.
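
    A minimal sketch of a diffusion-like (mass-diffusion) recommender with a tunable degree exponent theta follows, echoing the idea of tuning the RA similarity so that less popular objects receive more resource. The transfer equation below is a simplified stand-in for the paper's modified models, not their exact formulation.

```python
# Mass-diffusion recommendation on a user-object bipartite network with
# a tunable exponent on the object-degree normalization.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((50, 40)) < 0.1).astype(float)      # user-object ratings

def recommend_scores(A, theta=1.0):
    ku = A.sum(axis=1)                              # user degrees
    ko = A.sum(axis=0)                              # object degrees
    ku[ku == 0] = 1.0
    ko[ko == 0] = 1.0
    # resource: object -> users (split by user degree) -> objects
    # (normalized by object degree raised to theta)
    W = (A / ko ** theta).T @ (A / ku[:, None])     # object-object transfer
    scores = A @ W.T
    return np.where(A > 0, -np.inf, scores)         # mask collected objects

top = np.argsort(recommend_scores(A, theta=0.8)[0])[::-1][:5]
print("top-5 recommended objects for user 0:", top)
```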

  10. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-09-01

    Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA) for characterizing the flow behaviors, namely the GA-SVR. The prominent character of GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. Comparison results show that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models rank as follows in ascending order: the mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research fields where stress-strain data play important roles, such as inferring work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.
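
    A minimal GA-SVR sketch is given below, assuming scikit-learn: a small genetic algorithm searches SVR hyperparameters (C, gamma, epsilon) using cross-validated fitness. The GA operators (truncation selection, uniform crossover, Gaussian mutation) and all settings are illustrative rather than the authors' configuration, and the flow-stress data are replaced by a synthetic nonlinear function.

```python
# GA-SVR sketch: genetic search over log-scaled SVR hyperparameters.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, (200, 2))
y = np.sin(X[:, 0]) * np.exp(-X[:, 1] ** 2) + 0.05 * rng.standard_normal(200)

def fitness(ind):
    C, gamma, eps = 10.0 ** ind                     # decode log-scale genes
    svr = SVR(C=C, gamma=gamma, epsilon=eps)
    return cross_val_score(svr, X, y, cv=3, scoring="r2").mean()

pop = rng.uniform(-3, 3, (20, 3))                   # log10 of (C, gamma, eps)
for gen in range(15):
    fit = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fit)[-10:]]            # truncation selection
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.5, a, b) # uniform crossover
        child += 0.2 * rng.standard_normal(3)       # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best (C, gamma, epsilon):", 10.0 ** best)
```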

  11. Improving the accuracy of energy baseline models for commercial buildings with occupancy data

    DOE PAGES

    Liang, Xin; Hong, Tianzhen; Shen, Geoffrey Qiping

    2016-07-07

    More than 80% of energy is consumed during the operation phase of a building's life cycle, so energy efficiency retrofit for existing buildings is considered a promising way to reduce energy use in buildings. The investment strategies of retrofit depend on the ability to quantify energy savings by “measurement and verification” (M&V), which compares actual energy consumption to how much energy would have been used without retrofit (called the “baseline” of energy use). Although numerous models exist for predicting the baseline of energy use, a critical limitation is that occupancy has not been included as a variable. However, occupancy rate is essential for energy consumption and was emphasized by previous studies. This study develops a new baseline model which is built upon the Lawrence Berkeley National Laboratory (LBNL) model but includes the use of building occupancy data. The study also proposes metrics to quantify the accuracy of prediction and the impacts of variables. However, the results show that including occupancy data does not significantly improve the accuracy of the baseline model, especially for HVAC load. The reasons are discussed further. In addition, sensitivity analysis is conducted to show the influence of parameters in baseline models. To conclude, the results from this study can help us understand the influence of occupancy on energy use, improve energy baseline prediction by including the occupancy factor, reduce the risks of M&V, and facilitate investment strategies for energy efficiency retrofit.
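
    The study's central comparison, a baseline fitted with and without an occupancy variable, can be sketched with a plain linear regression as below; the real LBNL baseline formulation is more elaborate, and the data here are synthetic.

```python
# Energy baseline comparison: prediction error (CV-RMSE) with and
# without occupancy as a regressor.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 24 * 60                                         # hourly data, 60 days
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.normal(0, 2, n)
occ = np.clip(np.sin(2 * np.pi * (np.arange(n) % 24 - 6) / 24), 0, None)
energy = 50 + 2.0 * temp + 30 * occ + rng.normal(0, 5, n)

split = 24 * 45                                     # train on first 45 days
def cv_rmse(X):
    m = LinearRegression().fit(X[:split], energy[:split])
    resid = energy[split:] - m.predict(X[split:])
    return np.sqrt(np.mean(resid ** 2)) / energy[split:].mean()

print("CV-RMSE without occupancy:", cv_rmse(temp[:, None]))
print("CV-RMSE with occupancy   :", cv_rmse(np.column_stack([temp, occ])))
```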

  12. Fast-PPP assessment in European and equatorial region near the solar cycle maximum

    NASA Astrophysics Data System (ADS)

    Rovira-Garcia, Adria; Juan, José Miguel; Sanz, Jaume

    2014-05-01

    The Fast Precise Point Positioning (Fast-PPP) is a technique to provide quick high-accuracy navigation with ambiguity-fixing capability, thanks to an accurate modelling of the ionosphere. Indeed, once the availability of real-time precise satellite orbits and clocks is granted to users, the next challenge is the accuracy of real-time ionospheric corrections. Several steps have been taken by gAGE/UPC to develop such a global system for precise navigation. First, Wide-Area Real-Time Kinematics (WARTK) feasibility studies enabled precise relative continental navigation using a few tens of reference stations. Later, multi-frequency and multi-constellation assessments in different ionospheric scenarios, including maximum solar-cycle conditions, focussed on user-domain performance. Recently, a mature evolution of the technique consists of a dual service scheme: a global Precise Point Positioning (PPP) service, together with a continental enhancement to shorten convergence. An end-to-end performance assessment of the Fast-PPP technique is presented in this work, focussed on Europe and on the equatorial region of South East Asia (SEA), both near the solar cycle maximum. The accuracy of the Central Processing Facility (CPF) real-time precise satellite orbits and clocks is, respectively, 4 centimetres and 0.2 nanoseconds, in line with the accuracy of the International GNSS Service (IGS) analysis centres. This global PPP service is enhanced by the Fast-PPP by adding the capability of global undifferenced ambiguity fixing thanks to the determination of the fractional part of the ambiguities. The core of the Fast-PPP is the capability to compute real-time ionospheric determinations with accuracies at the level of or better than 1 Total Electron Content Unit (TECU), improving on the widely accepted Global Ionospheric Maps (GIM), with declared accuracies of 2-8 TECU. This large improvement in modelling accuracy is achieved thanks to a two-layer description of the ionosphere combined with the carrier-phase ambiguity fixing performed in the Fast-PPP CPF. The Fast-PPP user-domain positioning benefits from such precise ionospheric modelling. The convergence time of dual-frequency classic PPP solutions is reduced from the best part of an hour to 5-10 minutes, not only in European mid-latitudes but also in the much more challenging equatorial region. The improvement in ionospheric modelling translates directly into the accuracy of single-frequency mass-market users, achieving 2-3 decimetres of error after any cold start. Since all Fast-PPP corrections are broadcast together with their confidence level (sigma), such high-accuracy navigation is protected with safety integrity bounds.

  13. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

    In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
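
    Of the compensation methods compared, the B-spline interpolation approach can be sketched as follows: fit a spline to errors measured at calibration positions, then subtract the interpolated error at each commanded position. The error shape and magnitudes are synthetic, and SciPy's make_interp_spline is assumed for the spline fit.

```python
# Model-based geometric error compensation with B-spline interpolation.
import numpy as np
from scipy.interpolate import make_interp_spline

pos = np.linspace(0, 300, 31)                       # calibration points [mm]
rng = np.random.default_rng(0)
err = 5e-3 * np.sin(pos / 40.0) + rng.normal(0, 2e-4, pos.size)   # [mm]

spline = make_interp_spline(pos, err, k=3)          # cubic B-spline model

cmd = np.linspace(0, 300, 1000)                     # commanded positions
true_err = 5e-3 * np.sin(cmd / 40.0)
residual = true_err - spline(cmd)                   # error after compensation
print(f"max |error| before: {np.abs(true_err).max()*1e3:.2f} um, "
      f"after: {np.abs(residual).max()*1e3:.2f} um")
```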

  14. Improving real-time inflow forecasting into hydropower reservoirs through a complementary modelling framework

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.

    2015-08-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. A complementary modelling framework presents an approach for improving real-time forecasting without needing to modify the pre-existing forecasting model, but instead formulating an independent additive or complementary model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, and the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure presented comprises an error model added on top of an unalterable constant parameter conceptual model. This procedure is applied in the 207 km2 Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving forecast skills of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that entrain suitable information for reducing uncertainty in the decision-making processes in hydropower systems operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95 % confidence interval indicated that the degree of success in containing 95 % of the observations varies across seasons and hydrologic years.

  15. Post processing for offline Chinese handwritten character string recognition

    NASA Astrophysics Data System (ADS)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong

    2012-01-01

    Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes, and different geometric characteristics, Chinese handwritten character string recognition is a challenging problem. Among current methods, however, the over-segmentation and merging method, which integrates geometric information, character recognition information, and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors, mainly occurring around non-Chinese characters. In a Chinese character string, there are not only wide characters, namely Chinese characters, but also narrow characters such as digits and letters of the alphabet. The segmentation error is mainly caused by a uniform geometric model imposed on all segmented candidate characters. To solve this problem, post-processing is employed to improve the recognition accuracy of narrow characters. On one hand, multi-geometric models are established for wide characters and narrow characters respectively; under multi-geometric models, narrow characters are less prone to being merged. On the other hand, top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post-processing method is investigated on two datasets totaling 1405 handwritten address strings. The wide character recognition accuracy is improved slightly, and the narrow character recognition accuracy is increased by 10.41% and 10.03%, respectively. This indicates that the post-processing method is effective in improving the recognition accuracy of narrow characters.

  16. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    NASA Astrophysics Data System (ADS)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accordance with a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes, which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration, the best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens whose focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive, resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image-variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image-variant interior orientation but also deformations in the sensor domain of the cameras showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure, indicating at the same time the presence of image-invariant error in the sensor domain. Overall, the calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.

  17. Surface wind accuracy for modeling mineral dust emissions: Comparing two regional models in a Bodélé case study

    NASA Astrophysics Data System (ADS)

    Laurent, B.; Heinold, B.; Tegen, I.; Bouet, C.; Cautenet, G.

    2008-05-01

    After a decade of research on improving the description of surface and soil features in desert regions to accurately model mineral dust emissions, we now emphasize the need for a deeper evaluation of the accuracy of modeled 10-m surface wind speeds U10. Two mesoscale models, the Lokal-Modell (LM) and the Regional Atmospheric Modeling System (RAMS), coupled with an explicit dust emission model, have previously been used to simulate mineral dust events in the Bodélé region. We compare LM and RAMS U10 with measurements at the Chicha site (BoDEx campaign) and the Faya-Largeau meteorological station. Surface features and soil schemes are investigated to correctly simulate U10 intensity and diurnal variability. The uncertainties in dust emissions computed with LM and RAMS U10 and different soil databases are estimated. This sensitivity study shows the importance of accurate computation of surface winds for improving the quantification of regional dust emissions from the Bodélé region.

  18. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
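
    The LS fit of a linear distance-error model can be sketched as follows; the bias and scale factors used to generate the synthetic measurements are arbitrary, and the correction step mirrors the paper's idea of applying the model to range estimates before localization.

```python
# Least-squares fit of a linear UWB range-error model and its inversion
# to correct new range estimates.
import numpy as np

rng = np.random.default_rng(0)
true_d = rng.uniform(1.0, 20.0, 200)                # true distances [m]
meas_d = 1.03 * true_d + 0.15 + rng.normal(0, 0.05, true_d.size)

# Fit meas = a * true + b with least squares
A = np.column_stack([true_d, np.ones_like(true_d)])
(a, b), *_ = np.linalg.lstsq(A, meas_d, rcond=None)

corrected = (meas_d - b) / a                        # invert the model
print(f"RMSE raw: {np.sqrt(np.mean((meas_d - true_d) ** 2)):.3f} m, "
      f"corrected: {np.sqrt(np.mean((corrected - true_d) ** 2)):.3f} m")
```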

  19. How to improve an un-alterable model forecast? A sequential data assimilation based error updating approach

    NASA Astrophysics Data System (ADS)

    Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.

    2012-12-01

    Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and influences the operation of hydropower reservoirs significantly. Improving hourly reservoir inflow forecasts over a 24-hour lead time is considered with the day-ahead (Elspot) market of the Nordic exchange market in perspective. The procedure presented comprises an error model added on top of an unalterable constant parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and was adopted with minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a Sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach entrain suitable information for reducing uncertainty in decision-making processes related to hydropower systems operation. The potential of the current procedure for improving the accuracy of inflow forecasts at lead times up to 24 hours and its reliability in different seasons of the year will be illustrated and discussed thoroughly.

  20. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width at half maximum (FWHM), and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT), and Absolute Tenengrad (ATEN), are compared through comparative experiments. The smallest FWHM is obtained with LAP, which is more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time capability.
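
    Two of the focus measures compared above, the sum-modified-Laplacian (SML) and a plain Laplacian energy (LAP), can be sketched as below; the sampling here is uniform rather than retina-like, and both measures should fall as defocus blur increases.

```python
# SML and Laplacian-energy focus measures evaluated on a synthetic image
# at several blur levels (larger Gaussian sigma = more defocus).
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def sml(img):
    lx = convolve(img, np.array([[0., 0, 0], [-1, 2, -1], [0, 0, 0]]))
    ly = convolve(img, np.array([[0., -1, 0], [0, 2, 0], [0, -1, 0]]))
    return np.sum(np.abs(lx) + np.abs(ly))          # modified Laplacian

def lap(img):
    k = np.array([[0., 1, 0], [1, -4, 1], [0, 1, 0]])
    return np.sum(convolve(img, k) ** 2)            # Laplacian energy

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
for sigma in (0.5, 1.0, 2.0):
    img = gaussian_filter(sharp, sigma)
    print(f"sigma={sigma}: SML={sml(img):.1f}  LAP={lap(img):.1f}")
```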

  1. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of the magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. Also, the relative advantages of the various methods are described when the speed of computation is an important consideration.

  2. The mathematical model accuracy estimation of the oil storage tank foundation soil moistening

    NASA Astrophysics Data System (ADS)

    Gildebrandt, M. I.; Ivanov, R. N.; Gruzin, AV; Antropova, L. B.; Kononov, S. A.

    2018-04-01

    Improving the technologies used to prepare oil storage tank foundations is a relevant objective: achieving it would reduce material costs and the time spent preparing the foundation while providing the required operational reliability. Laboratory research revealed how a sandy soil layer is moistened by a given amount of water. The data obtained made it possible to develop a mathematical model of sandy soil layer moistening. The accuracy estimation performed for this model of oil storage tank foundation soil moistening showed acceptable convergence between the experimental and theoretical results.

  3. Can CT and MR Shape and Textural Features Differentiate Benign Versus Malignant Pleural Lesions?

    PubMed

    Pena, Elena; Ojiaku, MacArinze; Inacio, Joao R; Gupta, Ashish; Macdonald, D Blair; Shabana, Wael; Seely, Jean M; Rybicki, Frank J; Dennie, Carole; Thornhill, Rebecca E

    2017-10-01

    The study aimed to identify a radiomic approach based on CT and/or magnetic resonance (MR) features (shape and texture) that may help differentiate benign versus malignant pleural lesions, and to assess whether the radiomic model may improve the confidence and accuracy of radiologists with different subspecialty backgrounds. Twenty-nine patients with pleural lesions studied on both contrast-enhanced CT and MR imaging were reviewed retrospectively. Three texture and three shape features were extracted. Combinations of features were used to generate logistic regression models using histopathology as outcome. Two thoracic and two abdominal radiologists evaluated their degree of confidence in malignancy. Diagnostic accuracy of the radiologists was determined using contingency tables. Cohen's kappa coefficient was used to assess inter-reader agreement. Using optimal threshold criteria, the sensitivity, specificity, and accuracy of each feature and combination of features were obtained and compared to the accuracy and confidence of the radiologists. The CT model that best discriminated malignant from benign lesions revealed an AUC_CT = 0.92 ± 0.05 (P < 0.0001). The most discriminative MR model showed an AUC_MR = 0.87 ± 0.09 (P < 0.0001). The CT model was compared to the diagnostic confidence of all radiologists, and the model outperformed both abdominal radiologists (P < 0.002), whereas the top discriminative MR model outperformed one of the abdominal radiologists (P = 0.02). The most discriminative MR model was more accurate than one abdominal (P = 0.04) and one thoracic radiologist (P = 0.02). Quantitative textural and shape analysis may help distinguish malignant from benign lesions. A radiomics-based approach may increase the diagnostic confidence of abdominal radiologists on CT and MR and may potentially improve radiologists' accuracy in the assessment of pleural lesions characterized by MR. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
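
    The study's modelling step, logistic regression on a handful of shape and texture features with histopathology as the outcome, evaluated by AUC, can be sketched as follows. The features here are synthetic stand-ins, and the in-sample AUC shown is optimistic compared with a properly validated estimate.

```python
# Radiomics-style logistic regression: six features, binary outcome, AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 29                                              # patients, as in the study
y = rng.integers(0, 2, n)                           # 0 = benign, 1 = malignant
# three texture + three shape features, weakly informative by construction
X = rng.standard_normal((n, 6)) + 0.8 * y[:, None]

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent AUC: {auc:.2f}")                   # in-sample, optimistic
```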

  4. High accuracy navigation information estimation for inertial system using the multi-model EKF fusing adams explicit formula applied to underwater gliders.

    PubMed

    Huang, Haoqian; Chen, Xiyuan; Zhang, Bo; Wang, Jian

    2017-01-01

    The underwater navigation system, mainly consisting of MEMS inertial sensors, is a key technology for the wide application of underwater gliders and plays an important role in achieving high accuracy navigation and positioning over long periods of time. However, navigation errors accumulate over time because of the inherent errors of inertial sensors, especially for the MEMS grade IMU (Inertial Measurement Unit) generally used in gliders. A dead reckoning module is added to compensate for the errors. In the complicated underwater environment, the performance of MEMS sensors degrades sharply and the errors become much larger. It is difficult to establish an accurate and fixed error model for the inertial sensor. Therefore, it is very hard to improve the accuracy of the navigation information calculated from the sensors. To solve this problem, a more suitable filter that integrates the multi-model method with an EKF approach can be designed according to different error models to give the optimal estimate of the state. The key parameters of the error models can be used to determine the corresponding filter. The Adams explicit formula, which has the advantage of high precision prediction, is simultaneously fused into the above filter to achieve a much greater improvement in attitude estimation accuracy. The proposed algorithm has been proved through theoretical analyses and has been tested by both vehicle experiments and lake trials. Results show that the proposed method has better accuracy and effectiveness in terms of attitude estimation compared with the other methods mentioned in the paper for inertial navigation applied to underwater gliders. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    PubMed

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were on the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
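
    A compact GBLUP sketch on simulated marker data (all sizes and values assumed): a VanRaden-style genomic relationship matrix G is built from centered marker scores, and genetic values come from the usual mixed-model (ridge-type) solution:

        import numpy as np

        # M: marker matrix (lines x SNPs, coded 0/1/2), y: testcross phenotypes.
        rng = np.random.default_rng(1)
        n_lines, n_snps = 200, 1000
        M = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)
        p = M.mean(axis=0) / 2.0
        Z = M - 2.0 * p                               # center by allele frequency
        G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))   # VanRaden (2008) G matrix

        y = rng.normal(size=n_lines)
        lam = 1.0                                     # assumed error/genetic variance ratio
        # BLUP of genetic values: u_hat = G (G + lambda*I)^(-1) (y - mean(y))
        u_hat = G @ np.linalg.solve(G + lam * np.eye(n_lines), y - y.mean())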

  6. The effect of stochastic modeling of ionospheric effect on the various lengths of baseline determination

    NASA Astrophysics Data System (ADS)

    Kwon, J.; Yang, H.

    2006-12-01

    Although GPS provides continuous and accurate position information, there is still room for improving its positional accuracy, especially in medium- and long-range baseline determination. In general, for baselines longer than 50 km, the ionospheric delay is the effect causing the largest degradation in positional accuracy. For example, the double-differenced ionospheric delay easily reaches 10 cm at a baseline length of 101 km. Therefore, many researchers have tried to mitigate/reduce the effect using various modeling methods. In this paper, the optimal stochastic modeling of the ionospheric delay in terms of baseline length is presented. The data processing has been performed by constructing a Kalman filter whose states are positions, ambiguities, and the ionospheric delays in double-differenced mode. Considering the long baseline length, both double-differenced GPS phase and code observations are used as observables, and LAMBDA has been applied to fix the ambiguities. Here, the ionospheric delay is stochastically modeled by the well-known Gaussian process and by first- and third-order Gauss-Markov processes. The parameters required by those models, such as correlation distance and time, are determined by least-squares adjustment using ionosphere-only observables. The results and analysis from this study mainly show the effect of stochastic models of the ionospheric delay in terms of the baseline length, models, and parameters used. In the above example with a 101 km baseline, it was found that the positional accuracy with appropriate ionospheric modeling (Gaussian) was about ±2 cm, whereas it reaches about ±15 cm with no stochastic modeling. It is expected that the approach in this study contributes to improving positional accuracy, especially in medium- and long-range baseline determination.
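
    As an illustration of one of these options, a minimal sketch (correlation time and variance assumed) of propagating a first-order Gauss-Markov ionospheric delay state in a Kalman filter time update:

        import numpy as np

        # First-order Gauss-Markov model for a double-differenced ionospheric delay:
        #   I_{k+1} = exp(-dt/tau) * I_k + w_k,
        #   Var(w_k) = sigma2 * (1 - exp(-2*dt/tau))
        # tau and sigma2 would come from the least-squares fit to ionosphere-only
        # observables described above; the values here are assumed.
        def gm1_propagate(I, P, dt, tau=600.0, sigma2=0.01):
            phi = np.exp(-dt / tau)
            I_pred = phi * I
            P_pred = phi**2 * P + sigma2 * (1.0 - phi**2)  # steady-state-consistent noise
            return I_pred, P_pred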

  7. Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis

    NASA Technical Reports Server (NTRS)

    Putnam, J.; Somers, J.; Wells, J.; Newby, N.; Currie-Gregg, N.; Lawrence, C.

    2016-01-01

    Introduction: In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests were performed at the Wright Patterson Air Force Base (WPAFB) employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range expected for spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within the evaluated test conditions. Causes for model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component of the ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aid in the development of these effective analysis tools.

  8. Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis

    NASA Technical Reports Server (NTRS)

    Putnam, Jacob B.; Sommers, Jeffrey T.; Wells, Jessica A.; Newby, Nathaniel J.; Currie-Gregg, Nancy J.; Lawrence, Chuck

    2016-01-01

    In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests were performed at the Wright Patterson Air Force Base (WPAFB) employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range expected for spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within the evaluated test conditions. Causes for model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component of the ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aid in the development of these effective analysis tools.

  9. Refining climate models

    ScienceCinema

    Warren, Jeff; Iversen, Colleen; Brooks, Jonathan; Ricciuto, Daniel

    2018-02-13

    Using dogwood trees, Oak Ridge National Laboratory researchers are gaining a better understanding of the role photosynthesis and respiration play in the atmospheric carbon dioxide cycle. Their findings will aid computer modelers in improving the accuracy of climate simulations.

  10. Iowa calibration of MEPDG performance prediction models.

    DOT National Transportation Integrated Search

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...

  11. Clinical time series prediction: Toward a hierarchical dynamical system framework.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2015-09-01

    Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
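
    A small sketch of the lower level of such a hierarchy, on assumed times and values: a Gaussian process is fitted to one irregularly sampled lab series and queried on a regular grid that an upper-level linear dynamical system could then model:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # One irregularly sampled lab series: observation times in hours (assumed).
        t_obs = np.array([0.0, 3.5, 4.2, 11.0, 26.5, 30.1])[:, None]
        y_obs = np.array([13.1, 12.7, 12.9, 11.8, 12.2, 12.6])

        gp = GaussianProcessRegressor(
            kernel=RBF(length_scale=10.0) + WhiteKernel(noise_level=0.1),
            normalize_y=True,
        )
        gp.fit(t_obs, y_obs)
        t_grid = np.linspace(0.0, 36.0, 13)[:, None]   # regular 3-hour grid
        y_grid, y_std = gp.predict(t_grid, return_std=True)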

  12. Improving BeiDou real-time precise point positioning with numerical weather models

    NASA Astrophysics Data System (ADS)

    Lu, Cuixian; Li, Xingxing; Zus, Florian; Heinkelmann, Robert; Dick, Galina; Ge, Maorong; Wickert, Jens; Schuh, Harald

    2017-09-01

    Precise positioning with the current Chinese BeiDou Navigation Satellite System is proven to be of comparable accuracy to the Global Positioning System, which is at centimeter level for the horizontal components and sub-decimeter level for the vertical component. But BeiDou precise point positioning (PPP) shows its limitation in requiring a relatively long convergence time. In this study, we develop a numerical weather model (NWM) augmented PPP processing algorithm to improve BeiDou precise positioning. Tropospheric delay parameters, i.e., zenith delays, mapping functions, and horizontal delay gradients, derived from short-range forecasts from the Global Forecast System of the National Centers for Environmental Prediction (NCEP) are applied to BeiDou real-time PPP. Observational data from stations that are capable of tracking the BeiDou constellation from the International GNSS Service (IGS) Multi-GNSS Experiments network are processed with the introduced NWM-augmented PPP and with standard PPP processing. The accuracy of tropospheric delays derived from NCEP is assessed against the IGS final tropospheric delay products. The positioning results show that an improvement in convergence time of up to 60.0 and 66.7% for the east and vertical components, respectively, can be achieved with the NWM-augmented PPP solution compared to the standard PPP solutions, while only slight improvement in the solution convergence can be found for the north component. A positioning accuracy of 5.7 and 5.9 cm for the east component is achieved with the standard PPP that estimates gradients and the one that estimates no gradients, respectively, in comparison to 3.5 cm for the NWM-augmented PPP, an improvement of 38.6 and 40.1%. Compared to the accuracy of 3.7 and 4.1 cm for the north component derived from the two standard PPP solutions, that of the NWM-augmented PPP solution is improved to 2.0 cm, by about 45.9 and 51.2%. The positioning accuracy for the up component improves from 11.4 and 13.2 cm with the two standard PPP solutions to 8.0 cm with the NWM-augmented PPP solution, an improvement of 29.8 and 39.4%, respectively.
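
    A sketch of how such NWM-derived parameters enter the observation model, with all numeric values assumed: the slant total delay combines zenith hydrostatic and wet delays, their mapping functions, and horizontal gradients, here using the common gradient mapping function 1/(sin e · tan e + 0.0032):

        import numpy as np

        def slant_tropo_delay(elev_deg, az_deg, zhd, zwd, mf_h, mf_w, grad_n, grad_e):
            # STD = mf_h*ZHD + mf_w*ZWD + mf_g*(G_N*cos(az) + G_E*sin(az))
            e = np.radians(elev_deg)
            az = np.radians(az_deg)
            mf_g = 1.0 / (np.sin(e) * np.tan(e) + 0.0032)
            return mf_h * zhd + mf_w * zwd + mf_g * (grad_n * np.cos(az)
                                                     + grad_e * np.sin(az))

        # Example with assumed values (delays in meters, gradients in meters):
        std = slant_tropo_delay(15.0, 120.0, zhd=2.3, zwd=0.15,
                                mf_h=3.8, mf_w=3.9, grad_n=0.001, grad_e=-0.0005)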

  13. Effect of citizen engagement levels in flood forecasting by assimilating crowdsourced observations in hydrological models

    NASA Astrophysics Data System (ADS)

    Mazzoleni, Maurizio; Cortes Arevalo, Juliette; Alfonso, Leonardo; Wehn, Uta; Norbiato, Daniele; Monego, Martina; Ferri, Michele; Solomatine, Dimitri

    2017-04-01

    In the past years, a number of methods have been proposed to reduce uncertainty in flood prediction by means of model updating techniques. Traditional physical observations are usually integrated into hydrological and hydraulic models to improve model performance and the consequent flood predictions. Nowadays, low-cost sensors can be used for crowdsourced observations. Different types of social sensors can measure, in a more distributed way, physical variables such as precipitation and water level. However, these crowdsourced observations are not integrated in real time into water-system models because of their varying accuracy and random spatial-temporal coverage. We assess the effect on model performance of assimilating crowdsourced observations of water level. Our method consists in (1) implementing a Kalman filter in a cascade of hydrological and hydraulic models; (2) defining observation errors depending on the type of sensor, either physical or social, with randomly distributed errors based on accuracy ranges that improve slightly with the citizens' expertise level; and (3) using a simplified social model to realistically represent citizen engagement levels based on population density and citizens' motivation scenarios. To test our method, we synthetically derive crowdsourced observations for different citizen engagement levels from a distributed network of physical and social sensors. The observations are assimilated during a particular flood event that occurred in the Bacchiglione catchment, Italy. The results of this study demonstrate that sharing crowdsourced water level observations (often motivated by a feeling of belonging to a community of friends) can help in improving flood prediction. On the other hand, a growing participation of individual citizens or weather enthusiasts sharing hydrological observations in cities can help to improve model performance. This study is a first step to assess the effects of crowdsourced observations in flood model predictions. Effective communication and feedback about the quality of observations from water authorities to engaged citizens are further required to minimize their intrinsically low and variable accuracy.
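
    A minimal sketch of step (2), with assumed error variances: a scalar Kalman filter update for a water-level observation whose error variance depends on the sensor type and the citizen's expertise:

        import numpy as np

        OBS_VAR = {"physical": 0.01, "expert_citizen": 0.04, "citizen": 0.09}  # m^2, assumed

        def kf_update_level(h_prior, P_prior, z_obs, sensor_type):
            R = OBS_VAR[sensor_type]
            K = P_prior / (P_prior + R)          # Kalman gain (observation operator H = 1)
            h_post = h_prior + K * (z_obs - h_prior)
            P_post = (1.0 - K) * P_prior
            return h_post, P_post

        # A citizen report is weighted less than a physical gauge would be:
        h, P = kf_update_level(2.10, 0.05, 2.32, "citizen")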

  14. Gravity model improvement using GEOS 3 (GEM 9 and 10) and Seasat altimetry data

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Wagner, C. A.; Klosko, S. M.; Laubscher, R. E.

    1979-01-01

    Although errors in previous gravity models have produced large uncertainties in the orbital position of GEOS 3, significant improvement has been obtained with new geopotential solutions, Goddard Earth Model (GEM) 9 and 10. The GEM 9 and 10 solutions for the potential coefficients and station coordinates are presented along with a discussion of the new techniques employed. Also presented and discussed are solutions for three fundamental geodetic reference parameters, viz. the mean radius of the earth, the gravitational constant, and mean equatorial gravity. Evaluation of the gravity field is examined together with evaluation of GEM 9 and 10 for orbit determination accuracy. The major objectives of GEM 9 and 10 are achieved. GEOS 3 orbital accuracies from these models are about 1 m in their radial components for 5-day arc lengths. Both models yield significantly improved results over GEM solutions when compared to surface gravimetry, Skylab and GEOS 3 altimetry, and highly accurate BE-C (Beacon Explorer-C) laser ranges. The new values of the parameters discussed are given.

  15. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    PubMed Central

    2012-01-01

    Background: Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results: Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. Conclusions: Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
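
    A tiny flux balance analysis sketch in the spirit of the included scripts; the 2-metabolite, 4-reaction network below is an assumed toy, not the Yeast 5 stoichiometry. It maximizes a "biomass" flux subject to steady state S·v = 0 and flux bounds:

        import numpy as np
        from scipy.optimize import linprog

        S = np.array([
            [ 1.0, -1.0, -1.0,  0.0],   # metabolite A: uptake v1, A->B v2, A->biomass v3
            [ 0.0,  1.0,  0.0, -1.0],   # metabolite B: A->B v2, export v4
        ])
        c = np.array([0.0, 0.0, -1.0, 0.0])      # linprog minimizes, so negate v3
        bounds = [(0, 10)] * 4                   # assumed flux bounds

        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("optimal biomass flux:", -res.fun, "fluxes:", res.x)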

  16. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
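
    A small sketch of weighted-majority-vote fusion, with assumed labels and weights: each base classifier's vote is weighted (for example by its validation accuracy) and the class with the highest total weight wins:

        import numpy as np

        def weighted_majority_vote(predictions, weights, n_classes):
            # predictions: (n_classifiers, n_samples) integer class labels
            n_samples = predictions.shape[1]
            scores = np.zeros((n_samples, n_classes))
            for preds, w in zip(predictions, weights):
                scores[np.arange(n_samples), preds] += w
            return scores.argmax(axis=1)

        preds = np.array([[0, 1, 2, 1],      # decision tree (assumed outputs)
                          [0, 2, 2, 1],      # kNN
                          [1, 1, 2, 0],      # SVM
                          [0, 1, 1, 1]])     # neural network
        fused = weighted_majority_vote(preds, weights=[0.7, 0.8, 0.9, 0.85],
                                       n_classes=3)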

  17. APPLICATION OF A FULLY DISTRIBUTED WASHOFF AND TRANSPORT MODEL FOR A GULF COAST WATERSHED

    EPA Science Inventory

    Advances in hydrologic modeling have been shown to improve the accuracy of rainfall runoff simulation and prediction. Building on the capabilities of distributed hydrologic modeling, a water quality model was developed to simulate buildup, washoff, and advective transport of a co...

  18. Accurate ECG diagnosis of atrial tachyarrhythmias using quantitative analysis: a prospective diagnostic and cost-effectiveness study.

    PubMed

    Krummen, David E; Patel, Mitul; Nguyen, Hong; Ho, Gordon; Kazi, Dhruv S; Clopton, Paul; Holland, Marian C; Greenberg, Scott L; Feld, Gregory K; Faddis, Mitchell N; Narayan, Sanjiv M

    2010-11-01

    Optimal atrial tachyarrhythmia management is facilitated by accurate electrocardiogram interpretation, yet typical atrial flutter (AFl) may present without sawtooth F-waves or RR regularity, and atrial fibrillation (AF) may be difficult to separate from atypical AFl or rapid focal atrial tachycardia (AT). We analyzed whether improved diagnostic accuracy using a validated analysis tool significantly impacts costs and patient care. We performed a prospective, blinded, multicenter study using a novel quantitative computerized algorithm to identify atrial tachyarrhythmia mechanism from the surface ECG in patients referred for electrophysiology study (EPS). In 122 consecutive patients (age 60 ± 12 years) referred for EPS, 91 sustained atrial tachyarrhythmias were studied. ECGs were also interpreted by 9 physicians from 3 specialties for comparison and to allow healthcare system modeling. Diagnostic accuracy was compared to the diagnosis at EPS. A Markov model was used to estimate the impact of improved arrhythmia diagnosis. We found 13% of typical AFl ECGs had neither sawtooth flutter waves nor RR regularity, and were misdiagnosed by the majority of clinicians (0/6 correctly diagnosed by consensus visual interpretation) but correctly by quantitative analysis in 83% (5/6, P = 0.03). AF diagnosis was also improved through use of the algorithm (92%) versus visual interpretation (primary care: 76%, P < 0.01). Economically, we found that these improvements in diagnostic accuracy resulted in an average cost-savings of $1,303 and 0.007 quality-adjusted life-years per patient. Typical AFl and AF are frequently misdiagnosed using visual criteria. Quantitative analysis improves diagnostic accuracy and results in improved healthcare costs and patient outcomes. © 2010 Wiley Periodicals, Inc.

  19. Effects of Land Use Land Cover (LULC) and Climate on Simulation of Phosphorus loading in the Southeast United States Region

    NASA Astrophysics Data System (ADS)

    Jima, T. G.; Roberts, A.

    2013-12-01

    The quality of coastal and freshwater resources in the Southeastern United States is threatened by eutrophication resulting from excessive nutrients, and phosphorus is acknowledged as one of the major limiting nutrients. In areas with much non-point source (NPS) pollution, land use land cover and climate have been found to have a significant impact on water quality. Landscape metrics applied in catchment- and riparian-stream-based nutrient export models are known to significantly improve nutrient prediction. The regional SPARROW (Spatially Referenced Regression On Watershed attributes) model, which predicts total phosphorus, has been developed by the USGS for the Southeastern United States region as part of the National Water Quality Assessment (NAWQA) program, and the model accuracy was found to be 67%. However, landscape composition and configuration metrics, which play a significant role in the source, transport and delivery of the nutrient, have not been incorporated in the model. Including these metrics in the model's parameterization will improve the model's accuracy and the decision-making process for mitigating and managing NPS phosphorus in the region. The National Land Cover Data 2001 raster data will be used (since the baseline is 2002) for the region (with 8321 watersheds), with FRAGSTATS 4.1 and ArcGIS Desktop 10.1, for the analysis of landscape metrics, buffers, and map layer creation. The result will be imported into the Southeast SPARROW model and analyzed. The resulting statistical significance and model accuracy will be assessed, and predictions will be made for areas with no water quality monitoring station.

  20. Validation of Groundwater Models: Meaningful or Meaningless?

    NASA Astrophysics Data System (ADS)

    Konikow, L. F.

    2003-12-01

    Although numerical simulation models are valuable tools for analyzing groundwater systems, their predictive accuracy is limited. People who apply groundwater flow or solute-transport models, as well as those who make decisions based on model results, naturally want assurance that a model is "valid." To many people, model validation implies some authentication of the truth or accuracy of the model. History matching is often presented as the basis for model validation. Although such model calibration is a necessary modeling step, it is simply insufficient for model validation. Because of parameter uncertainty and solution non-uniqueness, declarations of validation (or verification) of a model are not meaningful. Post-audits represent a useful means to assess the predictive accuracy of a site-specific model, but they require the existence of long-term monitoring data. Model testing may yield invalidation, but that is an opportunity to learn and to improve the conceptual and numerical models. Examples of post-audits and of the application of a solute-transport model to a radioactive waste disposal site illustrate deficiencies in model calibration, prediction, and validation.

  1. Evaluation of Gravitational Field Models Based on the Laser Range Observation of Low Earth Orbit Satellites

    NASA Astrophysics Data System (ADS)

    Wang, H. B.; Zhao, C. Y.; Zhang, W.; Zhan, J. W.; Yu, S. X.

    2015-09-01

    The Earth gravitational field model is an important dynamical model in satellite orbit computation. In recent years, several space gravity missions have achieved great success, prompting many gravitational field models to be published. In this paper, 2 classical models (JGM3, EGM96) and 4 recent models, including EIGEN-CHAMP05S, GGM03S, GOCE02S, and EGM2008, are evaluated by being employed in precision orbit determination (POD) and prediction, based on laser range observations of four low Earth orbit (LEO) satellites: CHAMP, GFZ-1, GRACE-A, and SWARM-A. The residual error of the observations in POD is adopted to describe the accuracy of the six gravitational field models. The main results are as follows: (1) for LEO POD, the accuracies of the 4 recent models (EIGEN-CHAMP05S, GGM03S, GOCE02S, and EGM2008) are at the same level, and better than those of the 2 classical models (JGM3, EGM96); (2) taking JGM3 as the reference, the EGM96 model's accuracy is better in most situations, and the accuracies of the 4 recent models are improved by 12%-47% in POD and 63% in prediction, respectively. We also confirm that the models' accuracy in POD is enhanced with increasing degree and order below 70, while beyond 70 the accuracy remains stable and unrelated to the increasing degree, meaning that a model truncated to degree and order 70 is sufficient to meet the requirements of LEO orbit computation at centimeter-level precision.

  2. A Study on Micropipetting Detection Technology of Automatic Enzyme Immunoassay Analyzer.

    PubMed

    Shang, Zhiwu; Zhou, Xiangping; Li, Cheng; Tsai, Sang-Bing

    2018-04-10

    In order to improve the accuracy and reliability of micropipetting, a method of micropipette detection and calibration is proposed that combines dynamic pressure monitoring of the pipetting process with quantitative identification of pipetted volume by image processing. Firstly, a normalized pressure model for the pipetting process was established from the kinematic model of the pipetting operation, and the pressure model was corrected experimentally. By monitoring the pipetting pressure and its first derivative in real time, using a segmented double-threshold method as the pipetting fault evaluation criterion, and processing the pressure sensor data with Kalman filtering, the accuracy of fault diagnosis is improved. When a fault occurs, an image of the pipette tip is captured by the camera, the boundary of the liquid region is extracted by a background contrast method, and the liquid volume in the tip is obtained from the geometric characteristics of the tip. The pipetting deviation is fed back to the automatic pipetting module and corrected. The titration test results show that combining the segmented pipetting kinematic model with double-threshold pressure monitoring can effectively judge and classify pipetting faults in real time, and that closed-loop adjustment of the pipetting volume can effectively improve the accuracy and reliability of the pipetting system.
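
    A minimal sketch of the double-threshold idea, with entirely assumed band limits and pressure profile: a sample is flagged as faulty when either the pressure or its first derivative leaves its band:

        import numpy as np

        def detect_fault(pressure, dt, p_band=(-5.0, 2.5), dp_band=(-40.0, 40.0)):
            dp = np.gradient(pressure, dt)            # first derivative (kPa/s)
            p_bad = (pressure < p_band[0]) | (pressure > p_band[1])
            dp_bad = (dp < dp_band[0]) | (dp > dp_band[1])
            return np.flatnonzero(p_bad | dp_bad)     # indices flagged as faulty

        t = np.linspace(0.0, 2.0, 201)
        p = -2.0 * np.sin(np.pi * t)                  # hypothetical aspiration profile
        p[120:130] += 3.0                             # injected clog-like anomaly
        print(detect_fault(p, dt=0.01))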

  3. SVM feature selection based rotation forest ensemble classifiers to improve computer-aided diagnosis of Parkinson disease.

    PubMed

    Ozcift, Akin

    2012-08-01

    Parkinson disease (PD) is an age-related deterioration of certain nerve systems, which affects the movement, balance, and muscle control of patients. PD is one of the common diseases that affect 1% of people older than 60 years. A new classification scheme based on support vector machine (SVM) selected features to train rotation forest (RF) ensemble classifiers is presented for improving the diagnosis of PD. The dataset contains records of voice measurements from 31 people, 23 of whom have PD, and each record in the dataset is defined by 22 features. The diagnosis model first makes use of a linear SVM to select the ten most relevant of the 22 features. As a second step of the classification model, six different classifiers are trained with the subset of features. Subsequently, at the third step, the accuracies of the classifiers are improved by the utilization of the RF ensemble classification strategy. The results of the experiments are evaluated using three metrics: classification accuracy (ACC), Kappa Error (KE) and Area Under the Receiver Operating Characteristic (ROC) Curve (AUC). Performance measures of two base classifiers, i.e., KStar and IBk, demonstrated an apparent increase in PD diagnosis accuracy compared to similar studies in the literature. Finally, application of the RF ensemble classification scheme significantly improved PD diagnosis for 5 of the 6 classifiers. Numerically, we obtained about 97% accuracy with the RF ensemble of the IBk (a k-Nearest Neighbor variant) algorithm, which is quite a high performance for Parkinson disease diagnosis.
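
    A sketch of the first two stages on assumed data: a linear SVM ranks the 22 voice features and the 10 most relevant are kept, then an ensemble is trained on the reduced set. (scikit-learn has no rotation forest, so a random forest stands in for the ensemble stage here.)

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.feature_selection import SelectFromModel
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(195, 22))        # hypothetical voice recordings x features
        y = rng.integers(0, 2, size=195)      # 1 = PD, 0 = healthy (assumed)

        # Keep exactly the 10 features with the largest |SVM coefficients|.
        selector = SelectFromModel(LinearSVC(C=0.5, dual=False),
                                   max_features=10, threshold=-np.inf).fit(X, y)
        X_sel = selector.transform(X)

        acc = cross_val_score(RandomForestClassifier(n_estimators=200), X_sel, y, cv=5)
        print("mean CV accuracy:", acc.mean())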

  4. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    PubMed

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
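
    A sketch of the power-fit relationship behind model (2), on simulated volumes with assumed values: for a given treatment day, regress log V_t on log V_0 across training patients, i.e. V_t ≈ a·V_0^b:

        import numpy as np

        rng = np.random.default_rng(3)
        V0 = rng.uniform(5.0, 60.0, size=30)                     # cm^3 at day 0 (assumed)
        Vt = 0.9 * V0**0.95 * np.exp(rng.normal(0, 0.05, 30))    # volumes at day t (assumed)

        b_t, log_a_t = np.polyfit(np.log(V0), np.log(Vt), 1)     # linear fit in log space
        predict_Vt = lambda v0: np.exp(log_a_t) * v0**b_t
        print(predict_Vt(25.0))                                  # predicted day-t volume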

  5. Improving the Accuracy of Satellite Sea Surface Temperature Measurements by Explicitly Accounting for the Bulk-Skin Temperature Difference

    NASA Technical Reports Server (NTRS)

    Castro, Sandra L.; Emery, William J.

    2002-01-01

    The focus of this research was to determine whether the accuracy of satellite measurements of sea surface temperature (SST) could be improved by explicitly accounting for the complex temperature gradients at the surface of the ocean associated with the cool skin and diurnal warm layers. To achieve this goal, work centered on the development and deployment of low-cost infrared radiometers to enable the direct validation of satellite measurements of skin temperature. During this one-year grant, design and construction of an improved infrared radiometer was completed and testing was initiated. In addition, development of an improved parametric model for the bulk-skin temperature difference was completed using data from the previous version of the radiometer. This model will comprise a key component of an improved procedure for estimating the bulk SST from satellites. The results comprised a significant portion of the Ph.D. thesis completed by one graduate student and they are currently being converted into a journal publication.

  6. Improved dense trajectories for action recognition based on random projection and Fisher vectors

    NASA Astrophysics Data System (ADS)

    Ai, Shihui; Lu, Tongwei; Xiong, Yudian

    2018-03-01

    As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced: improved dense trajectories are combined with Fisher vectors and random projection. The method reduces the trajectory features by projecting the high-dimensional trajectory descriptors into a low-dimensional subspace through random projection, after defining and analyzing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors and reduce their dimension. The computational complexity is reduced by random projection, which shrinks the Fisher coding vector. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with some existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
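
    A sketch of the dimensionality-reduction step, with assumed sizes: a high-dimensional Fisher vector (2·K·D entries for K Gaussians and D-dimensional descriptors) is randomly projected to a lower-dimensional space before the linear SVM:

        import numpy as np
        from sklearn.random_projection import GaussianRandomProjection

        K, D = 256, 96                    # assumed GMM components, descriptor size
        fv_dim = 2 * K * D                # 49152-dimensional Fisher vectors

        rng = np.random.default_rng(4)
        fisher_vectors = rng.normal(size=(500, fv_dim))   # 500 hypothetical videos

        rp = GaussianRandomProjection(n_components=1024, random_state=0)
        fv_low = rp.fit_transform(fisher_vectors)         # 500 x 1024, SVM-ready
        print(fv_low.shape)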

  7. Relationship between accuracy and complexity when learning underarm precision throwing.

    PubMed

    Valle, Maria Stella; Lombardo, Luciano; Cioni, Matteo; Casabona, Antonino

    2018-06-12

    Learning precision ball throwing has mostly been studied to explore the early rapid improvement of accuracy, with little attention paid to possible adaptive processes occurring later, when the rate of improvement is reduced. Here, we tried to demonstrate that the strategy for selecting angle, speed and height at ball release can still be managed during the learning periods that follow performance stabilization. To this aim, we used a multivariate linear model with angle, speed and height as predictors of changes in accuracy. Participants performed underarm throws of a tennis ball to hit a target on the floor, 3.42 m away. Two training sessions (S1, S2) and one retention test were executed. Performance accuracy increased over S1 and stabilized during S2, with a rate of change along the throwing axis slower than along the orthogonal axis. However, both axes contributed to the performance changes over the learning and consolidation time. A stable relationship between accuracy and the release parameters was observed only during S2, with a good fraction of the performance variance explained by the combination of speed and height. All the variations were maintained during the retention test. Overall, accuracy improvements and the reduction in throwing complexity at ball release followed separate timing over the course of learning and consolidation.

  8. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  9. Application of Numerical Integration and Data Fusion in Unit Vector Method

    NASA Astrophysics Data System (ADS)

    Zhang, J.

    2012-01-01

    The Unit Vector Method (UVM) is a series of orbit determination methods designed by Purple Mountain Observatory (PMO) that have been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can thus play a major role in orbit determination, and the accuracy of orbit determination is improved markedly. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, unified initial orbit determination and orbit improvement dynamically, and further improved precision and efficiency. In this thesis, further research has been done based on the UVM. Firstly, with the improvement of observation methods and techniques, the types and precision of observational data have improved substantially, which in turn demands higher precision in orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration of the perturbations has been introduced into the UVM. The accuracy of the dynamical model then matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly, further improving the accuracy of orbit determination. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified, and this method solves the problem: the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) after numerical integration is introduced into the UVM, the accuracy of orbit determination improves markedly and suits the high-accuracy data of available observation apparatus; compared with classical differential improvement with numerical integration, its calculation speed is also improved markedly; (2) after the data fusion method is introduced into the UVM, the weighting accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability and rational weight distribution.

  10. Estimating Small-Body Gravity Field from Shape Model and Navigation Data

    NASA Technical Reports Server (NTRS)

    Park, Ryan S.; Werner, Robert A.; Bhaskaran, Shyam

    2008-01-01

    This paper presents a method to model the external gravity field and to estimate the internal density variation of a small body. We first discuss the modeling problem, where we assume the polyhedral shape and internal density distribution are given, and model the body interior using finite element definitions, such as cubes and spheres. The gravitational attractions computed from these approaches are compared with the true uniform-density polyhedral attraction and the levels of accuracy are presented. We then discuss the inverse problem, where we assume the body shape, radiometric measurements, and a priori density constraints are given, and estimate the internal density variation by estimating the density of each finite element. The result shows that the accuracy of the estimated density variation can be significantly improved depending on the orbit altitude, finite-element resolution, and measurement accuracy.
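
    A minimal sketch of the forward model under a point-mass approximation, with assumed element geometry and density: the attraction at a field point is the sum of the contributions of the finite elements:

        import numpy as np

        G = 6.674e-11                                # m^3 kg^-1 s^-2

        def gravity_from_elements(r_field, centers, volumes, densities):
            dr = centers - r_field                   # vectors to each element (N x 3)
            d3 = np.linalg.norm(dr, axis=1) ** 3
            masses = volumes * densities
            return G * np.sum(masses[:, None] * dr / d3[:, None], axis=0)

        # Toy example: a 2x2x2 block of 100 m cubes, uniform density 2000 kg/m^3.
        centers = np.array([[i, j, k] for i in (50.0, 150.0)
                                       for j in (50.0, 150.0)
                                       for k in (50.0, 150.0)])
        g = gravity_from_elements(np.array([500.0, 0.0, 0.0]), centers,
                                  np.full(8, 100.0**3), np.full(8, 2000.0))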

  11. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks.

    PubMed

    Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin

    2018-04-26

    Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.

  12. Cross-Participant EEG-Based Assessment of Cognitive Workload Using Multi-Path Convolutional Recurrent Neural Networks

    PubMed Central

    Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin

    2018-01-01

    Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance. PMID:29701668

  13. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    PubMed

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

    Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with a certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but its convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment, based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to include acceleration and velocity errors, making the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test were carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
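
    For background on the K matrix mentioned above, a sketch of the classical optimal-quaternion (Davenport q-method) construction, with assumed observation vectors and weights: the attitude quaternion is the eigenvector of K belonging to its largest eigenvalue:

        import numpy as np

        def davenport_quaternion(body_vecs, ref_vecs, weights):
            # B accumulates the weighted attitude profile matrix.
            B = sum(w * np.outer(b, r) for b, r, w in zip(body_vecs, ref_vecs, weights))
            z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
            K = np.zeros((4, 4))
            K[0, 0] = np.trace(B)
            K[0, 1:] = z
            K[1:, 0] = z
            K[1:, 1:] = B + B.T - np.trace(B) * np.eye(3)
            vals, vecs = np.linalg.eigh(K)
            return vecs[:, np.argmax(vals)]          # [q0, q1, q2, q3], scalar first

        b = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]  # body-frame (assumed)
        r = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0])]  # reference (assumed)
        q = davenport_quaternion(b, r, weights=[0.6, 0.4])           # identity rotation here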

  14. Spatially-Resolved Hydraulic Conductivity Estimation Via Poroelastic Magnetic Resonance Elastography

    PubMed Central

    McGarry, Matthew; Weaver, John B.; Paulsen, Keith D.

    2015-01-01

    Poroelastic magnetic resonance elastography is an imaging technique that could recover mechanical and hydrodynamical material properties of in vivo tissue. To date, mechanical properties have been estimated while hydrodynamical parameters have been assumed homogeneous with literature-based values. Estimating spatially-varying hydraulic conductivity would likely improve model accuracy and provide new image information related to a tissue’s interstitial fluid compartment. A poroelastic model was reformulated to recover hydraulic conductivity with more appropriate fluid-flow boundary conditions. Simulated and physical experiments were conducted to evaluate the accuracy and stability of the inversion algorithm. Simulations were accurate (property errors were < 2%) even in the presence of Gaussian measurement noise up to 3%. The reformulated model significantly decreased variation in the shear modulus estimate (p≪0.001) and eliminated the homogeneity assumption and the need to assign hydraulic conductivity values from literature. Material property contrast was recovered experimentally in three different tofu phantoms and the accuracy was improved through soft-prior regularization. A frequency-dependence in hydraulic conductivity contrast was observed suggesting that fluid-solid interactions may be more prominent at low frequency. In vivo recovery of both structural and hydrodynamical characteristics of tissue could improve detection and diagnosis of neurological disorders such as hydrocephalus and brain tumors. PMID:24771571

  15. Diagnostic accuracy of a bayesian latent group analysis for the detection of malingering-related poor effort.

    PubMed

    Ortega, Alonso; Labrenz, Stephan; Markowitsch, Hans J; Piefke, Martina

    2013-01-01

    In the last decade, different statistical techniques have been introduced to improve the assessment of malingering-related poor effort. In this context, we have recently shown preliminary evidence that a Bayesian latent group model may help to optimize classification accuracy using a simulation research design. In the present study, we conducted two analyses. Firstly, we evaluated how accurately this Bayesian approach can distinguish between participants answering in an honest way (honest response group) and participants feigning cognitive impairment (experimental malingering group). Secondly, we tested the accuracy of our model in differentiating between patients who had real cognitive deficits (cognitively impaired group) and participants who belonged to the experimental malingering group. All Bayesian analyses were conducted using the raw scores of a visual recognition forced-choice task (2AFC), the Test of Memory Malingering (TOMM, Trial 2), and the Word Memory Test (WMT, primary effort subtests). The first analysis showed 100% accuracy for the Bayesian model in distinguishing participants of both groups with all effort measures. The second analysis showed outstanding overall accuracy of the Bayesian model when estimates were obtained from the 2AFC and TOMM raw scores. Diagnostic accuracy of the Bayesian model diminished when using the WMT total raw scores; despite this, overall diagnostic accuracy can still be considered excellent. The most plausible explanation for this decrement is the low performance in verbal recognition and fluency tasks of some patients in the cognitively impaired group. Additionally, the Bayesian model provides individual estimates, p(z_i | D), of examinees' effort levels. In conclusion, both the high classification accuracy levels and the Bayesian individual estimates of effort may be very useful for clinicians when assessing effort in medico-legal settings.

  16. Improved Fuzzy K-Nearest Neighbor Using Modified Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Jamaluddin; Siringoringo, Rimbun

    2017-12-01

    Fuzzy k-Nearest Neighbor (FkNN) is one of the most powerful classification methods. The presence of fuzzy concepts in this method successfully improves its performance on almost all classification problems. The main drawback of FkNN is that it is difficult to determine its parameters: the number of neighbors (k) and the fuzzy strength (m). Both parameters are very sensitive, and no theory or guide can deduce what proper values of 'm' and 'k' should be, which makes FkNN difficult to control. This study uses Modified Particle Swarm Optimization (MPSO) to determine the best values of 'k' and 'm'. MPSO is based on the constriction factor method, an improvement of PSO designed to avoid local optima. The model proposed in this study was tested on the German Credit Dataset, a standard dataset from the UCI Machine Learning Repository that is widely applied to classification problems. The application of MPSO to the determination of the FkNN parameters is expected to increase classification performance. The experiments indicate that the model offered in this research yields better classification performance than the FkNN model alone: the proposed model has an accuracy of 81%, while the FkNN model has an accuracy of 70%. Finally, the proposed model is compared with two other classification models, naive Bayes and decision tree; it has a better performance level, with naive Bayes achieving 75% accuracy and the decision tree model 70%.
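
    A sketch of constriction-factor PSO tuning (k, m); the fitness function below is a hypothetical placeholder standing in for cross-validated FkNN accuracy:

        import numpy as np

        def fitness(k, m):                  # placeholder for CV accuracy of FkNN(k, m)
            return -((k - 7) ** 2 / 50.0 + (m - 2.0) ** 2)

        rng = np.random.default_rng(5)
        n_particles, iters = 20, 50
        c1 = c2 = 2.05
        phi = c1 + c2
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298

        pos = rng.uniform([1.0, 1.1], [30.0, 5.0], size=(n_particles, 2))  # (k, m)
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([fitness(round(p[0]), p[1]) for p in pos])
        gbest = pbest[pbest_f.argmax()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, 1))
            vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
            pos = np.clip(pos + vel, [1.0, 1.1], [30.0, 5.0])
            f = np.array([fitness(round(p[0]), p[1]) for p in pos])
            improved = f > pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmax()].copy()

        print("best k:", round(gbest[0]), "best m:", gbest[1])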

  17. Application of the MacCormack scheme to overland flow routing for high-spatial resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Nan, Zhuotong; Liang, Xu; Xu, Yi; Hernández, Felipe; Li, Lianxia

    2018-03-01

    Although process-based distributed hydrological models (PDHMs) have been evolving rapidly over the last few decades, their extensive application is still challenged by computational expense. This study attempted, for the first time, to apply the numerically efficient MacCormack algorithm to overland flow routing in a representative high-spatial-resolution PDHM, i.e., the distributed hydrology-soil-vegetation model (DHSVM), in order to improve its computational efficiency. The analytical verification indicates that both the semi and full versions of the MacCormack scheme exhibit robust numerical stability and are more computationally efficient than the conventional explicit linear scheme. The full version outperforms the semi version in terms of simulation accuracy when the same time step is adopted. The semi-MacCormack scheme was implemented in DHSVM (version 3.1.2) to solve the kinematic wave equations for overland flow routing. The performance and practicality of the enhanced DHSVM-MacCormack model were assessed by performing two groups of modeling experiments in the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The experiments show that DHSVM-MacCormack can considerably improve the computational efficiency without compromising the simulation accuracy of the original DHSVM model. More specifically, with the same computational environment and model settings, the computational time required by DHSVM-MacCormack can be reduced to several dozen minutes for a simulation period of three months (in contrast with a day and a half for the original DHSVM model) without noticeable sacrifice of accuracy. The MacCormack scheme proves to be applicable to overland flow routing in DHSVM, which implies that it can be coupled into other PDHMs to either significantly improve their computational efficiency or make kinematic wave routing computationally feasible for high-resolution modeling.
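
    A sketch of one MacCormack predictor-corrector step for the 1-D kinematic wave equation dh/dt + dq/dx = r with q = alpha·h^m (overland flow); grid size, parameters, and the rainfall rate r are assumed:

        import numpy as np

        def maccormack_step(h, dt, dx, alpha=1.0, m=5.0 / 3.0, r=1e-6):
            q = alpha * h**m
            # Predictor: forward difference in space.
            h_p = h.copy()
            h_p[:-1] = h[:-1] - dt / dx * (q[1:] - q[:-1]) + dt * r
            q_p = alpha * h_p**m
            # Corrector: backward difference on predicted fluxes, then average.
            h_new = h.copy()
            h_new[1:] = 0.5 * (h[1:] + h_p[1:]
                               - dt / dx * (q_p[1:] - q_p[:-1]) + dt * r)
            return np.maximum(h_new, 0.0)            # keep depths non-negative

        h = np.full(100, 0.01)                       # initial 1 cm sheet flow (m)
        for _ in range(1000):
            h = maccormack_step(h, dt=0.05, dx=1.0)  # CFL well below 1 here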

  18. Evaluation of anthropogenic influence in probabilistic forecasting of coastal change

    NASA Astrophysics Data System (ADS)

    Hapke, C. J.; Wilson, K.; Adams, P. N.

    2014-12-01

    Prediction of large-scale coastal behavior is especially challenging in areas of pervasive human activity. Many coastal zones on the Gulf and Atlantic coasts are moderately to highly modified through the use of soft sediment and hard stabilization techniques. These practices have the potential to alter sediment transport and availability, as well as reshape the beach profile, ultimately transforming the natural evolution of the coastal system. We present the results of a series of probabilistic models designed to predict the observed geomorphic response to high wave events at Fire Island, New York. The island comprises a variety of land use types: inhabited communities with modified beaches, where beach nourishment and artificial dune construction (scraping) occur; unmodified zones; and protected national seashore. This variation in land use presents an opportunity to compare model accuracy across highly modified and rarely modified stretches of coastline. Eight models were developed in both basic and expanded forms, resulting in sixteen models, all informed with observational data from Fire Island. The basic model type does not include anthropogenic modification. The expanded model includes records of nourishment and scraping, designed to quantify the improvement in accuracy when anthropogenic activity is represented. Modification was included as frequency of occurrence divided by the time since the most recent event, to distinguish between recent and historic events (see the sketch below). All but one model reported improved predictive accuracy from the basic to the expanded form. The addition of nourishment and scraping parameters resulted in a maximum reduction in predictive error of 36%. The seven improved models reported an average 23% reduction in error. These results indicate that it is advantageous to incorporate human forcing into a coastal hazards probability model framework.
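
    The abstract defines the anthropogenic predictor only as event frequency divided by time since the most recent event; the exact normalization is not given. The sketch below is one plausible reading, with the events-per-year window and the one-year floor on recency being assumptions made for illustration.

        def modification_predictor(event_years, survey_year):
            """Event frequency divided by time since the most recent event, so
            recent nourishment/scraping weighs more than historic activity."""
            events = [y for y in event_years if y <= survey_year]
            if not events:
                return 0.0                                   # no recorded modification
            frequency = len(events) / (survey_year - min(events) + 1)  # assumed per-year window
            recency = survey_year - max(events) + 1                    # assumed one-year floor
            return frequency / recency

        # e.g. nourishment in 1998, 2003, and 2009, evaluated at a 2012 survey
        print(modification_predictor([1998, 2003, 2009], 2012))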

  19. Assessing performance of Bayesian state-space models fit to Argos satellite telemetry locations processed with Kalman filtering.

    PubMed

    Silva, Mónica A; Jonsen, Ian; Russell, Deborah J F; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages for the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing them to "true" GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from the KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6 ± 5.6 km) was nearly half that of LS estimates (11.6 ± 8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of the raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales' behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates.

  20. Assessing Performance of Bayesian State-Space Models Fit to Argos Satellite Telemetry Locations Processed with Kalman Filtering

    PubMed Central

    Silva, Mónica A.; Jonsen, Ian; Russell, Deborah J. F.; Prieto, Rui; Thompson, Dave; Baumgartner, Mark F.

    2014-01-01

    Argos recently implemented a new algorithm to calculate locations of satellite-tracked animals that uses a Kalman filter (KF). The KF algorithm is reported to increase the number and accuracy of estimated positions over the traditional Least Squares (LS) algorithm, with potential advantages for the application of state-space methods to model animal movement data. We tested the performance of two Bayesian state-space models (SSMs) fitted to satellite tracking data processed with the KF algorithm. Tracks from 7 harbour seals (Phoca vitulina) tagged with ARGOS satellite transmitters equipped with Fastloc GPS loggers were used to calculate the error of locations estimated from SSMs fitted to KF and LS data, by comparing them to “true” GPS locations. Data on 6 fin whales (Balaenoptera physalus) were used to investigate consistency in movement parameters, locations and behavioural states estimated by switching state-space models (SSSM) fitted to data derived from the KF and LS methods. The model fit to KF locations improved the accuracy of seal trips by 27% over the LS model. 82% of locations predicted from the KF model and 73% of locations from the LS model were <5 km from the corresponding interpolated GPS position. Uncertainty in KF model estimates (5.6±5.6 km) was nearly half that of LS estimates (11.6±8.4 km). Accuracy of KF and LS modelled locations was sensitive to precision but not to observation frequency or temporal resolution of the raw Argos data. On average, 88% of whale locations estimated by KF models fell within the 95% probability ellipse of paired locations from LS models. Precision of KF locations for whales was generally higher. Whales’ behavioural mode inferred by KF models matched the classification from LS models in 94% of the cases. State-space models fit to KF data can improve the spatial accuracy of location estimates over LS models and produce equally reliable behavioural estimates. PMID:24651252

  1. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  2. Aerothermal modeling program, phase 2. Element B: Flow interaction experiment

    NASA Technical Reports Server (NTRS)

    Nikjooy, M.; Mongia, H. C.; Murthy, S. N. B.; Sullivan, J. P.

    1986-01-01

    The program aimed to improve the design process and to enhance the efficiency and life, and reduce the maintenance costs, of the turbine engine hot section. Recently, there has been much emphasis on the need for improved numerical codes for the design of efficient combustors. The development of improved computational codes requires an experimentally obtained database to serve as test cases for assessing the accuracy of the computations. The purpose of Element-B is to establish benchmark-quality velocity and scalar measurements of the interaction of circular jets with swirling flow typical of that in the dome region of an annular combustor. In addition to the detailed experimental effort, extensive computations of the swirling flows are to be compared with the measurements for the purpose of assessing the accuracy of current and advanced turbulence and scalar transport models.

  3. Active surface model improvement by energy function optimization for 3D segmentation.

    PubMed

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

    This paper proposes an optimized and efficient active surface model, improving the energy functions, the search method, the neighborhood definition, and the resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities, and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet-edge-extracted image and local phase coherence as external energy, to extract more information from the images, and use a curvature integral as internal energy, to focus on extracting high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimate of the desired object as the initialization of the active surface model. A number of tests and experiments have been carried out, and the results show improvements in extracted surface accuracy and computational time compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. State-of-the-art satellite laser range modeling for geodetic and oceanographic applications

    NASA Technical Reports Server (NTRS)

    Klosko, Steve M.; Smith, David E.

    1993-01-01

    Significant improvements have been made in the modeling and accuracy of Satellite Laser Range (SLR) data since the launch of LAGEOS in 1976. These include improved models of the static geopotential and of solid-Earth and ocean tides, more advanced atmospheric drag models, and the adoption of the J2000 reference system with improved nutation and precession. Site positioning using SLR systems currently yields approximately 2 cm static and 5 mm/y kinematic descriptions of the geocentric locations of these sites. Incorporation of a large set of observations from advanced Satellite Laser Ranging (SLR) tracking systems has made major contributions to the gravitational field models and has advanced the state of the art in precision orbit determination. SLR is the baseline tracking system for the altimeter-bearing TOPEX/Poseidon and ERS-1 satellites and thus will play an important role in providing the Conventional Terrestrial Reference Frame for instantaneously locating the geocentric position of the ocean surface over time, in providing an unchanging range standard for altimeter range calibration, and in improving the geoid models needed to separate gravitational from ocean circulation signals seen in the sea surface. Nevertheless, despite the unprecedented improvements in the accuracy of the models used to support orbit reduction of laser observations, there remain systematic unmodeled effects which limit the full exploitation of modern SLR data.

  5. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    USGS Publications Warehouse

    Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby

    2017-01-01

    Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
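
    The blending step lends itself to a compact sketch. The following Python sketch is an illustration of the idea, not the ChemCam pipeline; the 10 wt.% split, the 2 wt.% blend window, and the component count are invented for the example.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def train_submodels(X, y, split=10.0, n_components=5):
            """Full-range PLS model plus 'low' and 'high' sub-models trained on
            targets below/above an illustrative composition split (wt.%)."""
            full = PLSRegression(n_components).fit(X, y)
            low = PLSRegression(n_components).fit(X[y <= split], y[y <= split])
            high = PLSRegression(n_components).fit(X[y > split], y[y > split])
            return full, low, high

        def blended_predict(models, X, split=10.0, window=2.0):
            """Pick/blend sub-models using the full model's first-pass estimate;
            the sub-model weight ramps linearly across the overlap region."""
            full, low, high = models
            ref = full.predict(X).ravel()
            w_high = np.clip((ref - (split - window)) / (2 * window), 0.0, 1.0)
            return (1 - w_high) * low.predict(X).ravel() + w_high * high.predict(X).ravel()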

  6. Classification of ECG beats using deep belief network and active learning.

    PubMed

    G, Sayantan; T, Kien P; V, Kadambari K

    2018-04-12

    A new semi-supervised approach based on deep learning and active learning for the classification of electrocardiogram (ECG) signals is proposed. The objective of the proposed work is to model a scientific method for classification of cardiac irregularities using electrocardiogram beats. The model follows the Association for the Advancement of Medical Instrumentation (AAMI) standards and consists of three phases. In phase I, a feature representation of the ECG is learnt using a Gaussian-Bernoulli deep belief network, followed by linear support vector machine (SVM) training in the consecutive phase. This yields three deep models based on the AAMI-defined classes, namely N, V, S, and F. In the last phase, a query generator is introduced to interact with the expert, who labels a few beats to improve accuracy and sensitivity. The proposed approach shows significant improvement in accuracy with minimal queries posed to the expert and fast online training, as tested on the MIT-BIH Arrhythmia Database and the MIT-BIH Supraventricular Arrhythmia Database (SVDB). With 100 queries labeled by the expert in phase III, the method achieves an accuracy of 99.5% in "S" versus all classifications (SVEB) and 99.4% accuracy in "V" versus all classifications (VEB) on the MIT-BIH Arrhythmia Database. Similarly, accuracies of 97.5% for SVEB and 98.6% for VEB are achieved on the SVDB database. Graphical Abstract: Deep belief network augmented by active learning for efficient prediction of arrhythmia.

  7. A scoping review of malaria forecasting: past work and future directions

    PubMed Central

    Zinszer, Kate; Verma, Aman D; Charland, Katia; Brewer, Timothy F; Brownstein, John S; Sun, Zhuoyu; Buckeridge, David L

    2012-01-01

    Objectives: There is a growing body of literature on malaria forecasting methods and the objective of our review is to identify and assess methods, including predictors, used to forecast malaria. Design: Scoping review. Two independent reviewers searched information sources, assessed studies for inclusion and extracted data from each study. Information sources: Search strategies were developed and the following databases were searched: CAB Abstracts, EMBASE, Global Health, MEDLINE, ProQuest Dissertations & Theses and Web of Science. Key journals and websites were also manually searched. Eligibility criteria for included studies: We included studies that forecasted incidence, prevalence or epidemics of malaria over time. A description of the forecasting model and an assessment of the forecast accuracy of the model were requirements for inclusion. Studies were restricted to human populations and to autochthonous transmission settings. Results: We identified 29 different studies that met our inclusion criteria for this review. The forecasting approaches included statistical modelling, mathematical modelling and machine learning methods. Climate-related predictors were used consistently in forecasting models, with the most common predictors being rainfall, relative humidity, temperature and the normalised difference vegetation index. Model evaluation was typically based on a reserved portion of data and accuracy was measured in a variety of ways including mean-squared error and correlation coefficients. We could not compare the forecast accuracy of models from the different studies as the evaluation measures differed across the studies. Conclusions: Applying different forecasting methods to the same data, exploring the predictive ability of non-environmental variables, including transmission reducing interventions and using common forecast accuracy measures will allow malaria researchers to compare and improve models and methods, which should improve the quality of malaria forecasting. PMID:23180505

  8. Identification of Long Bone Fractures in Radiology Reports Using Natural Language Processing to Support Healthcare Quality Improvement

    PubMed Central

    Masino, Aaron J.; Casper, T. Charles; Dean, Jonathan M.; Bell, Jamie; Enriquez, Rene; Deakyne, Sara; Chamberlain, James M.; Alpern, Elizabeth R.

    2016-01-01

    Background: Important information to support healthcare quality improvement is often recorded in free text documents such as radiology reports. Natural language processing (NLP) methods may help extract this information, but these methods have rarely been applied outside the research laboratories where they were developed. Objective: To implement and validate NLP tools to identify long bone fractures for pediatric emergency medicine quality improvement. Methods: Using freely available statistical software packages, we implemented NLP methods to identify long bone fractures from radiology reports. A sample of 1,000 radiology reports was used to construct three candidate classification models. A test set of 500 reports was used to validate the model performance. Blinded manual review of radiology reports by two independent physicians provided the reference standard. Each radiology report was segmented and word stem and bigram features were constructed. Common English “stop words” and rare features were excluded. We used 10-fold cross-validation to select optimal configuration parameters for each model. Accuracy, recall, precision and the F1 score were calculated. The final model was compared to the use of diagnosis codes for the identification of patients with long bone fractures. Results: There were 329 unique word stems and 344 bigrams in the training documents. A support vector machine classifier with Gaussian kernel performed best on the test set with accuracy=0.958, recall=0.969, precision=0.940, and F1 score=0.954. Optimal parameters for this model were cost=4 and gamma=0.005. The three classification models that we tested all performed better than diagnosis codes in terms of accuracy, precision, and F1 score (diagnosis code accuracy=0.932, recall=0.960, precision=0.896, and F1 score=0.927). Conclusions: NLP methods using a corpus of 1,000 training documents accurately identified acute long bone fractures from radiology reports. Strategic use of straightforward NLP methods, implemented with freely available software, offers quality improvement teams new opportunities to extract information from narrative documents. PMID:27826610
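
    The feature construction and classifier described here map directly onto standard tooling. Below is a minimal Python sketch with scikit-learn (an assumed equivalent; the authors used other freely available statistical packages, and word stemming is omitted here for brevity) using the abstract's reported best parameters, cost=4 and gamma=0.005.

        from sklearn.pipeline import Pipeline
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Word + bigram features, with English stop words and rare features removed,
        # feeding an SVM with a Gaussian (RBF) kernel.
        clf = Pipeline([
            ("features", CountVectorizer(ngram_range=(1, 2), stop_words="english",
                                         min_df=2, binary=True)),
            ("svm", SVC(kernel="rbf", C=4, gamma=0.005)),
        ])

        # reports: list of radiology report strings; labels: 1 = long bone fracture
        # scores = cross_val_score(clf, reports, labels, cv=10)   # 10-fold CV as in the paper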

  9. Direct position determination for digital modulation signals based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding

    2018-04-01

    The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when the signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained for symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is employed for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal feature to improve the positioning accuracy. On the other hand, the improved PSO algorithm improves the efficiency of the symbol search by nearly one hundred times, achieving a global optimal solution.

  10. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with a design of experiments, an artificial neural network is developed that takes discrete-source damage parameters as input and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving the accuracy of the residual strength training data would, in turn, improve the accuracy of the surrogate model. When combined, the surrogate model methodology and the high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  11. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with a design of experiments, an artificial neural network is developed that takes discrete-source damage parameters as input and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving the accuracy of the residual strength training data does, in turn, improve the accuracy of the surrogate model. When combined, the surrogate model methodology and the high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  12. ISEA (International geodetic project in SouthEastern Alaska) for rapid uplifting caused by glacial retreat: (4) Gravity tide observation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Miura, S.; Sun, W.; Kaufman, A. M.; Cross, R.; Freymueller, J. T.; Heavner, M.

    2006-12-01

    Southeastern Alaska shows a large uplift rate, as much as 30 mm/yr, which is considered to be closely related to glacial isostatic adjustment (GIA), including the effects of both past and present-day ice melting (Larsen et al., 2004). This area is therefore important for improving our knowledge of the viscoelastic properties of the earth and for studying global change. Combining displacement and gravity observations is useful for constraining GIA model computations (Sato et al., 2006). To extend the previous work by the University of Alaska, Fairbanks (UAF) group, a joint Japan-USA observation project was started in 2005 (Miura et al., this meeting). Under this project, continuous GPS measurements began in June 2006 (M. Kaufman et al., this meeting) and absolute gravity (AG) measurements were conducted (W. Sun et al., this meeting). Precise correction for the effect of ocean tide loading is one of the keys to increasing the accuracy of the GPS and gravity observations, especially the AG measurements. Thanks to satellite sea surface altimeters such as TOPEX/Poseidon and Jason-1, the accuracy of global ocean tide models based on these data has much improved and is estimated at a level better than 1.3 cm as the RMS error of the vector differences of the 8 main tidal waves (Matsumoto et al., 2006). However, southeastern Alaska is known to show large discrepancies among the proposed global ocean tide models, mainly due to the complex topography and bathymetry of the fjord area. In order to improve the accuracy of the ocean tide correction, we started gravity tide observations at Juneau in June 2006. Two kinds of gravimeters are used, with data sampled every 1 min. We analyzed the first month of data from the beginning of the observation and compared the tidal analysis results with the model tide, including the effects of both solid-earth and ocean tides. For this computation, we used the Love numbers and loading Green's functions for the PREM earth model (Dziewonski & Anderson, 1981) and the global ocean tide model of Schwiderski (1980). Our comparison clearly indicates the possibility of improving the accuracy of the model prediction by taking into account the actual tidal harmonics observed in southeastern Alaska.

  13. Predicting the accuracy of ligand overlay methods with Random Forest models.

    PubMed

    Nandigam, Ravi K; Evans, David A; Erickson, Jon A; Kim, Sangtae; Sutherland, Jeffrey J

    2008-12-01

    The accuracy of binding mode prediction using standard molecular overlay methods (ROCS, FlexS, Phase, and FieldCompare) is studied. Previous work has shown that simple decision tree modeling can be used to improve accuracy by selecting the best overlay template. This concept is extended to the use of Random Forest (RF) modeling for template and algorithm selection. An extensive data set of 815 ligand-bound X-ray structures representing 5 gene families was used to generate ca. 70,000 overlays using the four programs. RF models, trained using standard measures of ligand and protein similarity and Lipinski-related descriptors, are used for automatically selecting the reference ligand and overlay method that maximize the probability of reproducing the overlay deduced from the X-ray structures (i.e., using rmsd ≤ 2 Å as the criterion for success). RF model scores are highly predictive of overlay accuracy, and their use in template and method selection produces correct overlays in 57% of cases for 349 overlay ligands not used for training the RF models. The inclusion of protein sequence similarity in the models enables the use of templates bound to related protein structures, yielding useful results even for proteins having no available X-ray structures.
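
    The selection step has a simple shape: score every candidate (template, method) pair with a classifier trained on past overlay successes, then keep the top-scoring pair. A minimal Python sketch under assumed inputs (the descriptor layout and forest size here are placeholders, not the paper's):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def train_overlay_selector(X, y):
            """X: one row of ligand/protein-similarity and Lipinski-style
            descriptors per past overlay; y: 1 if that overlay reproduced the
            X-ray pose (rmsd <= 2 A), else 0."""
            return RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

        def pick_overlay(rf, candidates, features):
            """Score each candidate (template, method) pair and keep the one
            with the highest predicted probability of a correct overlay."""
            p_success = rf.predict_proba(features)[:, 1]
            return candidates[int(np.argmax(p_success))]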

  14. Parameter estimation for terrain modeling from gradient data. [navigation system for Martian rover

    NASA Technical Reports Server (NTRS)

    Dangelo, K. R.

    1974-01-01

    A method is developed for modeling terrain surfaces for use on an unmanned Martian roving vehicle. The modeling procedure employs a two-step process which uses gradient as well as height data in order to improve the accuracy of the model's gradient. Least-squares approximation is used to stochastically determine the parameters which describe the modeled surface. A complete error analysis of the modeling procedure is included, which determines the effect of instrumental measurement errors on the model's accuracy. Computer simulation is used as a means of testing the entire modeling process, which includes the acquisition of data points, the two-step modeling process, and the error analysis. Finally, to illustrate the procedure, a numerical example is included.

  15. Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models

    NASA Technical Reports Server (NTRS)

    Buchert, T.; Melott, A. L.; Weiss, A. G.

    1993-01-01

    We present results showing an improvement in the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of the various approximations. In particular, the Zel'dovich approximation (hereafter ZA), as a subclass of the first-order Lagrangian perturbation solutions, was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e., up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). Here we explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power index n = -1) using the cross-correlation statistics employed in previous work. We find that for all statistical methods used, the higher-order corrections improve on the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen on all spatial scales at early stages, later stages retain it only above a certain scale which increases with time. However, third order is not much of an improvement over second order at any stage. The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.

  16. Hysteresis compensation of piezoelectric deformable mirror based on Prandtl-Ishlinskii model

    NASA Astrophysics Data System (ADS)

    Ma, Jianqiang; Tian, Lei; Li, Yan; Yang, Zongfeng; Cui, Yuguo; Chu, Jiaru

    2018-06-01

    Hysteresis of piezoelectric deformable mirror (DM) reduces the closed-loop bandwidth and the open-loop correction accuracy of adaptive optics (AO) systems. In this work, a classical Prandtl-Ishlinskii (PI) model is employed to model the hysteresis behavior of a unimorph DM with 20 actuators. A modified control algorithm combined with the inverse PI model is developed for piezoelectric DMs. With the help of PI model, the hysteresis of the DM was reduced effectively from about 9% to 1%. Furthermore, open-loop regenerations of low-order aberrations with or without hysteresis compensation were carried out. The experimental results demonstrate that the regeneration accuracy with PI model compensation is significantly improved.
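
    The classical PI model is a weighted superposition of backlash (play) operators, which makes it easy to sketch. The Python sketch below is a generic illustration with invented thresholds and weights; in practice the weights are identified from the measured actuator response, and the compensator is built from the analytically inverted model.

        import numpy as np

        def pi_model(u, thresholds, weights):
            """Classical Prandtl-Ishlinskii model: a weighted sum of play
            operators with thresholds r_i and weights w_i."""
            y = np.zeros_like(u)
            state = np.zeros(len(thresholds))      # memory of each play operator
            for k, u_k in enumerate(u):
                # play operator: output tracks the input once the deadband r is crossed
                state = np.minimum(u_k + thresholds, np.maximum(u_k - thresholds, state))
                y[k] = np.dot(weights, state)
            return y

        # A triangular input sweep traces the characteristic hysteresis loop.
        u = np.concatenate([np.linspace(0.0, 1.0, 100), np.linspace(1.0, 0.0, 100)])
        y = pi_model(u, thresholds=np.array([0.0, 0.1, 0.2]), weights=np.array([1.0, 0.4, 0.3]))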

  17. Establishment of a high accuracy geoid correction model and geodata edge match

    NASA Astrophysics Data System (ADS)

    Xi, Ruifeng

    This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter-level regional geoids and centimeter-level local geoids to meet regional surveying and local engineering requirements. This research also provides a highly accurate static DGPS network data pre-processing, post-processing, and adjustment method, and a procedure for a large GPS network like the state-level HRAN project. The research further developed an efficient and accurate methodology for joining soil coverages in the ARC/INFO GIS. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations having a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluating GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter level of 10˜20 centimeters to 5˜10 centimeters. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.

  18. Studying the Accuracy of Software Process Elicitation: The User Articulated Model

    ERIC Educational Resources Information Center

    Crabtree, Carlton A.

    2010-01-01

    Process models are often the basis for demonstrating improvement and compliance in software engineering organizations. A descriptive model is a type of process model describing the human activities in software development that actually occur. The purpose of a descriptive model is to provide a documented baseline for further process improvement…

  19. Development of CFRP mirrors for space telescopes

    NASA Astrophysics Data System (ADS)

    Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo

    2013-09-01

    CFRP (carbon fiber reinforced plastics) have superior properties for satellite telescope structures: high specific elasticity and low thermal expansion. However, difficulties in achieving the required surface accuracy and ensuring stability in orbit have discouraged the application of CFRP to main mirrors. We have developed ultra-lightweight, high-precision CFRP mirrors of sandwich construction, composed of CFRP skins and CFRP cores, using a replica technique. The shape accuracy of the demonstrated mirrors of 150 mm diameter was 0.8 μm RMS (root mean square) and the surface roughness was 5 nm RMS as fabricated. Further optimization of the fabrication process conditions to improve surface accuracy was studied using flat sandwich panels. The surface accuracy of flat CFRP sandwich panels of 150 mm square was thereby improved to a flatness of 0.2 μm RMS with a surface roughness of 6 nm RMS. The surface accuracy versus size of the trial models indicates a high possibility of fabricating mirrors over 1 m in size with a surface accuracy of 1 μm. The feasibility of CFRP mirrors for low-temperature applications was examined for the JASMINE project as an example, and the stability of the surface accuracy of CFRP mirrors against temperature and moisture was discussed.

  20. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data, and it makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.

  1. Theoretical algorithms for satellite-derived sea surface temperatures

    NASA Astrophysics Data System (ADS)

    Barton, I. J.; Zavody, A. M.; O'Brien, D. M.; Cutten, D. R.; Saunders, R. W.; Llewellyn-Jones, D. T.

    1989-03-01

    Reliable climate forecasting using numerical models of the ocean-atmosphere system requires accurate data sets of sea surface temperature (SST) and surface wind stress. Global sets of these data will be supplied by the instruments to fly on the ERS 1 satellite in 1990. One of these instruments, the Along-Track Scanning Radiometer (ATSR), has been specifically designed to provide SST in cloud-free areas with an accuracy of 0.3 K. The expected capabilities of the ATSR can be assessed using transmission models of infrared radiative transfer through the atmosphere. The performances of several different models are compared by estimating the infrared brightness temperatures measured by the NOAA 9 AVHRR for three standard atmospheres. Of these, a computationally quick spectral band model is used to derive typical AVHRR and ATSR SST algorithms in the form of linear equations. These algorithms show that a low-noise 3.7-μm channel is required to give the best satellite-derived SST and that the design accuracy of the ATSR is likely to be achievable. The inclusion of extra water vapor information in the analysis did not improve the accuracy of multiwavelength SST algorithms, but some improvement was noted with the multiangle technique. Further modeling is required with atmospheric data that include both aerosol variations and abnormal vertical profiles of water vapor and temperature.
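
    The linear algorithms referred to here typically take a split-window form, in which the brightness temperature difference between the two infrared channels corrects for water vapor absorption. A minimal sketch (the coefficients below are illustrative placeholders, not the regression values derived in the paper):

        def sst_split_window(t11, t12, a0=-1.0, a1=1.0, a2=2.5):
            """Generic split-window SST retrieval: SST = a0 + a1*T11 + a2*(T11 - T12).
            t11, t12: brightness temperatures (K) near 11 and 12 um; the
            coefficients are illustrative, not the paper's values."""
            return a0 + a1 * t11 + a2 * (t11 - t12)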

  2. Exploring the genetic architecture and improving genomic prediction accuracy for mastitis and milk production traits in dairy cattle by mapping variants to hepatic transcriptomic regions responsive to intra-mammary infection.

    PubMed

    Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter

    2017-05-12

    A better understanding of the genetic architecture of complex traits can contribute to improve genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test were correlated with changes in prediction accuracy of GFBLUP (P < 0.05). GFBLUP provides a framework for integrating multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test might be used as a first-step to improve GFBLUP models. Approaches like GFBLUP and SNP set test will become increasingly useful, as the functional annotations of genomes keep accumulating for a range of species and traits.
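
    For orientation, GFBLUP splits the total genomic value into a feature component and a remainder. In the usual notation (a sketch of the standard formulation, not necessarily the paper's exact parameterization), where G_f is a genomic relationship matrix built from the SNPs inside the genomic feature (here, the IMI-responsive hepatic regions) and G_r is built from the remaining SNPs:

        y = X\beta + Z g_f + Z g_r + e,
        g_f ~ N(0, G_f \sigma_f^2),    g_r ~ N(0, G_r \sigma_r^2)

    GBLUP is recovered as the special case in which all SNPs contribute to a single genomic component with one common variance.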

  3. Modeling and controller design of a 6-DOF precision positioning system

    NASA Astrophysics Data System (ADS)

    Cai, Kunhai; Tian, Yanling; Liu, Xianping; Fatikow, Sergej; Wang, Fujun; Cui, Liangyu; Zhang, Dawei; Shirinzadeh, Bijan

    2018-05-01

    A key hurdle in meeting the needs of micro/nano manipulation in some complex cases is the inadequate workspace and flexibility of the operation ends. This paper presents a 6-degree-of-freedom (DOF) serial-parallel precision positioning system, which consists of two compact 3-DOF parallel mechanisms. Each parallel mechanism is driven by three piezoelectric actuators (PEAs) and guided by three symmetric T-shape hinges and three elliptical flexible hinges, respectively. This arrangement extends the workspace and improves the flexibility of the operation ends. The proposed system can be assembled easily, which greatly reduces assembly errors and improves positioning accuracy. In addition, the kinematic and dynamic models of the 6-DOF system are established. Furthermore, in order to reduce the tracking error and improve the positioning accuracy, a Discrete-time Model Predictive Controller (DMPC) is applied as an effective control method, and the effectiveness of the DMPC control method is verified. Finally, a tracking experiment is performed to verify the tracking performance of the 6-DOF stage.

  4. Monotonically improving approximate answers to relational algebra queries

    NASA Technical Reports Server (NTRS)

    Smith, Kenneth P.; Liu, J. W. S.

    1989-01-01

    We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing it. This strategy enables us to trade the time taken to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations, together with a partial order for comparing them, is developed. Relational operators which operate on and return approximate relations are defined.

  5. A three-dimensional parabolic equation model of sound propagation using higher-order operator splitting and Padé approximants.

    PubMed

    Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F

    2012-11-01

    An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.

  6. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    NASA Astrophysics Data System (ADS)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source of an industrial robot and plays a significantly more important role than other error sources. A compensation model for kinematic error is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to determine which method is the better one. In this article, two methods for robot accuracy testing are compared: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) are each used to test the robot accuracy according to the standard. Based on the compensation results, the better method, which improves the robot accuracy more markedly, is identified.

  7. Bio-knowledge based filters improve residue-residue contact prediction accuracy.

    PubMed

    Wozniak, P P; Pelc, J; Skrzypecki, M; Vriend, G; Kotulska, M

    2018-05-29

    Residue-residue contact prediction through direct coupling analysis has reached impressive accuracy, but yet higher accuracy will be needed to allow for routine modelling of protein structures. One way to improve the prediction accuracy is to filter predicted contacts using knowledge about the particular protein of interest or knowledge about protein structures in general. We focus on the latter and discuss a set of filters that can be used to remove false positive contact predictions. Each filter depends on one or a few cut-off parameters, for which the filter performance was investigated. Combining all filters with default parameters resulted, for a test set of 851 protein domains, in the removal of 29% of the predictions, of which 92% were indeed false positives. All data and scripts are available from http://comprec-lin.iiar.pwr.edu.pl/FPfilter/. Contact: malgorzata.kotulska@pwr.edu.pl. Supplementary data are available at Bioinformatics online.

  8. Efficient use of unlabeled data for protein sequence classification: a comparative study

    PubMed Central

    Kuksa, Pavel; Huang, Pai-Hsi; Pavlovic, Vladimir

    2009-01-01

    Background: Recent studies in computational primary protein sequence analysis have leveraged the power of unlabeled data. For example, predictive models based on string kernels trained on sequences known to belong to particular folds or superfamilies, the so-called labeled data set, can attain significantly improved accuracy if this data is supplemented with protein sequences that lack any class tags–the unlabeled data. In this study, we present a principled and biologically motivated computational framework that more effectively exploits the unlabeled data by using only the sequence regions that are more likely to be biologically relevant, for better prediction accuracy. As overly-represented sequences in large uncurated databases may bias the estimation of computational models that rely on unlabeled data, we also propose a method to remove this bias and improve the performance of the resulting classifiers. Results: Combined with state-of-the-art string kernels, our proposed computational framework achieves very accurate semi-supervised protein remote fold and homology detection on three large unlabeled databases. It outperforms current state-of-the-art methods and exhibits a significant reduction in running time. Conclusion: The unlabeled sequences used under the semi-supervised setting resemble unpolished gemstones; used as-is, they may carry unnecessary features and hence compromise classification accuracy, but once cut and polished, they improve the accuracy of the classifiers considerably. PMID:19426450

  9. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    PubMed

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model were accomplished using images of 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6 μm in diameter) mounted in a sample with RI variation. In the study investigated, an improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used instead of PSFs from an existing model. The new PSF model was further validated by showing that its prediction matches an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with 42% better accuracy than the current PSF model's prediction.

  10. Dual Quaternions as Constraints in 4D-DPM Models for Pose Estimation.

    PubMed

    Martinez-Berti, Enrique; Sánchez-Salmerón, Antonio-José; Ricolfe-Viala, Carlos

    2017-08-19

    The goal of this research work is to improve the accuracy of human pose estimation using the Deformation Part Model (DPM) without increasing computational complexity. First, the proposed method seeks to improve pose estimation accuracy by adding the depth channel to DPM, which was formerly defined based only on red-green-blue (RGB) channels, in order to obtain a four-dimensional DPM (4D-DPM). In addition, computational complexity can be controlled by reducing the number of joints by taking it into account in a reduced 4D-DPM. Finally, complete solutions are obtained by solving the omitted joints by using inverse kinematics models. In this context, the main goal of this paper is to analyze the effect on pose estimation timing cost when using dual quaternions to solve the inverse kinematics.

  11. Adaptive Grouping Cloud Model Shuffled Frog Leaping Algorithm for Solving Continuous Optimization Problems

    PubMed Central

    Liu, Haorui; Yi, Fengyan; Yang, Heli

    2016-01-01

    The shuffled frog leaping algorithm (SFLA) easily falls into local optima when solving multimodal function optimization problems, which impacts its accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the cloud model's excellent ability to transform between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and gives each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an "elite strategy" to update the location information of existing elite frogs through the cloud model algorithm. This method narrows the search space and can effectively escape local optima; thus convergence speed and accuracy are significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
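
    The cloud model's core is the normal cloud generator, which is compact enough to sketch. Below is a minimal one-dimensional Python illustration of an elite update of the kind the abstract describes; the entropy and hyper-entropy values are invented for the example, and the paper's exact update rule may differ.

        import numpy as np

        def cloud_drops(ex, en, he, n):
            """Normal cloud generator: the entropy En is itself perturbed by the
            hyper-entropy He, giving the cloud its fuzzy-random character."""
            en_prime = np.random.normal(en, he, size=n)
            return np.random.normal(ex, np.abs(en_prime))

        def cloud_elite_update(elite, fitness, en=0.1, he=0.01, n=20):
            """Resample candidates around the current elite (Ex = elite position)
            and keep the best candidate only if it improves on the elite."""
            candidates = cloud_drops(elite, en, he, n)
            best = candidates[np.argmax([fitness(c) for c in candidates])]
            return best if fitness(best) > fitness(elite) else elite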

  12. Prediction of clinical behaviour and treatment for cancers.

    PubMed

    Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola

    2003-01-01

    Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.

  13. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique

    PubMed Central

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J.

    2017-01-01

    The increased potential and effectiveness of Real-time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistic and security applications. This research aims to develop an analytical method for customizing UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established for a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system’s configuration and the location sensors’ relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of the RTLS’s localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that AoA localization in a 3D indoor space, applying the simple angular-based localization algorithm with the correction vector, improves localization accuracy and precision to the point where the system challenges the reference hardware’s advanced localization engine. Moreover, the research offers guidance on the deployment of location sensors to enhance the localization precision. PMID:28125056

  14. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique.

    PubMed

    Jachimczyk, Bartosz; Dziak, Damian; Kulesza, Wlodek J

    2017-01-25

    The increased potential and effectiveness of Real-time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistic and security applications. This research aims to develop an analytical method for customizing UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established for a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of the RTLS's localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that AoA localization in a 3D indoor space, applying the simple angular-based localization algorithm with the correction vector, improves localization accuracy and precision to the point where the system challenges the reference hardware's advanced localization engine. Moreover, the research offers guidance on the deployment of location sensors to enhance the localization precision.

  15. Boundary Condition for Modeling Semiconductor Nanostructures

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Oyafuso, Fabiano; von Allmen, Paul; Klimeck, Gerhard

    2006-01-01

    A recently proposed boundary condition for atomistic computational modeling of semiconductor nanostructures (particularly, quantum dots) is an improved alternative to two prior such boundary conditions. As explained, this boundary condition helps to reduce the amount of computation while maintaining accuracy.

  16. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-09-01

    Due to the lever-arm effect and flexural deformation in practical applications of transfer alignment (TA), TA performance is degraded. The existing polar TA algorithm only compensates for a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm for the lever-arm effect and flexural deformation is proposed to improve the accuracy and speed of polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to enhance the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results demonstrate that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate for the lever-arm effect and flexural deformation, and thereby improve the accuracy and speed of TA in the polar region.
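
    A common way to make a Kalman filter adaptive, plausibly in the spirit of the improved AKF described above, is to re-estimate the measurement-noise covariance from a sliding window of innovations. The Python sketch below is a generic innovation-based adaptive filter; the window length, the positive-definiteness guard, and the adaptation rule itself are assumptions, not the paper's algorithm.

    ```python
    import numpy as np
    from collections import deque

    class InnovationAdaptiveKF:
        """Linear Kalman filter whose measurement-noise covariance R is
        re-estimated from a sliding window of innovations (one common
        adaptive-KF scheme; the paper's exact rule may differ)."""

        def __init__(self, F, H, Q, R, x0, P0, window=20):
            self.F, self.H, self.Q, self.R = F, H, Q, R
            self.x, self.P = x0, P0
            self.innovations = deque(maxlen=window)

        def step(self, z):
            # Predict
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # Innovation
            y = z - self.H @ self.x
            self.innovations.append(y)
            # Adapt R from the sample covariance of recent innovations
            if len(self.innovations) == self.innovations.maxlen:
                C = np.atleast_2d(np.cov(np.array(self.innovations).T))
                R_hat = C - self.H @ self.P @ self.H.T
                if np.all(np.linalg.eigvalsh(R_hat) > 0):  # keep R valid
                    self.R = R_hat
            # Update
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
            return self.x

    # Demo: 1D position tracking with a constant-velocity model
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    kf = InnovationAdaptiveKF(F, H, Q=0.01 * np.eye(2), R=np.eye(1),
                              x0=np.zeros(2), P0=np.eye(2))
    rng = np.random.default_rng(0)
    for z in np.sin(np.arange(100) / 10.0) + rng.normal(0, 0.3, 100):
        kf.step(np.atleast_1d(z))
    print(kf.x, kf.R)  # R shrinks toward the true 0.09 variance
    ```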

  17. Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen

    2017-03-01

    We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model. The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we show the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that lie beyond the calibration base: a pumping well's recharge area and groundwater age are predicted by applying the same stress as for the hydrologic model calibration, while head and stream discharge are predicted for a different stress situation. As expected, the predictive capability of a groundwater model is better when it is based on a sharp geophysical model rather than a smooth one. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.

  18. Population variation in isotopic composition of shorebird feathers: Implications for determining molting grounds

    USGS Publications Warehouse

    Torres-Dowdall, J.; Farmer, A.H.; Bucher, E.H.; Rye, R.O.; Landis, G.

    2009-01-01

    Stable isotope analyses have revolutionized the study of migratory connectivity. However, as with all tools, their limitations must be understood in order to derive the maximum benefit of a particular application. The goal of this study was to evaluate the efficacy of stable isotopes of C, N, H, O and S for assigning known-origin feathers to the molting sites of migrant shorebird species wintering and breeding in Argentina. Specific objectives were to: 1) compare the efficacy of the technique for studying shorebird species with different migration patterns, life histories and habitat-use patterns; 2) evaluate the grouping of species with similar migration and habitat use patterns in a single analysis to potentially improve prediction accuracy; and 3) evaluate the potential gains in prediction accuracy that might be achieved from using multiple stable isotopes. The efficacy of stable isotope ratios to determine origin was found to vary with species. While one species (White-rumped Sandpiper, Calidris fuscicollis) had high levels of accuracy assigning samples to known origin (91% of samples correctly assigned), another (Collared Plover, Charadrius collaris) showed low levels of accuracy (52% of samples correctly assigned). Intra-individual variability may account for this difference in efficacy. The prediction model for three species with similar migration and habitat-use patterns performed poorly compared with the model for just one of the species (71% versus 91% of samples correctly assigned). Thus, combining multiple sympatric species may not improve model prediction accuracy. Increasing the number of stable isotopes in the analyses increased the accuracy of assigning shorebirds to their molting origin, but the best combination - involving a subset of all the isotopes analyzed - varied among species.

  19. Improving z-tracking accuracy in the two-photon single-particle tracking microscope.

    PubMed

    Liu, C; Liu, Y-L; Perillo, E P; Jiang, N; Dunn, A K; Yeh, H-C

    2015-10-12

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of the TSUNAMI microscope by 1.7-fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.
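
    The MLE idea can be illustrated compactly: given mean photon counts that depend on the particle's position relative to the multiplexed excitation foci, the estimator picks the position maximizing the Poisson likelihood of the observed counts. The Python sketch below uses a hypothetical simplified four-focus geometry and plain Poisson statistics rather than the full time-correlated distribution used by TSUNAMI; the focus positions, PSF width, and count rate are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical simplified geometry: four time-multiplexed Gaussian
    # excitation foci arranged around the tracking origin (micrometres).
    FOCI = np.array([[ 0.3,  0.0, -0.2],
                     [-0.3,  0.0, -0.2],
                     [ 0.0,  0.3,  0.2],
                     [ 0.0, -0.3,  0.2]])
    WAIST = 0.4   # effective PSF width (um), assumed
    RATE0 = 50.0  # peak mean count per gate, assumed

    def expected_counts(pos):
        """Mean photon count in each gate for a particle at `pos`."""
        d2 = np.sum((FOCI - pos) ** 2, axis=1)
        return RATE0 * np.exp(-d2 / (2 * WAIST ** 2))

    def mle_position(counts, guess=np.zeros(3)):
        """Position maximizing the Poisson likelihood of observed counts."""
        def nll(pos):
            lam = expected_counts(pos)
            return np.sum(lam - counts * np.log(lam + 1e-12))
        return minimize(nll, guess, method="Nelder-Mead").x

    rng = np.random.default_rng(1)
    true_pos = np.array([0.05, -0.02, 0.08])
    obs = rng.poisson(expected_counts(true_pos))
    print(mle_position(obs))  # close to true_pos, shot noise permitting
    ```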

  20. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    In order to take full advantage of SAR images, it is necessary to obtain high-precision geolocation for them. Precise image geolocation is important for ensuring the accuracy of geometric correction and for extracting effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method builds on the analytical geolocation method (AGM) proposed by X. K. Yuan for solving the range-Doppler (RD) model. Tests were conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with positions determined from a high-precision orthophoto indicates that an accuracy of 50 m is attainable with this method. Error sources are analyzed, and recommendations for improving image location accuracy in future spaceborne SARs are given.

  1. Enhancing the Performance of LibSVM Classifier by Kernel F-Score Feature Selection

    NASA Astrophysics Data System (ADS)

    Sarojini, Balakrishnan; Ramaraj, Narayanasamy; Nickolas, Savarimuthu

    Medical data mining is the search for relationships and patterns within medical datasets that could provide useful knowledge for effective clinical decisions. The inclusion of irrelevant, redundant and noisy features in the process model results in poor predictive accuracy. Much research work in data mining has gone into improving the predictive accuracy of classifiers by applying feature selection techniques. Feature selection is valuable in medical data mining because the diagnosis of the disease can then be made in this patient-care activity with a minimum number of significant features. The objective of this work is to show that selecting the more significant features improves the performance of the classifier. We empirically evaluate the classification effectiveness of the LibSVM classifier on the reduced feature subset of a diabetes dataset. The evaluations suggest that the selected feature subset improves the predictive accuracy of the classifier and reduces false negatives and false positives.
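
    The plain (non-kernel) F-score ranks each feature by its between-class separation over its within-class variance; the paper's kernel-space variant builds on the same idea. Below is a minimal Python/scikit-learn sketch using the breast-cancer dataset as a stand-in for the diabetes data; the number of retained features is an arbitrary choice.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def f_score(X, y):
        """Classical per-feature F-score for a binary problem (the
        paper's kernel-space variant is more involved)."""
        pos, neg = X[y == 1], X[y == 0]
        num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
        den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
        return num / den

    X, y = load_breast_cancer(return_X_y=True)  # stand-in for the diabetes data
    top = np.argsort(f_score(X, y))[::-1][:8]   # keep the 8 most discriminative
    full = cross_val_score(SVC(), X, y, cv=5).mean()
    red = cross_val_score(SVC(), X[:, top], y, cv=5).mean()
    print(f"all features: {full:.3f}  selected subset: {red:.3f}")
    ```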

  2. Improving z-tracking accuracy in the two-photon single-particle tracking microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, Y.-L.; Perillo, E. P.

    Here, we present a method that can improve the z-tracking accuracy of the recently invented TSUNAMI (Tracking of Single particles Using Nonlinear And Multiplexed Illumination) microscope. This method utilizes a maximum likelihood estimator (MLE) to determine the particle's 3D position that maximizes the likelihood of the observed time-correlated photon count distribution. Our Monte Carlo simulations show that the MLE-based tracking scheme can improve the z-tracking accuracy of TSUNAMI microscope by 1.7 fold. In addition, MLE is also found to reduce the temporal correlation of the z-tracking error. Taking advantage of the smaller and less temporally correlated z-tracking error, we have precisely recovered the hybridization-melting kinetics of a DNA model system from thousands of short single-particle trajectories in silico. Our method can be generally applied to other 3D single-particle tracking techniques.

  3. Analytical Models of Exoplanetary Atmospheres. IV. Improved Two-stream Radiative Transfer for the Treatment of Aerosols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heng, Kevin; Kitzmann, Daniel, E-mail: kevin.heng@csh.unibe.ch, E-mail: daniel.kitzmann@csh.unibe.ch

    We present a novel generalization of the two-stream method of radiative transfer, which allows for the accurate treatment of radiative transfer in the presence of strong infrared scattering by aerosols. We prove that this generalization involves only a simple modification of the coupling coefficients and transmission functions in the hemispheric two-stream method. This modification originates from allowing the ratio of the first Eddington coefficients to depart from unity. At the heart of the method is the fact that this ratio may be computed once and for all over the entire range of values of the single-scattering albedo and scattering asymmetry factor. We benchmark our improved two-stream method by calculating the fraction of flux reflected by a single atmospheric layer (the reflectivity) and comparing these calculations to those performed using a 32-stream discrete-ordinates method. We further compare our improved two-stream method to the two-stream source function (16 streams) and delta-Eddington methods, demonstrating that it is often more accurate at the order-of-magnitude level. Finally, we illustrate its accuracy using a toy model of the early Martian atmosphere hosting a cloud layer composed of carbon dioxide ice particles. The simplicity of implementation and accuracy of our improved two-stream method renders it suitable for implementation in three-dimensional general circulation models. In other words, our improved two-stream method has the ease of implementation of a standard two-stream method, but the accuracy of a 32-stream method.
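
    For readers wanting the baseline that the paper improves upon, the standard hemispheric two-stream closure gives closed-form reflectivity and transmissivity for a homogeneous layer under diffuse illumination. The Python sketch below uses Meador-Weaver-type coupling coefficients; it implements the conventional method, not the paper's Eddington-ratio modification, and it assumes a single-scattering albedo below unity.

    ```python
    import numpy as np

    def two_stream_layer(tau, omega, g):
        """Reflectivity and transmissivity of a homogeneous layer under
        diffuse illumination, hemispheric two-stream closure. This is
        the conventional baseline; the paper's improvement rescales the
        coupling via the ratio of first Eddington coefficients."""
        g1 = 2.0 - omega * (1.0 + g)        # coupling coefficients
        g2 = omega * (1.0 - g)
        lam = np.sqrt(g1 ** 2 - g2 ** 2)    # requires omega < 1
        e = np.exp(-2.0 * lam * tau)
        denom = (lam + g1) + (lam - g1) * e
        R = g2 * (1.0 - e) / denom
        T = 2.0 * lam * np.exp(-lam * tau) / denom
        return R, T

    # Moderately thick scattering layer: flux split between reflection,
    # transmission, and absorption (R + T < 1 since omega < 1)
    print(two_stream_layer(tau=1.0, omega=0.9, g=0.0))  # ~ (0.40, 0.42)
    ```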

  4. Numerical modeling and model updating for smart laminated structures with viscoelastic damping

    NASA Astrophysics Data System (ADS)

    Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan

    2018-07-01

    This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.

  5. Model parameter estimation approach based on incremental analysis for lithium-ion batteries without using open circuit voltage

    NASA Astrophysics Data System (ADS)

    Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui

    2015-08-01

    To improve the suitability of lithium-ion battery models under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto-regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo-random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are used to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can provide high accuracy and suitability for parameter identification without using open circuit voltage.
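
    The incremental idea can be sketched directly: differencing the terminal voltage and current removes the slowly varying OCV term, and the remaining low-order ARX dynamics can be identified recursively. The Python sketch below uses ordinary recursive least squares with forgetting on synthetic data; the model order, the forgetting factor, and the omission of the paper's bias correction are assumptions.

    ```python
    import numpy as np

    def rls_arx(du, dy, lam=0.995):
        """Recursive least squares for a first-order ARX model fitted to
        incremental (differenced) current du and voltage dy, which
        removes the slowly varying OCV term. Simplified: the paper adds
        a bias correction to the RLS."""
        theta = np.zeros(3)            # [a1, b0, b1]
        P = np.eye(3) * 1e3
        for k in range(1, len(dy)):
            phi = np.array([dy[k - 1], du[k], du[k - 1]])
            e = dy[k] - phi @ theta
            K = P @ phi / (lam + phi @ P @ phi)
            theta += K * e
            P = (P - np.outer(K, phi) @ P) / lam
        return theta

    # Synthetic plant: a1 = 0.9, b0 = 0.05, b1 = 0.03
    rng = np.random.default_rng(0)
    u = rng.standard_normal(2000)              # current excitation
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.05 * u[k] + 0.03 * u[k - 1]
    print(rls_arx(np.diff(u), np.diff(y)))     # ~ [0.9, 0.05, 0.03]
    ```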

  6. The on-orbit calibration of geometric parameters of the Tian-Hui 1 (TH-1) satellite

    NASA Astrophysics Data System (ADS)

    Wang, Jianrong; Wang, Renxiang; Hu, Xin; Su, Zhongbo

    2017-02-01

    The on-orbit calibration of geometric parameters is a key step in improving the location accuracy of satellite images without using Ground Control Points (GCPs). Most on-orbit calibration methods are based on self-calibration with additional parameters. When additional parameters are used, different numbers of additional parameters may lead to different results. Triangulation bundle adjustment is another way to calibrate the geometric parameters of the camera, and it can describe the changes in each geometric parameter. When the triangulation bundle adjustment method is applied to calibrate geometric parameters, a prerequisite is that the strip model can avoid systematic deformation caused by the rate of attitude changes. For the stereo camera, the influence of the intersection angle should be considered during calibration. The Equivalent Frame Photo (EFP) bundle adjustment based on the Line-Matrix CCD (LMCCD) image can resolve the systematic distortion of the strip model and obtain high location accuracy without using GCPs. In this paper, triangulation bundle adjustment is used to calibrate the geometric parameters of the TH-1 satellite cameras based on LMCCD images. During the bundle adjustment, the three-line array cameras are reconstructed by adopting the principle of inverse triangulation. Finally, the geometric accuracy is validated before and after on-orbit calibration using 5 test fields. After on-orbit calibration, the 3D geometric accuracy is improved from 170 m to 11.8 m. The results show that the location accuracy of TH-1 without using GCPs is significantly improved by the on-orbit calibration of the geometric parameters.

  7. AGDRIFT: A MODEL FOR ESTIMATING NEAR-FIELD SPRAY DRIFT FROM AERIAL APPLICATIONS

    EPA Science Inventory

    The aerial spray prediction model AgDRIFT(R) embodies the computational engine found in the near-wake Lagrangian model AGricultural DISPersal (AGDISP) but with several important features added that improve the speed and accuracy of its predictions. This article summarizes those c...

  8. Evaluating ASTER satellite imagery and gradient modeling for mapping and characterizing wildland fire fuels

    Treesearch

    Michael J. Falkowski; Paul Gessler; Penelope Morgan; Alistair M. S. Smith; Andrew T. Hudak

    2004-01-01

    Land managers need cost-effective methods for mapping and characterizing fire fuels quickly and accurately. The advent of sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the Advanced Spaceborne Thermal Emission and Reflection...

  9. Characterizing and mapping forest fire fuels using ASTER imagery and gradient modeling

    Treesearch

    Michael J. Falkowski; Paul E. Gessler; Penelope Morgan; Andrew T. Hudak; Alistair M. S. Smith

    2005-01-01

    Land managers need cost-effective methods for mapping and characterizing forest fuels quickly and accurately. The launch of satellite sensors with increased spatial resolution may improve the accuracy and reduce the cost of fuels mapping. The objective of this research is to evaluate the accuracy and utility of imagery from the advanced spaceborne thermal emission and...

  10. Improved P300 speller performance using electrocorticography, spectral features, and natural language processing.

    PubMed

    Speier, William; Fried, Itzhak; Pouratian, Nader

    2013-07-01

    The P300 speller is a system designed to restore communication to patients with advanced neuromuscular disorders. This study was designed to explore the potential improvement from using electrocorticography (ECoG) compared to the more traditional usage of electroencephalography (EEG). We tested the P300 speller on two epilepsy patients with temporary subdural electrode arrays over the occipital and temporal lobes, respectively. We then performed offline analysis to determine the accuracy and bit rate of the system, integrated spectral features into the classifier, and used a natural language processing (NLP) algorithm to further improve the results. The subject with the occipital grid achieved an accuracy of 82.77% and a bit rate of 41.02, which improved to 96.31% and 49.47, respectively, using a language model and spectral features. The temporal grid patient achieved an accuracy of 59.03% and a bit rate of 18.26, which improved to 75.81% and 27.05, respectively, using a language model and spectral features. Spatial analysis of the individual electrodes showed the best performance using signals generated and recorded near the occipital pole. Using ECoG and integrating language information and spectral features can improve the bit rate of a P300 speller system. This improvement is sensitive to the electrode placement and likely depends on visually evoked potentials. This study shows that there can be an improvement in BCI performance when using ECoG, but that it is sensitive to the electrode location. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  11. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    NASA Astrophysics Data System (ADS)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for a stereo deflectometry system to improve the system measurement accuracy. The reconstruction result of stereo deflectometry is obtained by integrating the calculated normal data of the measured surface. The calculation accuracy of the normal data is seriously influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccurate imaging models and incomplete distortion elimination. The proposed calibration method compensates for system distortion based on an iterative algorithm instead of the conventional distortion mathematical model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen via reflection from a markerless flat mirror. An iterative algorithm is proposed to compensate for system distortion and to optimize the camera imaging parameters and system geometrical relation parameters based on a cost function. Both simulation work and experimental results show the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak value) of the measurement error of a flat mirror is reduced from 282 nm with the conventional calibration approach to 69.7 nm with the proposed method.

  12. Multispectrum analysis of the oxygen A-band.

    PubMed

    Drouin, Brian J; Benner, D Chris; Brown, Linda R; Cich, Matthew J; Crawford, Timothy J; Devi, V Malathy; Guillaume, Alexander; Hodges, Joseph T; Mlawer, Eli J; Robichaud, David J; Oyafuso, Fabiano; Payne, Vivienne H; Sung, Keeyoon; Wishnow, Edward H; Yu, Shanshan

    2017-01-01

    Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist; however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) provide complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. The extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.
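
    The Rautian and speed-dependent Voigt models with line mixing are too involved for a short example, but the ordinary Voigt profile they generalize is a one-liner with the Faddeeva function. The Python sketch below is that baseline only; the line position and widths are illustrative values, not fitted A-band parameters.

    ```python
    import numpy as np
    from scipy.special import wofz
    from scipy.integrate import trapezoid

    def voigt(nu, nu0, sigma_d, gamma_l):
        """Area-normalized Voigt profile: Gaussian (Doppler) standard
        deviation sigma_d convolved with Lorentzian (pressure) HWHM
        gamma_l, evaluated via the Faddeeva function w(z)."""
        z = ((nu - nu0) + 1j * gamma_l) / (sigma_d * np.sqrt(2.0))
        return wofz(z).real / (sigma_d * np.sqrt(2.0 * np.pi))

    nu = np.linspace(13120.0, 13170.0, 5001)   # wavenumber grid (cm^-1), A-band region
    prof = voigt(nu, 13145.0, 0.02, 0.05)      # illustrative widths, not fitted values
    print(trapezoid(prof, nu))                 # ~1.0, confirming normalization
    ```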

  13. Multispectrum analysis of the oxygen A-band

    PubMed Central

    Drouin, Brian J.; Benner, D. Chris; Brown, Linda R.; Cich, Matthew J.; Crawford, Timothy J.; Devi, V. Malathy; Guillaume, Alexander; Hodges, Joseph T.; Mlawer, Eli J.; Robichaud, David J.; Oyafuso, Fabiano; Payne, Vivienne H.; Sung, Keeyoon; Wishnow, Edward H.; Yu, Shanshan

    2016-01-01

    Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist; however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) provide complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. The extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database. PMID:27840454

  14. Multispectrum analysis of the oxygen A-band

    DOE PAGES

    Drouin, Brian J.; Benner, D. Chris; Brown, Linda R.; ...

    2016-04-11

    Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist; however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) provide complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. As a result, the extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.

  15. Multispectrum analysis of the oxygen A-band

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouin, Brian J.; Benner, D. Chris; Brown, Linda R.

    Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist; however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) provide complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. As a result, the extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.

  16. Postmortem time estimation using body temperature and a finite-element computer model.

    PubMed

    den Hartog, Emiel A; Lotens, Wouter A

    2004-09-01

    In the Netherlands most murder victims are found 2-24 h after the crime. During this period, body temperature decrease is the most reliable method to estimate the postmortem time (PMT). Recently, two murder cases were analysed in which currently available methods did not provide a sufficiently reliable estimate of the PMT. In both cases a study was performed to verify the statements of suspects. For this purpose a finite-element computer model was developed that simulates a human torso and its clothing. With this model, changes to the body and the environment can also be modelled; this was very relevant in one of the cases, as the body had been in the presence of a small fire. In both cases it was possible to falsify the statements of the suspects by improving the accuracy of the PMT estimate. The estimated PMT in both cases was within the range of Henssge's model. The standard deviation of the PMT estimate was 35 min in the first case and 45 min in the second case, compared to 168 min (2.8 h) in Henssge's model. In conclusion, the model as presented here can have additional value for improving the accuracy of the PMT estimate. In contrast to the simple model of Henssge, the current model allows for increased accuracy when more detailed information is available. Moreover, the sensitivity of the predicted PMT for uncertainty in the circumstances can be studied, which is crucial to the confidence of the judge in the results.
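
    For contrast with the finite-element approach, the conventional baseline it improves on can be sketched: Henssge's double-exponential cooling model, inverted numerically for the PMT. The Python sketch below uses the textbook form for a naked body in still air at ambient temperatures up to about 23 °C, with constants quoted from memory of the widely cited formula; correction factors for clothing, wind, or a nearby fire, which is precisely what the finite-element model handles, are omitted.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    def henssge_pmt(t_rectal, t_ambient, mass_kg, t0=37.2):
        """Invert Henssge's double-exponential cooling model for the
        postmortem time in hours. Textbook constants (assumed here) for
        a naked body in still air, ambient <= 23 C; the paper's FE
        model replaces this with a full torso simulation."""
        A = 1.25
        B = -1.2815 * mass_kg ** -0.625 + 0.0284
        q_obs = (t_rectal - t_ambient) / (t0 - t_ambient)

        def residual(t):
            return A * np.exp(B * t) + (1 - A) * np.exp(A / (A - 1) * B * t) - q_obs

        return brentq(residual, 0.01, 48.0)

    # Example: rectal 30.0 C, ambient 18.0 C, 75 kg body -> roughly 12 h
    print(f"PMT ~ {henssge_pmt(30.0, 18.0, 75.0):.1f} h")
    ```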

  17. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce.

    PubMed

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we propose a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform. The PSO algorithm was used to optimize the BP neural network's initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data.
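
    The PSO-then-backprop idea, minus the MapReduce parallelism, fits in a short sketch: particles fly through the network's flattened weight vector, and the best particle becomes the initial weights that gradient descent would then refine. The Python sketch below trains a tiny network on XOR as a stand-in task; the network size, swarm hyperparameters, and stopping rule are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny 2-4-1 network; PSO searches its flat 17-element weight vector
    # for a good starting point, which backprop would then refine.
    def unpack(w):
        return w[:8].reshape(2, 4), w[8:12], w[12:16].reshape(4, 1), w[16:]

    def forward(w, X):
        W1, b1, W2, b2 = unpack(w)
        h = np.tanh(X @ W1 + b1)
        return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

    def loss(w, X, y):
        return np.mean((forward(w, X).ravel() - y) ** 2)

    # XOR as a stand-in classification task
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    y = np.array([0, 1, 1, 0], float)

    n_part, dim = 30, 17
    pos = rng.normal(0, 1, (n_part, dim))
    vel = np.zeros((n_part, dim))
    pbest = pos.copy()
    pbest_f = np.array([loss(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(200):
        r1, r2 = rng.random((2, n_part, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([loss(p, X, y) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    print("PSO-found initial loss:", loss(gbest, X, y))  # backprop starts here
    ```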

  18. Prediction of stock markets by the evolutionary mix-game model

    NASA Astrophysics Data System (ADS)

    Chen, Fang; Gou, Chengling; Guo, Xiaoqian; Gao, Jieping

    2008-06-01

    This paper presents the use of the evolutionary mix-game model, a modified form of the agent-based mix-game model, to predict financial time series. We make three modifications to the original mix-game model by adding strategy-evolution abilities to the agents, and then apply the new model, referred to as the evolutionary mix-game model, to forecast the Shanghai Stock Exchange Composite Index. The results show that these modifications can greatly improve the accuracy of prediction when proper parameters are chosen.

  19. High altitude atmospheric modeling

    NASA Technical Reports Server (NTRS)

    Hedin, Alan E.

    1988-01-01

    Five empirical models were compared with 13 data sets, including both atmospheric drag-based data and mass spectrometer data. The most recently published model, MSIS-86, was found to be the best model overall with an accuracy around 15 percent. The excellent overall agreement of the mass spectrometer-based MSIS models with the drag data, including both the older data from orbital decay and the newer accelerometer data, suggests that the absolute calibration of the (ensemble of) mass spectrometers and the assumed drag coefficient in the atomic oxygen regime are consistent to 5 percent. This study illustrates a number of reasons for the current accuracy limit such as calibration accuracy and unmodeled trends. Nevertheless, the largest variations in total density in the thermosphere are accounted for, to a very high degree, by existing models. The greatest potential for improvements is in areas where we still have insufficient data (like the lower thermosphere or exosphere), where there are disagreements in technique (such as the exosphere) which can be resolved, or wherever generally more accurate measurements become available.

  20. Predictive accuracy of particle filtering in dynamic models supporting outbreak projections.

    PubMed

    Safarishahrbijari, Anahita; Teyhouee, Aydin; Waldner, Cheryl; Liu, Juxin; Osgood, Nathaniel D

    2017-09-26

    While a new generation of computational statistics algorithms and availability of data streams raises the potential for recurrently regrounding dynamic models with incoming observations, the effectiveness of such arrangements can be highly subject to specifics of the configuration (e.g., frequency of sampling and representation of behaviour change), and there has been little attempt to identify effective configurations. Combining dynamic models with particle filtering, we explored a solution focusing on creating quickly formulated models regrounded automatically and recurrently as new data becomes available. Given a latent underlying case count, we assumed that observed incident case counts followed a negative binomial distribution. In accordance with the condensation algorithm, each such observation led to updating of particle weights. We evaluated the effectiveness of various particle filtering configurations against each other and against an approach without particle filtering according to the accuracy of the model in predicting future prevalence, given data to a certain point and a norm-based discrepancy metric. We examined the effectiveness of particle filtering under varying times between observations, negative binomial dispersion parameters, and rates with which the contact rate could evolve. We observed that more frequent observations of empirical data yielded super-linearly improved accuracy in model predictions. We further found that for the data studied here, the most favourable assumptions to make regarding the parameters associated with the negative binomial distribution and changes in contact rate were robust across observation frequency and the observation point in the outbreak. Combining dynamic models with particle filtering can perform well in projecting future evolution of an outbreak. Most importantly, the remarkable improvements in predictive accuracy resulting from more frequent sampling suggest that investments to achieve efficient reporting mechanisms may be more than paid back by improved planning capacity. The robustness of the results on particle filter configuration in this case study suggests that it may be possible to formulate effective standard guidelines and regularized approaches for such techniques in particular epidemiological contexts. Most importantly, the work tentatively suggests potential for health decision makers to secure strong guidance when anticipating outbreak evolution for emerging infectious diseases by combining even very rough models with the particle filtering method.
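
    The condensation-style reweighting described above is compact in code: each particle carries a latent epidemic state and contact rate, and observed case counts reweight particles through a negative binomial likelihood. The Python sketch below couples the filter to a deterministic SIR step; the model structure, dispersion value, and contact-rate perturbation scheme are assumptions for illustration, not the paper's configuration.

    ```python
    import numpy as np
    from scipy.stats import nbinom

    rng = np.random.default_rng(42)

    def sir_step(state, betas, gamma=0.25, N=1e5):
        """Advance each particle's deterministic SIR state by one day."""
        S, I = state[:, 0], state[:, 1]
        new_inf = betas * S * I / N          # latent daily incidence
        return np.column_stack([S - new_inf, I + new_inf - gamma * I]), new_inf

    def pf_update(weights, new_inf, obs, r=10.0):
        """Condensation-style reweighting: observed counts are negative
        binomial about each particle's latent incidence (dispersion r)."""
        lam = np.maximum(new_inf, 1e-6)
        w = weights * nbinom.pmf(obs, r, r / (r + lam))
        return w / w.sum()

    n = 500
    state = np.column_stack([np.full(n, 1e5 - 50.0), np.full(n, 50.0)])
    betas = rng.uniform(0.3, 0.8, n)          # contact-rate particles
    observations = [12, 18, 25, 40, 55, 70]   # illustrative daily case counts
    for obs in observations:
        state, new_inf = sir_step(state, betas)
        w = pf_update(np.full(n, 1.0 / n), new_inf, obs)
        idx = rng.choice(n, size=n, p=w)       # resample by weight
        state = state[idx]
        betas = betas[idx] * rng.lognormal(0, 0.05, n)  # let beta evolve
    print("posterior mean contact rate:", betas.mean())
    ```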

  1. Improved accuracy for finite element structural analysis via an integrated force method

    NASA Technical Reports Server (NTRS)

    Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.

    1992-01-01

    A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which place simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.

  2. Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy

    PubMed Central

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-01-01

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715

  3. The use of ambient humidity conditions to improve influenza forecast.

    PubMed

    Shaman, Jeffrey; Kandula, Sasikiran; Yang, Wan; Karspeck, Alicia

    2017-11-01

    Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance (at 1-4 lead weeks, 3.8% more peak week and 4.4% more peak intensity forecasts are accurate than with no forcing) and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing (4.4% and 2.6%, respectively). These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that use of climatological humidity forcing is warranted for current operational influenza forecasting.
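
    A commonly used functional form for humidity forcing in influenza models, and plausibly the one used here, modulates the basic reproduction number exponentially in specific humidity. The Python sketch below embeds that forcing in a daily-step SIRS model; all parameter values are illustrative assumptions.

    ```python
    import numpy as np

    def humidity_beta(q, R0_max=2.2, R0_min=1.2, D=4.0, a=180.0):
        """Daily transmission rate under exponential humidity forcing:
        R0(q) = R0_min + (R0_max - R0_min) * exp(-a q), divided by the
        mean infectious period D. Parameter values are illustrative."""
        return (R0_min + (R0_max - R0_min) * np.exp(-a * q)) / D

    def sirs_step(S, I, q, N=1e6, L=1460.0, D=4.0):
        """One daily Euler step of an SIRS model (L = immunity, days)."""
        new_inf = humidity_beta(q) * S * I / N
        dS = (N - S - I) / L - new_inf
        dI = new_inf - I / D
        return S + dS, I + dI

    # Dry winter air drives faster epidemic growth than humid summer air
    for q, season in [(0.003, "winter"), (0.015, "summer")]:
        S, I = 9.0e5, 1.0e3
        for _ in range(30):
            S, I = sirs_step(S, I, q)
        print(f"{season}: R0 = {humidity_beta(q) * 4.0:.2f}, "
              f"I after 30 d = {I:.0f}")
    ```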

  4. Multi-sensor fusion with interacting multiple model filter for improved aircraft position accuracy.

    PubMed

    Cho, Taehwan; Lee, Changho; Choi, Sangbang

    2013-03-27

    The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter.

  5. The use of ambient humidity conditions to improve influenza forecast

    PubMed Central

    Kandula, Sasikiran; Karspeck, Alicia

    2017-01-01

    Laboratory and epidemiological evidence indicate that ambient humidity modulates the survival and transmission of influenza. Here we explore whether the inclusion of humidity forcing in mathematical models describing influenza transmission improves the accuracy of forecasts generated with those models. We generate retrospective forecasts for 95 cities over 10 seasons in the United States and assess both forecast accuracy and error. Overall, we find that humidity forcing improves forecast performance (at 1–4 lead weeks, 3.8% more peak week and 4.4% more peak intensity forecasts are accurate than with no forcing) and that forecasts generated using daily climatological humidity forcing generally outperform forecasts that utilize daily observed humidity forcing (4.4% and 2.6%, respectively). These findings hold for predictions of outbreak peak intensity, peak timing, and incidence over 2- and 4-week horizons. The results indicate that use of climatological humidity forcing is warranted for current operational influenza forecasting. PMID:29145389

  6. Autonomous Navigation Improvements for High-Earth Orbiters Using GPS

    NASA Technical Reports Server (NTRS)

    Long, Anne; Kelbel, David; Lee, Taesul; Garrison, James; Carpenter, J. Russell; Bauer, F. (Technical Monitor)

    2000-01-01

    The Goddard Space Flight Center is currently developing autonomous navigation systems for satellites in high-Earth orbits where acquisition of the GPS signals is severely limited. This paper discusses autonomous navigation improvements for high-Earth orbiters and assesses projected navigation performance for these satellites using Global Positioning System (GPS) Standard Positioning Service (SPS) measurements. Navigation performance is evaluated as a function of signal acquisition threshold, measurement errors, and dynamic modeling errors using realistic GPS signal strength and user antenna models. These analyses indicate that an autonomous navigation position accuracy of better than 30 meters root-mean-square (RMS) can be achieved for high-Earth orbiting satellites using a GPS receiver with a very stable oscillator. This accuracy improves to better than 15 meters RMS if the GPS receiver's signal acquisition threshold can be reduced by 5 dB-Hertz to track weaker signals.

  7. Dynamic sea surface topography, gravity and improved orbit accuracies from the direct evaluation of SEASAT altimeter data

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Lerch, F.; Koblinsky, C. J.; Klosko, S. M.; Robbins, J. W.; Williamson, R. G.; Patel, G. B.

    1989-01-01

    A method for the simultaneous solution of dynamic ocean topography, gravity and orbits using satellite altimeter data is described. A GEM-T1 based gravitational model called PGS-3337, which incorporates Seasat altimetry, surface gravimetry and satellite tracking data, has been determined complete to degree and order 50. The altimeter data are utilized as a dynamic observation of the satellite's height above the sea surface, with a degree-10 model of dynamic topography being recovered simultaneously with the orbit parameters, gravity and tidal terms in this model. PGS-3337 has a geoid uncertainty of 60 cm root-mean-square (RMS) globally, with the uncertainty over the altimeter-tracked ocean being in the 25 cm range. Doppler-determined orbits for Seasat show large improvements, with sub-30 cm radial accuracies being achieved. When altimeter data are used in orbit determination, radial orbital accuracies of 20 cm are achieved. The RMS of fit to the altimeter data directly gives 30 cm fits for Seasat when using PGS-3337 and its geoid and dynamic topography model. This performance level is two to three times better than that achieved with earlier Goddard earth models (GEM) using the dynamic topography from long-term oceanographic averages. The recovered dynamic topography reveals the global long-wavelength circulation of the oceans with a resolution of 1500 km. The power in the dynamic topography recovery is now found to be closer to that of oceanographic studies than in previous satellite solutions. This is attributed primarily to the improved modeling of the geoid. Study of the altimeter residuals reveals regions where tidal models are poor and where sea state effects are major limitations.

  8. Accurate modeling of switched reluctance machine based on hybrid trained WNN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Shoujun, E-mail: sunnyway@nwpu.edu.cn; Ge, Lefei; Ma, Shaojie

    2014-04-15

    According to the strong nonlinear electromagnetic characteristics of switched reluctance machine (SRM), a novel accurate modeling method is proposed based on hybrid trained wavelet neural network (WNN) which combines improved genetic algorithm (GA) with gradient descent (GD) method to train the network. In the novel method, WNN is trained by GD method based on the initial weights obtained per improved GA optimization, and the global parallel searching capability of stochastic algorithm and local convergence speed of deterministic algorithm are combined to enhance the training accuracy, stability and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built by hybrid trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions meet well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRM and verifies the effectiveness of the proposed modeling method.

  9. Modeling and Assessment of GPS/BDS Combined Precise Point Positioning.

    PubMed

    Chen, Junping; Wang, Jungang; Zhang, Yize; Yang, Sainan; Chen, Qian; Gong, Xiuqiang

    2016-07-22

    The Precise Point Positioning (PPP) technique enables stand-alone receivers to obtain cm-level positioning accuracy. Observations from multiple GNSS systems can provide users with improved positioning accuracy, reliability and availability. In this paper, we present and evaluate combined GPS/BDS PPP models, including the traditional model and a simplified model, where the inter-system bias (ISB) is treated in different ways. To evaluate the performance of combined GPS/BDS PPP, kinematic and static PPP positions are compared to the IGS daily estimates, where one month of GPS/BDS data from 11 IGS Multi-GNSS Experiment (MGEX) stations is used. The results indicate apparent improvement of GPS/BDS combined PPP solutions in both static and kinematic cases, with much smaller standard deviations in the distribution of coordinate RMS statistics. Comparisons between the traditional and simplified combined PPP models show no difference in coordinate estimates; the inter-system biases between GPS and BDS are absorbed into the receiver clock, ambiguities and pseudo-range residuals accordingly.
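
    The difference between the two parameterizations is easiest to see in the observation model: the traditional model carries one receiver clock plus an ISB applied to BDS pseudoranges. The Python sketch below is a toy single-epoch, pseudorange-only Gauss-Newton solution on synthetic data; real PPP adds carrier phases, precise products, and many corrections, so this is purely a parameterization illustration, and the geometry and values are invented.

    ```python
    import numpy as np

    def ppp_epoch(sat_pos, sat_sys, prange, x0):
        """Toy single-epoch Gauss-Newton solution of the 'traditional'
        combined model: unknowns are position, a GPS receiver clock,
        and one ISB added to BDS ('C') pseudoranges. Units: metres."""
        x = np.array([*x0, 0.0, 0.0], dtype=float)    # [x, y, z, clk, isb]
        is_bds = (sat_sys == "C").astype(float)
        for _ in range(8):
            rho = np.linalg.norm(sat_pos - x[:3], axis=1)
            pred = rho + x[3] + is_bds * x[4]
            H = np.column_stack([(x[:3] - sat_pos) / rho[:, None],
                                 np.ones(len(rho)), is_bds])
            dx, *_ = np.linalg.lstsq(H, prange - pred, rcond=None)
            x += dx
        return x

    rng = np.random.default_rng(3)
    truth = np.array([4.0e6, 3.0e6, 3.5e6])
    sat_pos = rng.normal(0.0, 2.6e7, (8, 3))      # crude satellite geometry
    sat_sys = np.array(list("GGGGCCCC"))          # 4 GPS + 4 BDS satellites
    rho = np.linalg.norm(sat_pos - truth, axis=1)
    prange = rho + 1.0e5 + (sat_sys == "C") * 30.0 + rng.normal(0, 0.5, 8)
    print(ppp_epoch(sat_pos, sat_sys, prange, truth + 1.0e4))  # clk~1e5, isb~30
    ```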

  10. Accuracy assessment of surgical planning and three-dimensional-printed patient-specific guides for orthopaedic osteotomies.

    PubMed

    Sys, Gwen; Eykens, Hannelore; Lenaerts, Gerlinde; Shumelinsky, Felix; Robbrecht, Cedric; Poffyn, Bart

    2017-06-01

    This study analyses the accuracy of three-dimensional pre-operative planning and patient-specific guides for orthopaedic osteotomies. To this end, patient-specific guides were compared to the classical freehand method in an experimental setup with saw bones in two phases. In the first phase, the effect of guide design and oscillating versus reciprocating saws was analysed. The difference between target and performed cuts was quantified by the average distance deviation and average angular deviations in the sagittal and coronal planes for the different osteotomies. The results indicated that for one model osteotomy, the use of guides resulted in a more accurate cut when compared to the freehand technique. Reciprocating saws and slot guides improved accuracy in all planes, while oscillating saws and open guides led to larger deviations from the planned cut. In the second phase, the accuracy of transfer of the planning to the surgical field with slot guides and a reciprocating saw was assessed and compared to the classical planning and freehand cutting method. The pre-operative plan was transferred with high accuracy. Three-dimensional-printed patient-specific guides improve the accuracy of osteotomies and bony resections in an experimental setup compared to conventional freehand methods. The improved accuracy is related to (1) a detailed and qualitative pre-operative plan and (2) an accurate transfer of the planning to the operation room with patient-specific guides by an accurate guidance of the surgical tools to perform the desired cuts.

  11. Parametric diagnosis of the adaptive gas path in the automatic control system of the aircraft engine

    NASA Astrophysics Data System (ADS)

    Kuznetsova, T. A.

    2017-01-01

    This paper presents an adaptive multimode mathematical model of the gas-turbine aircraft engine (GTE) embedded in the automatic control system (ACS). The mathematical model is based on the throttle performances and is characterized by highly accurate identification of engine parameters in stationary and dynamic modes. The proposed on-board engine model is a linearized state-space low-level simulation. Engine health is identified through the influence coefficient matrix, which is determined by the high-level GTE mathematical model based on measurements of gas-dynamic parameters. In the automatic control algorithm, the sum of squares of the deviations between the parameters of the mathematical model and the real GTE is minimized. The proposed mathematical model is effectively used for detecting gas path defects in on-line GTE health monitoring. The accuracy of the on-board mathematical model embedded in the ACS determines the quality of adaptive control and the reliability of the engine. To improve identification accuracy and ensure robustness, the Monte Carlo numerical method was used. A parametric diagnostic algorithm based on the LPτ-sequence was developed and tested. Analysis of the results suggests that the developed algorithms achieve higher identification accuracy and reliability than similar models used in practice.

  12. Microwave Resonator Measurements of Atmospheric Absorption Coefficients: A Preliminary Design Study

    NASA Technical Reports Server (NTRS)

    Walter, Steven J.; Spilker, Thomas R.

    1995-01-01

    A preliminary design study examined the feasibility of using microwave resonator measurements to improve the accuracy of atmospheric absorption coefficients and refractivity between 18 and 35 GHz. Increased accuracies would improve the capability of water vapor radiometers to correct for radio signal delays caused by Earth's atmosphere. Calibration of delays incurred by radio signals traversing the atmosphere has applications to both deep space tracking and planetary radio science experiments. Currently, the Cassini gravity wave search requires 0.8-1.0% absorption coefficient accuracy. This study examined current atmospheric absorption models and estimated that current model accuracy ranges from 5% to 7%. The refractivity of water vapor is known to 1% accuracy, while the refractivity of many dry gases (oxygen, nitrogen, etc.) are known to better than 0.1%. Improvements to the current generation of models will require that both the functional form and absolute absorption of the water vapor spectrum be calibrated and validated. Several laboratory techniques for measuring atmospheric absorption and refractivity were investigated, including absorption cells, single and multimode rectangular cavity resonators, and Fabry-Perot resonators. Semi-confocal Fabry-Perot resonators were shown to provide the most cost-effective and accurate method of measuring atmospheric gas refractivity. The need for accurate environmental measurement and control was also addressed. A preliminary design for the environmental control and measurement system was developed to aid in identifying significant design issues. The analysis indicated that overall measurement accuracy will be limited by measurement errors and imprecise control of the gas sample's thermodynamic state, thermal expansion and vibration- induced deformation of the resonator structure, and electronic measurement error. The central problem is to identify systematic errors because random errors can be reduced by averaging. Calibrating the resonator measurements by checking the refractivity of dry gases which are known to better than 0.1% provides a method of controlling the systematic errors to 0.1%. The primary source of error in absorptivity and refractivity measurements is thus the ability to measure the concentration of water vapor in the resonator path. Over the whole thermodynamic range of interest the accuracy of water vapor measurement is 1.5%. However, over the range responsible for most of the radio delay (i.e. conditions in the bottom two kilometers of the atmosphere) the accuracy of water vapor measurements ranges from 0.5% to 1.0%. Therefore the precision of the resonator measurements could be held to 0.3% and the overall absolute accuracy of resonator-based absorption and refractivity measurements will range from 0.6% to 1.

  13. The novel application of artificial neural network on bioelectrical impedance analysis to assess the body composition in elderly

    PubMed Central

    2013-01-01

    Background This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat-free mass (FFM) of the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model, and to compare the predictive accuracy with a linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to FFMDXA. Measurements of an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The significant predictors were impedance, gender, age, height, and weight in the developed FFMLR linear model (LR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The same predictors were set as the variables of the input layer, using five neurons, in the BP-ANN model (r2 = 0.987 with SD = 1.192 kg and a lower RMSE = 1.183 kg), which showed greater accuracy for estimating FFM than the linear model. Better agreement existed between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When comparing the performance of the developed prediction equations for estimating the reference FFMDXA, the linear model has a lower r2 and a larger SD in predictive results than the BP-ANN model, indicating that the ANN model is more suitable for estimating FFM. PMID:23388042
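    For illustration only, a minimal sketch of the LR-versus-ANN comparison on synthetic data (the five predictors are taken from the abstract; the architecture, sample split, and all values are assumptions):

```python
# Illustrative sketch (synthetic data, hypothetical values): an ANN with the
# five predictors named in the abstract (impedance, gender, age, height,
# weight) regressed against a synthetic FFM target, versus a linear baseline.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 88
X = np.column_stack([
    rng.normal(500, 60, n),      # impedance (ohm)
    rng.integers(0, 2, n),       # gender (0/1)
    rng.normal(72, 6, n),        # age (years)
    rng.normal(160, 8, n),       # height (cm)
    rng.normal(62, 10, n),       # weight (kg)
])
# Synthetic stand-in for FFM measured by DXA
y = 0.04 * X[:, 3] + 0.5 * X[:, 4] - 0.02 * X[:, 0] + rng.normal(0, 1, n)

# 62 development subjects, 26 validation subjects, mirroring the study design
linear = LinearRegression().fit(X[:62], y[:62])
ann = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000,
                   random_state=0).fit(X[:62], y[:62])
for name, model in [("LR", linear), ("ANN", ann)]:
    rmse = np.sqrt(np.mean((model.predict(X[62:]) - y[62:]) ** 2))
    print(name, "validation RMSE:", round(rmse, 3))
```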

  14. Advanced Computational Methods for High-accuracy Refinement of Protein Low-quality Models

    NASA Astrophysics Data System (ADS)

    Zang, Tianwu

    Predicting the three-dimensional structure of proteins has been a major interest in modern computational biology. While many successful methods can generate models within 3-5 Å root-mean-square deviation (RMSD) of the solution, progress in refining these models has been slow. Effective methods are therefore urgently needed to bring low-quality models into higher-accuracy ranges (e.g., less than 2 Å RMSD). In this thesis, I present several novel computational methods to address the high-accuracy refinement problem. First, an enhanced sampling method, named parallel continuous simulated tempering (PCST), is developed to accelerate molecular dynamics (MD) simulation. Second, two energy biasing methods, the Structure-Based Model (SBM) and the Ensemble-Based Model (EBM), are introduced to perform targeted sampling around important conformations. Third, a three-step method is developed to blindly select high-quality models along the MD simulation. These methods work together to achieve significant refinement of low-quality models without any knowledge of the solution. Their effectiveness is examined in different applications. Using the PCST-SBM method, models with higher global distance test scores (GDT_TS) are generated and selected in MD simulations of 18 targets from the refinement category of the 10th Critical Assessment of Structure Prediction (CASP10). In addition, refinement tests of two CASP10 targets using the PCST-EBM method indicate that EBM may bring the initial model to even higher quality levels. Furthermore, a multi-round PCST-SBM refinement protocol improves the model quality of a protein to a level sufficiently high for molecular replacement in X-ray crystallography. Our results confirm the crucial role of enhanced sampling in protein structure prediction and demonstrate that considerable improvement of low-accuracy structures is still achievable with current force fields.
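    Simulated tempering underlies the PCST sampler. As a generic illustration (the textbook acceptance rule, not the authors' PCST implementation), a proposed temperature move can be accepted as follows:

```python
# Generic simulated-tempering temperature move (textbook form, not the
# authors' PCST implementation): accept a proposed inverse temperature
# beta_new with the Metropolis criterion using per-temperature weights g.
import math
import random

def accept_temperature_move(energy, beta_old, beta_new, g_old, g_new):
    # g_old, g_new are the free-energy weights that keep temperature
    # visitation roughly uniform; they must be estimated in practice.
    log_ratio = (beta_old - beta_new) * energy + (g_new - g_old)
    return math.log(random.random()) < min(0.0, log_ratio)

# Example: a high-energy configuration is more likely to be accepted
# into a hotter ensemble (smaller beta).
print(accept_temperature_move(energy=120.0, beta_old=1.0,
                              beta_new=0.8, g_old=0.0, g_new=20.0))
```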

  15. Comparing conventional and computer-assisted surgery baseplate and screw placement in reverse shoulder arthroplasty.

    PubMed

    Venne, Gabriel; Rasquinha, Brian J; Pichora, David; Ellis, Randy E; Bicknell, Ryan

    2015-07-01

    Preoperative planning and intraoperative navigation technologies have each been shown separately to be beneficial for optimizing screw and baseplate positioning in reverse shoulder arthroplasty (RSA) but to date have not been combined. This study describes the development of a system for performing computer-assisted RSA glenoid baseplate and screw placement, including preoperative planning, intraoperative navigation, and postoperative evaluation, and compares this system with a conventional approach. We used a custom-designed system allowing computed tomography (CT)-based preoperative planning, intraoperative navigation, and postoperative evaluation. Five orthopedic surgeons defined common preoperative plans on 3-dimensional CT-reconstructed cadaveric shoulders. Each surgeon performed 3 computer-assisted and 3 conventional simulated procedures. The 3-dimensional CT-reconstructed postoperative units were digitally matched to the preoperative model for evaluation of entry points, end points, and angulations of screws and baseplate. These values were used to determine the accuracy and precision of the 2 groups with respect to the defined placement. Statistical analysis was performed with t tests (α = .05). Comparison of the groups revealed no difference in accuracy or precision of screw or baseplate entry points (P > .05). Accuracy and precision were improved with use of navigation for end points and angulations of 3 screws (P < .05). Accuracy of the inferior screw showed a trend of improvement with navigation (P > .05). Navigated baseplate end point precision was improved (P < .05), with a trend toward improved accuracy (P > .05). We conclude that CT-based preoperative planning and intraoperative navigation allow improved accuracy and precision for screw placement and improved precision for baseplate positioning with respect to a predefined placement compared with conventional techniques in RSA. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  16. New Global Calculation of Nuclear Masses and Fission Barriers for Astrophysical Applications

    NASA Astrophysics Data System (ADS)

    Möller, P.; Sierk, A. J.; Bengtsson, R.; Ichikawa, T.; Iwamoto, A.

    2008-05-01

    The FRDM(1992) mass model [1] has an accuracy of 0.669 MeV in the region where its parameters were determined. For the 529 masses that have been measured since, its accuracy is 0.46 MeV, which is encouraging for applications far from stability in astrophysics. We are developing an improved mass model, the FRDM(2008). The improvements with respect to the FRDM(1992) are in two main areas. (1) The macroscopic model parameters are better optimized. By simulation (adjusting to a limited set of now-known nuclei) we can show that this actually makes the results more reliable in new regions of nuclei. (2) The ground-state deformation parameters are more accurately calculated. We minimize the energy in a four-dimensional deformation space (ε2, ε3, ε4, ε6) using a grid interval of 0.01 in all four deformation variables. The (non-finalized) FRDM(2008-a) has an accuracy of 0.596 MeV with respect to the 2003 Audi mass evaluation before triaxial shape degrees of freedom are included (in progress). When triaxiality effects are incorporated, preliminary results indicate that the model accuracy will improve further, to about 0.586 MeV. We also discuss very large-scale fission-barrier calculations in the related FRLDM(2002) model, which has been shown to reproduce known fission properties very satisfactorily, for example barrier heights from 70Se to the heaviest elements, multiple fission modes in the Ra region, asymmetry of mass division in fission, and the triple-humped structure found in light actinides. In the superheavy region we find barriers consistent with the observed half-lives. We have completed production calculations and obtain barrier heights for 5254 nuclei heavier than A = 170, covering all nuclei between the proton and neutron drip lines. The energy is calculated for 5,009,325 different shapes for each nucleus, and the optimum barrier between the ground state and separated fragments is determined by use of an “immersion” technique.
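    The “immersion” idea can be pictured as flooding the energy landscape: the barrier is the lowest level at which the ground-state minimum connects to the separated-fragment region. A toy sketch on a hypothetical 1-D energy curve (the production calculation works in a multi-dimensional shape space):

```python
# Sketch of an "immersion" (flooding) barrier search on a hypothetical 1-D
# potential-energy curve: raise a water level until the ground-state minimum
# and the separated-fragment region belong to one connected "wet" region;
# the lowest such level is the optimum barrier height.
import numpy as np

def immersion_barrier(energy, start, goal, levels):
    for level in np.sort(levels):
        wet = energy <= level                 # shapes accessible at this level
        # grow the connected wet component containing `start`
        reachable = {start}
        frontier = [start]
        while frontier:
            i = frontier.pop()
            for j in (i - 1, i + 1):          # 1-D neighbor connectivity
                if 0 <= j < len(energy) and wet[j] and j not in reachable:
                    reachable.add(j)
                    frontier.append(j)
        if goal in reachable:
            return level
    return None

e = np.array([0.0, 2.0, 5.0, 3.0, 6.0, 1.0, -2.0])  # hypothetical energies
print(immersion_barrier(e, start=0, goal=6, levels=np.unique(e)))  # -> 6.0
```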

  17. Use of Cell Viability Assay Data Improves the Prediction Accuracy of Conventional Quantitative Structure–Activity Relationship Models of Animal Carcinogenicity

    PubMed Central

    Zhu, Hao; Rusyn, Ivan; Richard, Ann; Tropsha, Alexander

    2008-01-01

    Background To develop efficient approaches for rapid evaluation of chemical toxicity and human health risk of environmental compounds, the National Toxicology Program (NTP), in collaboration with the National Center for Chemical Genomics, has initiated a project on high-throughput screening (HTS) of environmental chemicals. The first HTS results for a set of 1,408 compounds tested for their effects on cell viability in six different cell lines have recently become available via PubChem. Objectives We have explored these data in terms of their utility for predicting adverse health effects of environmental agents. Methods and results Initially, the classification k nearest neighbor (kNN) quantitative structure–activity relationship (QSAR) modeling method was applied to the HTS data only, for a curated data set of 384 compounds. The resulting models had prediction accuracies for the training and test sets (together containing 275 compounds) and the external validation set (109 compounds) as high as 89%, 71%, and 74%, respectively. We then asked if HTS results could be of value in predicting rodent carcinogenicity. We identified 383 compounds for which data were available from both the Berkeley Carcinogenic Potency Database and NTP–HTS studies. We found that compounds classified by HTS as “actives” in at least one cell line were likely to be rodent carcinogens (sensitivity 77%); however, HTS “inactives” were far less informative (specificity 46%). Using chemical descriptors only, kNN QSAR modeling resulted in 62.3% prediction accuracy for rodent carcinogenicity applied to this data set. Importantly, the prediction accuracy of the model was significantly improved (72.7%) when chemical descriptors were augmented by HTS data, which were regarded as biological descriptors. Conclusions Our studies suggest that combining NTP–HTS profiles with conventional chemical descriptors could considerably improve the predictive power of computational approaches in toxicology. PMID:18414635
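    Conceptually, augmenting chemical descriptors with HTS outcomes amounts to concatenating two feature blocks before kNN classification. A sketch on synthetic data (descriptor counts and classifier settings are assumptions):

```python
# Conceptual sketch (synthetic data): augmenting chemical descriptors with
# HTS cell-viability outcomes as "biological descriptors" before kNN QSAR
# classification, as the abstract describes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 383
chem = rng.normal(size=(n, 30))            # chemical descriptors
hts = rng.normal(size=(n, 6))              # HTS viability in six cell lines
y = (chem[:, 0] + hts[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

for name, X in [("chemical only", chem),
                ("chemical + HTS", np.hstack([chem, hts]))]:
    knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    acc = cross_val_score(knn, X, y, cv=5).mean()
    print(name, "CV accuracy:", round(acc, 3))
```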

  18. NRL/VOA Modifications to IONCAP as of 12 July 1988

    DTIC Science & Technology

    1989-08-02

    [Fragmentary record] Modifications to IONCAP (making it suitable for wide-area coverage studies), to incorporate a newer noise model, to improve the accuracy of some calculations, and to correct a few ... Recoverable table-of-contents entries: integration with IONANT; incorporation of an updated noise model into IONCAP; listings of four IONCAP subroutines supporting the updated noise model.

  19. Prediction of pesticide acute toxicity using two-dimensional chemical descriptors and target species classification

    EPA Science Inventory

    Previous modelling of the median lethal dose (oral rat LD50) has indicated that local class-based models yield better correlations than global models. We evaluated the hypothesis that dividing the dataset by pesticidal mechanisms would improve prediction accuracy. A linear discri...

  20. INITIAL STUDY OF HPAC MODELED DISPERSION DRIVEN BY MM5 WITH AND WITHOUT URBAN CANOPY PARAMETERIZATIONS

    EPA Science Inventory

    Improving the accuracy and capability of transport and dispersion models in urban areas is essential for current and future urban applications. These models must reflect more realistically the presence and details of urban canopy features. Such features markedly influence the flo...

  1. Predictive models for Escherichia coli concentrations at inland lake beaches and relationship of model variables to pathogen detection

    EPA Science Inventory

    Methods are needed to improve the timeliness and accuracy of recreational water‐quality assessments. Traditional culture methods require 18–24 h to obtain results and may not reflect current conditions. Predictive models, based on environmental and water quality variables, have been...

  2. Rapid condition assessment of structural condition after a blast using state-space identification

    NASA Astrophysics Data System (ADS)

    Eskew, Edward; Jang, Shinae

    2015-04-01

    After a blast event, it is important to quickly quantify the structural damage for emergency operations. In order to improve the speed, accuracy, and efficiency of condition assessments after a blast, the authors have previously developed a methodology for rapid assessment of the structural condition of a building after a blast. The method involved determining a post-event equivalent stiffness matrix using vibration measurements and a finite element (FE) model. A structural model was built for the damaged structure based on the equivalent stiffness, and inter-story drifts from the blast were determined using numerical simulations, with forces determined from the blast parameters. The inter-story drifts were then compared to blast design conditions to assess the structure's damage. This method still involved engineering judgment in determining significant frequencies, which can lead to error, especially with noisy measurements. In an effort to improve accuracy and automate the process, this paper will look into a similar method of rapid condition assessment using subspace state-space identification. The accuracy of the method will be tested using a benchmark structural model, as well as experimental testing. The blast damage assessments will be validated using pressure-impulse (P-I) diagrams, which present the condition limits across blast parameters. Comparisons between P-I diagrams generated using the true system parameters and the equivalent parameters will show the accuracy of the rapid condition-based blast assessments.
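    To give a flavor of what subspace/state-space identification extracts from vibration data, below is a minimal ERA-style sketch on a synthetic single-channel free decay (sizes and values are illustrative, not the paper's implementation):

```python
# Minimal ERA-style sketch (synthetic single-channel free decay): build a
# Hankel matrix from response samples, take an SVD, and recover discrete-time
# eigenvalues whose magnitude-and-angle give the natural frequency used for
# an equivalent-stiffness update. Values and sizes are illustrative only.
import numpy as np

dt, f_true, zeta = 0.01, 2.5, 0.02
t = np.arange(400) * dt
y = np.exp(-zeta * 2 * np.pi * f_true * t) * np.cos(2 * np.pi * f_true * t)

m = 20                                     # Hankel block size
H0 = np.array([y[i:i + m] for i in range(m)])
H1 = np.array([y[i + 1:i + 1 + m] for i in range(m)])

U, s, Vt = np.linalg.svd(H0)
r = 2                                      # model order (one mode)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vt[:r].T
A = np.linalg.pinv(Ur @ np.sqrt(Sr)) @ H1 @ np.linalg.pinv(np.sqrt(Sr) @ Vr.T)
lam = np.linalg.eigvals(A)
freq = np.abs(np.log(lam)) / (2 * np.pi * dt)
print("identified frequency (Hz):", np.round(freq, 3))  # ~2.5 for both poles
```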

  3. Open-Source Digital Elevation Model (DEMs) Evaluation with GPS and LiDAR Data

    NASA Astrophysics Data System (ADS)

    Khalid, N. F.; Din, A. H. M.; Omar, K. M.; Khanan, M. F. A.; Omar, A. H.; Hamid, A. I. A.; Pa'suya, M. F.

    2016-09-01

    Advanced Spaceborne Thermal Emission and Reflection Radiometer-Global Digital Elevation Model (ASTER GDEM), Shuttle Radar Topography Mission (SRTM), and Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) are freely available Digital Elevation Model (DEM) datasets for environmental modeling and studies. The spatial resolution and vertical accuracy of the DEM data source strongly influence results, particularly for inundation mapping. Most coastal inundation risk studies use publicly available DEMs to estimate coastal inundation and the associated damage, especially to human populations, under rising sea levels. In this study, ground truth data from Global Positioning System (GPS) observations are compared with each DEM to evaluate its accuracy. SRTM shows better vertical accuracy than ASTER and GMTED2010, with an RMSE of 6.054 m. In addition, SRTM shows the strongest correlation with ground truth, with a coefficient of determination of 0.912. For the coastal zone, DEMs based on an airborne light detection and ranging (LiDAR) dataset were used as ground truth for terrain height. In this case, the LiDAR DEM is compared against the new SRTM DEM after applying a scale factor. The findings show that the accuracy of the new SRTM-derived model can be improved by applying a scale factor: the RMSE of the scaled model reached 0.503 m. Hence, this new model is the most suitable and meets the accuracy requirement for coastal inundation risk assessment using open-source data. The suitability of these datasets for further analysis in coastal management studies is vital for assessing areas potentially vulnerable to coastal inundation.
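    The evaluation arithmetic reduces to computing RMSE against ground truth before and after applying a scale factor; a minimal sketch with hypothetical heights:

```python
# Sketch of the evaluation arithmetic (hypothetical heights): RMSE of DEM
# heights against ground truth, before and after a least-squares scale factor.
import numpy as np

truth = np.array([4.2, 7.9, 12.5, 3.1, 9.8])    # GPS/LiDAR heights (m)
dem = np.array([5.0, 9.1, 14.0, 3.9, 11.2])     # DEM heights (m)

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("raw RMSE:", round(rmse(dem, truth), 3))

scale = float(truth @ dem / (dem @ dem))         # least-squares scale factor
print("scale:", round(scale, 4))
print("scaled RMSE:", round(rmse(scale * dem, truth), 3))
```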

  4. Autonomous Navigation of Small Uavs Based on Vehicle Dynamic Model

    NASA Astrophysics Data System (ADS)

    Khaghani, M.; Skaloud, J.

    2016-03-01

    This paper presents a novel approach to autonomous navigation for small UAVs, in which the vehicle dynamic model (VDM) serves as the main process model within the navigation filter. The proposed method significantly increases the accuracy and reliability of autonomous navigation, especially for small UAVs with low-cost IMUs on board. This is achieved with no extra sensor added to the conventional INS/GNSS setup. The improvement is of special interest in case of GNSS outages, where inertial coasting drifts very quickly. In the proposed architecture, the solution to the VDM equations provides the estimate of position, velocity, and attitude, which is updated within the navigation filter based on available observations, such as IMU data or GNSS measurements. The VDM is also fed with the control input to the UAV, which is available within the control/autopilot system. The filter is capable of estimating wind velocity and dynamic model parameters, in addition to navigation states and IMU sensor errors. Monte Carlo simulations reveal major improvements in navigation accuracy compared to a conventional INS/GNSS navigation system during the autonomous phase, when satellite signals are not available due, for example, to physical obstruction or electromagnetic interference. In case of GNSS outages of a few minutes, position and attitude accuracy improves by orders of magnitude compared to inertial coasting. During such scenarios, the position-velocity-attitude (PVA) determination is sufficiently accurate to navigate the UAV to a home position without any signal that depends on the vehicle environment.

  5. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation, and sign language. With increasing motion sensor development, multiple data sources have become available, which has led to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach which integrates motion, audio, and video models is proposed, using a dataset captured with Kinect. The proposed system recognizes observed gestures using the three models; the recognition results of the three models are integrated by the proposed framework, and this output becomes the final result. The motion and audio models are learned using a Hidden Markov Model. A Random Forest classifier is used to learn the video model. In experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the organizers of the Multi-Modal Gesture Recognition Challenge (MMGRC) workshop. The comparison results show that the multi-modal model composed of the three models scores the highest recognition rate. This improvement in recognition accuracy means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.
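    The integration step can be as simple as weighted late fusion of per-class scores from the three models; a sketch with hypothetical scores and assumed modality weights (the paper's actual framework may differ):

```python
# Sketch of the integration step (hypothetical scores): combine per-class
# probabilities from the motion/audio HMMs and the video Random Forest by
# weighted late fusion, then pick the arg-max class.
import numpy as np

classes = ["wave", "point", "clap"]
p_motion = np.array([0.5, 0.3, 0.2])   # HMM likelihoods, normalized
p_audio = np.array([0.2, 0.2, 0.6])
p_video = np.array([0.6, 0.1, 0.3])    # Random Forest class probabilities

weights = np.array([0.4, 0.2, 0.4])    # per-modality reliability (assumed)
fused = weights @ np.vstack([p_motion, p_audio, p_video])
print("fused scores:", fused, "->", classes[int(np.argmax(fused))])
```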

  6. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility, and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In this focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility, and robustness. Furthermore, model-based image computing techniques open up new perspectives for predicting organ changes and for patient risk analysis. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images such as radiographic images, dual-energy CT images, MR images, diffusion tensor images, and microscopic images are analyzed. The applications emphasize the high potential and wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools for improving medical diagnostics and patient treatment in the future.

  7. Improved Prediction of Blood-Brain Barrier Permeability Through Machine Learning with Combined Use of Molecular Property-Based Descriptors and Fingerprints.

    PubMed

    Yuan, Yaxia; Zheng, Fang; Zhan, Chang-Guo

    2018-03-21

    Blood-brain barrier (BBB) permeability of a compound determines whether the compound can effectively enter the brain. It is an essential property that must be accounted for in drug discovery with a target in the brain. Several computational methods have been used to predict BBB permeability. In particular, the support vector machine (SVM), a kernel-based machine learning method, has been widely used in this field. For SVM training and prediction, compounds are characterized by molecular descriptors. Some SVM models have been based on molecular property-based descriptors (including 1D, 2D, and 3D descriptors) or fragment-based descriptors (known as the fingerprints of a molecule). The selection of descriptors is critical for the performance of an SVM model. In this study, we aimed to develop a generally applicable new SVM model by combining all of the features of the molecular property-based descriptors and fingerprints to improve the accuracy of BBB permeability prediction. The results indicate that our SVM model has improved accuracy compared to the currently available models for BBB permeability prediction.
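    Conceptually, the combination amounts to concatenating the property-descriptor block with the fingerprint block before SVM training; a sketch on synthetic data (feature counts and kernel settings are assumptions):

```python
# Conceptual sketch (synthetic data): concatenating molecular property-based
# descriptors with binary fingerprints before training an SVM classifier for
# BBB permeability, as the abstract describes.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 300
props = rng.normal(size=(n, 20))            # 1D/2D/3D property descriptors
fps = rng.integers(0, 2, size=(n, 166))     # fingerprint bits (MACCS-like size)
y = (props[:, 0] + fps[:, 0] - 0.5 + rng.normal(0, 1, n) > 0).astype(int)

X_combined = np.hstack([props, fps])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X_combined, y, cv=5).mean().round(3))
```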

  8. Effect of genetic architecture on the prediction accuracy of quantitative traits in samples of unrelated individuals.

    PubMed

    Morgante, Fabio; Huang, Wen; Maltecca, Christian; Mackay, Trudy F C

    2018-06-01

    Predicting complex phenotypes from genomic data is a fundamental aim of animal and plant breeding, where we wish to predict genetic merits of selection candidates; and of human genetics, where we wish to predict disease risk. While genomic prediction models work well with populations of related individuals and high linkage disequilibrium (LD) (e.g., livestock), comparable models perform poorly for populations of unrelated individuals and low LD (e.g., humans). We hypothesized that low prediction accuracies in the latter situation may occur when the genetic architecture of the trait departs from the infinitesimal and additive architecture assumed by most prediction models. We used simulated data for 10,000 lines based on sequence data from a population of unrelated, inbred Drosophila melanogaster lines to evaluate this hypothesis. We show that, even in very simplified scenarios meant as a stress test of the commonly used Genomic Best Linear Unbiased Predictor (G-BLUP) method, using all common variants yields low prediction accuracy regardless of the trait genetic architecture. However, prediction accuracy increases when predictions are informed by the genetic architecture inferred from mapping the top variants affecting main effects and interactions in the training data, provided there is sufficient power for mapping. When the true genetic architecture is largely or partially due to epistatic interactions, the additive model may not perform well, while models that account explicitly for interactions generally increase prediction accuracy. Our results indicate that accounting for genetic architecture can improve prediction accuracy for quantitative traits.
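    As a point of reference for the additive baseline the study stress-tests, here is a minimal G-BLUP-style sketch on synthetic genotypes (the variance ratio and sizes are arbitrary assumptions):

```python
# Minimal G-BLUP-style sketch (synthetic markers): build a genomic
# relationship matrix from centered genotypes and predict test individuals'
# genetic values with the standard mixed-model (kernel ridge) equations.
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 1000
Z = rng.integers(0, 3, size=(n, m)).astype(float)   # genotypes coded 0/1/2
Z -= Z.mean(axis=0)                                  # center marker columns
G = Z @ Z.T / m                                      # genomic relationship matrix

beta = rng.normal(0, 0.1, m)
y = Z @ beta + rng.normal(0, 1.0, n)                 # additive trait + noise

train, test = np.arange(150), np.arange(150, 200)
lam = 1.0                                            # assumed variance ratio
alpha = np.linalg.solve(G[np.ix_(train, train)] + lam * np.eye(150),
                        y[train] - y[train].mean())
y_hat = y[train].mean() + G[np.ix_(test, train)] @ alpha
print("predictive correlation:", np.corrcoef(y_hat, y[test])[0, 1].round(3))
```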

  9. Clinical pharmacology quality assurance program: models for longitudinal analysis of antiretroviral proficiency testing for international laboratories.

    PubMed

    DiFrancesco, Robin; Rosenkranz, Susan L; Taylor, Charlene R; Pande, Poonam G; Siminski, Suzanne M; Jenny, Richard W; Morse, Gene D

    2013-10-01

    Among National Institutes of Health HIV Research Networks conducting multicenter trials, samples from protocols that span several years are analyzed at multiple clinical pharmacology laboratories (CPLs) for multiple antiretrovirals. Drug assay data are, in turn, entered into study-specific data sets that are used for pharmacokinetic analyses, merged to conduct cross-protocol pharmacokinetic analysis, and integrated with pharmacogenomics research to investigate pharmacokinetic-pharmacogenetic associations. The CPLs participate in a semiannual proficiency testing (PT) program implemented by the Clinical Pharmacology Quality Assurance program. Using results from multiple PT rounds, longitudinal analyses of recovery are reflective of accuracy and precision within and across laboratories. The objectives of this longitudinal analysis of PT across multiple CPLs were to develop and test statistical models that longitudinally: (1) assess the precision and accuracy of concentrations reported by individual CPLs and (2) determine factors associated with round-specific and long-term assay accuracy, precision, and bias using a new regression model. A measure of absolute recovery is explored as a simultaneous measure of accuracy and precision. Overall, the analysis outcomes assured 97% accuracy (within ±20% of the final target concentration) for all 21 drug concentration results reported for clinical trial samples by multiple CPLs. Using the Clinical Laboratory Improvement Act criterion of meeting acceptance limits for ≥2 of 3 consecutive rounds, all 10 laboratories that participated in 3 or more rounds per analyte maintained Clinical Laboratory Improvement Act proficiency. Significant associations were present between magnitude of error and CPL (Kruskal-Wallis P < 0.001) and antiretroviral (Kruskal-Wallis P < 0.001).

  10. Does NASA SMAP Improve the Accuracy of Power Outage Models?

    NASA Astrophysics Data System (ADS)

    Quiring, S. M.; McRoberts, D. B.; Toy, B.; Alvarado, B.

    2016-12-01

    Electric power utilities make critical decisions in the days prior to hurricane landfall that are primarily based on the estimated impact to their service area. For example, utilities must determine how many repair crews to request from other utilities, the amount of material and equipment they will need to make repairs, and where in their geographically expansive service area to station crews and materials. Accurate forecasts of the impact of an approaching hurricane within their service area are critical for utilities in balancing the costs and benefits of different levels of resources. The Hurricane Outage Prediction Model (HOPM) is a family of statistical models that utilize predictions of tropical cyclone wind speed and duration of strong winds, along with power system and environmental variables (e.g., soil moisture, long-term precipitation), to forecast the number and location of power outages. This project assesses whether using NASA SMAP soil moisture improves the accuracy of power outage forecasts compared to using model-derived soil moisture from NLDAS-2. A sensitivity analysis is employed since there have been very few tropical cyclones making landfall in the United States since SMAP was launched. The HOPM is used to predict power outages for 13 historical tropical cyclones, and the model is run twice, once with NLDAS soil moisture and once with SMAP soil moisture. Our results demonstrate that using SMAP soil moisture can have a significant impact on power outage predictions. SMAP has the potential to enhance the accuracy of power outage forecasts. Improved outage forecasts reduce the duration of power outages, which reduces economic losses and accelerates recovery.

  11. Electromagnetic Launch Vehicle Fairing and Acoustic Blanket Model of Received Power Using FEKO

    NASA Technical Reports Server (NTRS)

    Trout, Dawn H.; Stanley, James E.; Wahid, Parveen F.

    2011-01-01

    Evaluating the impact of radio frequency transmission in vehicle fairings is important to electromagnetically sensitive spacecraft. This study employs the multilevel fast multipole method (MLFMM) from a commercial electromagnetic tool, FEKO, to model the fairing electromagnetic environment in the presence of an internal transmitter with improved accuracy over industry-applied techniques. The fairing model includes material properties representative of acoustic blanketing commonly used in vehicles. Equivalent surface material models within FEKO were successfully applied to simulate the test case. Finally, a simplified model is presented using Nicholson-Ross-Weir derived blanket material properties. These properties are implemented with the coated-metal option to reduce the model to one layer while remaining within the accuracy of the original three-layer simulation.

  12. Advances in Applications of Hierarchical Bayesian Methods with Hydrological Models

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Schwarz, G. E.; Boyer, E. W.

    2017-12-01

    Mechanistic and empirical watershed models are increasingly used to inform water resource decisions. Growing access to historical stream measurements and data from in-situ sensor technologies has increased the need for improved techniques for coupling models with hydrological measurements. Techniques that account for the intrinsic uncertainties of both models and measurements are especially needed. Hierarchical Bayesian methods provide an efficient modeling tool for quantifying model and prediction uncertainties, including those associated with measurements. Hierarchical methods can also be used to explore spatial and temporal variations in model parameters and uncertainties that are informed by hydrological measurements. We used hierarchical Bayesian methods to develop a hybrid (statistical-mechanistic) SPARROW (SPAtially Referenced Regression On Watershed attributes) model of long-term mean annual streamflow across diverse environmental and climatic drainages in 18 U.S. hydrological regions. Our application illustrates the use of a new generation of Bayesian methods that offer more advanced computational efficiencies than the prior generation. Evaluation of the effects of hierarchical (regional) variations in model coefficients and uncertainties on model accuracy indicates improved prediction accuracies (median of 10-50%), primarily in humid eastern regions, where model uncertainties are one-third of those in arid western regions. Generally moderate regional variability is observed for most hierarchical coefficients. Accounting for measurement and structural uncertainties, using hierarchical state-space techniques, revealed the effects of spatially-heterogeneous, latent hydrological processes in the "localized" drainages between calibration sites; this improved model precision, with only minor changes in regional coefficients. Our study can inform advances in the use of hierarchical methods with hydrological models to improve their integration with stream measurements.

  13. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.

  14. Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models

    DOE PAGES

    Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens; ...

    2016-12-15

    We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
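    A conceptual sketch of the sub-model idea on synthetic stand-in spectra (the blending weights here are a simple illustration, not ChemCam's calibrated scheme):

```python
# Conceptual sketch (synthetic spectra): train PLS "sub-models" on limited
# composition ranges, then blend their predictions into one final estimate.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n, p = 300, 50
X = rng.normal(size=(n, p))                       # stand-in LIBS spectra
y = 20 + 10 * X[:, 0] + rng.normal(0, 1, n)       # composition (e.g., wt.%)

low, high = y < 20, y >= 20                       # limited composition ranges
full = PLSRegression(n_components=5).fit(X, y)
sub_low = PLSRegression(n_components=5).fit(X[low], y[low])
sub_high = PLSRegression(n_components=5).fit(X[high], y[high])

x_new = X[:5]
y_full = full.predict(x_new).ravel()              # first-pass estimate
w = np.clip((y_full - 15) / 10, 0, 1)             # 0 near "low", 1 near "high"
y_blend = (1 - w) * sub_low.predict(x_new).ravel() + \
          w * sub_high.predict(x_new).ravel()
print(np.round(y_blend, 2))
```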

  15. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning

    PubMed Central

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-01-01

    We present quad-constellation (namely, GPS, GLONASS, BeiDou, and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly laid out, and the equivalence of the TGD and DCB correction models is demonstrated both in theory and in practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies improve by 25.0%, 30.6%, and 26.7% in the N, E, and U components, respectively, after the differential code bias correction, while GPS/GLONASS/BDS b1-based SPP improves by 16.1%, 26.1%, and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies improve by 2.0%, 2.0%, and 0.4% in the N, E, and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP improves by about 48.6%, 34.7%, and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd- and 3rd-frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5%, and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787

  16. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning.

    PubMed

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-03-16

    We present quad-constellation (namely, GPS, GLONASS, BeiDou, and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly laid out, and the equivalence of the TGD and DCB correction models is demonstrated both in theory and in practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies improve by 25.0%, 30.6%, and 26.7% in the N, E, and U components, respectively, after the differential code bias correction, while GPS/GLONASS/BDS b1-based SPP improves by 16.1%, 26.1%, and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies improve by 2.0%, 2.0%, and 0.4% in the N, E, and U components, respectively, after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP improves by about 48.6%, 34.7%, and 40.6% with DCB correction in the N, E, and U components, respectively. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd- and 3rd-frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5%, and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and the positioning accuracy is slightly better with TGD/DCB correction.
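    For single-frequency GPS users, the broadcast TGD enters through the satellite clock correction, per the GPS interface specification; a minimal sketch with hypothetical values:

```python
# Sketch of the standard GPS broadcast TGD correction (per the GPS interface
# specification): single-frequency users adjust the broadcast satellite clock
# by TGD (L1) or gamma*TGD (L2) before forming corrected pseudoranges.
C = 299_792_458.0                     # speed of light (m/s)
GAMMA = (1575.42 / 1227.60) ** 2      # (f_L1 / f_L2)^2

def corrected_pseudorange(p_obs, dt_sv, tgd, freq="L1"):
    """p_obs in meters; dt_sv and tgd in seconds."""
    dt_eff = dt_sv - (tgd if freq == "L1" else GAMMA * tgd)
    return p_obs + C * dt_eff         # apply satellite clock (incl. TGD)

# Hypothetical values: a few ns of TGD amounts to meters of range error.
print(corrected_pseudorange(22_300_000.0, dt_sv=1e-4, tgd=5e-9) -
      corrected_pseudorange(22_300_000.0, dt_sv=1e-4, tgd=0.0))  # ~ -1.5 m
```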

  17. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating from machine learning that has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation error, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g., bagging) and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than to regions covering larger areas. By examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e., the error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
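    The per-pixel decomposition itself is a short computation once repeated estimates per pixel are available (e.g., from models trained on resampled data); a minimal sketch on synthetic maps:

```python
# Minimal per-pixel bias-variance decomposition sketch: given repeated model
# estimates per pixel (e.g., from bootstrap-resampled training sets), split
# the mean squared error into squared-bias and variance maps.
import numpy as np

rng = np.random.default_rng(5)
truth = rng.uniform(0, 100, size=(4, 4))          # true imperviousness (%)
# 30 estimates per pixel from models trained on resampled data (synthetic)
estimates = truth + 5.0 + rng.normal(0, 8, size=(30, 4, 4))

mean_est = estimates.mean(axis=0)
bias_sq = (mean_est - truth) ** 2                 # per-pixel squared-bias map
variance = estimates.var(axis=0)                  # per-pixel variance map
mse = ((estimates - truth) ** 2).mean(axis=0)

# MSE = bias^2 + variance holds per pixel (up to floating-point error)
print(np.allclose(mse, bias_sq + variance))       # True
```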

  18. How accurately can the microcanonical ensemble describe small isolated quantum systems?

    NASA Astrophysics Data System (ADS)

    Ikeda, Tatsuhiko N.; Ueda, Masahito

    2015-08-01

    We numerically investigate quantum quenches of a nonintegrable hard-core Bose-Hubbard model to test the accuracy of the microcanonical ensemble in small isolated quantum systems. We show that, in a certain range of system size, the accuracy increases with the dimension of the Hilbert space D as 1/D. We ascribe this rapid improvement to the absence of correlations between many-body energy eigenstates. Outside of that range, the accuracy is found to scale either as 1/√D or algebraically with the system size.

  19. Improved forest change detection with terrain illumination corrected landsat images

    USDA-ARS?s Scientific Manuscript database

    An illumination correction algorithm has been developed to improve the accuracy of forest change detection from Landsat reflectance data. This algorithm is based on an empirical rotation model and was tested on the Landsat imagery pair over Cherokee National Forest, Tennessee, Uinta-Wasatch-Cache N...

  20. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, namely neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation to assess the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate 12.22%; Type II error rate 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate 13.61%; Type II error rate 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate 10.00%; Type II error rate 15.83%).
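    The modeling pipeline the abstract describes, LASSO-based variable selection feeding a downstream classifier with fivefold cross-validation, can be sketched as follows (synthetic data; all settings are assumptions):

```python
# Conceptual sketch (synthetic data): LASSO selects predictors, then a
# downstream classifier (here an SVM, one of the three the study uses) is
# trained on the selected variables, with fivefold cross-validation.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n, p = 172, 40                                  # 48 GCD + 124 non-GCD firms
X = rng.normal(size=(n, p))                     # financial ratios (synthetic)
y = (X[:, 0] - X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),             # LASSO-based variable selection
    SVC(kernel="rbf"),
)
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```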

  1. Data Prediction for Public Events in Professional Domains Based on Improved RNN-LSTM

    NASA Astrophysics Data System (ADS)

    Song, Bonan; Fan, Chunxiao; Wu, Yuexin; Sun, Juanjuan

    2018-02-01

    Traditional data services for predicting emergency or non-periodic events usually cannot generate satisfying results or fulfill the intended prediction purpose. However, these events are influenced by external causes, which means that certain a priori information about them can generally be collected through the Internet. This paper studies the above problems and proposes an improved model, an LSTM (Long Short-Term Memory) dynamic prediction and a priori information sequence generation model, which combines RNN-LSTM with a priori information about public events. In prediction tasks, the model is capable of determining trends, and its accuracy is validated. The model generates better performance and prediction results than the previous one. Using a priori information increases the accuracy of prediction; LSTM adapts better to changes in the time sequence; and LSTM can be widely applied to the same type of prediction task and to other prediction tasks related to time sequences.

  2. A multicriteria decision making approach based on fuzzy theory and credibility mechanism for logistics center location selection.

    PubMed

    Wang, Bowen; Xiong, Haitao; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve the reliability and suitability of the selection with respect to the impacts of both qualitative and quantitative factors. However, existing approaches do not consider the consistency or the historical assessment accuracy of experts in prior decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a mechanism that prioritizes consistency and historical assessment accuracy into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on the credibility mechanism can improve the reliability and suitability of site selection for the logistics center.

  3. Short-term Power Load Forecasting Based on Balanced KNN

    NASA Astrophysics Data System (ADS)

    Lv, Xianlong; Cheng, Xingong; YanShuang; Tang, Yan-mei

    2018-03-01

    To improve the accuracy of load forecasting, a short-term load forecasting model based on a balanced KNN algorithm is proposed. According to the load characteristics, the massive historical power load data are divided into scenes by the K-means algorithm. In view of unbalanced load scenes, the balanced KNN algorithm is proposed to classify the scenes accurately. A locally weighted linear regression algorithm is used to fit and predict the load. Adopting the Apache Hadoop cloud computing framework, the proposed model is parallelized and improved to enhance its ability to handle massive, high-dimensional data. The household electricity consumption data of a residential district are analyzed on a 23-node cloud computing cluster, and experimental results show that the load forecasting accuracy and execution time of the proposed model are better than those of traditional forecasting algorithms.
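    The prediction step named in the abstract, locally weighted linear regression, fits a separate weighted least-squares model around each query point; a minimal sketch with hypothetical load data:

```python
# Sketch of locally weighted linear regression (hypothetical data): training
# points near the query get larger weights via a Gaussian kernel.
import numpy as np

def lwlr_predict(x_query, X, y, tau=1.0):
    # Augment with an intercept column
    Xa = np.column_stack([np.ones(len(X)), X])
    xq = np.concatenate([[1.0], np.atleast_1d(x_query)])
    # Gaussian kernel weights centered on the query point
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)
    theta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)
    return xq @ theta

# Hypothetical scene: hour-of-day and temperature predicting load (MW)
X = np.array([[1, 20.0], [2, 21.0], [3, 19.5], [4, 18.0], [5, 17.5]], float)
y = np.array([3.1, 3.0, 2.8, 2.6, 2.5])
print(round(lwlr_predict(np.array([3.5, 19.0]), X, y, tau=2.0), 3))
```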

  4. A Multicriteria Decision Making Approach Based on Fuzzy Theory and Credibility Mechanism for Logistics Center Location Selection

    PubMed Central

    Wang, Bowen; Jiang, Chengrui

    2014-01-01

    As a hot topic in supply chain management, fuzzy methods have been widely used in logistics center location selection to improve the reliability and suitability of the selection with respect to the impacts of both qualitative and quantitative factors. However, existing approaches do not consider the consistency or the historical assessment accuracy of experts in prior decisions. This paper therefore proposes a multicriteria decision making model based on the credibility of decision makers, introducing a mechanism that prioritizes consistency and historical assessment accuracy into the fuzzy multicriteria decision making approach. In this way, only decision makers who pass the credibility check are qualified to perform the further assessment. Finally, a practical example is analyzed to illustrate how to use the model. The result shows that the fuzzy multicriteria decision making model based on the credibility mechanism can improve the reliability and suitability of site selection for the logistics center. PMID:25215319

  5. Clinical time series prediction: towards a hierarchical dynamical system framework

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2014-01-01

    Objective Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effects of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671

  6. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed. The approach is based on a number of fixed concentric circles manufactured on a calibration target. The concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after application of the RAC method. The hybrid pinhole model and the MLPNN are therefore used together to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach can achieve a highly accurate model of the structured light vision sensor.

  7. Development of response models for the Earth Radiation Budget Experiment (ERBE) sensors. Part 1: Dynamic models and computer simulations for the ERBE nonscanner, scanner and solar monitor sensors

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Choi, Sang H.; Chrisman, Dan A., Jr.; Samms, Richard W.

    1987-01-01

    Dynamic models and computer simulations were developed for the radiometric sensors utilized in the Earth Radiation Budget Experiment (ERBE). The models were developed to understand performance, improve measurement accuracy by updating model parameters and provide the constants needed for the count conversion algorithms. Model simulations were compared with the sensor's actual responses demonstrated in the ground and inflight calibrations. The models consider thermal and radiative exchange effects, surface specularity, spectral dependence of a filter, radiative interactions among an enclosure's nodes, partial specular and diffuse enclosure surface characteristics and steady-state and transient sensor responses. Relatively few sensor nodes were chosen for the models since there is an accuracy tradeoff between increasing the number of nodes and approximating parameters such as the sensor's size, material properties, geometry, and enclosure surface characteristics. Given that the temperature gradients within a node and between nodes are small enough, approximating with only a few nodes does not jeopardize the accuracy required to perform the parameter estimates and error analyses.

  8. Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose

    NASA Astrophysics Data System (ADS)

    Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional technique with a single exposure degrades the efficiency of tumor detection due to structure overlapping. Dual-energy techniques with energy-integrating detectors (EIDs) also cause an increase in radiation dose and inaccuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) is able to resolve the issues induced by the conventional technique and EIDs by using their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented by using a polychromatic dual-energy model, and the proposed technique was compared with the dual-energy technique with an EID in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy and reduced radiation dose compared to the dual-energy technique with an EID. The quantitative accuracy of the contrast-enhanced spectral mammography based on a PCD also improved slightly as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD is able to provide useful information for detecting breast tumors and improving diagnostic accuracy.
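
    The core arithmetic of dual-energy material decomposition can be conveyed with a linearized two-material sketch; the paper itself uses a polychromatic model, and the attenuation coefficients below are illustrative numbers, not tabulated values.

        import numpy as np

        # Rows: low- and high-energy bins; columns: contrast agent, tissue.
        mu = np.array([[0.80, 0.35],
                       [0.30, 0.25]])      # illustrative attenuation coefficients (1/cm)

        thickness = np.array([0.2, 4.0])   # true thicknesses (cm)
        logI = mu @ thickness              # -ln(I/I0) per energy bin, noise-free

        est = np.linalg.solve(mu, logI)    # decomposed material thicknesses
        print(est)                         # recovers [0.2, 4.0]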

  9. Mapping Species Composition of Forests and Tree Plantations in Northeastern Costa Rica with an Integration of Hyperspectral and Multitemporal Landsat Imagery

    NASA Technical Reports Server (NTRS)

    Fagan, Matthew E.; Defries, Ruth S.; Sesnie, Steven E.; Arroyo-Mora, J. Pablo; Soto, Carlomagno; Singh, Aditya; Townsend, Philip A.; Chazdon, Robin L.

    2015-01-01

    An efficient means to map tree plantations is needed to detect tropical land use change and evaluate reforestation projects. To analyze recent tree plantation expansion in northeastern Costa Rica, we examined the potential of combining moderate-resolution hyperspectral imagery (2005 HyMap mosaic) with multitemporal, multispectral data (Landsat) to accurately classify (1) general forest types and (2) tree plantations by species composition. Following a linear discriminant analysis to reduce data dimensionality, we compared four Random Forest classification models: hyperspectral data (HD) alone; HD plus interannual spectral metrics; HD plus a multitemporal forest regrowth classification; and all three models combined. The fourth, combined model achieved overall accuracy of 88.5%. Adding multitemporal data significantly improved classification accuracy (p < 0.0001) of all forest types, although the effect on tree plantation accuracy was modest. The hyperspectral data alone classified six species of tree plantations with 75% to 93% producer's accuracy; adding multitemporal spectral data increased accuracy only for two species with dense canopies. Non-native tree species had higher classification accuracy overall and made up the majority of tree plantations in this landscape. Our results indicate that combining occasionally acquired hyperspectral data with widely available multitemporal satellite imagery enhances mapping and monitoring of reforestation in tropical landscapes.
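
    A compact sketch of the classification chain described above, on synthetic stand-in data: linear discriminant analysis for dimensionality reduction followed by a Random Forest.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Synthetic "band" features standing in for hyperspectral + Landsat metrics.
        X, y = make_classification(n_samples=600, n_features=120, n_informative=20,
                                   n_classes=5, random_state=0)
        model = make_pipeline(LinearDiscriminantAnalysis(n_components=4),
                              RandomForestClassifier(n_estimators=300, random_state=0))
        print(cross_val_score(model, X, y, cv=5).mean())   # overall accuracy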

  10. Mathematical-Artificial Neural Network Hybrid Model to Predict Roll Force during Hot Rolling of Steel

    NASA Astrophysics Data System (ADS)

    Rath, S.; Sengupta, P. P.; Singh, A. P.; Marik, A. K.; Talukdar, P.

    2013-07-01

    Accurate prediction of roll force during hot strip rolling is essential for model-based operation of hot strip mills. Traditionally, mathematical models based on the theory of plastic deformation have been used for prediction of roll force. In the last decade, data-driven models like artificial neural networks have been tried for prediction of roll force. Pure mathematical models have accuracy limitations, whereas data-driven models have difficulty converging when applied to industrial conditions. Hybrid models that integrate traditional mathematical formulations and data-driven methods are being developed in different parts of the world. This paper discusses the methodology of development of an innovative hybrid mathematical-artificial neural network model. In the mathematical model, the most important factor influencing accuracy is the flow stress of steel. Coefficients of a standard flow stress equation, calculated by a parameter estimation technique, have been used in the model. The hybrid model has been trained and validated with input and output data collected from the finishing stands of the Hot Strip Mill, Bokaro Steel Plant, India. The hybrid model was found to improve prediction accuracy over the traditional mathematical model.
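
    The parameter-estimation step for the flow-stress coefficients can be sketched with non-linear least squares; the power-law flow-stress form and the data below are illustrative assumptions, not the mill's actual equation or measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def flow_stress(X, A, n, m):
            strain, strain_rate = X
            return A * strain**n * strain_rate**m    # assumed flow-stress form

        rng = np.random.default_rng(1)
        strain = rng.uniform(0.05, 0.5, 200)
        rate = rng.uniform(1.0, 50.0, 200)
        sigma = flow_stress((strain, rate), 900.0, 0.21, 0.13)
        sigma += rng.normal(0.0, 5.0, sigma.shape)   # measurement noise

        (A, n, m), _ = curve_fit(flow_stress, (strain, rate), sigma, p0=(500, 0.2, 0.1))
        print(A, n, m)                               # recovered coefficients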

  11. 3D Modelling with the Samsung Gear 360

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2017-02-01

    The Samsung Gear 360 is a consumer-grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) has a relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front- and rear-facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at estimating the intrinsic parameters of the two-lens camera, as well as their relative orientation, allowed new equirectangular projections to be generated, from which a significant improvement in geometric accuracy was achieved.

  12. Ground target geolocation based on digital elevation model for airborne wide-area reconnaissance system

    NASA Astrophysics Data System (ADS)

    Qiao, Chuan; Ding, Yalin; Xu, Yongsen; Xiu, Jihong

    2018-01-01

    To obtain the geographical position of a ground target accurately, a geolocation algorithm based on the digital elevation model (DEM) is developed for an airborne wide-area reconnaissance system. According to the platform position and attitude information measured by the airborne position and orientation system and the gimbal angle information from the encoder, the line-of-sight pointing vector in the Earth-centered Earth-fixed coordinate frame is solved by homogeneous coordinate transformation. The target longitude and latitude can then be solved with the elliptical Earth model and the global DEM. The influences of systematic error and measurement error on ground target geolocation accuracy are analyzed by the Monte Carlo method. The simulation results show that this algorithm clearly improves the geolocation accuracy for ground targets in rough terrain. The geolocation accuracy of a moving ground target can be further improved by moving average filtering (MAF). The validity of the geolocation algorithm is verified by a flight test in which the plane flies at a geodetic height of 15,000 m and the outer gimbal angle is <47°. The geolocation root-mean-square error of the target trajectory is <45 m, and <7 m after MAF.
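
    The DEM intersection at the heart of such an algorithm can be sketched by marching along the line-of-sight vector until it drops below the terrain; the local flat-Earth frame, toy terrain function, and step size below are simplifying assumptions.

        import numpy as np

        def dem_height(x, y):
            return 100.0 + 30.0 * np.sin(x / 500.0)    # toy terrain height (m)

        pos = np.array([0.0, 0.0, 15000.0])            # aircraft position (m)
        los = np.array([0.7, 0.1, -0.705])             # line-of-sight direction
        los = los / np.linalg.norm(los)

        step = 10.0                                    # march step along the ray (m)
        p = pos.copy()
        while p[2] > dem_height(p[0], p[1]):           # stop at the terrain surface
            p = p + step * los
        print(p[:2])                                   # ground intersection (x, y)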

  13. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. Comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results could be used for real-time motion control of PM spherical actuators.

  14. Simultaneous Assimilation of AMSR-E Brightness Temperature and MODIS LST to Improve Soil Moisture with Dual Ensemble Kalman Smoother

    NASA Astrophysics Data System (ADS)

    Huang, Chunlin; Chen, Weijin; Wang, Weizhen; Gu, Juan

    2017-04-01

    Uncertainties in model parameters can easily cause systematic differences between model states and observations from the ground or satellites, which significantly affect the accuracy of soil moisture estimation in data assimilation systems. In this paper, a novel soil moisture assimilation scheme is developed to simultaneously assimilate AMSR-E brightness temperature (TB) and MODIS Land Surface Temperature (LST), correcting model bias by simultaneously updating model states and parameters with a dual ensemble Kalman smoother (DEnKS). The Common Land Model (CoLM) and a Q-h Radiative Transfer Model (RTM) are adopted as the model operator and observation operator, respectively. The assimilation experiment was conducted in Naqu, on the Tibetan Plateau, from May 31 to September 27, 2011. Compared with in-situ measurements, the accuracy of soil moisture estimation is substantially improved across a variety of scales. The soil temperature updated by assimilating MODIS LST, used as input to the RTM, reduces the differences between the simulated and observed brightness temperatures to a certain degree, which helps to improve the estimation of soil moisture and model parameters. The updated parameters differ markedly from the default values and effectively reduce the state bias of CoLM. The results demonstrate the potential of assimilating both microwave TB and MODIS LST to improve the estimation of soil moisture and related parameters. Furthermore, this study also indicates that the developed scheme is an effective soil moisture downscaling approach for coarse-scale microwave TB.
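
    For intuition, the sketch below performs one stochastic ensemble Kalman analysis step for a scalar soil-moisture state; the paper's dual scheme adds an analogous update for model parameters, and its observation operator is a radiative transfer model rather than the identity used here.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 50
        ens = rng.normal(0.25, 0.05, size=N)     # prior soil-moisture ensemble
        obs, obs_err = 0.30, 0.02                # e.g. a TB-derived observation

        def H(x):                                # observation operator (identity here)
            return x

        y_ens = H(ens)
        K = np.cov(ens, y_ens)[0, 1] / (np.var(y_ens) + obs_err**2)   # Kalman gain
        perturbed = obs + rng.normal(0.0, obs_err, size=N)            # perturbed observations
        ens = ens + K * (perturbed - y_ens)      # analysis ensemble
        print(ens.mean())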

  15. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust.
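
    The SSA pre-filtering step can be sketched directly: embed the series in a trajectory matrix, keep the leading singular components, and reconstruct by anti-diagonal averaging. Window length, rank, and the synthetic flow series below are illustrative choices.

        import numpy as np

        def ssa_denoise(x, window, rank):
            n = len(x)
            k = n - window + 1
            X = np.column_stack([x[i:i + window] for i in range(k)])   # trajectory matrix
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]                  # low-rank approximation
            out = np.zeros(n)
            counts = np.zeros(n)
            for j in range(k):                                         # anti-diagonal averaging
                out[j:j + window] += Xr[:, j]
                counts[j:j + window] += 1
            return out / counts

        t = np.linspace(0, 6 * np.pi, 300)
        rng = np.random.default_rng(0)
        flow = 100 + 20 * np.sin(t) + rng.normal(0, 5, t.size)         # noisy traffic flow
        smooth = ssa_denoise(flow, window=40, rank=2)                  # input for KELM training
        print(np.round(smooth[:5], 1))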

  16. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by a gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is more robust. PMID:27551829

  17. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    PubMed

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However, such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on a genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean-centred data, GENOPT-SVM) have been tested and statistically compared using McNemar's test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than those obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps were required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
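
    A toy GA-style joint search over one pre-processing choice and two SVM hyper-parameters conveys the idea on a standard dataset; this is a stand-in for GENOPT-SVM, not the authors' implementation, and the tiny population and selection scheme are arbitrary.

        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import FunctionTransformer, StandardScaler
        from sklearn.svm import SVC

        X, y = load_wine(return_X_y=True)
        preps = [FunctionTransformer(), StandardScaler()]    # candidate pre-processing steps
        rng = np.random.default_rng(0)

        def fitness(genome):                                 # genome: [prep index, log10 C, log10 gamma]
            prep, logC, logg = genome
            model = make_pipeline(preps[int(prep)], SVC(C=10**logC, gamma=10**logg))
            return cross_val_score(model, X, y, cv=3).mean()

        pop = [np.array([rng.integers(2), rng.uniform(-1, 3), rng.uniform(-5, 0)])
               for _ in range(10)]
        for gen in range(5):                                 # keep the best half, mutate copies
            pop.sort(key=fitness, reverse=True)
            pop = pop[:5] + [p + rng.normal(0.0, [0.0, 0.3, 0.3]) for p in pop[:5]]
            for child in pop[5:]:
                child[0] = rng.integers(2)                   # re-draw the discrete gene
        pop.sort(key=fitness, reverse=True)
        print(fitness(pop[0]))                               # best cross-validated accuracy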

  18. RNA secondary structure prediction with pseudoknots: Contribution of algorithm versus energy model.

    PubMed

    Jabbari, Hosna; Wark, Ian; Montemagno, Carlo

    2018-01-01

    RNA is a biopolymer with various applications inside the cell and in biotechnology. The structure of an RNA molecule largely determines its function and is essential to guide nanostructure design. Since experimental structure determination is time-consuming and expensive, accurate computational prediction of RNA structure is of great importance. Prediction of RNA secondary structure is relatively simpler than prediction of its tertiary structure and provides information about the tertiary structure; therefore, RNA secondary structure prediction has received attention in the past decades. Numerous methods with different folding approaches have been developed for RNA secondary structure prediction. While methods for prediction of RNA pseudoknot-free structures (structures with no crossing base pairs) have greatly improved in accuracy, methods for prediction of RNA pseudoknotted secondary structures (structures with crossing base pairs) still have room for improvement. A long-standing question for improving the prediction accuracy of RNA pseudoknotted secondary structure is whether to focus on the prediction algorithm or the underlying energy model, as there is a trade-off between the computational cost of the prediction algorithm and the generality of the method. The aim of this work is to argue that, when comparing different methods for RNA pseudoknotted structure prediction, the combination of algorithm and energy model should be considered, and a method should not be judged superior or inferior to others if they do not use the same scoring model. We demonstrate that while the folding approach is important in structure prediction, it is not the only important factor in the prediction accuracy of a given method, as the underlying energy model is also of great value. We therefore encourage researchers to pay particular attention when comparing methods with different energy models.

  19. Experimentally validated modification to Cook-Torrance BRDF model for improved accuracy

    NASA Astrophysics Data System (ADS)

    Butler, Samuel D.; Ethridge, James A.; Nauyoks, Stephen E.; Marciniak, Michael A.

    2017-09-01

    The BRDF describes optical scatter off realistic surfaces. The microfacet BRDF model assumes geometric optics but is computationally simple compared to wave optics models. In this work, MERL BRDF data are fitted to the original Cook-Torrance microfacet model and to a modified Cook-Torrance model that uses the polarization factor in place of the mathematically problematic cross-section conversion and geometric attenuation terms. The results provide experimental evidence that this modified Cook-Torrance model leads to improved fits, particularly at large incident and scattered angles. These results are expected to lead to more accurate BRDF modeling for remote sensing.
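
    For reference, the classic Cook-Torrance form D·F·G / (4 cos θi cos θo) can be evaluated as below for in-plane geometry; the paper's polarization-factor modification is not reproduced here, and the roughness and Fresnel values are arbitrary.

        import numpy as np

        def beckmann_D(cos_h, m):                     # Beckmann microfacet distribution
            t2 = (1.0 - cos_h**2) / cos_h**2
            return np.exp(-t2 / m**2) / (np.pi * m**2 * cos_h**4)

        def schlick_F(cos_d, f0):                     # Schlick Fresnel approximation
            return f0 + (1.0 - f0) * (1.0 - cos_d) ** 5

        def cook_torrance_G(cos_i, cos_o, cos_h, cos_d):   # geometric attenuation term
            return np.minimum(1.0, np.minimum(2 * cos_h * cos_i / cos_d,
                                              2 * cos_h * cos_o / cos_d))

        theta_i, theta_o = np.radians(30.0), np.radians(45.0)     # in-plane angles
        wi = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])
        wo = np.array([-np.sin(theta_o), 0.0, np.cos(theta_o)])
        h = (wi + wo) / np.linalg.norm(wi + wo)                   # half vector

        cos_i, cos_o, cos_h, cos_d = wi[2], wo[2], h[2], wi @ h
        brdf = (beckmann_D(cos_h, 0.3) * schlick_F(cos_d, 0.04)
                * cook_torrance_G(cos_i, cos_o, cos_h, cos_d)) / (4.0 * cos_i * cos_o)
        print(brdf)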

  20. An improved thermoregulatory model for cooling garment applications with transient metabolic rates

    NASA Astrophysics Data System (ADS)

    Westin, Johan K.

    Current state-of-the-art thermoregulatory models do not predict body temperatures with the accuracies that are required for the development of automatic cooling control in liquid cooling garment (LCG) systems. Automatic cooling control would be beneficial in a variety of space, aviation, military, and industrial environments for optimizing cooling efficiency, for making LCGs as portable and practical as possible, for alleviating the individual from manual cooling control, and for improving thermal comfort and cognitive performance. In this study, we adopt the Fiala thermoregulatory model, which has previously demonstrated state-of-the-art predictive abilities in air environments, for use in LCG environments. We validate the numerical formulation with analytical solutions to the bioheat equation, and find our model to be accurate and stable with a variety of different grid configurations. We then compare the thermoregulatory model's tissue temperature predictions with experimental data where individuals, equipped with an LCG, exercise according to a 700 W rectangular-type activity schedule. The root mean square (RMS) deviation between the model response and the mean experimental group response is 0.16°C for the rectal temperature and 0.70°C for the mean skin temperature, which is within state-of-the-art variations. However, with a mean absolute body heat storage error (ε̄_BHS) of 9.7 W·h, the model fails to satisfy the ±6.5 W·h accuracy that is required for automatic LCG cooling control development. In order to improve model predictions, we modify the blood flow dynamics of the thermoregulatory model. Instead of using step responses to changing requirements, we introduce exponential responses to the muscle blood flow and the vasoconstriction command. We find that such modifications have an insignificant effect on temperature predictions. However, a new vasoconstriction dependency, i.e. the rate of change of hypothalamus temperature weighted by the hypothalamus error signal (ΔT_hy · dT_hy/dt), proves to be an important signal that governs the thermoregulatory response during conditions of simultaneously increasing core and decreasing skin temperatures, which is a common scenario in LCG environments. With the new ΔT_hy · dT_hy/dt dependency in the vasoconstriction command, ε̄_BHS for the exercise period is reduced by 59% (from 12.9 W·h to 5.2 W·h). Even though the new ε̄_BHS of 5.8 W·h for the total activity schedule is within the target accuracy of ±6.5 W·h, ε̄_BHS fails to stay within the target accuracy during the entire activity schedule. With additional improvements to the central blood pool formulation, the LCG boundary condition, and the agreement between model set-points and actual experimental initial conditions, it seems possible to achieve the strict accuracy that is needed for automatic cooling control development.

  1. Ionospheric effects in uncalibrated phase delay estimation and ambiguity-fixed PPP based on raw observable model

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Shi, Chuang; Lou, Yidong; Liu, Jingnan

    2015-05-01

    Zero-difference (ZD) ambiguity resolution (AR) reveals the potential to further improve the performance of precise point positioning (PPP). Traditionally, PPP AR is achieved by Melbourne-Wübbena and ionosphere-free combinations, in which the ionosphere effects are removed. To exploit the ionosphere characteristics, PPP AR with L1 and L2 raw observables has also been developed recently. In this study, we apply this new approach to uncalibrated phase delay (UPD) generation and ZD AR and compare it with the traditional model. The raw observable processing strategy treats each ionosphere delay as an unknown parameter. In this manner, both an a priori ionosphere correction model and its spatio-temporal correlation can be employed as constraints to improve the ambiguity resolution. However, theoretical analysis indicates that for the wide-lane (WL) UPD retrieved from L1/L2 ambiguities to benefit from this raw observable approach, a high-precision ionosphere correction of better than 0.7 total electron content units (TECU) is essential. This conclusion is then confirmed with over 1 year of data collected at about 360 stations. First, both global and regional ionosphere models were generated and evaluated; the results demonstrated that, for large-scale ionosphere modeling, an accuracy of only 3.9 TECU can be achieved on average for the vertical delays, and that this accuracy can be improved to about 0.64 TECU when a dense network is involved. Based on these ionosphere products, WL/narrow-lane (NL) UPDs are then extracted with the raw observable model. The NL ambiguity reveals better stability and consistency compared to the traditional approach. Nonetheless, the WL ambiguity is hardly improved, even when constrained with the high spatio-temporal resolution ionospheric corrections. By applying both approaches to PPP-RTK, it is interesting to find that the traditional model is more efficient in AR, as evidenced by the shorter time to first fix, while the raw model yields better three-dimensional positioning accuracy than the combination model. This reveals that, with the current ionosphere models, there is actually no single optimal strategy for dual-frequency ZD ambiguity resolution; the combination approach and the raw approach each has its merits and demerits.

  2. [In vivo model to evaluate the accuracy of complete-tooth spectrophotometer for dental clinics].

    PubMed

    Liu, Feng; Yang, Jian; Xu, Tong-Kai; Xu, Ming-Ming; Ma, Yu

    2011-02-01

    To test the ΔE between values measured by the Crystaleye complete-tooth spectrophotometer and reference values, and to evaluate the accuracy rate of the spectrophotometer. Twenty prosthodontists participated in the study. Each of them used the Vita 3D-Master shade guide for shade matching, and used the Crystaleye complete-tooth spectrophotometer (before and after test training) to measure the middle of eight fixed tabs from the shade guide in a dark box. The results of shade matching and of the spectrophotometer were recorded, and the accuracy rates of shade matching and of the spectrophotometer before and after training were calculated. The average accuracy rate of shade matching was 49%. The average accuracy rate of the spectrophotometer before and after training was 83% and 99%, respectively. The accuracy of the spectrophotometer was significantly higher than that of shade matching, and training improved the accuracy rate.

  3. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    NASA Astrophysics Data System (ADS)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
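
    The regularized inversion described above amounts to minimizing ||Af − g||² + λ||Lf||² with L a second-order difference matrix; the sketch below uses a toy kernel and a smooth synthetic distribution, purely for illustration.

        import numpy as np

        n = 60
        r = np.linspace(0.1, 10.0, n)                        # particle radii (a.u.)
        A = np.exp(-np.abs(np.subtract.outer(r, r)) / 2.0)   # stand-in kernel matrix

        f_true = np.exp(-0.5 * ((r - 4.0) / 1.0) ** 2)       # smooth synthetic PSD
        g = A @ f_true + np.random.default_rng(0).normal(0, 0.01, n)

        L = np.diff(np.eye(n), n=2, axis=0)                  # (n-2) x n second-difference matrix
        lam = 1e-2
        f_est = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ g)
        print(np.round(np.abs(f_est - f_true).max(), 3))     # reconstruction error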

  4. Are people excessive or judicious in their egocentrism? A modeling approach to understanding bias and accuracy in people's optimism.

    PubMed

    Windschitl, Paul D; Rose, Jason P; Stalkfleet, Michael T; Smith, Andrew R

    2008-08-01

    People are often egocentric when judging their likelihood of success in competitions, leading to overoptimism about winning when circumstances are generally easy and to overpessimism when circumstances are difficult. Yet egocentrism might be grounded in a rational tendency to favor highly reliable information (about the self) more than less reliable information (about others). A general theory of probability called extended support theory (EST) was used to conceptualize and assess the role of egocentrism and its consequences for the accuracy of people's optimism in 3 competitions (Studies 1-3, respectively). Also, instructions were manipulated to test whether people who were urged to avoid egocentrism would show improved or worsened accuracy in their likelihood judgments. Egocentrism was found to have a potentially helpful effect on one form of accuracy, but people generally showed too much egocentrism. Debias instructions improved one form of accuracy but had no impact on another. The advantages of using the EST framework for studying optimism and other types of judgments (e.g., comparative ability judgments) are discussed. (c) 2008 APA, all rights reserved

  5. Improvement of Dimensional Accuracy of 3-D Printed Parts using an Additive/Subtractive Based Hybrid Prototyping Approach

    NASA Astrophysics Data System (ADS)

    Amanullah Tomal, A. N. M.; Saleh, Tanveer; Raisuddin Khan, Md.

    2017-11-01

    At present, two important processes, namely CNC machining and rapid prototyping (RP), are being used to create prototypes and functional products. Combining both additive and subtractive processes into a single platform would be advantageous. However, two important aspects need to be taken into consideration for this process hybridization. The first is the integration of two different control systems for the two processes, and the second is maximizing workpiece alignment accuracy during the changeover step. Recently we have developed a new hybrid system which incorporates Fused Deposition Modelling (FDM) as the RP process and a CNC grinding operation as the subtractive manufacturing process in a single setup. Several objects were produced with different layer thicknesses, for example 0.1 mm, 0.15 mm, and 0.2 mm. It was observed that the pure FDM method is unable to attain the desired dimensional accuracy, and that accuracy can be improved by a considerable margin, about 66% to 80%, if a finishing operation by grinding is carried out. It was also observed that layer thickness plays a role in dimensional accuracy, with the best accuracy achieved at the minimum layer thickness (0.1 mm).

  6. A new computational strategy for predicting essential genes.

    PubMed

    Cheng, Jian; Wu, Wenwu; Zhang, Yinwen; Li, Xiangchen; Jiang, Xiaoqian; Wei, Gehong; Tao, Shiheng

    2013-12-21

    Determination of the minimum gene set for cellular life is one of the central goals in biology. Genome-wide essential gene identification has progressed rapidly in certain bacterial species; however, it remains difficult to achieve in most eukaryotic species. Several computational models have recently been developed to integrate gene features and used as alternatives to transfer gene essentiality annotations between organisms. We first collected features that were widely used by previous predictive models and assessed the relationships between gene features and gene essentiality using a stepwise regression model. We found two issues that could significantly reduce model accuracy: (i) the effect of multicollinearity among gene features and (ii) the diverse and even contrasting correlations between gene features and gene essentiality existing within and among different species. To address these issues, we developed a novel model called the feature-based weighted Naïve Bayes model (FWM), which is based on Naïve Bayes classifiers, logistic regression, and a genetic algorithm. The proposed model assesses features and filters out the effects of multicollinearity and diversity. The performance of FWM was compared with other popular models, such as the support vector machine, the Naïve Bayes model, and the logistic regression model, by applying FWM to reciprocally predict essential genes among and within 21 species. Our results showed that FWM significantly improves the accuracy and robustness of essential gene prediction. FWM can remarkably improve the accuracy of essential gene prediction and may be used as an alternative method for other classification work. This method can contribute substantially to the knowledge of the minimum gene sets required for living organisms and the discovery of new drug targets.

  7. Using Dirichlet Priors to Improve Model Parameter Plausibility

    ERIC Educational Resources Information Center

    Rai, Dovan; Gong, Yue; Beck, Joseph E.

    2009-01-01

    Student modeling is a widely used approach to make inference about a student's attributes like knowledge, learning, etc. If we wish to use these models to analyze and better understand student learning there are two problems. First, a model's ability to predict student performance is at best weakly related to the accuracy of any one of its…

  8. A Modified LS+AR Model to Improve the Accuracy of the Short-term Polar Motion Prediction

    NASA Astrophysics Data System (ADS)

    Wang, Z. W.; Wang, Q. X.; Ding, Y. Q.; Zhang, J. J.; Liu, S. S.

    2017-03-01

    There are two problems with the LS (Least Squares)+AR (AutoRegressive) model in polar motion forecasting: the residuals of the LS fit are reasonable within the fitting interval, but the residuals of the LS extrapolation are poor; and the LS fitting residual sequence is non-linear, so it is unsuitable to establish an AR model for the residual sequence to be forecast based on the residual sequence before the forecast epoch. In this paper, we address these two problems in two steps. First, restrictions are added to the two endpoints of the LS fitting data to fix them on the LS fitting curve, so that the fitted values next to the two endpoints are very close to the observed values. Second, we select the interpolation residual sequence of an inward LS fitting curve, which has a similar variation trend to the LS extrapolation residual sequence, as the modeling object of the AR for the residual forecast. Calculation examples show that this solution can effectively improve the short-term polar motion prediction accuracy of the LS+AR model. In addition, the comparison results of the forecast models RLS (Robustified Least Squares)+AR, RLS+ARIMA (AutoRegressive Integrated Moving Average), and LS+ANN (Artificial Neural Network) confirm the feasibility and effectiveness of the solution for polar motion forecasting. The results, especially for polar motion forecasts 1-10 days ahead, show that the forecast accuracy of the proposed model reaches the international state of the art.
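
    A bare-bones LS+AR forecast, without the paper's endpoint constraints, can be sketched as a least-squares fit of trend plus periodic terms, an AR model on the fitting residuals, and a combined extrapolation; the series, period, and AR order below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(400.0)
        x = 0.01 * t + 5 * np.sin(2 * np.pi * t / 365.0) + rng.normal(0, 0.3, t.size)

        def design(tt):                                  # trend + annual terms
            w = 2 * np.pi * tt / 365.0
            return np.column_stack([np.ones_like(tt), tt, np.sin(w), np.cos(w)])

        beta, *_ = np.linalg.lstsq(design(t), x, rcond=None)
        resid = x - design(t) @ beta

        p = 4                                            # AR order on the residuals
        Y = resid[p:]
        X = np.column_stack([resid[p - k:-k] for k in range(1, p + 1)])
        phi, *_ = np.linalg.lstsq(X, Y, rcond=None)

        hist = list(resid[-p:])
        fc = []
        for _ in range(10):                              # 10-step residual forecast
            nxt = phi @ hist[::-1][:p]
            fc.append(nxt)
            hist.append(nxt)
        t_new = np.arange(400.0, 410.0)
        forecast = design(t_new) @ beta + np.array(fc)   # LS extrapolation + AR correction
        print(np.round(forecast[:3], 2))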

  9. Improved Accuracy of Automated Estimation of Cardiac Output Using Circulation Time in Patients with Heart Failure.

    PubMed

    Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi

    2016-11-01

    Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
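
    A multivariable linear model of this kind is straightforward to fit; in the sketch below the predictors follow the paper (circulation time, age, overnight heart rate), but all data and resulting coefficients are synthetic placeholders.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        lfct = rng.uniform(15, 45, 25)      # lung-to-finger circulation time (s)
        age = rng.uniform(40, 85, 25)       # years
        hr = rng.uniform(50, 90, 25)        # average overnight heart rate (bpm)
        co = 8.0 - 0.08 * lfct - 0.01 * age + 0.02 * hr + rng.normal(0, 0.3, 25)

        X = np.column_stack([lfct, age, hr])
        model = LinearRegression().fit(X, co)   # CO (L/min) from three predictors
        print(model.coef_, model.intercept_)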

  10. MicroRNA based Pan-Cancer Diagnosis and Treatment Recommendation.

    PubMed

    Cheerla, Nikhil; Gevaert, Olivier

    2017-01-13

    The current state of the art in cancer diagnosis and treatment is not ideal; diagnostic tests are accurate but invasive, and treatments are "one-size-fits-all" instead of being personalized. Recently, miRNAs have garnered significant attention as cancer biomarkers, owing to their ease of access (circulating miRNA in the blood) and stability. There have been many studies showing the effectiveness of miRNA data in diagnosing specific cancer types, but few studies explore the role of miRNA in predicting treatment outcome. Here we go a step further, using tissue miRNA and clinical data across 21 cancers from The Cancer Genome Atlas (TCGA) database. We use machine learning techniques to create an accurate pan-cancer diagnosis system and a prediction model for treatment outcomes. Finally, using these models, we create a web-based tool that diagnoses cancer and recommends the best treatment options. We achieved 97.2% classification accuracy using a support vector machine classifier with a radial basis function kernel. The accuracies improved to 99.9-100% when climbing up the embryonic tree and classifying cancers at different stages. We define accuracy as the ratio of the total number of instances correctly classified to the total number of instances. The classifier also performed well, achieving greater than 80% sensitivity for many cancer types on independent validation datasets. Many miRNAs selected by our feature selection algorithm had strong previous associations with various cancers and tumor progression. Then, using miRNA, clinical, and treatment data encoded in a machine-learning-readable format, we built a prognosis prediction model to predict the outcome of treatment with 85% accuracy. We used this model to create a tool that recommends personalized treatment regimens. Both the diagnosis and prognosis models, which incorporate semi-supervised learning techniques to improve their accuracies with repeated use, were uploaded online for easy access. Our research is a step towards the final goal of diagnosing cancer and predicting treatment recommendations using non-invasive blood tests.

  11. Sex-specific developmental models for Creophilus maxillosus (L.) (Coleoptera: Staphylinidae): searching for larger accuracy of insect age estimates.

    PubMed

    Frątczak-Łagiewska, Katarzyna; Matuszewski, Szymon

    2018-05-01

    Differences in size between males and females, called the sexual size dimorphism, are common in insects. These differences may be followed by differences in the duration of development. Accordingly, it is believed that insect sex may be used to increase the accuracy of insect age estimates in forensic entomology. Here, the sex-specific differences in the development of Creophilus maxillosus were studied at seven constant temperatures. We have also created separate developmental models for males and females of C. maxillosus and tested them in a validation study to answer a question whether sex-specific developmental models improve the accuracy of insect age estimates. Results demonstrate that males of C. maxillosus developed significantly longer than females. The sex-specific and general models for the total immature development had the same optimal temperature range and similar developmental threshold but different thermal constant K, which was the largest in the case of the male-specific model and the smallest in the case of the female-specific model. Despite these differences, validation study revealed just minimal and statistically insignificant differences in the accuracy of age estimates using sex-specific and general thermal summation models. This finding indicates that in spite of statistically significant differences in the duration of immature development between females and males of C. maxillosus, there is no increase in the accuracy of insect age estimates while using the sex-specific thermal summation models compared to the general model. Accordingly, this study does not support the use of sex-specific developmental data for the estimation of insect age in forensic entomology.

  12. Analysis of near infrared spectra for age-grading of wild populations of Anopheles gambiae.

    PubMed

    Krajacich, Benjamin J; Meyers, Jacob I; Alout, Haoues; Dabiré, Roch K; Dowell, Floyd E; Foy, Brian D

    2017-11-07

    Understanding the age-structure of mosquito populations, especially malaria vectors such as Anopheles gambiae, is important for assessing the risk of infectious mosquitoes and how vector control interventions may impact this risk. The use of near-infrared spectroscopy (NIRS) for age-grading has been demonstrated previously on laboratory and semi-field mosquitoes, but to date has not been utilized on wild-caught mosquitoes whose age is externally validated via parity status or parasite infection stage. In this study, we developed regression and classification models using NIRS on datasets of wild An. gambiae (s.l.) reared from larvae collected from the field in Burkina Faso, and two laboratory strains. We compared the accuracy of these models for predicting the ages of wild-caught mosquitoes that had been scored for their parity status as well as for positivity for Plasmodium sporozoites. Regression models utilizing variable selection increased predictive accuracy over the more common full-spectrum partial least squares (PLS) approach for cross-validation of the datasets, validation, and independent test sets. Models produced from datasets that included the greatest range of mosquito samples (i.e. different sampling locations and times) had the highest predictive accuracy on independent testing sets, though overall accuracy on these samples was low. For classification, we found that intramodel accuracy ranged from 73.5% to 97.0% for grouping of mosquitoes into "early" and "late" age classes, with the highest prediction accuracy found in laboratory-colonized mosquitoes. However, this accuracy decreased on test sets, with the highest classification accuracy on an independent set of wild-caught larvae reared to known ages being 69.6%. Variation in NIRS data, likely from dietary, genetic, and other factors, limits the accuracy of this technique with wild-caught mosquitoes. Alternative algorithms may help improve prediction accuracy, but care should be taken to either maximize variety in models or minimize confounders.
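
    As a rough sketch of the regression setting, the code below fits a PLS model of age on synthetic "spectra"; full-spectrum PLS is shown, whereas the variable selection discussed above would first restrict the wavelength columns.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        n, bands = 120, 200
        spectra = rng.normal(0, 1, (n, bands))        # synthetic NIR spectra
        age = rng.uniform(1, 15, n)                   # days since emergence
        spectra[:, 50:60] += 0.05 * age[:, None]      # age-related absorbance region

        pls = PLSRegression(n_components=5)
        pred = cross_val_predict(pls, spectra, age, cv=5).ravel()
        print(np.round(np.corrcoef(pred, age)[0, 1], 2))   # cross-validated correlation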

  13. A time series modeling approach in risk appraisal of violent and sexual recidivism.

    PubMed

    Bani-Yaghoub, Majid; Fedoroff, J Paul; Curry, Susan; Amundsen, David E

    2010-10-01

    For over half a century, various clinical and actuarial methods have been employed to assess the likelihood of violent recidivism. Yet there is a need for new methods that can improve the accuracy of recidivism predictions. This study proposes a new time series modeling approach that generates high levels of predictive accuracy over short and long periods of time. The proposed approach outperformed two widely used actuarial instruments (i.e., the Violence Risk Appraisal Guide and the Sex Offender Risk Appraisal Guide). Furthermore, analysis of temporal risk variations based on specific time series models can add valuable information into risk assessment and management of violent offenders.

  14. A multi-agency nutrient dataset used to estimate loads, improve monitoring design, and calibrate regional nutrient SPARROW models

    USGS Publications Warehouse

    Saad, David A.; Schwarz, Gregory E.; Robertson, Dale M.; Booth, Nathaniel

    2011-01-01

    Stream-loading information was compiled from federal, state, and local agencies, and selected universities as part of an effort to develop regional SPAtially Referenced Regressions On Watershed attributes (SPARROW) models to help describe the distribution, sources, and transport of nutrients in streams throughout much of the United States. After screening, 2,739 sites, sampled by 73 agencies, were identified as having suitable data for calculating the long-term mean annual nutrient loads required for SPARROW model calibration. These sites had a wide range of nutrient concentrations, loads, and yields, and of environmental characteristics in their basins. An analysis of the accuracy of load estimates relative to site attributes indicated that accuracy in loads improves with increases in the number of observations, the proportion of uncensored data, and the variability in flow on observation days, whereas accuracy declines with increases in the root mean square error of the water-quality model, the flow-bias ratio, the number of days between samples, and the variability in daily streamflow for the prediction period, and if the load estimate has been detrended. Based on the compiled data, all areas of the country had recent declines in the number of sites with sufficient water-quality data to compute accurate annual loads and support regional modeling analyses. These declines were caused by decreases in the number of sites being sampled and by data not being entered in readily accessible databases.

  15. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    NASA Astrophysics Data System (ADS)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers achieving the machining accuracy of CNC machines by applying innovative methods to the modelling and design of machining systems, drives, and machine processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory and the efficiency of decomposition methods; it also has the visual clarity inherent in both topological models and structural matrices, as well as the resiliency of linear algebra as part of the matrix-based research. The focus of the study is on the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and exploitation stages. Having researched the impact of the system dynamics on component quality, the authors have developed a range of practical recommendations which made it possible to reduce considerably the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0-6000 min^-1, and improve machining accuracy.

  16. Different Coefficients and Exponents for Metabolic Body Weight in a Model to Estimate Individual Feed Intake for Growing-finishing Pigs

    PubMed Central

    Lee, S. A.; Kong, C.; Adeola, O.; Kim, B. G.

    2016-01-01

    Estimation of feed intake (FI) for individual animals within a pen is needed in situations where more than one animal shares a feeder during feeding trials. A partitioning method (PM) was previously published as a model to estimate the individual FI (IFI). Briefly, the IFI of a pig within the pen was calculated by partitioning IFI into IFI for maintenance (IFIm) and IFI for growth. In the PM, IFIm is determined based on the metabolic body weight (BW), which is calculated using a coefficient of 106 and an exponent of 0.75. Two simulation studies were conducted to test the hypothesis that the use of different coefficients and exponents for metabolic BW to calculate IFIm improves the accuracy of the estimates of IFI for pigs, and that the PM can be applied to pigs fed in group-housing systems. The accuracy of prediction, represented by the difference between actual and estimated IFI, was compared using the PM, a ratio method (RM), or an averaging method (AM). In simulation studies 1 and 2, the PM estimated IFI better than the AM and RM during most of the periods (p<0.05). The use of 0.60 as the exponent and 197 as the coefficient to calculate metabolic BW did not improve the accuracy of the IFI estimates in either simulation study. The results imply that the use of 197 kcal×kg BW^0.60 as metabolizable energy for maintenance in the PM does not improve the accuracy of IFI estimations compared with the use of 106 kcal×kg BW^0.75, and that the PM estimates the IFI of pigs with greater accuracy than the averaging or ratio methods in group-housing systems. PMID:27608642
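
    The arithmetic behind the comparison is simple; for a 60 kg pig, for example, the two metabolic-BW formulations imply very similar maintenance energy, consistent with the finding that the alternative exponent does not improve the estimates:

        # Maintenance ME implied by the two formulations, for a 60 kg pig (kcal/day).
        bw = 60.0
        me_m_075 = 106 * bw ** 0.75    # 106 kcal x kg BW^0.75
        me_m_060 = 197 * bw ** 0.60    # 197 kcal x kg BW^0.60
        print(round(me_m_075), round(me_m_060))   # ~2285 vs ~2298: nearly identical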

  17. Improving zero-training brain-computer interfaces by mixing model estimators

    NASA Astrophysics Data System (ADS)

    Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.

    2017-06-01

    Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.

  18. Towards an Online Seizure Advisory System-An Adaptive Seizure Prediction Framework Using Active Learning Heuristics.

    PubMed

    Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana

    2018-05-24

    In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to largely improve the quality of life of epileptic patients. The accuracy of prediction algorithms for detecting seizures in real-world applications is largely limited because brain signals are inherently uncertain and affected by various factors, such as environment, age, and drug intake, in addition to the internal artefacts that occur during the process of recording the brain signals. To deal with such ambiguity, researchers traditionally use active learning, which selects ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting the particular data to be labelled by an expert from a pool of large ambiguous datasets is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve the accuracy of the prediction with a minimum number of labelled data. The core technique of our framework is employing the Bernoulli-Gaussian Mixture model (BGMM) to determine the feature samples that have the most ambiguity to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of classification time and memory required. An active learning framework built on top of the best performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data and also improve the prediction accuracy even under noisy conditions.

  19. First-principles calculations of mobility

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Karthik

    First-principles calculations can be a powerful predictive tool for studying, modeling and understanding the fundamental scattering mechanisms impacting carrier transport in materials. In the past, calculations have provided important qualitative insights, but numerical accuracy has been limited due to computational challenges. In this talk, we will discuss some of the challenges involved in calculating electron-phonon scattering and carrier mobility, and outline approaches to overcome them. Topics will include the limitations of models for electron-phonon interaction, the importance of grid sampling, and the use of Gaussian smearing to replace energy-conserving delta functions. Using prototypical examples of oxides that are of technological importance-SrTiO3, BaSnO3, Ga2O3, and WO3-we will demonstrate computational approaches to overcome these challenges and improve the accuracy. One approach that leads to a distinct improvement in the accuracy is the use of analytic functions for the band dispersion, which allows for an exact solution of the energy-conserving delta function. For select cases, we also discuss direct quantitative comparisons with experimental results. The computational approaches and methodologies discussed in the talk are general and applicable to other materials, and greatly improve the numerical accuracy of the calculated transport properties, such as carrier mobility, conductivity and Seebeck coefficient. This work was performed in collaboration with B. Himmetoglu, Y. Kang, W. Wang, A. Janotti and C. G. Van de Walle, and supported by the LEAST Center, the ONR EXEDE MURI, and NSF.

  20. Improved Forecasting Methods for Naval Manpower Studies

    DTIC Science & Technology

    2015-03-25

    Using monthly data is likely to improve the overall fit of the models and the accuracy of the BP test. A measure of unemployment to control for...measure of the relative goodness of fit of a statistical model. It is grounded in the concept of information entropy, in effect, offering a relative...the Kullback–Leibler divergence, DKL(f,g1); similarly, the information lost from using g2 to

  1. Estimation of power lithium-ion battery SOC based on fuzzy optimal decision

    NASA Astrophysics Data System (ADS)

    He, Dongmei; Hou, Enguang; Qiao, Xin; Liu, Guangmin

    2018-06-01

    In order to improve vehicle performance and safety, the state of charge (SOC) of the power lithium-ion battery needs to be estimated accurately. After analyzing common SOC estimation methods, and drawing on the characteristics of the open-circuit voltage method and the Kalman filter algorithm, a lithium battery SOC estimation method based on fuzzy optimal decision is established using a T-S fuzzy model. Simulation results show that the accuracy of the battery model can be improved.
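
    A toy Takagi-Sugeno-style blend conveys the idea: fuzzy weights decide how much to trust an OCV-derived SOC estimate versus a filter/coulomb-counting estimate. The membership shape, OCV curve, and numbers below are illustrative assumptions, not the paper's model.

        import numpy as np

        def soc_from_ocv(v):                       # toy inverse open-circuit-voltage curve
            return np.clip((v - 3.0) / 1.2, 0.0, 1.0)

        def weight_ocv(current):                   # trust OCV more when current is low
            return np.exp(-(current / 5.0) ** 2)

        v_cell, i_cell = 3.72, 12.0                # measured voltage (V) and current (A)
        soc_kf = 0.58                              # e.g. Kalman-filter / coulomb-counting SOC

        w = weight_ocv(i_cell)                     # fuzzy weight of the OCV rule
        soc = w * soc_from_ocv(v_cell) + (1.0 - w) * soc_kf   # weighted consequents
        print(round(soc, 3))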

  2. The Research and Evaluation of Road Environment in the Block of City Based on 3-D Streetscape Data

    NASA Astrophysics Data System (ADS)

    Guan, L.; Ding, Y.; Ge, J.; Yang, H.; Feng, X.; Chen, P.

    2018-04-01

    This paper focuses on the street environment at the block-unit level. After clarifying the acquisition mode and characteristics of 3D streetscape data, it designs an assessment model for regional block units based on 3D streetscape data. 3D streetscape data, acquired with the aid of oblique photogrammetry and mobile equipment, greatly improve the efficiency and accuracy of urban regional assessment and expand its scope. Using the latest urban regional assessment model together with a street environment assessment model of the current situation, this paper analyzes the current street form and street environment in a typical area of Beijing. Through the block-unit street environment assessment, we found that, in a megacity, a block-unit street environment assessment model based on 3D streetscape data greatly helps to improve assessment efficiency and accuracy. At the same time, problems with motor vehicle lanes, insufficient green shade, damaged railings, and degraded streets remain very serious in Beijing, so improving the block-unit street environment is still a heavy task. The research results will provide data support for fine urban management and urban design, and a solid foundation for improving the city's image.

  3. An Improved Strong Tracking Cubature Kalman Filter for GPS/INS Integrated Navigation Systems.

    PubMed

    Feng, Kaiqiang; Li, Jie; Zhang, Xi; Zhang, Xiaoming; Shen, Chong; Cao, Huiliang; Yang, Yanyu; Liu, Jun

    2018-06-12

    The cubature Kalman filter (CKF) is widely used in GPS/INS integrated navigation systems. However, its accuracy may decline and it may even diverge in the presence of process uncertainties. To solve this problem, a new algorithm named the improved strong tracking seventh-degree spherical simplex-radial cubature Kalman filter (IST-7thSSRCKF) is proposed in this paper. In the proposed algorithm, the effect of process uncertainty is mitigated by using the improved strong tracking Kalman filter technique, in which a hypothesis testing method is adopted to identify the process uncertainty, and the prior state estimate covariance in the CKF is further modified online according to the change in vehicle dynamics. In addition, a new seventh-degree spherical simplex-radial rule is employed to further improve the estimation accuracy of the strong tracking cubature Kalman filter. In this way, the proposed algorithm integrates the advantages of the 7thSSRCKF's high accuracy and the strong tracking filter's robustness against process uncertainties. The GPS/INS integrated navigation problem with significant dynamic model errors is used to validate the performance of the proposed IST-7thSSRCKF. Results demonstrate that the improved strong tracking cubature Kalman filter can achieve higher accuracy than the existing CKF and ST-CKF, and is more robust for the GPS/INS integrated navigation system.

  4. 3D anisotropic modeling and identification for airborne EM systems based on the spectral-element method

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Yin, Chang-Chun; Cao, Xiao-Yue; Liu, Yun-He; Zhang, Bo; Cai, Jing

    2017-09-01

    The airborne electromagnetic (AEM) method has a high sampling rate and survey flexibility. However, traditional numerical modeling approaches must use high-resolution physical grids to guarantee modeling accuracy, especially for complex geological structures such as anisotropic earth, which can lead to huge computational costs. To solve this problem, we propose a spectral-element (SE) method for 3D AEM anisotropic modeling, which combines the advantages of spectral and finite-element methods: the SE method has accuracy as high as that of the spectral method and inherits the ability to model complex geology from the finite-element method. The SE method can improve the modeling accuracy within discrete grids and reduce the dependence of modeling results on the grids, which helps achieve high-accuracy anisotropic AEM modeling. We first introduced a rotating tensor of anisotropic conductivity into Maxwell's equations and described the electrical field via SE basis functions based on GLL interpolation polynomials. We used the Galerkin weighted residual method to establish the linear equation system for the SE method, and took a vertical magnetic dipole as the transmission source for our AEM modeling. We then applied fourth-order SE calculations with coarse physical grids to check the accuracy of our modeling results against a 1D semi-analytical solution for an anisotropic half-space model, verifying the high accuracy of the SE method. Moreover, we conducted AEM modeling for different anisotropic 3D abnormal bodies using two physical grid scales and three orders of SE to obtain the convergence conditions for different anisotropic abnormal bodies. Finally, we studied the identification of anisotropy for a single anisotropic abnormal body, anisotropic surrounding rock, and a single anisotropic abnormal body embedded in anisotropic surrounding rock. This approach will play a key role in the inversion and interpretation of AEM data collected in regions with anisotropic geology.

  5. AERONET Version 3 Release: Providing Significant Improvements for Multi-Decadal Global Aerosol Database and Near Real-Time Validation

    NASA Technical Reports Server (NTRS)

    Holben, Brent; Slutsker, Ilya; Giles, David; Eck, Thomas; Smirnov, Alexander; Sinyuk, Aliaksandr; Schafer, Joel; Sorokin, Mikhail; Rodriguez, Jon; Kraft, Jason; hide

    2016-01-01

    Aerosols are highly variable in space, time and properties. Global assessment from satellite platforms and model predictions rely on validation from AERONET, a highly accurate ground-based network. Ver. 3 represents a significant improvement in accuracy and quality.

  6. Effects of a risk-based online mammography intervention on accuracy of perceived risk and mammography intentions.

    PubMed

    Seitz, Holli H; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M; Armstrong, Katrina; Cappella, Joseph N

    2016-10-01

    This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. 2,918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (<1.5%; ≥1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) × 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk <1.5%. For women with a risk <1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Effects of a Risk-based Online Mammography Intervention on Accuracy of Perceived Risk and Mammography Intentions

    PubMed Central

    Seitz, Holli H.; Gibson, Laura; Skubisz, Christine; Forquer, Heather; Mello, Susan; Schapira, Marilyn M.; Armstrong, Katrina; Cappella, Joseph N.

    2016-01-01

    Objective This experiment tested the effects of an individualized risk-based online mammography decision intervention. The intervention employs exemplification theory and the Elaboration Likelihood Model of persuasion to improve the match between breast cancer risk and mammography intentions. Methods 2,918 women ages 35-49 were stratified into two levels of 10-year breast cancer risk (< 1.5%; ≥ 1.5%) then randomly assigned to one of eight conditions: two comparison conditions and six risk-based intervention conditions that varied according to a 2 (amount of content: brief vs. extended) × 3 (format: expository vs. untailored exemplar [example case] vs. tailored exemplar) design. Outcomes included mammography intentions and accuracy of perceived breast cancer risk. Results Risk-based intervention conditions improved the match between objective risk estimates and perceived risk, especially for high-numeracy women with a 10-year breast cancer risk <1.5%. For women with a risk < 1.5%, exemplars improved accuracy of perceived risk and all risk-based interventions increased intentions to wait until age 50 to screen. Conclusion A risk-based mammography intervention improved accuracy of perceived risk and the match between objective risk estimates and mammography intentions. Practice Implications Interventions could be applied in online or clinical settings to help women understand risk and make mammography decisions. PMID:27178707

  8. Modelling for Prediction vs. Modelling for Understanding: Commentary on Musso et al. (2013)

    ERIC Educational Resources Information Center

    Edelsbrunner, Peter; Schneider, Michael

    2013-01-01

    Musso et al. (2013) predict students' academic achievement with high accuracy one year in advance from cognitive and demographic variables, using artificial neural networks (ANNs). They conclude that ANNs have high potential for theoretical and practical improvements in learning sciences. ANNs are powerful statistical modelling tools but they can…

  9. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Treesearch

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  10. Tracking of cell nuclei for assessment of in vitro uptake kinetics in ultrasound-mediated drug delivery using fibered confocal fluorescence microscopy.

    PubMed

    Derieppe, Marc; de Senneville, Baudouin Denis; Kuijf, Hugo; Moonen, Chrit; Bos, Clemens

    2014-10-01

    Previously, we demonstrated the feasibility of monitoring ultrasound-mediated uptake of a cell-impermeable model drug in real time with fibered confocal fluorescence microscopy. Here, we present a complete post-processing methodology, which corrects for cell displacements, to improve the accuracy of pharmacokinetic parameter estimation. Nucleus detection was performed with the radial symmetry transform algorithm, and cell tracking used an iterative closest point approach. Pharmacokinetic parameters were calculated by fitting a two-compartment model to the time-intensity curves of individual cells. Cells were tracked successfully, improving time-intensity curve accuracy and pharmacokinetic parameter estimation. With tracking, 93% of the 370 nuclei showed a fluorescence signal variation that was well described by a two-compartment model. In addition, parameter distributions were narrower, thus increasing precision. The dedicated image analysis implemented here enables studying ultrasound-mediated model drug uptake kinetics in hundreds of cells per experiment using fiber-based confocal fluorescence microscopy.
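
    A compact sketch of the final fitting step, assuming a mono-exponential two-compartment uptake curve and illustrative constants (the paper's exact model and parameter values are not reproduced here):

        import numpy as np
        from scipy.optimize import curve_fit

        # Fit a simple two-compartment uptake model to one cell's
        # time-intensity curve; form and constants are assumptions.
        def uptake(t, amplitude, k_in):
            return amplitude * (1.0 - np.exp(-k_in * t))

        t = np.linspace(0.0, 300.0, 150)        # acquisition times [s]
        signal = uptake(t, 1.0, 6e-3) + 0.02 * np.random.randn(t.size)

        (amp_hat, k_hat), _ = curve_fit(uptake, t, signal, p0=(0.5, 1e-3))
        print(f"estimated uptake rate: {k_hat:.2e} 1/s")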

  11. CT image segmentation methods for bone used in medical additive manufacturing.

    PubMed

    van Eijnatten, Maureen; van Dijk, Roelof; Dobbe, Johannes; Streekstra, Geert; Koivisto, Juha; Wolff, Jan

    2018-01-01

    The accuracy of additive manufactured medical constructs is limited by errors introduced during image segmentation. The aim of this study was to review the existing literature on different image segmentation methods used in medical additive manufacturing. Thirty-two publications that reported on the accuracy of bone segmentation based on computed tomography images were identified using PubMed, ScienceDirect, Scopus, and Google Scholar. The advantages and disadvantages of the different segmentation methods used in these studies were evaluated and reported accuracies were compared. The spread between the reported accuracies was large (0.04 mm - 1.9 mm). Global thresholding was the most commonly used segmentation method with accuracies under 0.6 mm. The disadvantage of this method is the extensive manual post-processing required. Advanced thresholding methods could improve the accuracy to under 0.38 mm. However, such methods are currently not included in commercial software packages. Statistical shape model methods resulted in accuracies from 0.25 mm to 1.9 mm but are only suitable for anatomical structures with moderate anatomical variations. Thresholding remains the most widely used segmentation method in medical additive manufacturing. To improve the accuracy and reduce the costs of patient-specific additive manufactured constructs, more advanced segmentation methods are required. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
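
    As a minimal example of the global thresholding that the review identifies as the most common approach (the 226 HU cutoff is a widely used default for bone, assumed here rather than taken from the reviewed studies):

        import numpy as np

        # Global-threshold bone segmentation of a CT volume in Hounsfield
        # units; real use needs the manual post-processing the review notes.
        def segment_bone(volume_hu, threshold_hu=226.0):
            return volume_hu >= threshold_hu    # boolean bone mask

        volume = np.random.normal(0.0, 400.0, (64, 64, 64))  # stand-in CT data
        mask = segment_bone(volume)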

  12. Correction for photobleaching in dynamic fluorescence microscopy: application in the assessment of pharmacokinetic parameters in ultrasound-mediated drug delivery

    NASA Astrophysics Data System (ADS)

    Derieppe, M.; Bos, C.; de Greef, M.; Moonen, C.; de Senneville, B. Denis

    2016-01-01

    We have previously demonstrated the feasibility of monitoring ultrasound-mediated uptake of a hydrophilic model drug in real time with dynamic confocal fluorescence microscopy. In this study, we evaluate and correct the impact of photobleaching to improve the accuracy of pharmacokinetic parameter estimates. To model photobleaching of the fluorescent model drug SYTOX Green, a photobleaching process was added to the current two-compartment model describing cell uptake. After collection of the uptake profile, a second acquisition was performed once SYTOX Green had equilibrated, to evaluate the photobleaching rate experimentally. Photobleaching rates up to 5.0 × 10⁻³ s⁻¹ were measured when applying power densities up to 0.2 W cm⁻². With the three-compartment model, a model-drug uptake rate of 6.0 × 10⁻³ s⁻¹ was measured, independent of the applied laser power. The impact of photobleaching on uptake rate estimates measured by dynamic fluorescence microscopy was thus evaluated, and the subsequent compensation improved the accuracy of pharmacokinetic parameter estimates in the cell population subjected to sonopermeabilization.

  13. COMPASS: A computational model to predict changes in MMSE scores 24-months after initial assessment of Alzheimer's disease.

    PubMed

    Zhu, Fan; Panwar, Bharat; Dodge, Hiroko H; Li, Hongdong; Hampstead, Benjamin M; Albin, Roger L; Paulson, Henry L; Guan, Yuanfang

    2016-10-05

    We present COMPASS, a COmputational Model to Predict the development of Alzheimer's diSease Spectrum, to model Alzheimer's disease (AD) progression. This was the best-performing method in a recent crowdsourcing benchmark study, the DREAM Alzheimer's Disease Big Data challenge, which sought to predict changes in Mini-Mental State Examination (MMSE) scores over 24 months using standardized data. In the present study, we conducted three additional analyses beyond the DREAM challenge question to improve the clinical contribution of our approach, including: (1) adding pre-validated baseline cognitive composite scores of ADNI-MEM and ADNI-EF, (2) identifying subjects with significant declines in MMSE scores, and (3) incorporating SNPs of the top 10 genes connected to APOE identified from a functional-relationship network. For (1), we significantly improved predictive accuracy, especially for the Mild Cognitive Impairment (MCI) group. For (2), we achieved an area under the ROC curve of 0.814 in predicting significant MMSE decline: our model has 100% precision at 5% recall, and 91% accuracy at 10% recall. For (3), the "genetic only" model achieves a Pearson's correlation of 0.15 for predicting progression in the MCI group. Even though the addition of this limited genetic model to COMPASS did not improve prediction of progression of the MCI group, the predictive ability of SNP information extended beyond the well-known APOE allele.

  14. Improved automation of dissolved organic carbon sampling for organic-rich surface waters.

    PubMed

    Grayson, Richard P; Holden, Joseph

    2016-02-01

    In-situ UV-Vis spectrophotometers offer the potential for improved estimates of dissolved organic carbon (DOC) fluxes for organic-rich systems such as peatlands because they are able to sample and log DOC proxies automatically through time at low cost. In turn, this could enable improved total carbon budget estimates for peatlands. The ability of such instruments to accurately measure DOC depends on a number of factors, not least of which is how absorbance measurements relate to DOC and to the environmental conditions. Here we test the ability of a S::can Spectro::lyser™ to measure DOC in peatland streams with routinely high DOC concentrations. Through analysis of the spectral response data collected by the instrument we have been able to accurately measure DOC up to 66 mg L⁻¹, which is more than double the original upper calibration limit for this particular instrument. A linear regression modelling approach resulted in an accuracy >95%. The greatest accuracy was achieved when absorbance values for several different wavelengths were used at the same time in the model, although an accuracy >90% was achieved using absorbance values for a single wavelength to predict DOC concentration. Our calculations indicated that, for organic-rich systems, in-situ measurement with a scanning spectrophotometer can improve fluvial DOC flux estimates by 6 to 8% compared with traditional sampling methods. Thus, our techniques pave the way for improved long-term carbon budget calculations from organic-rich systems such as peatlands. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Restoration Of MEX SRC Images For Improved Topography: A New Image Product

    NASA Astrophysics Data System (ADS)

    Duxbury, T. C.

    2012-12-01

    Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape from shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express Mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high precision topography data from the SRC raw images is degraded because the camera is out of focus. The point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur free quality. The restored images offer a factor of about 3 improvement in geometric accuracy as well as identifying the smallest of features to significantly improve the stereo photogrammetric accuracy in producing digital elevation models. The difference between blurred and restored images provides a new derived image product that can provide improved feature recognition to increase spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291 (2009). [2] Neukum, et al., ESA SP 1291 (2009).
    Figure caption: A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, in the lower right. The restored images significantly improve feature recognition and thus the accuracy of derived topography.

  16. A retrospective evaluation of traffic forecasting techniques.

    DOT National Transportation Integrated Search

    2016-08-01

    Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...

  17. LLNL Location and Detection Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, S C; Harris, D B; Anderson, M L

    2003-07-16

    We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) develop network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions, with known locations, to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.

  18. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique.

    PubMed

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young

    2014-03-01

    This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated using a selected standard master model and a conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of the x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between the two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. The accuracy and precision of PUT dental models, used here to evaluate the performance of the oral scanner and subtractive RP technology, were acceptable. Because of recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models.

  19. Online Knowledge-Based Model for Big Data Topic Extraction.

    PubMed

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn from experience while maintaining a knowledge base, without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. Existing LML models, however, have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) that supports streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the model's learning pattern is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% on streaming data while halving the processing cost.

  20. Remote sensing of aquatic vegetation distribution in Taihu Lake using an improved classification tree with modified thresholds.

    PubMed

    Zhao, Dehua; Jiang, Hao; Yang, Tangwu; Cai, Ying; Xu, Delin; An, Shuqing

    2012-03-01

    Classification trees (CT) have been used successfully in the past to classify aquatic vegetation from spectral indices (SI) obtained from remotely-sensed images. However, applying CT models developed for certain image dates to other time periods within the same year or among different years can reduce the classification accuracy. In this study, we developed CT models with modified thresholds using extreme SI values (CT(m)) to improve the stability of the models when applying them to different time periods. A total of 903 ground-truth samples were obtained in September of 2009 and 2010 and classified as emergent, floating-leaf, or submerged vegetation or other cover types. Classification trees were developed for 2009 (Model-09) and 2010 (Model-10) using field samples and a combination of two images from winter and summer. Overall accuracies of these models were 92.8% and 94.9%, respectively, which confirmed the ability of CT analysis to map aquatic vegetation in Taihu Lake. However, Model-10 had only 58.9-71.6% classification accuracy and 31.1-58.3% agreement (i.e., pixels classified the same in the two maps) for aquatic vegetation when it was applied to image pairs from both a different time period in 2010 and a similar time period in 2009. We developed a method to estimate the effects of extrinsic (EF) and intrinsic (IF) factors on model uncertainty using MODIS images. Results indicated that 71.1% of the instability in classification between time periods was due to EF, which might include changes in atmospheric conditions, sun-view angle and water quality. The remainder was due to IF, such as phenological and growth status differences between time periods. The modified version of Model-10 (i.e., CT(m)) performed better than the traditional CT with different image dates. When applied to 2009 images, the CT(m) version of Model-10 had very similar thresholds and performance as Model-09, with overall accuracies of 92.8% and 90.5% for Model-09 and the CT(m) version of Model-10, respectively. CT(m) decreased the variability related to EF and IF and thereby improved the applicability of the models to different time periods. In both practice and theory, our results suggested that CT(m) was more stable than traditional CT models and could be used to map aquatic vegetation in time periods other than the one for which the model was developed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Towards accurate modeling of noncovalent interactions for protein rigidity analysis.

    PubMed

    Fox, Naomi; Streinu, Ileana

    2013-01-01

    Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu.
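
    The B-cubed score used above is easy to state concretely; the sketch below is a generic implementation of B-cubed precision, recall, and F-score for comparing two cluster decompositions (not KINARI's own scoring code).

        from collections import defaultdict

        # B-cubed: per-item precision/recall of the predicted cluster against
        # the reference cluster, averaged over items.
        def bcubed(pred, gold):
            """pred, gold: dicts mapping item -> cluster label."""
            pred_c, gold_c = defaultdict(set), defaultdict(set)
            for item, label in pred.items():
                pred_c[label].add(item)
            for item, label in gold.items():
                gold_c[label].add(item)
            precision = recall = 0.0
            for item in pred:
                overlap = len(pred_c[pred[item]] & gold_c[gold[item]])
                precision += overlap / len(pred_c[pred[item]])
                recall += overlap / len(gold_c[gold[item]])
            p, r = precision / len(pred), recall / len(pred)
            return p, r, 2.0 * p * r / (p + r)

        p, r, f = bcubed({1: "a", 2: "a", 3: "b"}, {1: "x", 2: "x", 3: "x"})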

  2. Towards accurate modeling of noncovalent interactions for protein rigidity analysis

    PubMed Central

    2013-01-01

    Background Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu. PMID:24564209

  3. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.

    PubMed

    Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy requirements in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. First, the transformation relationship between the subsystems is established using quaternions and the iterative closest point registration algorithm. A hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planned path. A closed-loop "kinematics + optics" hybrid motion control method is presented to improve the positioning accuracy of the system. Second, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.

  4. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    PubMed Central

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the characteristics of high risk and high accuracy requirements in cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. First, the transformation relationship between the subsystems is established using quaternions and the iterative closest point registration algorithm. A hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planned path. A closed-loop “kinematics + optics” hybrid motion control method is presented to improve the positioning accuracy of the system. Second, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948

  5. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
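
    The "decomposition and ensemble" principle can be sketched generically in a few lines; here a crude moving-average split stands in for CEEMD, and the GWO hyper-parameter search is omitted, so this is only the skeleton of the approach rather than the paper's method.

        import numpy as np
        from sklearn.svm import SVR

        def decompose(series):
            # Placeholder decomposition: successive moving-average "modes"
            # (CEEMD would be used in the actual method).
            modes, residual = [], series.astype(float)
            for w in (3, 7, 15):
                smooth = np.convolve(residual, np.ones(w) / w, mode="same")
                modes.append(residual - smooth)
                residual = smooth
            return modes + [residual]

        def lagged(x, lags=5):
            X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
            return X, x[lags:]

        series = np.sin(np.linspace(0, 20, 400)) + 0.1 * np.random.randn(400)
        forecast = 0.0
        for mode in decompose(series):
            X, y = lagged(mode)
            model = SVR(C=10.0).fit(X[:-1], y[:-1])     # train on all but last
            forecast += model.predict(X[-1:])[0]        # one step ahead, summed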

  6. Predictive accuracy of combined genetic and environmental risk scores.

    PubMed

    Dudbridge, Frank; Pashayan, Nora; Yang, Jian

    2018-02-01

    The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. © 2017 WILEY PERIODICALS, INC.
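
    A toy numerical check of the central idea, namely that adding a correlated environmental score to a polygenic score yields modest gains in discrimination; the effect sizes, the correlation, the prevalence, and the 0.6 weight are illustrative assumptions, not values from the paper.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n = 20_000
        g = rng.normal(size=n)                                   # polygenic score
        e = 0.3 * g + np.sqrt(1 - 0.3**2) * rng.normal(size=n)   # corr(g, e) = 0.3
        liability = 0.5 * g + 0.4 * e + rng.normal(size=n)
        case = liability > np.quantile(liability, 0.9)           # 10% prevalence

        print("simple sum :", roc_auc_score(case, g + e))
        print("weighted   :", roc_auc_score(case, g + 0.6 * e))  # down-weight e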

  7. Predictive accuracy of combined genetic and environmental risk scores

    PubMed Central

    Pashayan, Nora; Yang, Jian

    2017-01-01

    ABSTRACT The substantial heritability of most complex diseases suggests that genetic data could provide useful risk prediction. To date the performance of genetic risk scores has fallen short of the potential implied by heritability, but this can be explained by insufficient sample sizes for estimating highly polygenic models. When risk predictors already exist based on environment or lifestyle, two key questions are to what extent can they be improved by adding genetic information, and what is the ultimate potential of combined genetic and environmental risk scores? Here, we extend previous work on the predictive accuracy of polygenic scores to allow for an environmental score that may be correlated with the polygenic score, for example when the environmental factors mediate the genetic risk. We derive common measures of predictive accuracy and improvement as functions of the training sample size, chip heritabilities of disease and environmental score, and genetic correlation between disease and environmental risk factors. We consider simple addition of the two scores and a weighted sum that accounts for their correlation. Using examples from studies of cardiovascular disease and breast cancer, we show that improvements in discrimination are generally small but reasonable degrees of reclassification could be obtained with current sample sizes. Correlation between genetic and environmental scores has only minor effects on numerical results in realistic scenarios. In the longer term, as the accuracy of polygenic scores improves they will come to dominate the predictive accuracy compared to environmental scores. PMID:29178508

  8. Big Data: A Parallel Particle Swarm Optimization-Back-Propagation Neural Network Algorithm Based on MapReduce

    PubMed Central

    Cao, Jianfang; Cui, Hongyan; Shi, Hao; Jiao, Lijuan

    2016-01-01

    A back-propagation (BP) neural network can solve complicated random nonlinear mapping problems; therefore, it can be applied to a wide range of problems. However, as the sample size increases, the time required to train BP neural networks becomes lengthy. Moreover, the classification accuracy decreases as well. To improve the classification accuracy and runtime efficiency of the BP neural network algorithm, we proposed a parallel design and realization method for a particle swarm optimization (PSO)-optimized BP neural network based on MapReduce on the Hadoop platform using both the PSO algorithm and a parallel design. The PSO algorithm was used to optimize the BP neural network’s initial weights and thresholds and improve the accuracy of the classification algorithm. The MapReduce parallel programming model was utilized to achieve parallel processing of the BP algorithm, thereby solving the problems of hardware and communication overhead when the BP neural network addresses big data. Datasets on 5 different scales were constructed using the scene image library from the SUN Database. The classification accuracy of the parallel PSO-BP neural network algorithm is approximately 92%, and the system efficiency is approximately 0.85, which presents obvious advantages when processing big data. The algorithm proposed in this study demonstrated both higher classification accuracy and improved time efficiency, which represents a significant improvement obtained from applying parallel processing to an intelligent algorithm on big data. PMID:27304987

  9. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large and small signal modelling of transistors are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltages, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large signal model is extracted using multi-bias s-parameter measurements. The complete model is obtained using a hybrid of a multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) and a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.

  10. The RAPIDD ebola forecasting challenge: Synthesis and lessons learnt.

    PubMed

    Viboud, Cécile; Sun, Kaiyuan; Gaffey, Robert; Ajelli, Marco; Fumanelli, Laura; Merler, Stefano; Zhang, Qian; Chowell, Gerardo; Simonsen, Lone; Vespignani, Alessandro

    2018-03-01

    Infectious disease forecasting is gaining traction in the public health community; however, limited systematic comparisons of model performance exist. Here we present the results of a synthetic forecasting challenge inspired by the West African Ebola crisis in 2014-2015 and involving 16 international academic teams and US government agencies, and compare the predictive performance of 8 independent modeling approaches. Challenge participants were invited to predict 140 epidemiological targets across 5 different time points of 4 synthetic Ebola outbreaks, each involving different levels of interventions and "fog of war" in outbreak data made available for predictions. Prediction targets included 1-4 week-ahead case incidences, outbreak size, peak timing, and several natural history parameters. With respect to weekly case incidence targets, ensemble predictions based on a Bayesian average of the 8 participating models outperformed any individual model and did substantially better than a null auto-regressive model. There was no relationship between model complexity and prediction accuracy; however, the top performing models for short-term weekly incidence were reactive models with few parameters, fitted to a short and recent part of the outbreak. Individual model outputs and ensemble predictions improved with data accuracy and availability; by the second time point, just before the peak of the epidemic, estimates of final size were within 20% of the target. The 4th challenge scenario - mirroring an uncontrolled Ebola outbreak with substantial data reporting noise - was poorly predicted by all modeling teams. Overall, this synthetic forecasting challenge provided a deep understanding of model performance under controlled data and epidemiological conditions. We recommend such "peace time" forecasting challenges as key elements to improve coordination and inspire collaboration between modeling groups ahead of the next pandemic threat, and to assess model forecasting accuracy for a variety of known and hypothetical pathogens. Published by Elsevier B.V.

  11. Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method

    NASA Astrophysics Data System (ADS)

    Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang

    2017-06-01

    Seismic wavefield modeling is important for improving seismic data processing and interpretation. Wavefield calculations can become unstable when forward modeling of seismic waves uses large time steps over long simulation times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling by applying the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling with strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method overcomes the residual qSV-wave problem in seismic modeling of anisotropic media and maintains the stability of the wavefield propagation for large time steps.
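
    To make the structure-preserving idea concrete, the sketch below applies a symplectic (semi-implicit Euler) time update to the 1-D acoustic wave equation written in Hamiltonian form, u_t = v, v_t = c²u_xx; second-order space differences stand in for the paper's Fourier finite-difference operator, and all grid constants are assumed.

        import numpy as np

        nx, dx, dt, c = 200, 10.0, 1e-3, 2000.0   # grid and medium (assumed)
        x = (np.arange(nx) - nx // 2) * dx
        u = np.exp(-0.5 * (x / 50.0) ** 2)        # initial pressure pulse
        v = np.zeros(nx)                          # conjugate "momentum" field

        def laplacian(u):
            lap = np.zeros_like(u)
            lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
            return lap

        for _ in range(500):
            v += dt * c**2 * laplacian(u)  # momentum half of the split update
            u += dt * v                    # then position: this ordering is
                                           # what makes the map symplectic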

  12. Adjustment of regional regression models of urban-runoff quality using data for Chattanooga, Knoxville, and Nashville, Tennessee

    USGS Publications Warehouse

    Hoos, Anne B.; Patel, Anant R.

    1996-01-01

    Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. The standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing a significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.

  13. Alternative methods to determine headwater benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Y.S.; Perlack, R.D.; Sale, M.J.

    1997-11-10

    In 1992, the Federal Energy Regulatory Commission (FERC) began using a Flow Duration Analysis (FDA) methodology to assess headwater benefits in river basins where use of the Headwater Benefits Energy Gains (HWBEG) model may not result in significant improvements in modeling accuracy. The purpose of this study is to validate the accuracy and appropriateness of the FDA method for determining energy gains in less complex basins. This report presents the results of Oak Ridge National Laboratory's (ORNL's) validation of the FDA method. The validation is based on a comparison of energy gains using the FDA method with energy gains calculated using the HWBEG model. Comparisons of energy gains are made on a daily and monthly basis for a complex river basin (the Alabama River Basin) and a basin that is considered relatively simple hydrologically (the Stanislaus River Basin). In addition to validating the FDA method, ORNL was asked to suggest refinements and improvements to the FDA method. Refinements and improvements to the FDA method were carried out using the James River Basin as a test case.

  14. PCVs Estimation and their Impacts on Precise Orbit Determination of LEOs

    NASA Astrophysics Data System (ADS)

    Chunmei, Z.; WANG, X.

    2017-12-01

    In the last decade, GNSS-based precise orbit determination (POD), e.g., using GPS, has been considered one of the most efficient methods for deriving orbits of Low Earth Orbiters (LEOs) with demanding accuracy requirements. Earth gravity field recovery and related research require precise dynamic orbits of LEOs. With improvements in GNSS satellite orbit and clock accuracy, algorithm optimization, and the refinement of perturbation force models, the antenna phase-center variations (PCVs) of space-borne GNSS receivers have become an increasingly important factor affecting POD accuracy. A series of LEOs such as HY-2, ZY-3 and FY-3, carrying domestically developed space-borne GNSS receivers, have been launched in China in the past several years. Some of these LEOs carry dual-mode GNSS receivers for GPS and BDS signals. The reliable performance of these space-borne receivers has established an important foundation for future launches of Chinese gravity satellites. Therefore, we first evaluate the data quality of on-board GNSS measurements by examining integrity, multipath error, cycle slip ratio and other quality indices. We then determine the orbits of several LEOs at different altitudes by the reduced-dynamic orbit determination method and obtain the corresponding ionosphere-free carrier-phase post-fit residual time series. We then establish the PCV model by the ionosphere-free residual approach and analyze the effects of antenna phase-center variations on the orbits. It is shown that the orbit accuracy of LEO satellites is greatly improved after in-flight PCV calibration. Finally, focusing on the dual-mode receiver of the FY-3 satellite, we analyze the quality of onboard BDS data and then evaluate the accuracy of the FY-3 orbit determined using BDS measurements alone. The accuracy of BDS-based LEO orbits should improve further with the global completion of BDS by 2020.

  15. On central-difference and upwind schemes

    NASA Technical Reports Server (NTRS)

    Swanson, R. C.; Turkel, Eli

    1990-01-01

    A class of numerical dissipation models for central-difference schemes constructed with second- and fourth-difference terms is considered. The notion of matrix dissipation associated with upwind schemes is used to establish improved shock capturing capability for these models. In addition, conditions are given that guarantee that such dissipation models produce a Total Variation Diminishing (TVD) scheme. Appropriate switches for this type of model to ensure satisfaction of the TVD property are presented. Significant improvements in the accuracy of a central-difference scheme are demonstrated by computing both inviscid and viscous transonic airfoil flows.

  16. Estimation of genomic breeding values for residual feed intake in a multibreed cattle population.

    PubMed

    Khansefid, M; Pryce, J E; Bolormaa, S; Miller, S P; Wang, Z; Li, C; Goddard, M E

    2014-08-01

    Residual feed intake (RFI) is a measure of the efficiency of animals in feed utilization. The accuracies of GEBV for RFI could be improved by increasing the size of the reference population, and combining RFI records of different breeds is a way to do that. The aims of this study were to 1) develop a method for calculating GEBV in a multibreed population and 2) improve the accuracies of GEBV by using SNP associated with RFI. An alternative method for calculating accuracies of GEBV using genomic BLUP (GBLUP) equations is also described and compared to cross-validation tests. The dataset included RFI records and 606,096 SNP genotypes for 5,614 Bos taurus animals including 842 Holstein heifers and 2,009 Australian and 2,763 Canadian beef cattle. A range of models were tested for combining genotype and phenotype information from different breeds, and the best model included an overall effect of each SNP, an effect of each SNP specific to a breed, and a small residual polygenic effect defined by the pedigree. In this model, the Holsteins and some Angus cattle were combined into 1 "breed class" because they were the only cattle measured for RFI at an early age (6-9 mo of age) and fed a similar diet. The average empirical accuracy (0.31), estimated by calculating the correlation between GEBV and actual phenotypes divided by the square root of estimated heritability in 5-fold cross-validation tests, was close to that expected using the GBLUP equations (0.34). The average empirical and expected accuracies were 0.30 and 0.31, respectively, when the GEBV were estimated for each breed separately. Therefore, the across-breed reference population increased the accuracy of GEBV slightly, although the gain was greater for breeds with smaller numbers of individuals in the reference population (0.08 in Murray Grey and 0.11 in Hereford for empirical accuracy). In a second approach, SNP that were significantly (P < 0.001) associated with RFI in the beef cattle genomewide association studies were used to create an auxiliary genomic relationship matrix for estimating GEBV in Holstein heifers. The empirical (and expected) accuracy of GEBV within Holsteins increased from 0.33 (0.35) to 0.39 (0.36) and improved even more, to 0.43 (0.50), when using a multibreed reference population. Therefore, a multibreed reference population is a useful resource to find SNP with a greater than average association with RFI in 1 breed and use them to estimate GEBV in another breed.

  17. Improving the spectral measurement accuracy based on temperature distribution and spectra-temperature relationship

    NASA Astrophysics Data System (ADS)

    Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin

    2018-05-01

    Temperature is usually treated as a source of fluctuation in near-infrared spectral measurement, and chemometric methods have been studied extensively to correct for the effect of temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has investigated the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method is proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method is proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compare the prediction performance of PLS models based on the random sampling method and the proposed methods. Results from experimental studies showed that prediction performance was improved by the proposed methods. Therefore, the MTCS and DTCS methods are promising alternatives for improving prediction accuracy in near-infrared spectral measurement.
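
    A sketch of the multi-temperature calibration-set idea: make sure every temperature stratum is represented in the calibration set before fitting the PLS model. The bin edges, stratum counts, and synthetic data below are assumptions, not the paper's settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        temps = rng.uniform(20.0, 40.0, 120)          # sample temperatures [C]
        spectra = rng.normal(size=(120, 50))          # stand-in NIR spectra
        conc = spectra[:, :3].sum(axis=1) + 0.05 * temps  # synthetic analyte

        bins = np.digitize(temps, [25.0, 30.0, 35.0]) # four temperature strata
        calib = np.concatenate(
            [np.flatnonzero(bins == b)[:15] for b in range(4)]  # 15 per stratum
        )
        pls = PLSRegression(n_components=5).fit(spectra[calib], conc[calib])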

  18. A deep learning method for early screening of lung cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Kunpeng; Jiang, Huiqin; Ma, Ling; Gao, Jianbo; Yang, Xiaopeng

    2018-04-01

    Lung cancer is the leading cause of cancer-related deaths among men. In this paper, we propose a pulmonary nodule detection method for the early screening of lung cancer based on an improved AlexNet model. First, to maintain the same image quality as the existing B/S-architecture PACS system, we convert the original CT images into JPEG format by parsing the DICOM files. Second, given the large size and complex background of chest CT images, we design the convolutional neural network on the basis of the AlexNet model and a sparse convolution structure. Finally, we train our models with DIGITS, a training tool provided by NVIDIA. The main contribution of this paper is to apply a convolutional neural network to the early screening of lung cancer and to improve screening accuracy by combining the AlexNet model with the sparse convolution structure. We conducted a series of experiments on chest CT images using the proposed method; the resulting sensitivity and specificity indicate that the method can effectively improve the accuracy of early screening for lung cancer and has clinical significance.
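
    The abstract does not specify the exact architecture, so the following PyTorch sketch is only a plausible reading: an AlexNet-style feature extractor in which a 1x1 "squeeze" convolution stands in for the paper's sparse convolution structure, feeding a two-class (nodule vs. non-nodule) head. All layer sizes are assumed.

    ```python
    import torch
    import torch.nn as nn

    class NoduleNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
                nn.MaxPool2d(3, stride=2),
                nn.Conv2d(64, 16, kernel_size=1), nn.ReLU(),   # 1x1 squeeze keeps parameters sparse
                nn.Conv2d(16, 128, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((6, 6)),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(), nn.Dropout(0.5),
                nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
                nn.Linear(256, 2),                             # nodule vs. non-nodule logits
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    logits = NoduleNet()(torch.randn(4, 1, 227, 227))  # a batch of grayscale CT patches
    print(logits.shape)                                # -> torch.Size([4, 2])
    ```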

  19. RB Particle Filter Time Synchronization Algorithm Based on the DPM Model.

    PubMed

    Guo, Chunsheng; Shen, Jia; Sun, Yao; Ying, Na

    2015-09-03

    Time synchronization is essential for node localization, target tracking, data fusion, and various other Wireless Sensor Network (WSN) applications. To improve the estimation accuracy of the continuous clock offset and skew of mobile nodes in WSNs, we propose a novel time synchronization algorithm: the Rao-Blackwellised (RB) particle filter time synchronization algorithm based on the Dirichlet process mixture (DPM) model. In a state-space equation with a linear substructure, the state variables are divided into linear and non-linear variables by the RB particle filter algorithm. These two sets of variables are estimated using a Kalman filter and a particle filter, respectively, which improves computational efficiency compared with using the particle filter alone. In addition, the DPM model is used to describe the distribution of non-deterministic delays and to automatically adjust the number of Gaussian mixture components based on the observed data. This improves the estimation accuracy of the clock offset and skew, thereby achieving time synchronization. The time synchronization performance of the algorithm is validated by computer simulations and experimental measurements. The results show that the proposed algorithm achieves higher time synchronization precision than traditional time synchronization algorithms.
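
    The Rao-Blackwellised split can be made concrete with a small simulation. In the sketch below, each particle samples the component of a non-Gaussian delay mixture (a fixed two-component mixture stands in for the DPM), and, conditioned on that draw, the linear clock state (offset and skew) is updated exactly with a Kalman filter; particle weights use the resulting marginal likelihood. All model constants are assumed for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 200                           # time steps, particles
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # [offset, skew]: offset drifts by skew
    h = np.array([1.0, 0.0])                 # timestamps observe offset plus random delay
    Q = np.diag([1e-4, 1e-6]); R = 1e-3
    mu_d = np.array([0.0, 0.5]); sd_d = np.array([0.05, 0.2]); w_d = np.array([0.7, 0.3])

    x = np.array([0.1, 0.01])                # true clock state
    m = np.zeros((N, 2)); P = np.tile(np.eye(2), (N, 1, 1)); w = np.full(N, 1.0 / N)
    for t in range(T):
        x = F @ x + rng.multivariate_normal(np.zeros(2), Q)
        k = rng.choice(2, p=w_d)
        y = h @ x + rng.normal(mu_d[k], sd_d[k]) + rng.normal(0.0, np.sqrt(R))
        comp = rng.choice(2, size=N, p=w_d)  # particle step: sample each delay component
        for i in range(N):                   # Kalman step: exact update of the linear state
            m[i] = F @ m[i]
            P[i] = F @ P[i] @ F.T + Q
            S = h @ P[i] @ h + R + sd_d[comp[i]] ** 2
            K = P[i] @ h / S
            innov = y - h @ m[i] - mu_d[comp[i]]
            w[i] *= np.exp(-0.5 * innov ** 2 / S) / np.sqrt(S)  # marginal likelihood weight
            m[i] += K * innov
            P[i] -= np.outer(K, h @ P[i])
        w /= w.sum()
        if 1.0 / (w @ w) < N / 2:            # resample when effective sample size drops
            idx = rng.choice(N, size=N, p=w)
            m, P, w = m[idx], P[idx], np.full(N, 1.0 / N)
    print("posterior mean offset, skew:", w @ m)
    ```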

  20. Including αs1 casein gene information in genomic evaluations of French dairy goats.

    PubMed

    Carillier-Jacquin, Céline; Larroque, Hélène; Robert-Granié, Christèle

    2016-08-04

    Genomic best linear unbiased prediction methods assume that all markers explain the same fraction of the genetic variance and do not account effectively for genes with major effects, such as the αs1 casein polymorphism in dairy goats. In this study, we investigated methods to include the available αs1 casein genotype information in genomic evaluations of French dairy goats. First, the αs1 casein genotype was included as a fixed effect in genomic evaluation models based only on bucks that were genotyped at the αs1 casein locus. Less than 1 % of the females with phenotypes were genotyped at the αs1 casein gene. Thus, to incorporate these female phenotypes in the genomic evaluation, two methods that allow for this large number of missing αs1 casein genotypes were investigated. In the first, probabilities of each possible αs1 casein genotype were estimated for each female of unknown genotype using iterative peeling equations. The second method is based on a multiallelic gene content approach. For each model tested, we used three datasets, each divided into a training and a validation set: (1) the two-breed population (Alpine + Saanen), (2) the Alpine population, and (3) the Saanen population. The αs1 casein genotype had a significant effect on milk yield, fat content, and protein content. Including an αs1 casein effect in genetic and genomic evaluations based only on males with known αs1 casein genotypes improved accuracies (from 6 to 27 %). In genomic evaluations based on all female phenotypes, the gene content approach performed better than the other tested methods, but the improvement in accuracy over a genomic model without the αs1 casein effect was modest (from 1 to 14 %). Including the αs1 casein effect in a genomic evaluation model for French dairy goats is therefore possible and useful for improving accuracy. Difficulties in predicting the genotypes of ungenotyped animals limited the improvement in the accuracy of the resulting estimated breeding values.
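
    The gene content approach can be illustrated with a toy example: the count of a major αs1 casein allele (0, 1, or 2) is used as a covariate, and missing counts for ungenotyped animals are replaced by expectations propagated through the pedigree. The sketch below uses a simple parent-average fill-in as a stand-in for the full iterative peeling equations; the data and function names are invented.

    ```python
    import numpy as np

    def expected_gene_content(content, sire, dam, n_iter=10):
        """Replace missing allele counts (np.nan) by the mean of the parents' counts,
        iterating so that chains of ungenotyped ancestors are filled in gradually."""
        c = content.astype(float).copy()
        pop_mean = np.nanmean(c)
        for _ in range(n_iter):
            for i in np.flatnonzero(np.isnan(content)):
                s = c[sire[i]] if sire[i] >= 0 else pop_mean  # -1 marks an unknown parent
                d = c[dam[i]] if dam[i] >= 0 else pop_mean
                s = pop_mean if np.isnan(s) else s            # ungenotyped parent not yet filled
                d = pop_mean if np.isnan(d) else d
                c[i] = 0.5 * (s + d)
        return c

    # Animal 3 is ungenotyped; its sire (0) and dam (1) carry 2 and 1 copies.
    content = np.array([2.0, 1.0, 0.0, np.nan])
    sire = np.array([-1, -1, -1, 0])
    dam = np.array([-1, -1, -1, 1])
    print(expected_gene_content(content, sire, dam))  # -> [2.  1.  0.  1.5]
    ```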
