Sample records for model averaging method

  1. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives an overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
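
    Regardless of which covariance matrix enters the likelihood, the averaging weights are obtained from the criterion values in the same standard way, w_k ∝ exp(-ΔIC_k/2). A minimal sketch of that final step only; the criterion values below are hypothetical, not from the study:

    ```python
    import numpy as np

    def averaging_weights(ic_values):
        """Model-averaging weights from information-criterion values (AIC, AICc, BIC or KIC).

        w_k = exp(-0.5 * (IC_k - IC_min)) / sum_j exp(-0.5 * (IC_j - IC_min))
        """
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()            # differences to the best (lowest) criterion value
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Example with three alternative conceptual models and hypothetical BIC values:
    # large deltas make the best model dominate, which is the behavior the paper addresses.
    print(averaging_weights([210.3, 214.8, 230.1]))
    ```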

  2. A comparative analysis of 9 multi-model averaging approaches in hydrological continuous streamflow simulation

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc

    2015-10-01

    This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined using nine averaging methods: the simple arithmetic mean (SAM), Akaike information criterion averaging (AICA), Bates-Granger averaging (BGA), Bayes information criterion averaging (BICA), Bayesian model averaging (BMA), the Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC), and averaging by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of weighted methods to that of individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging with these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
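
    As a rough illustration of one of the better-performing schemes, the sketch below computes combination weights by unconstrained least squares (in the spirit of Granger-Ramanathan variant A, as I understand it) and scores the combined hydrograph with the Nash-Sutcliffe Efficiency. The synthetic flows and the 12-member ensemble are invented placeholders, not the study's data:

    ```python
    import numpy as np

    def gra_weights(sim, obs):
        """Least-squares combination weights, no intercept and no sum-to-one constraint.

        sim : (n_time, n_members) matrix of simulated flows
        obs : (n_time,) observed flows
        """
        w, *_ = np.linalg.lstsq(sim, obs, rcond=None)
        return w

    def nse(sim, obs):
        """Nash-Sutcliffe Efficiency of a single hydrograph."""
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Hypothetical ensemble of 12 member hydrographs (4 models x 3 calibration metrics).
    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 5.0, size=365)
    sim = obs[:, None] + rng.normal(0.0, 2.0, size=(365, 12))
    w = gra_weights(sim, obs)
    print(nse(sim @ w, obs))   # combined hydrograph scored in "calibration" mode
    ```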

  3. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    PubMed

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
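
    The core of DMA is a recursive update of the model probabilities with a forgetting factor; the dynamic Occam's window proposed here additionally prunes which models are carried forward at each step. A hedged sketch of the weight recursion only, with placeholder predictive densities:

    ```python
    import numpy as np

    def dma_weight_update(prev_weights, pred_densities, alpha=0.99):
        """One step of a Dynamic Model Averaging weight recursion.

        prev_weights   : posterior model probabilities at time t-1
        pred_densities : each model's one-step-ahead predictive density of y_t
        alpha          : forgetting factor (alpha = 1 recovers static recursive BMA)
        """
        w = np.asarray(prev_weights, float) ** alpha      # prediction (forgetting) step
        w /= w.sum()
        w *= np.asarray(pred_densities, float)            # update with the new observation
        return w / w.sum()

    # Hypothetical three-model example for a single time step.
    print(dma_weight_update([0.5, 0.3, 0.2], [0.8, 1.4, 0.3]))
    ```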

  4. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    PubMed Central

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859

  5. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  6. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  7. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  8. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    NASA Astrophysics Data System (ADS)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates. It has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with a tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling that achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs to conduct a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions with the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling

  9. Measured values of coal mine stopping resistance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oswald, N.; Prosser, B.; Ruckman, R.

    2008-12-15

    As coal mines become larger, the number of stoppings in the ventilation system increases. Each stopping represents a potential leakage path which must be adequately represented in the ventilation model. Stopping resistance can be calculated using two methods: the USBM method, used to determine a resistance for a single stopping, and the MVS technique, in which an average resistance is calculated for multiple stoppings. Using MVS data collected from ventilation surveys of different subsurface coal mines, average resistances were determined for stoppings in poor, average, good, and excellent conditions. The average stopping resistances were calculated for concrete block and Kennedy stoppings. Using the average stopping resistance, measured and calculated with the MVS method, provides a ventilation modeling tool which can be used to construct more accurate and useful ventilation models. 3 refs., 3 figs.

  10. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter could vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
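
    A hedged sketch of the selection-and-averaging idea described above: grid points whose ensemble spread is small (i.e., where the observations constrained the parameter well) are kept, and their posterior values are averaged into one global parameter. The quantile threshold and the synthetic field are illustrative assumptions, not the paper's algorithm settings:

    ```python
    import numpy as np

    def adaptive_spatial_average(post_param, ens_spread, keep_quantile=0.5):
        """Illustrative 'good value' selection and averaging for parameter estimation.

        post_param : (n_points,) spatially varying posterior parameter estimates
        ens_spread : (n_points,) ensemble spread of the parameter at each grid point
        """
        post_param = np.asarray(post_param, float)
        ens_spread = np.asarray(ens_spread, float)
        good = ens_spread <= np.quantile(ens_spread, keep_quantile)  # well-constrained points
        return post_param[good].mean()                               # single global value

    # Hypothetical field of 1000 grid-point estimates of a single model parameter.
    rng = np.random.default_rng(1)
    theta = 0.7 + rng.normal(0.0, 0.1, 1000)
    spread = rng.uniform(0.01, 0.3, 1000)
    print(adaptive_spatial_average(theta, spread))
    ```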

  11. Assessing the Resolution Adaptability of the Zhang-McFarlane Cumulus Parameterization With Spatial and Temporal Averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yun, Yuxing; Fan, Jiwen; Xiao, Heng

    Realistic modeling of cumulus convection at fine model resolutions (a few to a few tens of km) is problematic since it requires the cumulus scheme to adapt to a higher resolution than it was originally designed for (~100 km). To solve this problem, we implement the spatial averaging method proposed in Xiao et al. (2015) and also propose a temporal averaging method for the large-scale convective available potential energy (CAPE) tendency in the Zhang-McFarlane (ZM) cumulus parameterization. The resolution adaptability of the original ZM scheme, the scheme with spatial averaging, and the scheme with both spatial and temporal averaging at 4-32 km resolution is assessed using the Weather Research and Forecasting (WRF) model, by comparing with Cloud Resolving Model (CRM) results. We find that the original ZM scheme has very poor resolution adaptability, with sub-grid convective transport and precipitation increasing significantly as the resolution increases. The spatial averaging method improves the resolution adaptability of the ZM scheme and better conserves the total transport of moist static energy and total precipitation. With the temporal averaging method, the resolution adaptability of the scheme is further improved, with sub-grid convective precipitation becoming smaller than resolved precipitation for resolutions higher than 8 km, which is consistent with the results from the CRM simulation. Both the spatial distribution and time series of precipitation are improved with the spatial and temporal averaging methods. The results may be helpful for developing resolution adaptability for other cumulus parameterizations that are based on the quasi-equilibrium assumption.

  12. Forecasting coconut production in the Philippines with ARIMA model

    NASA Astrophysics Data System (ADS)

    Lim, Cristina Teresa

    2015-02-01

    The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation (PACF) functions were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. Validity of the model was tested using standard statistical techniques. The forecasting power of the fitted autoregressive moving average (ARMA) model was used to forecast coconut production for the next eight years.
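
    A minimal sketch of this Box-Jenkins workflow using statsmodels. The production series below is synthetic and the (1, 1, 1) order is an arbitrary illustrative choice; the study identifies the order from the ACF/PACF of the real data instead:

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical annual production figures (million tonnes), 1990-2012;
    # the published study uses the official Philippine statistics.
    years = pd.period_range("1990", "2012", freq="Y")
    production = pd.Series(
        14.0 + 0.1 * np.arange(len(years))
        + np.random.default_rng(2).normal(0.0, 0.3, len(years)),
        index=years,
    )

    model = ARIMA(production, order=(1, 1, 1))   # illustrative (p, d, q) choice
    fit = model.fit()
    print(fit.summary())
    print(fit.forecast(steps=8))                 # forecasts for the next eight years
    ```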

  13. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment

    PubMed Central

    2014-01-01

    Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
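
    A hedged sketch of the clustering (consensus) idea used for global quality: each model is scored by its average pairwise similarity to the rest of the pool. The similarity matrix below is hypothetical; in practice it would come from a structural comparison score such as GDT-TS or TM-score computed externally:

    ```python
    import numpy as np

    def global_quality_by_consensus(similarity):
        """Global quality of each model as its average pairwise similarity to the others.

        similarity : (n_models, n_models) symmetric matrix of pairwise similarity scores
        """
        S = np.asarray(similarity, float)
        n = S.shape[0]
        off_diag_sum = S.sum(axis=1) - np.diag(S)   # exclude self-similarity
        return off_diag_sum / (n - 1)

    # Hypothetical 4-model pool: models 0-2 agree with each other, model 3 is an outlier.
    S = np.array([[1.00, 0.80, 0.70, 0.30],
                  [0.80, 1.00, 0.75, 0.25],
                  [0.70, 0.75, 1.00, 0.20],
                  [0.30, 0.25, 0.20, 1.00]])
    print(global_quality_by_consensus(S))
    ```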

  14. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment.

    PubMed

    Cao, Renzhi; Wang, Zheng; Cheng, Jianlin

    2014-04-15

    Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.

  15. Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay Originating NASA

    NASA Technical Reports Server (NTRS)

    Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.

    2017-01-01

    Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.

  16. Elucidating fluctuating diffusivity in center-of-mass motion of polymer models with time-averaged mean-square-displacement tensor

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige

    2017-10-01

    There have been increasing reports that the diffusion coefficient of macromolecules depends on time and fluctuates randomly. Here a method is developed to elucidate this fluctuating diffusivity from trajectory data. Time-averaged mean-square displacement (MSD), a common tool in single-particle-tracking (SPT) experiments, is generalized to a second-order tensor with which both magnitude and orientation fluctuations of the diffusivity can be clearly detected. This method is used to analyze the center-of-mass motion of four fundamental polymer models: the Rouse model, the Zimm model, a reptation model, and a rigid rodlike polymer. It is found that these models exhibit distinctly different types of magnitude and orientation fluctuations of diffusivity. This is an advantage of the present method over previous ones, such as the ergodicity-breaking parameter and a non-Gaussian parameter, because with either of these parameters it is difficult to distinguish the dynamics of the four polymer models. Also, the present method of a time-averaged MSD tensor could be used to analyze trajectory data obtained in SPT experiments.
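
    A minimal sketch of the time-averaged MSD tensor for a single trajectory, M_ij(Δ) = <[x_i(t+Δ) - x_i(t)][x_j(t+Δ) - x_j(t)]>_t, whose trace recovers the ordinary time-averaged MSD and whose eigenvectors indicate the orientation of the diffusivity. The random-walk trajectory is a synthetic stand-in for centre-of-mass data:

    ```python
    import numpy as np

    def time_averaged_msd_tensor(traj, lag):
        """Time-averaged MSD tensor of a single trajectory at a given lag.

        traj : (n_steps, d) positions of the centre of mass
        lag  : lag time in steps
        """
        disp = traj[lag:] - traj[:-lag]          # displacements over the lag
        return disp.T @ disp / disp.shape[0]     # (d, d) tensor <dx_i dx_j>_t

    # Hypothetical 2-D random walk standing in for polymer centre-of-mass data.
    rng = np.random.default_rng(3)
    traj = np.cumsum(rng.normal(size=(10000, 2)), axis=0)
    M = time_averaged_msd_tensor(traj, lag=10)
    print(M, np.trace(M))   # trace = usual time-averaged MSD at this lag
    ```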

  17. Spatial Interpretation of Tower, Chamber and Modelled Terrestrial Fluxes in a Tropical Forest Plantation

    NASA Astrophysics Data System (ADS)

    Whidden, E.; Roulet, N.

    2003-04-01

    Interpretation of a site average terrestrial flux may be complicated in the presence of inhomogeneities. Inhomogeneity may invalidate the basic assumptions of aerodynamic flux measurement. Chamber measurement may miss or misinterpret important temporal or spatial anomalies. Models may smooth over important nonlinearities depending on the scale of application. Although inhomogeneity is usually seen as a design problem, many sites have spatial variance that may have a large impact on net flux, and in many cases a large homogeneous surface is unrealistic. The sensitivity and validity of a site average flux are investigated in the presence of an inhomogeneous site. Directional differences are used to evaluate the validity of aerodynamic methods and the computation of a site average tower flux. Empirical and modelling methods are used to interpret the spatial controls on flux. An ecosystem model, Ecosys, is used to assess spatial length scales appropriate to the ecophysiologic controls. A diffusion model is used to compare tower, chamber, and model data, by spatially weighting contributions within the tower footprint. Diffusion model weighting is also used to improve tower flux estimates by producing footprint averaged ecological parameters (soil moisture, soil temperature, etc.). Although uncertainty remains in the validity of measurement methods and the accuracy of diffusion models, a detailed spatial interpretation is required at an inhomogeneous site. Flux estimation between methods improves with spatial interpretation, showing the importance to an estimation of a site average flux. Small-scale temporal and spatial anomalies may be relatively unimportant to overall flux, but accounting for medium-scale differences in ecophysiological controls is necessary. A combination of measurements and modelling can be used to define the appropriate time and length scales of significant non-linearity due to inhomogeneity.

  18. Dimension reduction method for SPH equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tartakovsky, Alexandre M.; Scheibe, Timothy D.

    2011-08-26

    A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, a time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of the effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing accurate approximation of the solution and accurate prediction of the average behavior of the system.

  19. Combining forecast weights: Why and how?

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Kok-Haur, Ng; Hock-Eam, Lim

    2012-09-01

    This paper proposes a procedure called forecast weight averaging, which is a specific combination of forecast weights obtained from different methods of constructing forecast weights, for the purpose of improving the accuracy of pseudo out-of-sample forecasting. It is found that, under certain specified conditions, forecast weight averaging can lower the mean squared forecast error obtained from model averaging. In addition, we show that in a linear and homoskedastic environment, this superior predictive ability of forecast weight averaging holds true irrespective of whether the coefficients are tested by a t statistic or a z statistic, provided the significance level is within the 10% range. By theoretical proofs and a simulation study, we show that model averaging methods such as variance model averaging, simple model averaging and standard error model averaging each produce a mean squared forecast error larger than that of forecast weight averaging. Finally, this result also holds true marginally when applied to business and economic empirical data sets: the Gross Domestic Product (GDP) growth rate, Consumer Price Index (CPI) and Average Lending Rate (ALR) of Malaysia.

  20. Application of Bayesian model averaging to measurements of the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Parkinson, David; Liddle, Andrew R.

    2010-11-01

    Cosmological parameter uncertainties are often stated assuming a particular model, neglecting the model uncertainty, even when Bayesian model selection is unable to identify a conclusive best model. Bayesian model averaging is a method for assessing parameter uncertainties in situations where there is also uncertainty in the underlying model. We apply model averaging to the estimation of the parameters associated with the primordial power spectra of curvature and tensor perturbations. We use CosmoNest and MultiNest to compute the model evidences and posteriors, using cosmic microwave data from WMAP, ACBAR, BOOMERanG, and CBI, plus large-scale structure data from the SDSS DR7. We find that the model-averaged 95% credible interval for the spectral index using all of the data is 0.940
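
    A hedged sketch of how posterior samples from competing models can be combined into a model-averaged posterior using their evidences as weights (equal model priors assumed); the chains and log-evidences below are invented placeholders rather than the CosmoNest/MultiNest outputs used in the paper:

    ```python
    import numpy as np

    def model_averaged_samples(chains, log_evidences, n_draws=10000, seed=0):
        """Draw from the model-averaged posterior of a shared parameter (e.g. n_s).

        chains        : list of 1-D arrays of posterior samples, one per model
        log_evidences : log marginal likelihoods of the models (equal model priors)
        """
        rng = np.random.default_rng(seed)
        logz = np.asarray(log_evidences, float)
        w = np.exp(logz - logz.max())
        w /= w.sum()                                   # posterior model probabilities
        picks = rng.choice(len(chains), size=n_draws, p=w)
        return np.array([rng.choice(chains[k]) for k in picks])

    # Hypothetical spectral-index chains from two competing primordial power-spectrum models.
    chains = [np.random.default_rng(4).normal(0.963, 0.012, 5000),
              np.random.default_rng(5).normal(0.975, 0.015, 5000)]
    samples = model_averaged_samples(chains, log_evidences=[-10.2, -11.5])
    print(np.percentile(samples, [2.5, 97.5]))   # model-averaged 95% credible interval
    ```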

  1. Comparing daily temperature averaging methods: the role of surface and atmosphere variables in determining spatial and seasonal variability

    NASA Astrophysics Data System (ADS)

    Bernhardt, Jase; Carleton, Andrew M.

    2018-05-01

    The two main methods for determining the average daily near-surface air temperature, twice-daily averaging (i.e., [Tmax+Tmin]/2) and hourly averaging (i.e., the average of 24 hourly temperature measurements), typically show differences associated with the asymmetry of the daily temperature curve. To quantify the relative influence of several land surface and atmosphere variables on the two temperature averaging methods, we correlate data for 215 weather stations across the Contiguous United States (CONUS) for the period 1981-2010 with the differences between the two temperature-averaging methods. The variables are land use-land cover (LULC) type, soil moisture, snow cover, cloud cover, atmospheric moisture (i.e., specific humidity, dew point temperature), and precipitation. Multiple linear regression models explain the spatial and monthly variations in the difference between the two temperature-averaging methods. We find statistically significant correlations between both the land surface and atmosphere variables studied with the difference between temperature-averaging methods, especially for the extreme (i.e., summer, winter) seasons (adjusted R2 > 0.50). Models considering stations with certain LULC types, particularly forest and developed land, have adjusted R2 values > 0.70, indicating that both surface and atmosphere variables control the daily temperature curve and its asymmetry. This study improves our understanding of the role of surface and near-surface conditions in modifying thermal climates of the CONUS for a wide range of environments, and their likely importance as anthropogenic forcings—notably LULC changes and greenhouse gas emissions—continues.
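
    The two averaging methods being compared are simple to state; a minimal sketch for a single station-day follows (the diurnal cycle below is synthetic):

    ```python
    import numpy as np

    def averaging_difference(hourly_temps):
        """Difference between the two daily averaging methods for one station-day.

        hourly_temps : 24 hourly near-surface air temperatures (deg C)
        Returns (twice-daily average, hourly average, their difference).
        """
        t = np.asarray(hourly_temps, float)
        twice_daily = 0.5 * (t.max() + t.min())   # (Tmax + Tmin) / 2
        hourly = t.mean()                         # mean of 24 hourly values
        return twice_daily, hourly, twice_daily - hourly

    # Hypothetical day with an asymmetric diurnal cycle (warm afternoon, long cool night).
    hours = np.arange(24)
    temps = 15 + 8 * np.exp(-((hours - 15) / 4.0) ** 2)
    print(averaging_difference(temps))
    ```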

  2. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider the anatomical European human phantoms and plane-wave in the 2GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  3. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

    Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD dose estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO(2) dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.

  4. Dynamics of a prey-predator system under Poisson white noise excitation

    NASA Astrophysics Data System (ADS)

    Pan, Shan-Shan; Zhu, Wei-Qiu

    2014-10-01

    The classical Lotka-Volterra (LV) model is a well-known mathematical model for prey-predator ecosystems. In the present paper, the pulse-type version of stochastic LV model, in which the effect of a random natural environment has been modeled as Poisson white noise, is investigated by using the stochastic averaging method. The averaged generalized Itô stochastic differential equation and Fokker-Planck-Kolmogorov (FPK) equation are derived for prey-predator ecosystem driven by Poisson white noise. Approximate stationary solution for the averaged generalized FPK equation is obtained by using the perturbation method. The effect of prey self-competition parameter ɛ2 s on ecosystem behavior is evaluated. The analytical result is confirmed by corresponding Monte Carlo (MC) simulation.

  5. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  6. Variable speed limit strategies analysis with link transmission model on urban expressway

    NASA Astrophysics Data System (ADS)

    Li, Shubin; Cao, Danni

    2018-02-01

    The variable speed limit (VSL) is a kind of active traffic management method. Most VSL strategies are used for expressway traffic flow control in order to ensure traffic safety. The urban expressway system, however, is the main artery, carrying most of the traffic pressure, and it has traffic characteristics similar to those of the expressways between cities. In this paper, an improved link transmission model (LTM) combined with VSL strategies is proposed for the urban expressway network. The model can simulate the movement of vehicles and of the shock wave, and it balances computational cost and accuracy well. Furthermore, the optimal VSL strategy can be identified based on the simulation method, which can provide management strategies for managers. Finally, a simple example is given to illustrate the model and method. The indexes selected in the simulation are the average density, the average speed and the average flow on the traffic network. The simulation results show that the proposed model and method are feasible. The VSL strategy can effectively alleviate traffic congestion in some cases and greatly promote the efficiency of the transportation system.

  7. The influence of averaging procedure on the accuracy of IVIVC predictions: immediate release dosage form case study.

    PubMed

    Ostrowski, Michał; Wilkowska, Ewa; Baczek, Tomasz

    2010-12-01

    In vivo-in vitro correlation (IVIVC) is an effective tool to predict the absorption behavior of active substances from pharmaceutical dosage forms. A model for an immediate release dosage form containing amoxicillin was used in the present study to check whether the calculation method for the absorption profiles can influence the final results. The comparison showed that averaging individual absorption profiles obtained by the Wagner-Nelson (WN) conversion method can cause the model to lose its discrimination properties. The approach considering individual plasma concentration versus time profiles made it possible to average the profiles prior to the WN conversion; in turn, that made it possible to find differences between dispersible tablets and capsules. It was concluded that, in the case of an immediate release dosage form, the decision to use an averaging method should be based on the individual situation; however, it seems that the influence of such a procedure on the discrimination properties of the model is then more significant.

  8. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
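
    A hedged sketch of the convolution step only: a TPS-calculated profile is blurred with a detector response kernel before comparison with the chamber-measured profile, so that both sides carry the same volume averaging. The Gaussian kernel and its width are illustrative assumptions, not the CC13 response function used in the study:

    ```python
    import numpy as np

    def convolve_with_detector(profile, x_mm, chamber_sigma_mm=2.0):
        """Convolve a calculated beam profile with a detector response kernel.

        profile : dose values on the uniform grid x_mm
        The Gaussian kernel width is an illustrative stand-in for the chamber volume.
        """
        dx = x_mm[1] - x_mm[0]
        half = int(np.ceil(4 * chamber_sigma_mm / dx))
        xk = np.arange(-half, half + 1) * dx
        kernel = np.exp(-0.5 * (xk / chamber_sigma_mm) ** 2)
        kernel /= kernel.sum()
        return np.convolve(profile, kernel, mode="same")

    # In the reoptimization loop, penumbra parameters would be adjusted until the
    # convolved calculated profile matches the chamber-measured profile.
    x = np.linspace(-50, 50, 501)
    ideal = 0.5 * (np.tanh((25 - np.abs(x)) / 1.5) + 1)   # hypothetical sharp-penumbra profile
    blurred = convolve_with_detector(ideal, x)
    print(blurred[240:260].round(3))
    ```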

  9. Phase averaging method for the modeling of the multiprobe and cutaneous cryosurgery

    NASA Astrophysics Data System (ADS)

    Shilnikov, K. E.; Kudryashov, N. A.; Gaiur, I. Y.

    2017-12-01

    In this paper we consider the problem of planning and optimizing cutaneous and multiprobe cryosurgery operations. An explicit scheme based on a finite volume approximation of the phase-averaged Pennes bioheat transfer model is applied. The flux relaxation method is used to improve the stability of the scheme. The skin tissue is treated as a strongly inhomogeneous medium. The computerized planning tool is tested on model cryotip-based and cutaneous cryosurgery problems. For the case of cutaneous cryosurgery, mounting an additional freezing element is studied as an approach to optimizing the propagation of the cellular necrosis front.

  10. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics, designed to capture this property, predicts power-law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  11. Metainference: A Bayesian inference method for heterogeneous systems.

    PubMed

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors.

  12. Accounting for uncertainty in health economic decision models by using model averaging.

    PubMed

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-04-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.

  13. Comparative Assessment of Models and Methods To Calculate Grid Electricity Emissions.

    PubMed

    Ryan, Nicole A; Johnson, Jeremiah X; Keoleian, Gregory A

    2016-09-06

    Due to the complexity of power systems, tracking emissions attributable to a specific electrical load is a daunting challenge but essential for many environmental impact studies. Currently, no consensus exists on appropriate methods for quantifying emissions from particular electricity loads. This paper reviews a wide range of the existing methods, detailing their functionality, tractability, and appropriate use. We identified and reviewed 32 methods and models and classified them into two distinct categories: empirical data and relationship models and power system optimization models. To illustrate the impact of method selection, we calculate the CO2 combustion emissions factors associated with electric-vehicle charging using 10 methods at nine charging station locations around the United States. Across the methods, we found an up to 68% difference from the mean CO2 emissions factor for a given charging site among both marginal and average emissions factors and up to a 63% difference from the average across average emissions factors. Our results underscore the importance of method selection and the need for a consensus on approaches appropriate for particular loads and research questions being addressed in order to achieve results that are more consistent across studies and allow for soundly supported policy decisions. The paper addresses this issue by offering a set of recommendations for determining an appropriate model type on the basis of the load characteristics and study objectives.

  14. Separation of antibody drug conjugate species by RPLC: A generic method development approach.

    PubMed

    Fekete, Szabolcs; Molnár, Imre; Guillarme, Davy

    2017-04-15

    This study reports the use of modelling software for the successful method development of an IgG1 cysteine-conjugated antibody drug conjugate (ADC) in RPLC. The goal of such a method is to be able to calculate the average drug to antibody ratio (DAR) of an ADC product. A generic method development strategy was proposed, including the optimization of mobile phase temperature, gradient profile and mobile phase ternary composition. For the first time, 3D retention modelling was presented for a large therapeutic protein. Based on a limited number of preliminary experiments, a fast and efficient separation of the DAR species of a commercial ADC sample, namely brentuximab vedotin, was achieved. The prediction offered by the retention model was found to be highly reliable, with an average error of retention time prediction always lower than 0.5% using the 2D or 3D retention models. For routine purposes, four to six initial experiments were required to build the 2D retention models, while 12 experiments were recommended to create the 3D model. In the end, RPLC can therefore be considered a good method for estimating the average DAR of an ADC, based on the observed peak area ratios of the RPLC chromatogram of the reduced ADC sample.
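
    Once the DAR species are resolved, the average DAR follows from the relative peak areas as a weighted mean. A minimal sketch with hypothetical areas; for a reduced, cysteine-linked ADC the assignment of chromatographic peaks to drug loads has to be established first:

    ```python
    def average_dar(peak_areas_by_load):
        """Average drug-to-antibody ratio from relative peak areas.

        peak_areas_by_load : dict mapping drug load k -> integrated peak area of the
                             species carrying k drugs (arbitrary area units).
        DAR_avg = sum(k * A_k) / sum(A_k)
        """
        total = sum(peak_areas_by_load.values())
        return sum(k * a for k, a in peak_areas_by_load.items()) / total

    # Hypothetical area distribution for the DAR 0/2/4/6/8 species of an ADC.
    print(average_dar({0: 5.0, 2: 20.0, 4: 35.0, 6: 25.0, 8: 15.0}))  # ~4.5
    ```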

  15. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  16. A FEniCS-based programming framework for modeling turbulent flow by the Reynolds-averaged Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter; Wells, Garth N.

    2011-09-01

    Finding an appropriate turbulence model for a given flow case usually calls for extensive experimentation with both models and numerical solution methods. This work presents the design and implementation of a flexible, programmable software framework for assisting with numerical experiments in computational turbulence. The framework targets Reynolds-averaged Navier-Stokes models, discretized by finite element methods. The novel implementation makes use of Python and the FEniCS package, the combination of which leads to compact and reusable code, where model- and solver-specific code resemble closely the mathematical formulation of equations and algorithms. The presented ideas and programming techniques are also applicable to other fields that involve systems of nonlinear partial differential equations. We demonstrate the framework in two applications and investigate the impact of various linearizations on the convergence properties of nonlinear solvers for a Reynolds-averaged Navier-Stokes model.

  17. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
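
    A minimal sketch of the standard Richardson extrapolation and grid convergence index (GCI) calculation referred to above, applied to a time-averaged quantity computed on three systematically refined grids; the values and safety factor are illustrative, not from the study:

    ```python
    import numpy as np

    def richardson_gci(f_fine, f_medium, f_coarse, r, safety_factor=1.25):
        """Observed order, Richardson-extrapolated value and fine-grid GCI.

        f_fine, f_medium, f_coarse : a time-averaged quantity on three grids
        r : constant grid refinement ratio (h_coarse/h_medium = h_medium/h_fine)
        """
        p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)   # Richardson extrapolation
        rel_err = abs((f_medium - f_fine) / f_fine)
        gci_fine = safety_factor * rel_err / (r**p - 1.0)       # relative uncertainty estimate
        return p, f_exact, gci_fine

    # Hypothetical time-averaged bed voidage from three successively refined TFM grids.
    print(richardson_gci(0.452, 0.447, 0.436, r=2.0))
    ```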

  18. Calibration of a texture-based model of a ground-water flow system, western San Joaquin Valley, California

    USGS Publications Warehouse

    Phillips, Steven P.; Belitz, Kenneth

    1991-01-01

    The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
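
    A hedged sketch of how a cell's equivalent hydraulic conductivity can be formed from its coarse-sediment fraction and the two end-member conductivities under the three averaging rules discussed; the end-member values are illustrative, not the calibrated Kcoarse and Kfine:

    ```python
    import numpy as np

    def equivalent_k(frac_coarse, k_coarse, k_fine, method="arithmetic"):
        """Equivalent hydraulic conductivity of a cell from its coarse-sediment fraction.

        frac_coarse : fraction of coarse-textured sediment (0-1)
        method      : 'arithmetic' (layers in parallel), 'harmonic' (layers in series)
                      or 'geometric' averaging of the two end members.
        """
        f = np.asarray(frac_coarse, float)
        if method == "arithmetic":
            return f * k_coarse + (1 - f) * k_fine
        if method == "harmonic":
            return 1.0 / (f / k_coarse + (1 - f) / k_fine)
        if method == "geometric":
            return k_coarse**f * k_fine**(1 - f)
        raise ValueError(method)

    # Hypothetical end members (m/day): horizontal K from arithmetic averaging,
    # vertical K from geometric averaging, as favored by the calibration.
    print(equivalent_k(0.6, k_coarse=10.0, k_fine=0.01, method="arithmetic"))
    print(equivalent_k(0.6, k_coarse=10.0, k_fine=0.01, method="geometric"))
    ```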

  19. Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models

    NASA Astrophysics Data System (ADS)

    Shen, Haibo; Zhou, Weican; Zhao, Haikun

    2017-09-01

    Based on the Coupled Model Inter-comparison Project 5 (CMIP5) models, tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce the ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as well as the corresponding TC simulations from the downscaling system. Results indicate that the BMA method shows a significant advantage over the EMA. In addition, the impact of model selection on the BMA method is examined. For each factor, ten models with better performance are selected from the 30 CMIP5 models and then combined by BMA. The resulting ensemble environmental factors and simulated TC activity are similar to the results from the 30-model BMA, which verifies that the BMA method assigns each model in the ensemble a weight based on its predictive skill; the presence of poorly performing models therefore does not appreciably degrade the BMA results, and the ensemble outcomes are improved. Finally, based on the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors, i.e., sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and the large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.

  20. Nonlinear ARMA models for the D(st) index and their physical interpretation

    NASA Technical Reports Server (NTRS)

    Vassiliadis, D.; Klimas, A. J.; Baker, D. N.

    1996-01-01

    Time series models successfully reproduce or predict geomagnetic activity indices from solar wind parameters. A method is presented that converts a type of nonlinear filter, the nonlinear Autoregressive Moving Average (ARMA) model to the nonlinear damped oscillator physical model. The oscillator parameters, the growth and decay, the oscillation frequencies and the coupling strength to the input are derived from the filter coefficients. Mathematical methods are derived to obtain unique and consistent filter coefficients while keeping the prediction error low. These methods are applied to an oscillator model for the Dst geomagnetic index driven by the solar wind input. A data set is examined in two ways: the model parameters are calculated as averages over short time intervals, and a nonlinear ARMA model is calculated and the model parameters are derived as a function of the phase space.

  1. Accounting for uncertainty in health economic decision models by using model averaging

    PubMed Central

    Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D

    2009-01-01

    Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
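
    The information-criterion weighting mentioned above can be illustrated with a short Python sketch. The AIC values and cost estimates below are hypothetical, and the exp(-0.5*delta-IC) weighting is the standard Akaike/BIC-weight formula rather than anything specific to this paper.

      import numpy as np

      def ic_weights(ic_values):
          """Model-averaging weights from information-criterion values
          (AIC, AICc or BIC): w_i is proportional to exp(-0.5 * (IC_i - IC_min))."""
          ic = np.asarray(ic_values, dtype=float)
          w = np.exp(-0.5 * (ic - ic.min()))
          return w / w.sum()

      # Hypothetical AIC values for three candidate model structures
      weights = ic_weights([1012.3, 1014.1, 1020.8])
      print(weights)                      # roughly [0.70, 0.29, 0.01]

      # A model-averaged quantity (e.g. an incremental cost estimate; hypothetical
      # numbers) is then the weighted sum of the per-model estimates:
      estimates = np.array([2500.0, 2650.0, 3100.0])
      print(float(weights @ estimates))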

  2. Parameter regionalisation methods for a semi-distributed rainfall-runoff model: application to a Northern Apennine region

    NASA Astrophysics Data System (ADS)

    Neri, Mattia; Toth, Elena

    2017-04-01

    The study presents the implementation of different regionalisation approaches for transferring model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuously simulating conceptual rainfall-runoff model to simulate daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donor and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into elevation zones that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting play an important role in the study basins. Two methods, both widely applied in the recent literature, are used to regionalise the model: i) "parameter averaging", where each parameter is obtained as a weighted mean of the parameters calibrated on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire parameter set of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently of each other, whereas in the second the correlation among the parameters is maintained. Since the model is semi-distributed, with each elevation zone contributing separately, the study also tests a modified version of the "output averaging" approach in which each zone is treated as an autonomous entity whose parameters are transferred to the corresponding elevation zone of the ungauged basin. The study also explores the choice of the weights used to average the parameters (in the "parameter averaging" approach) or the simulated streamflows (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance between the ungauged basin/zone and the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
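
    A minimal Python sketch of the two regionalisation strategies follows; the two-parameter "toy" model, the donor parameter sets and the similarity weights are all invented for illustration and stand in for the semi-distributed HBV/TUWien model used in the study.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical setup: 3 donor catchments, a 2-parameter toy model, and
      # similarity-based weights for the ungauged catchment.
      donor_params = np.array([[0.8, 4.0],    # [runoff coefficient, storage constant]
                               [0.6, 6.0],
                               [0.9, 3.5]])
      weights = np.array([0.5, 0.3, 0.2])     # from geo-morphological similarity (assumed)
      rain = rng.gamma(2.0, 3.0, size=100)    # synthetic daily rainfall forcing

      def toy_model(params, rain):
          """Stand-in for the rainfall-runoff model: a linear-reservoir-like response."""
          c, k = params
          q = np.zeros_like(rain)
          store = 0.0
          for t, p in enumerate(rain):
              store += c * p
              q[t] = store / k
              store -= q[t]
          return q

      # i) "parameter averaging": average the parameters, run the model once
      avg_params = weights @ donor_params
      q_param_avg = toy_model(avg_params, rain)

      # ii) "output averaging": run the model with each donor's parameter set,
      #     then average the simulated hydrographs
      q_output_avg = sum(w * toy_model(p, rain) for w, p in zip(weights, donor_params))

      print(q_param_avg[:5])
      print(q_output_avg[:5])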

  3. The Stagger-grid: A grid of 3D stellar atmosphere models. II. Horizontal and temporal averaging and spectral line formation

    NASA Astrophysics Data System (ADS)

    Magic, Z.; Collet, R.; Hayek, W.; Asplund, M.

    2013-12-01

    Aims: We study the implications of averaging methods with different reference depth scales for 3D hydrodynamical model atmospheres computed with the Stagger-code. The temporally and spatially averaged (hereafter denoted as ⟨3D⟩) models are explored in the light of local thermodynamic equilibrium (LTE) spectral line formation by comparing spectrum calculations using full 3D atmosphere structures with those from ⟨3D⟩ averages. Methods: We explored methods for computing mean ⟨3D⟩ stratifications from the Stagger-grid time-dependent 3D radiative hydrodynamical atmosphere models by considering four different reference depth scales (geometrical depth, column-mass density, and two optical depth scales). Furthermore, we investigated the influence of alternative averages (logarithmic, enforced hydrostatic equilibrium, flux-weighted temperatures). For the line formation we computed curves of growth for Fe i and Fe ii lines in LTE. Results: The resulting ⟨3D⟩ stratifications for the four reference depth scales can be very different. We typically find that in the upper atmosphere and in the superadiabatic region just below the optical surface, where the temperature and density fluctuations are highest, the differences become considerable and increase for higher Teff, lower log g, and lower [Fe/H]. The differential comparison of spectral line formation shows distinctive differences depending on which ⟨3D⟩ model is applied. The averages over layers of constant column-mass density yield the best mean ⟨3D⟩ representation of the full 3D models for LTE line formation, while the averages on layers at constant geometrical height are the least appropriate. Unexpectedly, the usually preferred averages over layers of constant optical depth are prone to increasing interference by reversed granulation towards higher effective temperature, in particular at low metallicity. Appendix A is available in electronic form at http://www.aanda.org. Mean ⟨3D⟩ models are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/560/A8, as well as at http://www.stagger-stars.net

  4. An empirical investigation on the forecasting ability of mallows model averaging in a macro economic environment

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    This paper investigates the forecasting ability of Mallows Model Averaging (MMA) through an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia and China. Results reveal that MMA shows no noticeable difference in predictive ability compared to the general autoregressive fractionally integrated moving average (ARFIMA) model, and that its predictive ability is sensitive to the effects of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.

  5. SPARSE—A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure

    PubMed Central

    Davis, Sean L.; Sen, Oishik; Udaykumar, H. S.

    2017-01-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian–Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles. PMID:28413341

  6. SPARSE-A subgrid particle averaged Reynolds stress equivalent model: testing with a priori closure.

    PubMed

    Davis, Sean L; Jacobs, Gustaaf B; Sen, Oishik; Udaykumar, H S

    2017-03-01

    A Lagrangian particle cloud model is proposed that accounts for the effects of Reynolds-averaged particle and turbulent stresses and the averaged carrier-phase velocity of the subparticle cloud scale on the averaged motion and velocity of the cloud. The SPARSE (subgrid particle averaged Reynolds stress equivalent) model is based on a combination of a truncated Taylor expansion of a drag correction function and Reynolds averaging. It reduces the required number of computational parcels to trace a cloud of particles in Eulerian-Lagrangian methods for the simulation of particle-laden flow. Closure is performed in an a priori manner using a reference simulation where all particles in the cloud are traced individually with a point-particle model. Comparison of a first-order model and SPARSE with the reference simulation in one dimension shows that both the stress and the averaging of the carrier-phase velocity on the cloud subscale affect the averaged motion of the particle. A three-dimensional isotropic turbulence computation shows that only one computational parcel is sufficient to accurately trace a cloud of tens of thousands of particles.

  7. Cycle-averaged dynamics of a periodically driven, closed-loop circulation model

    NASA Technical Reports Server (NTRS)

    Heldt, T.; Chang, J. L.; Chen, J. J. S.; Verghese, G. C.; Mark, R. G.

    2005-01-01

    Time-varying elastance models have been used extensively in the past to simulate the pulsatile nature of cardiovascular waveforms. Frequently, however, one is interested in dynamics that occur over longer time scales, in which case a detailed simulation of each cardiac contraction becomes computationally burdensome. In this paper, we apply circuit-averaging techniques to a periodically driven, closed-loop, three-compartment recirculation model. The resultant cycle-averaged model is linear and time invariant, and greatly reduces the computational burden. It is also amenable to systematic order reduction methods that lead to further efficiencies. Despite its simplicity, the averaged model captures the dynamics relevant to the representation of a range of cardiovascular reflex mechanisms. © 2004 Elsevier Ltd. All rights reserved.

  8. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model, and estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output, and the between-model variances to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), an artificial neural network (ANN) and a neurofuzzy (NF) model to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model because it combines the TS-FL and ANN approaches, it was nearly discarded by the parsimony principle. The TS-FL and ANN models showed equal importance although their hydraulic conductivity estimates were quite different, which resulted in significant between-model variances that would normally be ignored if a single AI model were used.
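
    The following Python sketch illustrates the kind of BIC-weighted combination of model outputs described above, including the within-model and between-model variance terms; the hydraulic-conductivity estimates, variances and BIC values are hypothetical and the code is not the BAIMA implementation.

      import numpy as np

      def bma_combine(means, variances, bic):
          """Combine per-model predictions with BIC-based weights.

          means, variances : arrays (n_models, n_points) of each model's estimate
                             and within-model variance at each location
          bic              : array (n_models,) of BIC values
          Returns the BMA mean, the total variance (within + between model), and weights.
          """
          means, variances, bic = map(np.asarray, (means, variances, bic))
          w = np.exp(-0.5 * (bic - bic.min()))
          w /= w.sum()
          mean_bma = w @ means
          within = w @ variances
          between = w @ (means - mean_bma) ** 2
          return mean_bma, within + between, w

      # Hypothetical log10 hydraulic-conductivity estimates from three AI models
      means = [[-3.1, -2.8, -3.5],
               [-3.3, -2.9, -3.2],
               [-2.7, -2.6, -3.0]]
      variances = [[0.04, 0.05, 0.06],
                   [0.03, 0.04, 0.05],
                   [0.06, 0.07, 0.08]]
      bic = [210.0, 211.5, 219.0]
      mean_bma, var_bma, w = bma_combine(means, variances, bic)
      print(w, mean_bma, var_bma)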

  9. Metainference: A Bayesian inference method for heterogeneous systems

    PubMed Central

    Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele

    2016-01-01

    Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called “metainference,” that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. Metainference thus provides an approach to modeling complex systems with heterogeneous components and interconverting between different states by taking into account all possible sources of errors. PMID:26844300

  10. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that are based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherent in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. The objective of this work was to propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States were used in a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality with smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from the double BOOT estimates having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
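
    A rough Python sketch of the single-bootstrap (BOOT) idea, as read from the description above, follows: in each bootstrap resample the AIC-best candidate model is selected and its PM coefficient recorded, and the recorded coefficients are averaged. The synthetic data, the candidate model set and the Gaussian AIC are all assumptions made for illustration, and the double-BOOT extension (a second bootstrap layer) is not shown.

      import itertools
      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical data: mortality, PM concentration and two confounders
      n = 500
      pm = rng.normal(10, 3, n)
      temp = rng.normal(20, 5, n)
      trend = np.linspace(0, 1, n)
      y = 50 + 0.8 * pm + 1.5 * temp + 5 * trend + rng.normal(0, 5, n)
      confounders = {"temp": temp, "trend": trend}

      def fit_ols(y, X):
          """Least-squares fit; returns coefficients and a Gaussian AIC."""
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          rss = float(((y - X @ beta) ** 2).sum())
          return beta, len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

      def candidate_designs(pm, confs):
          """All candidate models: intercept + PM + any subset of confounders."""
          names = list(confs)
          for r in range(len(names) + 1):
              for subset in itertools.combinations(names, r):
                  cols = [np.ones_like(pm), pm] + [confs[s] for s in subset]
                  yield np.column_stack(cols)

      # Bootstrap model averaging: in each resample, pick the AIC-best candidate
      # model and record its PM coefficient; the averaged coefficient is the
      # bootstrap-model-averaged effect estimate.
      pm_effects = []
      for _ in range(200):
          idx = rng.integers(0, n, n)
          yb, pmb = y[idx], pm[idx]
          confb = {name: v[idx] for name, v in confounders.items()}
          fits = [fit_ols(yb, X) for X in candidate_designs(pmb, confb)]
          best_beta, _ = min(fits, key=lambda f: f[1])
          pm_effects.append(best_beta[1])      # coefficient on PM

      print(np.mean(pm_effects), np.std(pm_effects))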

  11. Chemometrics-assisted spectrophotometry method for the determination of chemical oxygen demand in pulping effluent.

    PubMed

    Chen, Honglei; Chen, Yuancai; Zhan, Huaiyu; Fu, Shiyu

    2011-04-01

    A new method has been developed for the determination of chemical oxygen demand (COD) in pulping effluent using chemometrics-assisted spectrophotometry. Two calibration models were established using UV-visible spectroscopy (model 1) and derivative spectroscopy (model 2), combined with the chemometrics software Simca-P. The correlation coefficients of the two models are 0.9954 (model 1) and 0.9963 (model 2) for sample COD in the range of 0 to 405 mg/L. Sensitivities of the two models are 0.0061 (model 1) and 0.0056 (model 2), and method detection limits are 2.02-2.45 mg/L (model 1) and 2.13-2.51 mg/L (model 2). A validation experiment showed that the average standard deviation of model 2 was 1.11 and that of model 1 was 1.54. Similarly, the average relative error of model 2 (4.25%) was lower than that of model 1 (5.00%), indicating that the predictability of model 2 was better than that of model 1. The chemometrics-assisted spectrophotometry method does not need the chemical reagents and digestion required by conventional methods, and its testing time is significantly shorter. The proposed method can be used to measure COD in pulping effluent as an environmentally friendly approach with satisfactory results.

  12. Turbulence modeling for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, J. G.; Coakley, T. J.

    1989-01-01

    Turbulence modeling for high speed compressible flows is described and discussed. Starting with the compressible Navier-Stokes equations, methods of statistical averaging are described by means of which the Reynolds-averaged Navier-Stokes equations are developed. Unknown averages in these equations are approximated using various closure concepts. Zero-, one-, and two-equation eddy viscosity models, algebraic stress models and Reynolds stress transport models are discussed. Computations of supersonic and hypersonic flows obtained using several of the models are discussed and compared with experimental results. Specific examples include attached boundary layer flows, shock wave boundary layer interactions and compressible shear layers. From these examples, conclusions regarding the status of modeling and recommendations for future studies are discussed.

  13. [A new kinematics method of determining elbow rotation axis and evaluation of its feasibility].

    PubMed

    Han, W; Song, J; Wang, G Z; Ding, H; Li, G S; Gong, M Q; Jiang, X Y; Wang, M Y

    2016-04-18

    To study a new positioning method for the rotation axis of elbow external fixation and to evaluate its feasibility. Four normal adult volunteers and six Sawbone elbow models were included in the experiment. Kinematic data from five elbow flexions were collected with an optical positioning system, and the rotation axes of the elbow joints were fitted by the least-squares method. The kinematic data and fitting results were displayed visually. From the fitting results, the average moving planes and rotation axes were calculated, giving the rotation axes of the new kinematic method. Using the standard clinical method, the entrance and exit points of the rotation axes of the six Sawbone elbow models were located under X-ray, and Kirschner wires were placed to represent the rotation axes obtained with the traditional positioning method. The entrance point deviation, exit point deviation and angle deviation between the two located rotation axes were then compared. For the volunteers, the indicators representing the circularity and coplanarity of the elbow flexion movement trajectory were both about 1 mm. All distance deviations of the moving axes from the average moving rotation axes were less than 3 mm, and all angle deviations were less than 5°. For the six Sawbone models, the average entrance point deviation, average exit point deviation and average angle deviation between the rotation axes determined by the two methods were 1.6972 mm, 1.8383 mm and 1.3217°, respectively. All deviations were small and within a range acceptable for clinical practice. The small circularity and coplanarity values indicate that single-curvature elbow flexion can be regarded as approximately fixed-axis movement. The new method matches the accuracy of the traditional method and can compensate for the deficiencies of the traditional fixed-axis method.

  14. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  15. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... per million dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10... (Reapproved 2008) c. Oxides of nitrogen 53 parts per million dry volume 3-run average (1 hour minimum sample... average (1 hour minimum sample time per run) Performance test (Method 6 or 6c at 40 CFR part 60, appendix...

  16. Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.

    2004-03-01

    The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.

  17. Application of scl - pbl method to increase quality learning of industrial statistics course in department of industrial engineering pancasila university

    NASA Astrophysics Data System (ADS)

    Darmawan, M.; Hidayah, N. Y.

    2017-12-01

    Currently, there has been a shift to a new paradigm in learning models in higher education, from the Teacher Centered Learning (TCL) model to Student Centered Learning (SCL). It is generally assumed that the SCL model is better than the TCL model. The 2nd Industrial Statistics course in the Department of Industrial Engineering, Pancasila University, belongs to the Basic Engineering group of subjects. So far, the applied learning model has mostly followed the TCL model, and the learning outcomes have been less than satisfactory: in three consecutive semesters (the even semesters of 2013/2014, 2014/2015, and 2015/2016) the average grades were 56.0, 61.1, and 60.5. In the even semester of 2016/2017, Classroom Action Research (CAR) was conducted for this course through the implementation of the SCL model with the Problem Based Learning (PBL) method. The hypothesis proposed is that the SCL-PBL model will improve the final grade of the course. The results show that the average grade of the course increased to 73.27. This value was then tested using ANOVA, and the test concluded that the average grade was significantly different from the average grades of the previous three semesters.

  18. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
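
    The simple score-weighted averaging referred to above can be sketched in Python as follows; the misfit scores, sea-level values and the exponential score-to-weight mapping are all assumptions for illustration and do not reproduce the paper's scoring scheme.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical large ensemble: each run has an aggregate model-data misfit
      # score and a projected equivalent sea-level (ESL) rise contribution.
      n_runs = 625
      misfit = rng.gamma(3.0, 1.0, n_runs)      # lower = better fit (assumed)
      esl = rng.normal(3.5, 1.0, n_runs)        # metres, illustrative only

      # Simple averaging weighted by the aggregate score: here weights decay
      # exponentially with misfit (one plausible choice, not the paper's formula).
      weights = np.exp(-misfit)
      weights /= weights.sum()

      mean_esl = weights @ esl
      var_esl = weights @ (esl - mean_esl) ** 2
      print(f"weighted mean ESL: {mean_esl:.2f} m, weighted std: {np.sqrt(var_esl):.2f} m")

      # A crude uncertainty envelope from the weighted distribution:
      order = np.argsort(esl)
      cdf = np.cumsum(weights[order])
      lo = esl[order][np.searchsorted(cdf, 0.05)]
      hi = esl[order][np.searchsorted(cdf, 0.95)]
      print(f"5-95% envelope: [{lo:.2f}, {hi:.2f}] m")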

  19. Complementary nonparametric analysis of covariance for logistic regression in a randomized clinical trial setting.

    PubMed

    Tangen, C M; Koch, G G

    1999-03-01

    In the randomized clinical trial setting, controlling for covariates is expected to produce variance reduction for the treatment parameter estimate and to adjust for random imbalances of covariates between the treatment groups. However, for the logistic regression model, variance reduction is not obviously obtained. This can lead to concerns about the assumptions of the logistic model. We introduce a complementary nonparametric method for covariate adjustment. It provides results that are usually compatible with expectations for analysis of covariance. The only assumptions required are based on randomization and sampling arguments. The resulting treatment parameter is a (unconditional) population average log-odds ratio that has been adjusted for random imbalance of covariates. Data from a randomized clinical trial are used to compare results from the traditional maximum likelihood logistic method with those from the nonparametric logistic method. We examine treatment parameter estimates, corresponding standard errors, and significance levels in models with and without covariate adjustment. In addition, we discuss differences between unconditional population average treatment parameters and conditional subpopulation average treatment parameters. Additional features of the nonparametric method, including stratified (multicenter) and multivariate (multivisit) analyses, are illustrated. Extensions of this methodology to the proportional odds model are also made.

  20. Translating landfill methane generation parameters among first-order decay models.

    PubMed

    Krause, Max J; Chickering, Giles W; Townsend, Timothy G

    2016-11-01

    Landfill gas (LFG) generation is predicted by a first-order decay (FOD) equation that incorporates two parameters: a methane generation potential (L0) and a methane generation rate (k). Because non-hazardous waste landfills may accept many types of waste streams, multiphase models have been developed in an attempt to more accurately predict methane generation from heterogeneous waste streams. The ability of a single-phase FOD model to predict methane generation using weighted-average methane generation parameters and tonnages translated from multiphase models was assessed in two exercises. In the first exercise, waste composition from four Danish landfills represented by low-biodegradability waste streams was modeled in the Afvalzorg Multiphase Model and methane generation was compared to the single-phase Intergovernmental Panel on Climate Change (IPCC) Waste Model and LandGEM. In the second exercise, waste composition represented by IPCC waste components was modeled in the multiphase IPCC model and compared to the single-phase LandGEM and Australia's Solid Waste Calculator (SWC). In both cases, weighted averaging of methane generation parameters from waste composition data in single-phase models was effective in predicting cumulative methane generation to within -7% to +6% of the multiphase models. The results underscore the understanding that multiphase models will not necessarily improve LFG generation prediction because the uncertainty of the method rests largely within the input parameters. A unique method of calculating the methane generation rate constant by mass of anaerobically degradable carbon (kc) was presented and compared to existing methods, providing a better fit in 3 of 8 scenarios. Generally, single-phase models with weighted-average inputs can accurately predict methane generation from multiple waste streams with varied characteristics; weighted averages should therefore be used instead of regional default values when comparing models. Translating multiphase first-order decay model input parameters by weighted average shows that single-phase models can predict cumulative methane generation within the level of uncertainty of many of the input parameters as defined by the IPCC, which indicates that decreasing the uncertainty of the input parameters will make the model more accurate rather than adding multiple phases or input parameters.
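
    A compact Python sketch of the translation described above is given below: component-specific k and L0 values (all hypothetical) are collapsed into tonnage-weighted averages and fed to a single-phase first-order decay model, which is then compared with the sum of the component-wise curves. The sketch is not LandGEM, the IPCC Waste Model or the Afvalzorg model.

      import numpy as np

      def fod_methane(years, annual_tonnage, k, L0):
          """First-order decay methane generation (single phase).

          Q(t) = sum over prior years j of  k * L0 * M_j * exp(-k * (t - t_j)).
          Tonnage in Mg, L0 in m3 CH4 per Mg, Q in m3 CH4 per year.
          """
          q = np.zeros(len(years), dtype=float)
          for j, m in enumerate(annual_tonnage):
              age = years - years[0] - j
              q += np.where(age >= 0, k * L0 * m * np.exp(-k * age), 0.0)
          return q

      # Hypothetical waste streams: tonnage (Mg/yr), k (1/yr), L0 (m3/Mg)
      streams = {
          "food":  (20_000, 0.30, 70.0),
          "paper": (30_000, 0.06, 110.0),
          "wood":  (10_000, 0.02, 90.0),
      }
      tonnes = np.array([s[0] for s in streams.values()], dtype=float)

      # Tonnage-weighted average parameters for an equivalent single-phase model
      w = tonnes / tonnes.sum()
      k_avg = float(w @ [s[1] for s in streams.values()])
      L0_avg = float(w @ [s[2] for s in streams.values()])

      years = np.arange(2000, 2051)
      annual_total = np.full(len(years), tonnes.sum())   # constant total acceptance
      single_phase = fod_methane(years, annual_total, k_avg, L0_avg)

      # Multiphase estimate: sum the component-wise FOD curves
      multi_phase = sum(
          fod_methane(years, np.full(len(years), t), k, L0)
          for t, k, L0 in streams.values()
      )
      print(single_phase.sum(), multi_phase.sum())       # compare cumulative generation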

  1. Free-free opacity in dense plasmas with an average atom model

    DOE PAGES

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick; ...

    2017-02-28

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model together with the Kubo-Greenwood model for optical conductivity, which in turn is used to calculate the opacity via the Kramers-Kronig dispersion relations. Comparisons with other methods for dense deuterium show excellent agreement with DFT-MD simulations and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.

  2. Free-free opacity in dense plasmas with an average atom model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaffer, Nathaniel R.; Ferris, Natalie G.; Colgan, James Patrick

    A model for the free-free opacity of dense plasmas is presented. The model uses a previously developed average atom model together with the Kubo-Greenwood model for optical conductivity, which in turn is used to calculate the opacity via the Kramers-Kronig dispersion relations. Comparisons with other methods for dense deuterium show excellent agreement with DFT-MD simulations and reasonable agreement with a simple Yukawa screening model corrected to satisfy the conductivity sum rule.

  3. Generalized Seasonal Autoregressive Integrated Moving Average Models for Count Data with Application to Malaria Time Series with Low Case Numbers

    PubMed Central

    Briët, Olivier J. T.; Amerasinghe, Priyanie H.; Vounatsou, Penelope

    2013-01-01

    Introduction: With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions’ impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during “consolidation” and “pre-elimination” phases. Methods: Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. Results: The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. Conclusions: G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than Gaussian methods and may be more suitable when counts are low. PMID:23785448

  4. How well the Reliable Ensemble Averaging Method (REA) for 15 CMIP5 GCMs simulations works for Mexico?

    NASA Astrophysics Data System (ADS)

    Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.

    2013-05-01

    Precipitation simulations from 15 CMIP5 GCMs were combined into a weighted ensemble using the Reliable Ensemble Averaging (REA) method, yielding a weight for each model. This was done for a historical period (1961-2000) and for future low (RCP4.5) and high (RCP8.5) radiative forcing scenarios for the period 2075-2099. The annual cycles of the simple ensemble mean of the historical GCM simulations, the historical REA average and the Climatic Research Unit (CRU TS3.1) database were compared over four zones of Mexico. For precipitation, the improvement from the REA method is clear, especially in the two northern zones of Mexico, where the REA average is closer to the observations (CRU) than the simple average. In the southern zones there is also an improvement, but it is smaller than in the north; in particular, in the southeast the REA average greatly underestimates the mid-summer drought instead of reproducing the annual cycle qualitatively well. The main reason is that precipitation is underestimated by all the models and the mid-summer drought is absent in some of them. In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation is simulated under RCP8.5, especially in the monsoon area and in the south of Mexico in summer and winter. In the center and south of Mexico, however, the same scenario simulates an increase in precipitation in autumn.
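
    A simplified Python sketch of reliability-style ensemble weighting is shown below; it is not the published REA formulation, and the synthetic model climatologies, the bias and convergence factors and the natural-variability scale are all assumptions made for illustration.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical monthly precipitation climatology (12 values) for 15 GCMs
      # plus an observed (CRU-like) reference; all values are illustrative.
      obs = 60 + 40 * np.sin(np.linspace(0, 2 * np.pi, 12))
      models = obs + rng.normal(0, 15, size=(15, 12)) + rng.normal(0, 10, size=(15, 1))

      eps = np.ptp(obs)      # crude natural-variability scale (simplifying assumption)

      # Reliability-style weighting: reliability decreases with model bias against
      # the observations and with distance from the weighted ensemble average;
      # the two factors are combined and iterated until the weights stabilise.
      weights = np.full(len(models), 1.0 / len(models))
      for _ in range(20):
          rea_avg = weights @ models
          bias = np.abs(models - obs).mean(axis=1)
          dist = np.abs(models - rea_avg).mean(axis=1)
          r = (np.minimum(eps / np.maximum(bias, 1e-9), 1.0)
               * np.minimum(eps / np.maximum(dist, 1e-9), 1.0))
          weights = r / r.sum()

      print(np.round(weights, 3))
      print(np.round(weights @ models, 1))    # weighted-ensemble annual cycle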

  5. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    NASA Astrophysics Data System (ADS)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.

  6. Modeling Interactions Among Turbulence, Gas-Phase Chemistry, Soot and Radiation Using Transported PDF Methods

    NASA Astrophysics Data System (ADS)

    Haworth, Daniel

    2013-11-01

    The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.

  7. SU-F-BRD-01: A Logistic Regression Model to Predict Objective Function Weights in Prostate Cancer IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, J; Chan, T; Lee, T

    2014-06-15

    Purpose: To develop a statistical model that predicts optimization objective function weights from patient geometry for intensity-modulation radiotherapy (IMRT) of prostate cancer. Methods: A previously developed inverse optimization method (IOM) is applied retrospectively to determine optimal weights for 51 treated patients. We use an overlap volume ratio (OVR) of bladder and rectum for different PTV expansions in order to quantify patient geometry in explanatory variables. Using the optimal weights as ground truth, we develop and train a logistic regression (LR) model to predict the rectum weight and thus the bladder weight. Post hoc, we fix the weights of the left femoral head, right femoral head, and an artificial structure that encourages conformity to the population average while normalizing the bladder and rectum weights accordingly. The population average of objective function weights is used for comparison. Results: The OVR at 0.7cm was found to be the most predictive of the rectum weights. The LR model performance is statistically significant when compared to the population average over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and mean voxel dose to the bladder, rectum, CTV, and PTV. On average, the LR model predicted bladder and rectum weights that are both 63% closer to the optimal weights compared to the population average. The treatment plans resulting from the LR weights have, on average, a rectum V70Gy that is 35% closer to the clinical plan and a bladder V70Gy that is 43% closer. Similar results are seen for bladder V54Gy and rectum V54Gy. Conclusion: Statistical modelling from patient anatomy can be used to determine objective function weights in IMRT for prostate cancer. Our method allows the treatment planners to begin the personalization process from an informed starting point, which may lead to more consistent clinical plans and reduce overall planning time.

  8. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    PubMed

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), an adequate method must be chosen that makes the comparison fair to all compared units. Comparisons based on a single specific indicator (a parameter describing safety or unsafety) can end up with totally different rankings of the compared units, which makes it complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and composite indexes that have been developed, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been used increasingly for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present the PROMETHEE-RS model for selecting the optimal method for a composite index. The method for selecting the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Approximation to cutoffs of higher modes of Rayleigh waves for a layered earth model

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2009-01-01

    A cutoff defines the long-period termination of a Rayleigh-wave higher mode and is therefore a key characteristic relating higher-mode energy to several material properties of the subsurface. Cutoffs have been used to estimate the shear-wave velocity of the underlying half space of a layered earth model. In this study, we describe a method that replaces the multilayer earth model with a single surface layer overlying the half space, obtained by harmonic averaging of velocities and arithmetic averaging of densities. Numerical comparisons with theoretical models validate the single-layer approximation. The accuracy of this single-layer approximation is best defined by the calculated error in the frequency and phase velocity estimates at a cutoff. Our proposed method is explained intuitively using ray theory. Numerical results indicate that a cutoff's frequency is controlled by the averaged elastic properties within the passing depth of Rayleigh waves and the shear-wave velocity of the underlying half space. © Birkhäuser Verlag, Basel 2009.
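
    The single-layer replacement described above amounts to a thickness-weighted harmonic average of velocities and an arithmetic average of densities, as in the short Python sketch below (the layer thicknesses, velocities and densities are hypothetical).

      import numpy as np

      # Hypothetical layered model: thickness (m), S-wave velocity (m/s), density (kg/m3)
      thickness = np.array([2.0, 4.0, 6.0])
      vs = np.array([180.0, 250.0, 400.0])
      rho = np.array([1800.0, 1900.0, 2000.0])

      h = thickness.sum()
      w = thickness / h

      # Replace the layer stack with a single surface layer over the half space:
      # harmonic averaging of velocities, arithmetic averaging of densities.
      vs_eq = 1.0 / (w / vs).sum()
      rho_eq = (w * rho).sum()

      print(f"equivalent layer: h = {h:.1f} m, Vs = {vs_eq:.1f} m/s, rho = {rho_eq:.0f} kg/m3")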

  10. Hyperspectral remote sensing of plant biochemistry using Bayesian model averaging with variable and band selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Kaiguang; Valle, Denis; Popescu, Sorin

    2013-05-15

    Model specification remains challenging in spectroscopy of plant biochemistry, as exemplified by the availability of various spectral indices or band combinations for estimating the same biochemical. This lack of consensus in model choice across applications argues for a paradigm shift in hyperspectral methods to address model uncertainty and misspecification. We demonstrated one such method using Bayesian model averaging (BMA), which performs variable/band selection and quantifies the relative merits of many candidate models to synthesize a weighted average model with improved predictive performance. The utility of BMA was examined using a portfolio of 27 foliage spectral–chemical datasets representing over 80 species across the globe to estimate multiple biochemical properties, including nitrogen, hydrogen, carbon, cellulose, lignin, chlorophyll (a or b), carotenoid, polar and nonpolar extractives, leaf mass per area, and equivalent water thickness. We also compared BMA with partial least squares (PLS) and stepwise multiple regression (SMR). Results showed that all the biochemicals except carotenoid were accurately estimated from hyperspectral data with R2 values > 0.80.

  11. A model-averaging method for assessing groundwater conceptual model uncertainty.

    PubMed

    Ye, Ming; Pohlmann, Karl F; Chapman, Jenny B; Pohll, Greg M; Reeves, Donald M

    2010-01-01

    This study evaluates alternative groundwater models with different recharge and geologic components at the northern Yucca Flat area of the Death Valley Regional Flow System (DVRFS), USA. Recharge over the DVRFS has been estimated using five methods, and five geological interpretations are available at the northern Yucca Flat area. Combining the recharge and geological components together with additional modeling components that represent other hydrogeological conditions yields a total of 25 groundwater flow models. As all the models are plausible given available data and information, evaluating model uncertainty becomes inevitable. On the other hand, hydraulic parameters (e.g., hydraulic conductivity) are uncertain in each model, giving rise to parametric uncertainty. Propagation of the uncertainty in the models and model parameters through groundwater modeling causes predictive uncertainty in model predictions (e.g., hydraulic head and flow). Parametric uncertainty within each model is assessed using Monte Carlo simulation, and model uncertainty is evaluated using the model averaging method. Two model-averaging techniques (on the basis of information criteria and GLUE) are discussed. This study shows that contribution of model uncertainty to predictive uncertainty is significantly larger than that of parametric uncertainty. For the recharge and geological components, uncertainty in the geological interpretations has more significant effect on model predictions than uncertainty in the recharge estimates. In addition, weighted residuals vary more for the different geological models than for different recharge models. Most of the calibrated observations are not important for discriminating between the alternative models, because their weighted residuals vary only slightly from one model to another.

  12. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.

  13. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures as the outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant on chromosome 3 that is associated with blood pressure, using simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate of the methods at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  14. A state-based probabilistic model for tumor respiratory motion prediction

    NASA Astrophysics Data System (ADS)

    Kalet, Alan; Sandison, George; Wu, Huanmei; Schmitz, Ruth

    2010-12-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more general HMM-type predictive models. RMS errors for the time average model approach the theoretical limit of the HMM, and predicted state sequences are well correlated with sequences known to fit the data.

  15. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

    Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted with different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD), while spatial preprocessing of the gray-scale image at each wavelength was conducted with a median filter (MF). Support vector machine (SVM) models using full sample average spectra and pixel-wise spectra, and the optimal wavelengths selected from second derivative spectra, all achieved classification accuracy over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample average spectra and obtained over 80% classification accuracy. Second, the SVM models built on sample average spectra were used to predict pixel-wise spectra, but achieved less than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set and resulted in good prediction results for both pixel-wise spectra and sample average spectra. The overall results indicated the effectiveness of the spectral preprocessing and the adoption of pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
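
    Of the preprocessing methods listed above, moving average smoothing is the simplest; the Python sketch below applies it to pixel-wise spectra (the window length, edge padding and the synthetic spectra are assumptions, not details from the paper).

      import numpy as np

      def moving_average(spectra, window=7):
          """Moving-average (MA) smoothing of pixel-wise spectra.

          spectra : array (n_pixels, n_bands); window should be odd.
          Edge bands are handled by padding with the edge values.
          """
          kernel = np.ones(window) / window
          pad = window // 2
          padded = np.pad(spectra, ((0, 0), (pad, pad)), mode="edge")
          return np.apply_along_axis(
              lambda s: np.convolve(s, kernel, mode="valid"), 1, padded)

      # Hypothetical noisy pixel spectra (e.g. 1000 pixels x 256 bands)
      rng = np.random.default_rng(4)
      raw = np.cumsum(rng.normal(0, 1, size=(1000, 256)), axis=1)
      smoothed = moving_average(raw, window=7)
      print(raw.shape, smoothed.shape)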

  16. Reliability ensemble averaging of 21st century projections of terrestrial net primary productivity reduces global and regional uncertainties

    NASA Astrophysics Data System (ADS)

    Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew

    2018-02-01

    Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate their potential performance in future climate simulations. Multi-model averaging methods have been used extensively in climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally orientated estimates of current net primary productivity (NPP) to perform a reliability ensemble averaging (REA) method using 30 global simulations of the 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) business as usual emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005, which is 2-3 % stronger than the ensemble ISIMIP mean value of 24.2 Pg C y-1. Using REA also leads to a 45-68 % reduction in the global uncertainty of 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems although it may be an artefact due to the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains on the sign of the response of NPP in semi-arid regions points to the need for better observations and model development in these regions.

  17. Appropriateness of selecting different averaging times for modelling chronic and acute exposure to environmental odours

    NASA Astrophysics Data System (ADS)

    Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.

    Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that result in annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first approach, modelling the hourly average concentration, can underestimate the short-term concentration peaks that lead to annoyance and complaints. The second approach involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine the perception of odour in the community in conjunction with the modelled odour dispersal, by using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. Therefore, the current regulatory method of dispersion modelling, using hourly averaging times, is less successful at capturing peak concentrations and does not capture the pattern of odour emission indicated by the community monitoring database. The use of short averaging times is thus of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.

  18. An empirical investigation on different methods of economic growth rate forecast and its behavior from fifteen countries across five continents

    NASA Astrophysics Data System (ADS)

    Yin, Yip Chee; Hock-Eam, Lim

    2012-09-01

    Our empirical results show that GDP growth rates can be predicted more accurately in continents with fewer large economies than in smaller economies such as Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are generally independent of the forecasting procedures. For countries with high stability in their economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is a better forecasting procedure in most countries. FWA also outperforms simple model averaging (SMA) and has the same forecasting ability as Bayesian model averaging (BMA) in almost all countries.
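
    As a rough illustration of the difference between simple model averaging and accuracy-weighted forecast combination discussed above, the sketch below weights three hypothetical forecast series by the inverse of their in-sample mean squared error. The data, model labels and the inverse-MSE weighting rule are illustrative assumptions, not the FWA procedure defined in the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy setting: quarterly GDP growth (%) and three competing model forecasts.
# All numbers and model labels are invented for illustration.
actual = rng.normal(4.0, 1.0, size=40)
forecasts = np.column_stack([
    actual + rng.normal(0.0, 0.6, size=40),   # "model A": low error
    actual + rng.normal(0.3, 1.0, size=40),   # "model B": biased, noisier
    actual + rng.normal(0.0, 1.5, size=40),   # "model C": noisy
])

train, test = slice(0, 30), slice(30, None)

# Simple model averaging (SMA): equal weights across members.
sma_forecast = forecasts[test].mean(axis=1)

# Accuracy-weighted averaging: weights inversely proportional to in-sample MSE
# (one plausible reading of weighting forecasts by past accuracy).
mse = ((forecasts[train] - actual[train, None]) ** 2).mean(axis=0)
weights = (1.0 / mse) / (1.0 / mse).sum()
weighted_forecast = forecasts[test] @ weights

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs) ** 2))

print("weights:", np.round(weights, 3))
print("SMA RMSE:     ", round(rmse(sma_forecast, actual[test]), 3))
print("Weighted RMSE:", round(rmse(weighted_forecast, actual[test]), 3))
```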

  19. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations; normal,…

  20. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    NASA Astrophysics Data System (ADS)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
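
    The following sketch illustrates, with invented numbers, the variance bookkeeping that a BMA tree makes explicit: at each level the total variance splits into the expected within-branch variance plus the between-model variance of that level's propositions (the law of total variance). The component names, probabilities and predictions are hypothetical and stand in for quantities that the HBMA method would estimate from data.

```python
import numpy as np

# Hypothetical two uncertain components, each with two candidate propositions.
# Component "A" could stand for geological architecture, "B" for boundary
# conditions; all numbers are invented for illustration only.
p_A = {"A1": 0.6, "A2": 0.4}             # posterior probabilities of A-propositions
p_B_given_A = {                           # posterior probabilities of B given A
    "A1": {"B1": 0.7, "B2": 0.3},
    "A2": {"B1": 0.5, "B2": 0.5},
}
# Mean and within-model variance of a prediction (e.g., head at a well)
# for each base-model (A, B) combination.
pred = {
    ("A1", "B1"): (12.0, 0.8),
    ("A1", "B2"): (13.5, 1.1),
    ("A2", "B1"): (10.0, 0.9),
    ("A2", "B2"): (11.0, 1.4),
}

def mixture_moments(means, variances, weights):
    """Mean and variance split of a weighted mixture: variance equals the
    expected within-model variance plus the between-model variance."""
    w, m, v = map(np.asarray, (weights, means, variances))
    mean = np.sum(w * m)
    within = np.sum(w * v)
    between = np.sum(w * (m - mean) ** 2)
    return mean, within, between

# Level 1 of the BMA tree: average over B within each A-branch.
level1 = {}
for a in p_A:
    ms, vs, ws = zip(*[(pred[(a, b)][0], pred[(a, b)][1], p_B_given_A[a][b])
                       for b in p_B_given_A[a]])
    mean_a, within_a, between_a = mixture_moments(ms, vs, ws)
    level1[a] = (mean_a, within_a + between_a)   # variance conditional on A

# Level 0 (top of the tree): average over the A-propositions.
ms, vs, ws = zip(*[(level1[a][0], level1[a][1], p_A[a]) for a in p_A])
total_mean, exp_within_A, between_A = mixture_moments(ms, vs, ws)

print("HBMA mean prediction:", total_mean)
print("variance carried up from within the A-branches:", exp_within_A)
print("between-model variance due to component A:", between_A)
print("total model variance:", exp_within_A + between_A)
```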

  1. Bayesian model averaging using particle filtering and Gaussian mixture modeling: Theory, concepts, and simulation experiments

    NASA Astrophysics Data System (ADS)

    Rings, Joerg; Vrugt, Jasper A.; Schoups, Gerrit; Huisman, Johan A.; Vereecken, Harry

    2012-05-01

    Bayesian model averaging (BMA) is a standard method for combining predictive distributions from different models. In recent years, this method has enjoyed widespread application and use in many fields of study to improve the spread-skill relationship of forecast ensembles. The BMA predictive probability density function (pdf) of any quantity of interest is a weighted average of pdfs centered around the individual (possibly bias-corrected) forecasts, where the weights are equal to the posterior probabilities of the models generating the forecasts and reflect the individual models' skill over a training (calibration) period. The original BMA approach presented by Raftery et al. (2005) assumes that the conditional pdf of each individual model is adequately described with a rather standard Gaussian or Gamma statistical distribution, possibly with a heteroscedastic variance. Here we analyze the advantages of using BMA with a flexible representation of the conditional pdf. A joint particle filtering and Gaussian mixture modeling framework is presented to derive analytically, as closely and consistently as possible, the evolving forecast density (conditional pdf) of each constituent ensemble member. The median forecasts and evolving conditional pdfs of the constituent models are subsequently combined using BMA to derive one overall predictive distribution. This paper introduces the theory and concepts of this new ensemble postprocessing method, and demonstrates its usefulness and applicability by numerical simulation of the rainfall-runoff transformation using discharge data from three different catchments in the contiguous United States. The revised BMA method achieves significantly lower prediction errors than the original default BMA method (due to filtering), with predictive uncertainty intervals that are substantially smaller but still statistically coherent (due to the use of a time-variant conditional pdf).
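
    As a minimal sketch of a BMA predictive distribution, the code below forms a weighted mixture of member forecast densities and reads off a mean forecast and a predictive interval. Plain Gaussian member pdfs are assumed for simplicity, in place of the particle-filter/Gaussian-mixture densities developed in the paper; all numbers are invented.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical ensemble: three bias-corrected member forecasts of discharge
# (m^3/s) at one time step, their BMA weights (posterior model probabilities),
# and a predictive spread for each member. All values are invented.
forecasts = np.array([105.0, 98.0, 112.0])
weights = np.array([0.5, 0.3, 0.2])
sigmas = np.array([8.0, 10.0, 12.0])

def bma_pdf(y):
    """BMA predictive density: weighted sum of the member densities."""
    return float(np.sum(weights * norm.pdf(y, loc=forecasts, scale=sigmas)))

def bma_quantile(q, lo=0.0, hi=300.0, tol=1e-6):
    """Quantile of the mixture, found by bisection on the mixture CDF."""
    def cdf(y):
        return float(np.sum(weights * norm.cdf(y, loc=forecasts, scale=sigmas)))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) < q else (lo, mid)
    return 0.5 * (lo + hi)

bma_mean = float(np.sum(weights * forecasts))
print("BMA mean forecast:", bma_mean)
print("predictive density at the mean:", round(bma_pdf(bma_mean), 4))
print("90% predictive interval:",
      (round(bma_quantile(0.05), 1), round(bma_quantile(0.95), 1)))
```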

  2. A novel application of artificial neural network for wind speed estimation

    NASA Astrophysics Data System (ADS)

    Fang, Da; Wang, Jianzhou

    2017-05-01

    Providing accurate multi-step wind speed estimation models has increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANN). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the predictive improvements of the hybrid models compared with a single ANN and a typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speeds for four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.

  3. Development and evaluation of a hybrid averaged orbit generator

    NASA Technical Reports Server (NTRS)

    Mcclain, W. D.; Long, A. C.; Early, L. W.

    1978-01-01

    A rapid orbit generator based on a first-order application of the Generalized Method of Averaging has been developed for the Research and Development (R&D) version of the Goddard Trajectory Determination System (GTDS). The evaluation of the averaged equations of motion can use both numerically averaged and recursively evaluated, analytically averaged perturbation models. These equations are numerically integrated to obtain the secular and long-period motion. Factors affecting efficient orbit prediction are discussed and guidelines are presented for treatment of each major perturbation. Guidelines for obtaining initial mean elements compatible with the theory are presented. An overview of the orbit generator is presented and comparisons with high precision methods are given.

  4. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

    Evaluating the marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
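
    A minimal sketch of thermodynamic integration on a toy problem where the marginal likelihood is known in closed form: a Metropolis sampler draws from a ladder of power posteriors p(θ)·L(θ)^β, the expected log-likelihood is averaged at each β, and the evidence is obtained by integrating over β with the trapezoidal rule. The conjugate Gaussian model, step size and temperature ladder are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y_i ~ N(theta, sigma^2) with conjugate prior theta ~ N(mu0, tau0^2),
# so the marginal likelihood is available in closed form for comparison.
sigma, mu0, tau0 = 1.0, 0.0, 2.0
y = rng.normal(1.5, sigma, size=20)

def log_like(theta):
    return (-0.5 * np.sum((y - theta) ** 2) / sigma**2
            - y.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def log_prior(theta):
    return -0.5 * (theta - mu0) ** 2 / tau0**2 - np.log(tau0 * np.sqrt(2.0 * np.pi))

def power_posterior_draws(beta, n_iter=6000, step=0.6):
    """Metropolis sampling from the power posterior p(theta) * L(theta)^beta."""
    theta = mu0
    lp = log_prior(theta) + beta * log_like(theta)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_prior(prop) + beta * log_like(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        draws.append(theta)
    return np.array(draws[n_iter // 2:])          # discard burn-in

# Ladder of power coefficients from prior (beta = 0) to posterior (beta = 1),
# denser near zero where the integrand changes fastest (a common choice).
betas = np.linspace(0.0, 1.0, 21) ** 3
mean_ll = [np.mean([log_like(t) for t in power_posterior_draws(b)]) for b in betas]

# Thermodynamic integration: ln p(y) = integral over beta of E_beta[ln L].
log_evidence_ti = sum(0.5 * (mean_ll[i] + mean_ll[i - 1]) * (betas[i] - betas[i - 1])
                      for i in range(1, len(betas)))

# Closed-form marginal likelihood of the conjugate model, for reference.
n, ybar = y.size, y.mean()
log_evidence_exact = (-0.5 * n * np.log(2.0 * np.pi * sigma**2)
                      - 0.5 * np.log(1.0 + n * tau0**2 / sigma**2)
                      - 0.5 * np.sum((y - ybar) ** 2) / sigma**2
                      - 0.5 * (ybar - mu0) ** 2 / (sigma**2 / n + tau0**2))

print("thermodynamic integration estimate:", round(log_evidence_ti, 3))
print("exact log marginal likelihood:     ", round(log_evidence_exact, 3))
```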

  5. Model selection and model averaging in phylogenetics: advantages of akaike information criterion and bayesian approaches over likelihood ratio tests.

    PubMed

    Posada, David; Buckley, Thomas R

    2004-10-01

    Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
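
    As a small worked example of AIC-based model averaging, the sketch below converts maximized log-likelihoods into Akaike weights (w_i proportional to exp(-Δ_i/2)) and forms a model-averaged parameter estimate. The model names, log-likelihoods, parameter counts and kappa estimates are invented for illustration.

```python
import numpy as np

# Hypothetical substitution models with their maximized log-likelihoods and
# numbers of free parameters (all values invented for illustration).
models = {"JC69": (-3120.4, 1), "HKY85": (-3055.2, 5), "GTR+G": (-3049.8, 10)}

names = list(models)
logL = np.array([models[m][0] for m in names])
k = np.array([models[m][1] for m in names])

# AIC, delta-AIC, and Akaike weights.
aic = -2.0 * logL + 2.0 * k
delta = aic - aic.min()
akaike_weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()

for name, d, w in zip(names, delta, akaike_weights):
    print(f"{name:7s} deltaAIC = {d:7.2f}  weight = {w:.3f}")

# Model-averaged estimate of a parameter estimated under each model, e.g. a
# transition/transversion ratio kappa (values invented).
kappa_hat = np.array([1.0, 4.2, 4.5])
print("model-averaged kappa:", round(float(np.sum(akaike_weights * kappa_hat)), 3))
```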

  6. The averaging method in applied problems

    NASA Astrophysics Data System (ADS)

    Grebenikov, E. A.

    1986-04-01

    The book presents the family of methods for studying complicated non-linear oscillating systems known in the literature as the "averaging method". The author describes the constructive part of this method, that is, concrete forms and the corresponding algorithms, using mathematical models that are sufficiently general but built around concrete problems. The book is written so that a reader interested in the techniques and algorithms of the asymptotic theory of ordinary differential equations can solve such problems independently. It is intended for specialists in applied mathematics and mechanics.

  7. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  8. Estimating the average treatment effect on survival based on observational data and using partly conditional modeling.

    PubMed

    Gong, Qi; Schaubel, Douglas E

    2017-03-01

    Treatments are frequently evaluated in terms of their effect on patient survival. In settings where randomization of treatment is not feasible, observational data are employed, necessitating correction for covariate imbalances. Treatments are usually compared using a hazard ratio, and most existing methods that quantify the treatment effect through the survival function are applicable to treatments assigned at time 0. In the data structure of interest here, subjects typically begin follow-up untreated; time until treatment and the pretreatment death hazard are both heavily influenced by longitudinal covariates; and subjects may experience periods of treatment ineligibility. We propose semiparametric methods for estimating the average difference in restricted mean survival time attributable to a time-dependent treatment, that is, the average effect of treatment among the treated under current treatment assignment patterns. The pre- and posttreatment models are partly conditional, in that they use the covariate history up to the time of treatment. The pretreatment model is estimated through recently developed landmark analysis methods. For each treated patient, fitted pre- and posttreatment survival curves are projected out and then averaged in a manner that accounts for the censoring of treatment times. Asymptotic properties are derived and evaluated through simulation. The proposed methods are applied to liver transplant data in order to estimate the effect of liver transplantation on survival among transplant recipients under current practice patterns. © 2016, The International Biometric Society.

  9. The statistical average of optical properties for alumina particle cluster in aircraft plume

    NASA Astrophysics Data System (ADS)

    Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin

    2018-04-01

    We establish a model in which the monomer radius and the monomer number of alumina particle clusters in a plume follow lognormal distributions. Based on the Multi-Sphere T Matrix (MSTM) theory, we provide a method for computing the statistical average of the optical properties of alumina particle clusters in the plume, analyze the effect of different distributions and different detection wavelengths on these statistically averaged optical properties, and compare the statistically averaged optical properties obtained under the cluster model established in this study with those obtained under three simplified alumina particle models. The results show that the monomer number of an alumina particle cluster and its size distribution have a considerable effect on its statistically averaged optical properties. The statistically averaged optical properties at common detection wavelengths exhibit obvious differences, and these differences have a great effect on modeling the IR and UV radiation properties of the plume. Compared with the three simplified models, the alumina particle cluster model developed here features both higher extinction and higher scattering efficiencies. Therefore, an accurate description of the scattering properties of alumina particles in an aircraft plume is of great significance for the study of plume radiation properties.

  10. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... meter 3-run average (1 hour minimum sample time per run) Performance test (Method 29 of appendix A of this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method 10, 10A, or 10B, of appendix A of this part) Dioxins/furans...

  11. An impact analysis of forecasting methods and forecasting parameters on bullwhip effect

    NASA Astrophysics Data System (ADS)

    Silitonga, R. Y. H.; Jelly, N.

    2018-04-01

    The bullwhip effect is the amplification of demand variance from the downstream to the upstream end of a supply chain. Forecasting methods and forecasting parameters are recognized as factors that affect the bullwhip phenomenon, and simulation is a common way to study them; previous studies have simulated the bullwhip effect using mathematical equation models, information control models, computer programs and other approaches. In this study a spreadsheet program named Bullwhip Explorer was used to simulate the bullwhip effect. Several scenarios were developed to show how the bullwhip effect ratio changes with different forecasting methods and forecasting parameters. The forecasting methods used were mean demand, moving average, exponential smoothing, demand signalling, and minimum expected mean squared error; the forecasting parameters were the moving average period, smoothing parameter, signalling factor, and safety stock factor. The results showed that decreasing the moving average period, increasing the smoothing parameter, or increasing the signalling factor produces a larger bullwhip effect ratio, whereas the safety stock factor had no impact on the bullwhip effect.
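
    A minimal simulation sketch of the bullwhip effect ratio, Var(orders)/Var(demand), for a single echelon that uses a moving-average forecast inside an order-up-to policy. The order-up-to policy, lead time and demand process are assumptions made for illustration (the study itself used the Bullwhip Explorer spreadsheet); the qualitative trend, a larger ratio for shorter moving-average periods, matches the finding reported above.

```python
import numpy as np

rng = np.random.default_rng(7)

def bullwhip_ratio(ma_period, n=5000, lead_time=2, mu=100.0, sigma=10.0):
    """Simulate a single-echelon order-up-to policy with a moving-average
    demand forecast and return Var(orders)/Var(demand), the bullwhip ratio."""
    demand = rng.normal(mu, sigma, size=n)
    orders = np.empty(n)
    prev_base = 0.0
    for t in range(n):
        if t < ma_period:
            forecast = mu                       # warm-up: assume known mean
        else:
            forecast = demand[t - ma_period:t].mean()
        base_stock = forecast * (lead_time + 1)
        # Order-up-to: replace today's demand and adjust the base-stock level.
        orders[t] = demand[t] + (base_stock - prev_base)
        prev_base = base_stock
    return orders[ma_period:].var() / demand[ma_period:].var()

for p in (2, 4, 8, 16):
    print(f"MA period {p:2d}: bullwhip ratio = {bullwhip_ratio(p):.2f}")
```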

  12. Parameterisation of multi-scale continuum perfusion models from discrete vascular networks.

    PubMed

    Hyde, Eoin R; Michler, Christian; Lee, Jack; Cookson, Andrew N; Chabiniok, Radek; Nordsletten, David A; Smith, Nicolas P

    2013-05-01

    Experimental data and advanced imaging techniques are increasingly enabling the extraction of detailed vascular anatomy from biological tissues. Incorporation of anatomical data within perfusion models is non-trivial, due to heterogeneous vessel density and disparate radii scales. Furthermore, previous idealised networks have assumed a spatially repeating motif or periodic canonical cell, thereby allowing for a flow solution via homogenisation. However, such periodicity is not observed throughout anatomical networks. In this study, we apply various spatial averaging methods to discrete vascular geometries in order to parameterise a continuum model of perfusion. Specifically, a multi-compartment Darcy model was used to provide vascular scale separation for the fluid flow. Permeability tensor fields were derived from both synthetic and anatomically realistic networks using (1) porosity-scaled isotropic, (2) Huyghe and Van Campen, and (3) projected-PCA methods. The Darcy pressure fields were compared via a root-mean-square error metric to an averaged Poiseuille pressure solution over the same domain. The method of Huyghe and Van Campen performed better than the other two methods in all simulations, even for relatively coarse networks. Furthermore, inter-compartment volumetric flux fields, determined using the spatially averaged discrete flux per unit pressure difference, were shown to be accurate across a range of pressure boundary conditions. This work justifies the application of continuum flow models to characterise perfusion resulting from flow in an underlying vascular network.

  13. The stock-flow model of spatial data infrastructure development refined by fuzzy logic.

    PubMed

    Abdolmajidi, Ehsan; Harrie, Lars; Mansourian, Ali

    2016-01-01

    The system dynamics technique has been demonstrated to be a proper method by which to model and simulate the development of spatial data infrastructures (SDI). An SDI is a collaborative effort to manage and share spatial data at different political and administrative levels. It comprises various dynamically interacting quantitative and qualitative (linguistic) variables. To incorporate linguistic variables and their joint effects in an SDI-development model more effectively, we suggest employing fuzzy logic. Not all fuzzy models are able to model the dynamic behavior of SDIs properly. Therefore, this paper investigates different fuzzy models and their suitability for modeling SDIs. To that end, two inference methods and two defuzzification methods were used to fuzzify the joint effect of two variables in an existing SDI model. The results show that Average-Average inference with Center of Area defuzzification can better model the dynamics of SDI development.
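
    The sketch below shows one plausible reading of averaging-based inference with Center of Area (centroid) defuzzification for the joint effect of two variables: inputs are fuzzified with triangular membership functions, firing strengths are taken as the average of the two memberships, and the aggregated output set is defuzzified by its centroid. The variable names, membership functions and rule mapping are hypothetical, not those of the SDI model in the paper.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Two illustrative inputs on a 0-1 scale; the variable names (for example
# "collaboration level" and "metadata quality") are hypothetical.
x1, x2 = 0.65, 0.40

# Low / medium / high fuzzy sets shared by the inputs and the output.
sets = {"low": (-0.5, 0.0, 0.5), "medium": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}
mu1 = {k: float(trimf(x1, *p)) for k, p in sets.items()}
mu2 = {k: float(trimf(x2, *p)) for k, p in sets.items()}

# Averaging-based inference: the firing strength of each output set is the
# average of the two inputs' memberships (instead of min or product).
firing = {k: 0.5 * (mu1[k] + mu2[k]) for k in sets}

# Aggregate the clipped output sets and defuzzify by the center of area.
y = np.linspace(0.0, 1.0, 1001)
aggregated = np.zeros_like(y)
for k, strength in firing.items():
    aggregated = np.maximum(aggregated, np.minimum(strength, trimf(y, *sets[k])))

joint_effect = float(np.sum(y * aggregated) / np.sum(aggregated))
print("crisp joint effect of the two variables:", round(joint_effect, 3))
```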

  14. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect different scale models in the multi-scale problem of microwave use, equivalent material constants were researched numerically by a three-dimensional electromagnetic field analysis taking into account eddy current and displacement current. A volume averaged method and a standing wave method were used to introduce the equivalent material constants; water particles and aluminum particles were used as composite materials. Consumed electrical power was used for the evaluation. Water particles have the same equivalent material constants for both methods: the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for the two methods, and different electric power is obtained for the two models. The differing electromagnetic phenomena derive from the expression of the eddy current. For a small electrical conductivity such as that of water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for a large electrical conductivity such as that of aluminum, the macro-current and micro-current express different electromagnetic phenomena: the eddy current observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constants derived from the volume averaged method and the standing wave method are applicable to water, with its small electrical conductivity, but not to aluminum, with its large electrical conductivity. PMID:28788395

  15. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, process identification has generally been based on a deterministic process conceptualization that uses a single model to represent each process. However, environmental systems are complex, and it often happens that a single process can be simulated by multiple alternative models. Ignoring this model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses the problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, the new method evaluates the variance change when a process is fixed at each of its different conceptualizations. The variance accounts for both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using the new method. The method is mathematically general and can be applied to a wide range of environmental problems.

  16. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3
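
    A minimal sketch of performance-based reliability weighting in the spirit of the upgraded REA method described above: each model receives a reliability factor that decreases as its present-day bias exceeds a natural-variability threshold (no convergence criterion), and the weighted mean change and weighted spread are computed. The bias values, projected changes and the exact form of the reliability factor are illustrative assumptions.

```python
import numpy as np

# Hypothetical regional ensemble: present-day precipitation bias of each model
# (mm/day, relative to observations) and its projected change (mm/day).
bias = np.array([0.3, -1.2, 0.6, 2.1, -0.4, 0.9])
change = np.array([0.15, 0.40, 0.22, 0.55, 0.18, 0.30])
epsilon = 0.8   # a measure of natural variability used to normalize the bias

# Performance-based reliability factor: 1 for models whose bias lies within
# natural variability, decreasing as the bias grows (the convergence criterion
# is omitted, as in the upgraded method described above).
reliability = np.minimum(1.0, epsilon / np.abs(bias))
weights = reliability / reliability.sum()

rea_change = float(np.sum(weights * change))
# Reliability-weighted spread around the weighted mean as an uncertainty measure.
rea_spread = float(np.sqrt(np.sum(weights * (change - rea_change) ** 2)))

print("simple ensemble mean change:", round(change.mean(), 3))
print("REA weighted change:", round(rea_change, 3), "+/-", round(rea_spread, 3))
```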

  17. Building generic anatomical models using virtual model cutting and iterative registration.

    PubMed

    Xiao, Mei; Soh, Jung; Meruvia-Pastor, Oscar; Schmidt, Eric; Hallgrímsson, Benedikt; Sensen, Christoph W

    2010-02-08

    Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Our method is flexible and easy to use, allowing users to create models from image stacks and retrieve sub-regions of interest with little effort. The Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of interest quickly and accurately.

  18. Explicitly Representing the Solvation Shell in Continuum Solvent Calculations

    PubMed Central

    Svendsen, Hallvard F.; Merz, Kenneth M.

    2009-01-01

    A method is presented to explicitly represent the first solvation shell in continuum solvation calculations. Initial solvation shell geometries were generated with classical molecular dynamics simulations. Clusters consisting of solute and 5 solvent molecules were fully relaxed in quantum mechanical calculations. The free energy of solvation of the solute was calculated from the free energy of formation of the cluster and the solvation free energy of the cluster calculated with continuum solvation models. The method has been implemented with two continuum solvation models, a Poisson-Boltzmann model and the IEF-PCM model. Calculations were carried out for a set of 60 ionic species. Implemented with the Poisson-Boltzmann model the method gave an unsigned average error of 2.1 kcal/mol and a RMSD of 2.6 kcal/mol for anions, for cations the unsigned average error was 2.8 kcal/mol and the RMSD 3.9 kcal/mol. Similar results were obtained with the IEF-PCM model. PMID:19425558

  19. Area-averaged evapotranspiration over a heterogeneous land surface: aggregation of multi-point EC flux measurements with a high-resolution land-cover map and footprint analysis

    NASA Astrophysics Data System (ADS)

    Xu, Feinan; Wang, Weizhen; Wang, Jiemin; Xu, Ziwei; Qi, Yuan; Wu, Yueru

    2017-08-01

    The determination of area-averaged evapotranspiration (ET) at the satellite pixel scale/model grid scale over a heterogeneous land surface plays a significant role in developing and improving the parameterization schemes of remote-sensing-based ET estimation models and general hydro-meteorological models. The Heihe Watershed Allied Telemetry Experimental Research (HiWATER) flux matrix provided a unique opportunity to build an aggregation scheme for area-averaged fluxes. On the basis of the HiWATER flux matrix dataset and a high-resolution land-cover map, this study focused on estimating the area-averaged ET over a heterogeneous landscape with footprint analysis and multivariate regression. The procedure is as follows. Firstly, quality control and uncertainty estimation were carefully carried out for the flux matrix data, which include 17 eddy-covariance (EC) sites and four groups of large-aperture scintillometers (LASs). Secondly, the representativeness of each EC site was quantitatively evaluated, and footprint analysis was also performed for each LAS path. Thirdly, based on the high-resolution land-cover map derived from aircraft remote sensing, a flux aggregation method was established combining footprint analysis and multiple linear regression. The area-averaged sensible heat fluxes obtained from the EC flux matrix were then validated by the LAS measurements. Finally, the area-averaged ET of the kernel experimental area of HiWATER was estimated. Compared with the formerly used and rather simple approaches, such as the arithmetic average and area-weighted methods, the present scheme not only rests on a much better database but also has a solid grounding in physics and mathematics for the integration of area-averaged fluxes over a heterogeneous surface. Results from this study, both instantaneous and daily ET at the satellite pixel scale, can be used for the validation of relevant remote sensing models and land surface process models. Furthermore, this work will be extended to the water balance study of the whole Heihe River basin.
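
    The sketch below illustrates the core of a footprint-plus-regression aggregation: footprint-weighted land-cover fractions at each EC tower are regressed against the measured fluxes to obtain per-class component fluxes, which are then area-weighted with the land-cover map to give the area-averaged flux. The tower count matches the HiWATER matrix, but the land-cover classes, fractions and flux values are invented, and the regression form is a simplified stand-in for the paper's scheme.

```python
import numpy as np

# Hypothetical setup: 17 EC towers and 4 land-cover classes. F[i, k] is the
# footprint-weighted fraction of class k inside tower i's source area;
# flux[i] is the measured sensible heat flux (W m^-2). All values are invented.
rng = np.random.default_rng(3)
true_class_flux = np.array([220.0, 180.0, 150.0, 260.0])
F = rng.dirichlet(alpha=[2, 2, 2, 1], size=17)            # rows sum to 1
flux = F @ true_class_flux + rng.normal(0.0, 8.0, 17)     # EC measurements

# Multiple linear regression (no intercept): per-class component fluxes.
class_flux, *_ = np.linalg.lstsq(F, flux, rcond=None)

# Area-averaged flux over the kernel area using land-cover map fractions.
area_fraction = np.array([0.55, 0.20, 0.15, 0.10])
area_avg_flux = float(area_fraction @ class_flux)

print("fitted per-class fluxes:", np.round(class_flux, 1))
print("area-averaged flux:", round(area_avg_flux, 1), "W m^-2")
print("simple arithmetic mean of towers:", round(float(flux.mean()), 1), "W m^-2")
```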

  20. Model Identification of Integrated ARMA Processes

    ERIC Educational Resources Information Center

    Stadnytska, Tetiana; Braun, Simone; Werner, Joachim

    2008-01-01

    This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…

  1. A Comparison of Averaged and Full Models to Study the Third-Body Perturbation

    PubMed Central

    Solórzano, Carlos Renato Huaura; Prado, Antonio Fernando Bertachini de Almeida

    2013-01-01

    The effects of a third-body travelling in a circular orbit around a main body on a massless satellite that is orbiting the same main body are studied under two averaged models, single and double, where expansions of the disturbing function are made, and the full restricted circular three-body problem. The goal is to compare the behavior of these two averaged models against the full problem for long-term effects, in order to have some knowledge of their differences. The single averaged model eliminates the terms due to the short period of the spacecraft. The double average is taken over the mean motion of the satellite and the mean motion of the disturbing body, so removing both short period terms. As an example of the methods, an artificial satellite around the Earth perturbed by the Moon is used. A detailed study of the effects of different initial conditions in the orbit of the spacecraft is made. PMID:24319348

  2. A comparison of averaged and full models to study the third-body perturbation.

    PubMed

    Solórzano, Carlos Renato Huaura; Prado, Antonio Fernando Bertachini de Almeida

    2013-01-01

    The effects of a third-body travelling in a circular orbit around a main body on a massless satellite that is orbiting the same main body are studied under two averaged models, single and double, where expansions of the disturbing function are made, and the full restricted circular three-body problem. The goal is to compare the behavior of these two averaged models against the full problem for long-term effects, in order to have some knowledge of their differences. The single averaged model eliminates the terms due to the short period of the spacecraft. The double average is taken over the mean motion of the satellite and the mean motion of the disturbing body, so removing both short period terms. As an example of the methods, an artificial satellite around the Earth perturbed by the Moon is used. A detailed study of the effects of different initial conditions in the orbit of the spacecraft is made.

  3. United3D: a protein model quality assessment program that uses two consensus based methods.

    PubMed

    Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko

    2012-01-01

    In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program, United3D, that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates quality scores (Qscore) for predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the QA methods that participated in CASP9, indicating that United3D was the best of the tested methods at identifying high-quality models among those predicted by the CASP9 servers on 116 targets. United3D also produced a high average Pearson correlation coefficient (0.93) and an acceptable Kendall rank correlation coefficient (0.68) between the Qscore and GDT_TS. This performance was competitive with the other top-ranked QA methods tested in CASP9. These results indicate that United3D is a useful tool for selecting high-quality models from many candidate model structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.

  4. Probabilistic models for capturing more physicochemical properties on protein-protein interface.

    PubMed

    Guo, Fei; Li, Shuai Cheng; Du, Pufeng; Wang, Lusheng

    2014-06-23

    Protein-protein interactions play a key role in a multitude of biological processes, such as signal transduction, de novo drug design, immune responses, and enzymatic activities. It is of great interest to understand how proteins interact with each other. The general approach is to explore all possible poses and identify near-native ones with the energy function. The key issue here is to design an effective energy function, based on various physicochemical properties. In this paper, we first identify two new features, the coupled dihedral angles on the interfaces and the geometrical information on π-π interactions. We study these two features through statistical methods: a mixture of bivariate von Mises distributions is used to model the correlation of the coupled dihedral angles, while a mixture of bivariate normal distributions is used to model the orientation of the aromatic rings on π-π interactions. Using 6438 complexes, we parametrize the joint distribution of each new feature. Then, we propose a novel method to construct the energy function for protein-protein interface prediction, which includes the new features as well as the existing energy items such as dDFIRE energy, side-chain energy, atom contact energy, and amino acid energy. Experiments show that our method outperforms the state-of-the-art methods, ZRANK and ClusPro. We use the CAPRI evaluation criteria, Irmsd value, and Fnat value. On Benchmark v4.0, our method has an average Irmsd value of 3.39 Å and Fnat value of 62%, which improves upon the average Irmsd value of 3.89 Å and Fnat value of 49% for ZRANK, and the average Irmsd value of 3.99 Å and Fnat value of 46% for ClusPro. On the CAPRI targets, our method has an average Irmsd value of 3.56 Å and Fnat value of 42%, which improves upon the average Irmsd value of 4.27 Å and Fnat value of 39% for ZRANK, the average Irmsd value of 5.15 Å and Fnat value of 30% for ClusPro.

  5. Generalized seasonal autoregressive integrated moving average models for count data with application to malaria time series with low case numbers.

    PubMed

    Briët, Olivier J T; Amerasinghe, Priyanie H; Vounatsou, Penelope

    2013-01-01

    With the renewed drive towards malaria elimination, there is a need for improved surveillance tools. While time series analysis is an important tool for surveillance, prediction and for measuring interventions' impact, approximations by commonly used Gaussian methods are prone to inaccuracies when case counts are low. Therefore, statistical methods appropriate for count data are required, especially during "consolidation" and "pre-elimination" phases. Generalized autoregressive moving average (GARMA) models were extended to generalized seasonal autoregressive integrated moving average (GSARIMA) models for parsimonious observation-driven modelling of non-Gaussian, non-stationary and/or seasonal time series of count data. The models were applied to monthly malaria case time series in a district in Sri Lanka, where malaria has decreased dramatically in recent years. The malaria series showed long-term changes in the mean, unstable variance and seasonality. After fitting negative-binomial Bayesian models, both a GSARIMA and a GARIMA deterministic seasonality model were selected based on different criteria. Posterior predictive distributions indicated that negative-binomial models provided better predictions than Gaussian models, especially when counts were low. The G(S)ARIMA models were able to capture the autocorrelation in the series. G(S)ARIMA models may be particularly useful in the drive towards malaria elimination, since episode count series are often seasonal and non-stationary, especially when control is increased. Although building and fitting GSARIMA models is laborious, they may provide more realistic prediction distributions than do Gaussian methods and may be more suitable when counts are low.

  6. Variability analysis of SAR from 20 MHz to 2.4 GHz for different adult and child models using finite-difference time-domain

    NASA Astrophysics Data System (ADS)

    Conil, E.; Hadjem, A.; Lacroux, F.; Wong, M. F.; Wiart, J.

    2008-03-01

    This paper deals with the variability of body models used in numerical dosimetry studies. Six adult anthropomorphic voxel models were collected and used to build 5-, 8- and 12-year-old child models using a morphing method that respects anatomical parameters. Finite-difference time-domain calculations of the specific absorption rate (SAR) were performed for a range of frequencies from 20 MHz to 2.4 GHz for isolated models illuminated by plane waves. The whole-body-averaged SAR is presented, as well as averages over specific tissues such as skin, muscle, fat or bone and over specific parts of the body such as the head, legs, arms or torso. The results highlight the variability of the adult models: the standard deviation of the whole-body-averaged SAR of the adult models can reach 40%. All phantoms are exposed to the ICNIRP reference levels. The results show that for adults, compliance with the reference levels ensures compliance with the basic restrictions, but for the child models involved in this study the whole-body-averaged SAR exceeds the fundamental safety limits by up to 40%. For more information on this article, see medicalphysicsweb.org

  7. The Objective Borderline Method: A Probabilistic Method for Standard Setting

    ERIC Educational Resources Information Center

    Shulruf, Boaz; Poole, Phillippa; Jones, Philip; Wilkinson, Tim

    2015-01-01

    A new probability-based standard setting technique, the Objective Borderline Method (OBM), was introduced recently. This was based on a mathematical model of how test scores relate to student ability. The present study refined the model and tested it using 2500 simulated data-sets. The OBM was feasible to use. On average, the OBM performed well…

  8. Parametric Cost and Schedule Modeling for Early Technology Development

    DTIC Science & Technology

    2018-04-02

    Recoverable fragments: the paper received the Best Paper in the Analysis Methods Category and 2017 Best Paper Overall awards and was presented at the 2017 NASA Cost and Schedule Symposium; a figure lists methods over the project life cycle; and the text notes that limited information contributes to the lack of data, objective models, and methods that can be broadly applied in the early planning stages of technology development.

  9. Statistically Assessing Time-Averaged and Paleosecular Variation Field Models Against Paleomagnetic Directional Data Sets. Can Likely non-Zonal Features be Detected in a Robust way ?

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Khokhlov, A.

    2007-12-01

    We recently introduced a method to rigorously test the statistical compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database (Khokhlov et al., 2001, 2006). Applying this method to test (TAF+PSV) models against synthetic data produced from those models shows that the method is very efficient at discriminating models, and very sensitive, provided data errors are properly taken into account. This prompted us to test a variety of published combined (TAF+PSV) models against a test Brunhes stable polarity data set extracted from the Quidelleur et al. (1994) database. Not surprisingly, ignoring data errors leads all models to be rejected. But taking data errors into account leads to the stimulating conclusion that at least one (TAF+PSV) model appears to be compatible with the selected data set, this model being purely axisymmetric. This result shows that in practice also, and with the databases currently available, the method can discriminate various candidate models and decide which actually best fits a given data set. But it also shows that likely non-zonal signatures of non-homogeneous boundary conditions imposed by the mantle are difficult to identify as statistically robust from paleomagnetic directional data sets. In the present paper, we discuss the possibility that such signatures could eventually be identified as robust with the help of more recent data sets (such as the one put together under the collaborative "TAFI" effort, see e.g. Johnson et al. abstract #GP21A-0013, AGU Fall Meeting, 2005) or by taking additional information into account (such as the possible coincidence of non-zonal time-averaged field patterns with analogous patterns in the modern field).

  10. The hybrid RANS/LES of partially premixed supersonic combustion using G/Z flamelet model

    NASA Astrophysics Data System (ADS)

    Wu, Jinshui; Wang, Zhenguo; Bai, Xuesong; Sun, Mingbo; Wang, Hongbo

    2016-10-01

    In order to describe partially premixed supersonic combustion numerically, a G/Z flamelet model is developed and compared with a finite-rate model in hybrid RANS/LES simulations of the strut-injection supersonic combustion flow field designed by the German Aerospace Center. A new temperature calculation method based on a time-splitting treatment of the total energy is introduced in the G/Z flamelet model. Simulation results show that temperature predictions in the partially premixed zone by the G/Z flamelet model are more consistent with experiment than those of the finite-rate model. It is worth mentioning that the low-temperature reaction zone behind the strut is well reproduced. Other quantities, such as the average velocity and the average velocity fluctuation obtained with the developed G/Z flamelet model, are also in good agreement with experiment. In addition, the G/Z flamelet results reveal the mechanism of partially premixed supersonic combustion through analysis of the interaction between the turbulent burning velocity and the flow field.

  11. Accurate template-based modeling in CASP12 using the IntFOLD4-TS, ModFOLD6, and ReFOLD methods.

    PubMed

    McGuffin, Liam J; Shuid, Ahmad N; Kempster, Robert; Maghrabi, Ali H A; Nealon, John O; Salehe, Bajuna R; Atkins, Jennifer D; Roche, Daniel B

    2018-03-01

    Our aim in CASP12 was to improve our Template-Based Modeling (TBM) methods through better model selection, accuracy self-estimate (ASE) scores and refinement. To meet this aim, we developed two new automated methods, which we used to score, rank, and improve upon the provided server models. Firstly, the ModFOLD6_rank method, for improved global Quality Assessment (QA), model ranking and the detection of local errors. Secondly, the ReFOLD method for fixing errors through iterative QA guided refinement. For our automated predictions we developed the IntFOLD4-TS protocol, which integrates the ModFOLD6_rank method for scoring the multiple-template models that were generated using a number of alternative sequence-structure alignments. Overall, our selection of top models and ASE scores using ModFOLD6_rank was an improvement on our previous approaches. In addition, it was worthwhile attempting to repair the detected errors in the top selected models using ReFOLD, which gave us an overall gain in performance. According to the assessors' formula, the IntFOLD4 server ranked 3rd/5th (average Z-score > 0.0/-2.0) on the server only targets, and our manual predictions (McGuffin group) ranked 1st/2nd (average Z-score > -2.0/0.0) compared to all other groups. © 2017 Wiley Periodicals, Inc.

  12. A method to characterize average cervical spine ligament response based on raw data sets for implementation into injury biomechanics models.

    PubMed

    Mattucci, Stephen F E; Cronin, Duane S

    2015-01-01

    Experimental testing of cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first-derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s-1, 20 s-1, and 150-250 s-1, to determine the average force-displacement curves. Individual ligament coefficients of determination ranged from 0.989 to 1.000, demonstrating excellent fits. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with toe, linear, and traumatic regions, as often observed in ligaments and tendons, and the approach could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
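
    A minimal sketch of a piece-wise fit with first-derivative continuity, here just a quadratic toe region joined smoothly to a linear region, fitted to synthetic force-displacement data; the published method additionally handles the traumatic region, rate dependence and the averaging across specimens. The functional form, parameters and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def toe_linear(d, a, d1):
    """Quadratic toe region F = a*d^2 for d <= d1, followed by a line with
    matching value and slope (first-derivative continuity) for d > d1."""
    d = np.asarray(d, dtype=float)
    toe = a * d**2
    linear = a * d1**2 + 2.0 * a * d1 * (d - d1)
    return np.where(d <= d1, toe, linear)

# Synthetic force-displacement data with a toe region (values are invented).
rng = np.random.default_rng(1)
disp = np.linspace(0.0, 3.0, 60)
force = toe_linear(disp, a=12.0, d1=1.2) + rng.normal(0.0, 2.0, disp.size)

popt, _ = curve_fit(toe_linear, disp, force, p0=[5.0, 1.0])
a_fit, d1_fit = popt
print(f"fitted toe coefficient a = {a_fit:.2f}, transition displacement d1 = {d1_fit:.2f}")

# Coefficient of determination, analogous to the per-ligament R^2 reported above.
resid = force - toe_linear(disp, *popt)
r2 = 1.0 - np.sum(resid**2) / np.sum((force - force.mean())**2)
print("R^2 =", round(r2, 4))
```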

  13. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  14. 40 CFR Table 2 to Subpart Ffff of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... micrograms per dry standard cubic meter 3-run average (1 hour minimum sample time per run) Method 29 of appendix A of this part. 2. Carbon monoxide 40 parts per million by dry volume 3-run average (1 hour minimum sample time per run during performance test), and 12-hour rolling averages measured using CEMS b...

  15. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... this part) Carbon monoxide 157 parts per million by dry volume 3-run average (1 hour minimum sample... per million by dry volume 3-run average (1 hour minimum sample time per run) Performance test (Method... appendix A of this part) Oxides of nitrogen 388 parts per million by dry volume 3-run average (1 hour...

  16. Online quantitative analysis of multispectral images of human body tissues

    NASA Astrophysics Data System (ADS)

    Lisenko, S. A.

    2013-08-01

    A method is developed for online monitoring of the structural and morphological parameters of biological tissues (haemoglobin concentration, degree of blood oxygenation, average capillary diameter and a parameter characterising the average size of tissue scatterers). It involves multispectral tissue imaging, normalisation of the image to one of its spectral layers, and determination of the unknown parameters from their stable regression relation with the spectral characteristics of the normalised image. The regression is obtained by numerically simulating the diffuse reflectance spectrum of the tissue with the Monte Carlo method over a wide variation of the model parameters. The correctness of the model calculations is confirmed by good agreement with experimental data. The error of the method is estimated under conditions of general variability of the structural and morphological parameters of the tissue. The developed method is compared with traditional methods for interpreting multispectral images of biological tissues, which are based on solving the inverse problem for each pixel of the image in the approximation of different analytical models.

  17. Prognostics of slurry pumps based on a moving-average wear degradation index and a general sequential Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Tse, Peter W.

    2015-05-01

    Slurry pumps are commonly used in oil-sand mining for pumping mixtures of abrasive liquids and solids. These operations cause constant wear of slurry pump impellers, which results in the breakdown of the slurry pumps. This paper develops a prognostic method for estimating remaining useful life of slurry pump impellers. First, a moving-average wear degradation index is proposed to assess the performance degradation of the slurry pump impeller. Secondly, the state space model of the proposed health index is constructed. A general sequential Monte Carlo method is employed to derive the parameters of the state space model. The remaining useful life of the slurry pump impeller is estimated by extrapolating the established state space model to a specified alert threshold. Data collected from an industrial oil sand pump were used to validate the developed method. The results show that the accuracy of the developed method improves as more data become available.
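
    As a simplified stand-in for the paper's approach, the sketch below computes a moving-average degradation index from a noisy wear indicator and extrapolates a fitted linear trend to an alert threshold to obtain a rough remaining-useful-life estimate; the paper instead propagates a state space model whose parameters are derived with a general sequential Monte Carlo method. All signals and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical raw wear indicator per inspection (e.g., derived from vibration
# features), trending upward with noise. All values are invented.
t = np.arange(120)
raw = 0.02 * t + 0.3 * rng.standard_normal(t.size)

window = 10
# Moving-average degradation index to smooth short-term fluctuations.
index = np.convolve(raw, np.ones(window) / window, mode="valid")
index_t = t[window - 1:]                        # times aligned with the index

# Naive prognosis: fit a linear trend to the recent index history and
# extrapolate it to the alert threshold.
threshold = 3.0
recent = slice(-30, None)
slope, intercept = np.polyfit(index_t[recent], index[recent], deg=1)
t_hit = (threshold - intercept) / slope
rul = t_hit - t[-1]
print(f"current index = {index[-1]:.2f}, estimated RUL = {rul:.1f} inspections")
```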

  18. Estimating V̄s(30) (or NEHRP site classes) from shallow velocity models (depths < 30 m)

    USGS Publications Warehouse

    Boore, David M.

    2004-01-01

    The average velocity to 30 m [V̄s(30)] is a widely used parameter for classifying sites to predict their potential to amplify seismic shaking. In many cases, however, models of shallow shear-wave velocities, from which V̄s(30) can be computed, do not extend to 30 m. If the data for these cases are to be used, some method of extrapolating the velocities must be devised. Four methods for doing this are described here and are illustrated using data from 135 boreholes in California for which the velocity model extends to at least 30 m. Methods using correlations between shallow velocity and V̄s(30) result in significantly less bias for shallow models than the simplest method of assuming that the lowermost velocity extends to 30 m. In addition, for all methods the percent of sites misclassified is generally less than 10% and falls to negligible values for velocity models extending to at least 25 m. Although the methods using correlations do a better job on average of estimating V̄s(30), the simplest method will generally result in a lower value of V̄s(30) and thus yield a more conservative estimate of ground motion [which generally increases as V̄s(30) decreases].
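
    As a concrete illustration of the quantity being estimated, the sketch below computes the time-averaged velocity to 30 m from a layered model and, when the model is shallower than 30 m, applies the simplest extrapolation mentioned in the abstract (extending the lowermost velocity down to 30 m). The correlation-based extrapolation methods are not reproduced here.

```python
import numpy as np

def vs30(thicknesses, velocities, target_depth=30.0):
    """Time-averaged shear-wave velocity to 30 m:
    V̄s(30) = 30 / sum_i(h_i / v_i), with h_i in m and v_i in m/s.

    If the model is shallower than 30 m, the lowermost velocity is
    extended to 30 m (the simplest extrapolation described above).
    """
    h = np.asarray(thicknesses, dtype=float)
    v = np.asarray(velocities, dtype=float)
    depths = np.cumsum(h)
    travel_time, top = 0.0, 0.0
    for bottom, vel in zip(depths, v):
        if top >= target_depth:
            break
        dz = min(bottom, target_depth) - top
        travel_time += dz / vel
        top = min(bottom, target_depth)
    if top < target_depth:                 # model shallower than 30 m
        travel_time += (target_depth - top) / v[-1]
    return target_depth / travel_time

# Example: a 3-layer model that only reaches 20 m depth
print(vs30([5.0, 5.0, 10.0], [180.0, 300.0, 450.0]))
```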

  19. Local and average structure of Mn- and La-substituted BiFeO3

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  20. Local and average structure of Mn- and La-substituted BiFeO3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bo; Selbach, Sverre M.

    2017-06-01

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions.

  1. An automatic step adjustment method for average power analysis technique used in fiber amplifiers

    NASA Astrophysics Data System (ADS)

    Liu, Xue-Ming

    2006-04-01

    An automatic step adjustment (ASA) method for average power analysis (APA) technique used in fiber amplifiers is proposed in this paper for the first time. In comparison with the traditional APA technique, the proposed method has suggested two unique merits such as a higher order accuracy and an ASA mechanism, so that it can significantly shorten the computing time and improve the solution accuracy. A test example demonstrates that, by comparing to the APA technique, the proposed method increases the computing speed by more than a hundredfold under the same errors. By computing the model equations of erbium-doped fiber amplifiers, the numerical results show that our method can improve the solution accuracy by over two orders of magnitude at the same amplifying section number. The proposed method has the capacity to rapidly and effectively compute the model equations of fiber Raman amplifiers and semiconductor lasers.

  2. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network

    PubMed Central

    Yu, Ying; Wang, Yirui; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then we train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This comparison also showed that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient. PMID:28246527
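
    A minimal sketch of the two-stage idea follows, assuming monthly data in a 1-D NumPy array: statsmodels' SARIMAX stands in for the SARIMA stage, and scikit-learn's MLPRegressor stands in for the dendritic neural network, which is not available as a packaged model. The (1,1,1)x(1,1,1,s) orders, lag count and network size are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

def fit_hybrid(y, seasonal_period=12, lags=4):
    """Two-stage hybrid in the spirit of the SA-D model: SARIMA captures
    the linear/seasonal component, a neural network models the residuals."""
    sarima = SARIMAX(y, order=(1, 1, 1),
                     seasonal_order=(1, 1, 1, seasonal_period)).fit(disp=False)
    resid = np.asarray(sarima.resid)
    # Build lagged-residual features for the nonlinear stage
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    target = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                      random_state=0).fit(X, target)
    return sarima, nn

def forecast_one_step(sarima, nn, recent_resid):
    """One-step forecast = SARIMA forecast + NN correction built from
    the last `lags` residuals."""
    linear = float(np.asarray(sarima.forecast(steps=1))[0])
    correction = float(nn.predict(np.asarray(recent_resid).reshape(1, -1))[0])
    return linear + correction
```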

  3. Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network.

    PubMed

    Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng

    2017-01-01

    With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods leads us to innovate forecasting methods. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then we train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This comparison also showed that the SA-D model achieved good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient.

  4. Monitoring gray wolf populations using multiple survey methods

    USGS Publications Warehouse

    Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.

    2013-01-01

    The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates, and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach, including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any one survey method. Occupancy modeling may be useful for standardizing estimates across large landscapes, even if survey methods differ across regions, allowing for inferences about broad-scale population dynamics of wolves.

  5. High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.

    PubMed

    Haxton, Thomas K

    2015-03-10

    We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.

  6. Simulations of Spray Reacting Flows in a Single Element LDI Injector With and Without Invoking an Eulerian Scalar PDF Method

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    This paper presents the numerical simulations of the Jet-A spray reacting flow in a single element lean direct injection (LDI) injector by using the National Combustion Code (NCC) with and without invoking the Eulerian scalar probability density function (PDF) method. The flow field is calculated by using the Reynolds averaged Navier-Stokes equations (RANS and URANS) with nonlinear turbulence models, and when the scalar PDF method is invoked, the energy and compositions or species mass fractions are calculated by solving the equation of an ensemble averaged density-weighted fine-grained probability density function that is referred to here as the averaged probability density function (APDF). A nonlinear model for closing the convection term of the scalar APDF equation is used in the presented simulations and will be briefly described. Detailed comparisons between the results and available experimental data are carried out. Some positive findings of invoking the Eulerian scalar PDF method in both improving the simulation quality and reducing the computing cost are observed.

  7. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, Max

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.

  8. A Divergence Median-based Geometric Detector with A Weighted Averaging Filter

    NASA Astrophysics Data System (ADS)

    Hua, Xiaoqiang; Cheng, Yongqiang; Li, Yubo; Wang, Hongqiang; Qin, Yuliang

    2018-01-01

    To overcome the performance degradation of the classical fast Fourier transform (FFT)-based constant false alarm rate detector with limited sample data, a divergence median-based geometric detector on the Riemannian manifold of Hermitian positive definite matrices is proposed in this paper. In particular, an autocorrelation matrix is used to model the correlation of the sample data. This modeling approach avoids the poor Doppler resolution as well as the energy spread of the Doppler filter banks resulting from the FFT. Moreover, a weighted averaging filter, conceived from the philosophy of bilateral filtering in image denoising, is proposed and combined within the geometric detection framework. Because the weighted averaging filter acts as clutter suppression, the performance of the geometric detector is improved. Numerical experiments are given to validate the effectiveness of the proposed method.

  9. Current Trends in Modeling Research for Turbulent Aerodynamic Flows

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.; Rumsey, Christopher L.; Manceau, Remi

    2007-01-01

    The engineering tools of choice for the computation of practical engineering flows have begun to migrate from those based on the traditional Reynolds-averaged Navier-Stokes approach to methodologies capable, in theory if not in practice, of accurately predicting some instantaneous scales of motion in the flow. The migration has largely been driven both by the success of Reynolds-averaged methods over a wide variety of flows and by the inherent limitations of the method itself. Practitioners, emboldened by their ability to predict a wide variety of statistically steady, equilibrium turbulent flows, have now turned their attention to flow control and non-equilibrium flows, that is, separation control. This review gives some current priorities in traditional Reynolds-averaged modeling research as well as some methodologies being applied to a new class of turbulent flow control problems.

  10. Unbiased mean direction of paleomagnetic data and better estimate of paleolatitude

    NASA Astrophysics Data System (ADS)

    Hatakeyama, T.; Shibuya, H.

    2010-12-01

    In paleomagnetism, when we obtain only paleodirection data without paleointensities, we calculate Fisher-mean directions (I, D) and Fisher-mean VGP positions as the description of the mean field. However, Kono (1997) and Hatakeyama and Kono (2001) indicated that these averaged directions do not give unbiased estimates of the mean direction derived from the time-averaged field (TAF). Hatakeyama and Kono (2002) calculated TAF and paleosecular variation (PSV) models for the past 5 My while accounting for the biases introduced by averaging nonlinear functions, such as the summation of unit vectors in the Fisher statistics. Here we will show a zonal TAF model based on the Hatakeyama and Kono TAF model. Moreover, we will introduce the bias angles in the mean direction due to the PSV and a method for determining true paleolatitudes, representative of the TAF, from paleodirections. This method will help tectonic studies, especially the estimation of accurate paleolatitudes in middle-latitude regions.
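
    For reference, the Fisher mean direction discussed above is obtained by summing unit vectors; a minimal sketch (using the paleomagnetic convention of x north, y east, z down) is given below. The bias-correction and zonal TAF modelling steps of the abstract are not reproduced.

```python
import numpy as np

def fisher_mean(declinations_deg, inclinations_deg):
    """Fisher mean direction: sum the unit vectors of (D, I) pairs and
    renormalise.  Also returns the precision parameter estimate
    k ≈ (N - 1) / (N - R), where R is the resultant length."""
    D = np.radians(np.asarray(declinations_deg, float))
    I = np.radians(np.asarray(inclinations_deg, float))
    # Unit vectors: x north, y east, z down
    x = np.cos(I) * np.cos(D)
    y = np.cos(I) * np.sin(D)
    z = np.sin(I)
    sx, sy, sz = x.sum(), y.sum(), z.sum()
    R = np.sqrt(sx**2 + sy**2 + sz**2)
    mean_D = np.degrees(np.arctan2(sy, sx)) % 360.0
    mean_I = np.degrees(np.arcsin(sz / R))
    N = D.size
    kappa = (N - 1) / (N - R) if N > R else np.inf
    return mean_D, mean_I, kappa
```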

  11. Estimation of Cloud Fraction Profile in Shallow Convection Using a Scanning Cloud Radar

    DOE PAGES

    Oue, Mariko; Kollias, Pavlos; North, Kirk W.; ...

    2016-10-18

    Large spatial heterogeneities in shallow convection result in uncertainties in estimations of domain-averaged cloud fraction profiles (CFP). This issue is addressed using large eddy simulations of shallow convection over land coupled with a radar simulator. Results indicate that zenith profiling observations are inadequate to provide reliable CFP estimates. Use of Scanning Cloud Radar (SCR), performing a sequence of cross-wind horizon-to-horizon scans, is not straightforward due to the strong dependence of radar sensitivity to target distance. An objective method for estimating domain-averaged CFP is proposed that uses observed statistics of SCR hydrometeor detection with height to estimate optimum sampling regions. This method shows good agreement with the model CFP. Results indicate that CFP estimates require more than 35 min of SCR scans to converge on the model domain average. Lastly, the proposed technique is expected to improve our ability to compare model output with cloud radar observations in shallow cumulus cloud conditions.

  12. [Evaluation of the influence of humidity and temperature on the drug stability by initial average rate experiment].

    PubMed

    He, Ning; Sun, Hechun; Dai, Miaomiao

    2014-05-01

    To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, extent of drug degradation, number of humidity and temperature settings, humidity and temperature range, and average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from this proposed method were comparable to those from the classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature range was changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, our proposed method saves time, labor, and materials.

  13. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of the dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing the calculated results with the coding data of JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method is of high stability, low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB of JM 16.0's method, with an average rate control error of 1.95% and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.

  14. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking Northeast China electricity demand as an example. The data were obtained from historical records of wind power from an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted values. The hybrid model combines the persistence method, MLR and LS. The proposed method includes two prediction types, multi-point prediction and single-point prediction. WPP is tested by applying different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) models. By comparing the results of the above models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient. The comparison of results confirmed that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.
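
    A minimal sketch of the multiple-linear-regression stage fitted by ordinary least squares, together with a persistence baseline, is given below. The actual predictors, the least-squares variant and the ratio transformation used in the paper are not specified in the abstract, so everything here should be read as an illustrative assumption.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit of a multiple linear regression
    y ≈ X @ beta + b.  X holds lagged wind-power ratios or other
    predictors (the paper's exact feature set is an assumption here)."""
    A = np.column_stack([X, np.ones(len(X))])   # append an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_mlr(coef, X):
    """Apply the fitted coefficients to new predictor rows."""
    A = np.column_stack([X, np.ones(len(X))])
    return A @ coef

def persistence(y_last):
    """Persistence baseline: the next value equals the last observation."""
    return y_last
```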

  15. Comparing CT perfusion with oxygen partial pressure in a rabbit VX2 soft-tissue tumor model.

    PubMed

    Sun, Chang-Jin; Li, Chao; Lv, Hai-Bo; Zhao, Cong; Yu, Jin-Ming; Wang, Guang-Hui; Luo, Yun-Xiu; Li, Yan; Xiao, Mingyong; Yin, Jun; Lang, Jin-Yi

    2014-01-01

    The aim of this study was to evaluate the oxygen partial pressure in the rabbit VX2 tumor model using 64-slice perfusion CT and to compare the results with those obtained using the oxygen microelectrode method. Perfusion CT was performed for 45 successfully constructed rabbit models of a VX2 brain tumor. The perfusion values of the brain tumor region of interest, the blood volume (BV), the time to peak (TTP) and the peak enhancement intensity (PEI) were measured. The results were compared with the partial pressure of oxygen (PO2) of that region of interest obtained using the oxygen microelectrode method. The perfusion values of the brain tumor region of interest in the 45 successfully constructed rabbit models of a VX2 brain tumor ranged from 1.3-127.0 ml/min/ml (average, 21.1 ± 26.7 ml/min/ml); BV ranged from 1.2-53.5 ml/100g (average, 22.2 ± 13.7 ml/100g); PEI ranged from 8.7-124.6 HU (average, 43.5 ± 28.7 HU); and TTP ranged from 8.2-62.3 s (average, 38.8 ± 14.8 s). The PO2 in the corresponding region ranged from 0.14-47 mmHg (average, 16 ± 14.8 mmHg). The perfusion CT measurements positively correlated with the tumor PO2, which can be used for evaluating tumor hypoxia in clinical practice.

  16. Two-Point Turbulence Closure Applied to Variable Resolution Modeling

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.; Rubinstein, Robert

    2011-01-01

    Variable resolution methods have become frontline CFD tools, but in order to take full advantage of this promising new technology, more formal theoretical development is desirable. Two general classes of variable resolution methods can be identified: hybrid or zonal methods in which RANS and LES models are solved in different flow regions, and bridging or seamless models which interpolate smoothly between RANS and LES. This paper considers the formulation of bridging methods using methods of two-point closure theory. The fundamental problem is to derive a subgrid two-equation model. We compare and reconcile two different approaches to this goal: the Partially Integrated Transport Model, and the Partially Averaged Navier-Stokes method.

  17. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    PubMed Central

    Jiang, Hao; Zhao, Dehua; Cai, Ying; An, Shuqing

    2012-01-01

    In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models to different times or locations necessarily originated from the same satellite sensor as that from which the original images used in model development came, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our results suggest that Method of 0.1% index scaling provides a feasible way to apply CT models directly to images from sensors or time periods that differ from those of the images used to develop the original models.
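
    The sketch below illustrates percentile-based scaling in the spirit of the "0.1% index scaling" normalization: clip a spectral-index image at its 0.1th and 99.9th percentiles and rescale to [0, 1], so that thresholds learned on one sensor can be transferred to another. The paper's exact recipe may differ in detail, so treat this as an assumption-laden illustration.

```python
import numpy as np

def index_scale(si_image: np.ndarray, tail: float = 0.1) -> np.ndarray:
    """Normalise a spectral-index (SI) image by its extreme pixel values.

    Pixels are clipped at the `tail` and 100 - `tail` percentiles and
    rescaled to [0, 1].  NaN pixels (e.g. cloud or land masks) are ignored
    when computing the percentiles.
    """
    lo, hi = np.nanpercentile(si_image, [tail, 100.0 - tail])
    scaled = (si_image - lo) / (hi - lo)
    return np.clip(scaled, 0.0, 1.0)
```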

  18. Fourier descriptor analysis and unification of voice range profile contours: method and applications.

    PubMed

    Pabon, Peter; Ternström, Sten; Lamarche, Anick

    2011-06-01

    To describe a method for unified description, statistical modeling, and comparison of voice range profile (VRP) contours, even from diverse sources. A morphologic modeling technique, which is based on Fourier descriptors (FDs), is applied to the VRP contour. The technique, which essentially involves resampling of the curve of the contour, is assessed and also is compared to density-based VRP averaging methods that use the overlap count. VRP contours can be usefully described and compared using FDs. The method also permits the visualization of the local covariation along the contour average. For example, the FD-based analysis shows that the population variance for ensembles of VRP contours is usually smallest at the upper left part of the VRP. To illustrate the method's advantages and possible further application, graphs are given that compare the averaged contours from different authors and recording devices--for normal, trained, and untrained male and female voices as well as for child voices. The proposed technique allows any VRP shape to be brought to the same uniform base. On this uniform base, VRP contours or contour elements coming from a variety of sources may be placed within the same graph for comparison and for statistical analysis.
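
    The sketch below shows one common way to compute low-order Fourier descriptors of a closed contour: resample the boundary by arc length, encode the points as complex numbers, and keep the lowest FFT harmonics. The authors' specific parametrisation and normalisation of VRP contours are not reproduced, so the axis scaling and harmonic count here are assumptions.

```python
import numpy as np

def fourier_descriptors(x, y, n_points=128, n_harmonics=10):
    """Low-order Fourier descriptors of a closed contour.

    The contour (e.g. a voice range profile boundary in the
    fundamental-frequency/SPL plane) is resampled to `n_points` by
    cumulative arc length, encoded as z = x + iy, and described by its
    DC term plus the lowest positive and negative FFT harmonics.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    # Close the contour and resample uniformly by cumulative arc length
    xc = np.append(x, x[0])
    yc = np.append(y, y[0])
    seg = np.hypot(np.diff(xc), np.diff(yc))
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n_points, endpoint=False)
    z = np.interp(t, s, xc) + 1j * np.interp(t, s, yc)
    coeffs = np.fft.fft(z) / n_points
    # DC term (centroid) plus the lowest positive/negative harmonics
    idx = np.concatenate([np.arange(0, n_harmonics + 1),
                          np.arange(n_points - n_harmonics, n_points)])
    return coeffs[idx]
```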

  19. Computational studies of transthoracic and transvenous defibrillation in a detailed 3-D human thorax model.

    PubMed

    Jorgenson, D B; Haynor, D R; Bardy, G H; Kim, Y

    1995-02-01

    A method for constructing and solving detailed patient-specific 3-D finite element models of the human thorax is presented for use in defibrillation studies. The method utilizes the patient's own X-ray CT scan and a simplified meshing scheme to quickly and efficiently generate a model typically composed of approximately 400,000 elements. A parameter sensitivity study on one human thorax model to examine the effects of variation in assigned tissue resistivity values, level of anatomical detail included in the model, and number of CT slices used to produce the model is presented. Of the seven tissue types examined, the average left ventricular (LV) myocardial voltage gradient was most sensitive to the values of myocardial and blood resistivity. Incorrectly simplifying the model, for example modeling the heart as a homogeneous structure by ignoring the blood in the chambers, caused the average LV myocardial voltage gradient to increase by 12%. The sensitivity of the model to variations in electrode size and position was also examined. Small changes (< 2.0 cm) in electrode position caused average LV myocardial voltage gradient values to increase by up to 12%. We conclude that patient-specific 3-D finite element modeling of human thoracic electric fields is feasible and may reduce the empiric approach to insertion of implantable defibrillators and improve transthoracic defibrillation techniques.

  20. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  1. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
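
    As an illustration of the simplest members of this family, the sketch below forms an unweighted multi-model average and a weighted average whose weights are fitted by least squares against observed flows over a calibration period. The exact weight definitions used for WAM, MMSE and M3SE in the study are not reproduced; the least-squares fit is one common choice.

```python
import numpy as np

def simple_multimodel_average(predictions: np.ndarray) -> np.ndarray:
    """SMA: unweighted mean of member predictions.
    `predictions` has shape (n_models, n_timesteps)."""
    return predictions.mean(axis=0)

def least_squares_weights(predictions: np.ndarray,
                          observed: np.ndarray) -> np.ndarray:
    """Weights minimising the squared error of the combined hydrograph
    over a calibration period (an assumed stand-in for the study's
    weighted-average scheme)."""
    w, *_ = np.linalg.lstsq(predictions.T, observed, rcond=None)
    return w

def weighted_average(predictions: np.ndarray,
                     weights: np.ndarray) -> np.ndarray:
    """Combine member hydrographs with the given weights."""
    return weights @ predictions
```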

  2. A passage retrieval method based on probabilistic information retrieval model and UMLS concepts in biomedical question answering.

    PubMed

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-04-01

    Passage retrieval, the identification of top-ranked passages that may contain the answer for a given biomedical question, is a crucial component for any biomedical question answering (QA) system. Passage retrieval in open-domain QA is a longstanding challenge widely studied over the last decades. However, it still requires further efforts in biomedical QA. In this paper, we present a new biomedical passage retrieval method based on Stanford CoreNLP sentence/passage length, a probabilistic information retrieval (IR) model and UMLS concepts. In the proposed method, we first use our document retrieval system, based on the PubMed search engine and UMLS similarity, to retrieve documents relevant to a given biomedical question. We then take the abstracts from the retrieved documents and use the Stanford CoreNLP sentence splitter to produce a set of sentences, i.e., candidate passages. Using stemmed words and UMLS concepts as features for the BM25 model, we finally compute the similarity scores between the biomedical question and each of the candidate passages and keep the N top-ranked ones. Experimental evaluations performed on large standard datasets, provided by the BioASQ challenge, show that the proposed method achieves good performances compared with the current state-of-the-art methods. The proposed method significantly outperforms the current state-of-the-art methods by an average of 6.84% in terms of mean average precision (MAP). We have proposed an efficient passage retrieval method which can be used to retrieve relevant passages in biomedical QA systems with high mean average precision.
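
    The passage-scoring step can be illustrated with the standard Okapi BM25 formula, using stemmed words and/or UMLS concept identifiers as terms; the sketch below uses the common defaults k1 = 1.2 and b = 0.75, which are assumptions rather than the paper's tuned values.

```python
import math
from collections import Counter

def bm25_score(query_terms, passage_terms, doc_freq, n_passages,
               avg_len, k1=1.2, b=0.75):
    """Okapi BM25 score of one candidate passage for a query.

    `query_terms` and `passage_terms` are lists of features (stemmed
    words and/or UMLS concept IDs); `doc_freq` maps a term to the number
    of candidate passages containing it; `avg_len` is the average
    passage length in terms.
    """
    tf = Counter(passage_terms)
    length = len(passage_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        df = doc_freq.get(term, 0)
        idf = math.log((n_passages - df + 0.5) / (df + 0.5) + 1.0)
        denom = tf[term] + k1 * (1.0 - b + b * length / avg_len)
        score += idf * tf[term] * (k1 + 1.0) / denom
    return score
```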

  3. Comparison of the gravimetric, phenol red, and 14C-PEG-3350 methods to determine water absorption in the rat single-pass intestinal perfusion model.

    PubMed

    Sutton, S C; Rinaldi, M T; Vukovinsky, K E

    2001-01-01

    This study was undertaken to determine whether the gravimetric method provided an accurate measure of water flux correction and to compare the gravimetric method with methods that employ nonabsorbed markers (eg, phenol red and 14C-PEG-3350). Phenol red, 14C-PEG-3350, and 4-[2-[[2-(6-amino-3-pyridinyl)-2-hydroxyethyl]amino]ethoxy]-, methyl ester, (R)-benzene acetic acid (Compound I) were co-perfused in situ through the jejunum of 9 anesthetized rats (single-pass intestinal perfusion [SPIP]). Water absorption was determined from the phenol red, 14C-PEG-3350, and gravimetric methods. The absorption rate constant (ka) for Compound I was calculated. Both phenol red and 14C-PEG-3350 were appreciably absorbed, underestimating the extent of water flux in the SPIP model. The average +/- SD water fluxes (microg/h/cm) for the 3 methods were 68.9 +/- 28.2 (gravimetric), 26.8 +/- 49.2 (phenol red), and 34.9 +/- 21.9 (14C-PEG-3350). The average +/- SD ka for Compound I (uncorrected for water flux) was 0.024 +/- 0.005 min(-1). For the corrected, gravimetric method, the average +/- SD was 0.031 +/- 0.001 min(-1). The gravimetric method for correcting water flux was as accurate as the 2 "nonabsorbed" marker methods.

  4. Accounting for dropout in xenografted tumour efficacy studies: integrated endpoint analysis, reduced bias and better use of animals.

    PubMed

    Martin, Emma C; Aarons, Leon; Yates, James W T

    2016-07-01

    Xenograft studies are commonly used to assess the efficacy of new compounds and characterise their dose-response relationship. Analysis often involves comparing the final tumour sizes across dose groups. This can cause bias, as often in xenograft studies a tumour burden limit (TBL) is imposed for ethical reasons, leading to the animals with the largest tumours being excluded from the final analysis. This means the average tumour size, particularly in the control group, is underestimated, leading to an underestimate of the treatment effect. Four methods to account for dropout due to the TBL are proposed, which use all the available data instead of only final observations: modelling, pattern mixture models, treating dropouts as censored using the M3 method and joint modelling of tumour growth and dropout. The methods were applied to both a simulated data set and a real example. All four proposed methods led to an improvement in the estimate of treatment effect in the simulated data. The joint modelling method performed most strongly, with the censoring method also providing a good estimate of the treatment effect, but with higher uncertainty. In the real data example, the dose-response estimated using the censoring and joint modelling methods was higher than the very flat curve estimated from average final measurements. Accounting for dropout using the proposed censoring or joint modelling methods allows the treatment effect to be recovered in studies where it may have been obscured due to dropout caused by the TBL.

  5. Learning Instance-Specific Predictive Models

    PubMed Central

    Visweswaran, Shyam; Cooper, Gregory F.

    2013-01-01

    This paper introduces a Bayesian algorithm for constructing predictive models from data that are optimized to predict a target variable well for a particular instance. This algorithm learns Markov blanket models, carries out Bayesian model averaging over a set of models to predict a target variable of the instance at hand, and employs an instance-specific heuristic to locate a set of suitable models to average over. We call this method the instance-specific Markov blanket (ISMB) algorithm. The ISMB algorithm was evaluated on 21 UCI data sets using five different performance measures and its performance was compared to that of several commonly used predictive algorithms, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, Lazy Bayesian Rules, and AdaBoost. Over all the data sets, the ISMB algorithm performed better on average than all the comparison algorithms on all performance measures. PMID:25045325
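
    The model-averaging step can be illustrated generically: weight each candidate model's predictive probability by its (approximate) posterior model probability and sum. The sketch below assumes a uniform model prior and log marginal-likelihood scores as inputs; it does not reproduce the ISMB algorithm's instance-specific Markov blanket search.

```python
import numpy as np

def bma_predict(log_marginal_likelihoods, model_predictions):
    """Bayesian model averaging of per-model predictive probabilities.

    `log_marginal_likelihoods[k]` approximates log p(data | model_k)
    (e.g. from a Bayesian score); `model_predictions[k]` is model k's
    predictive probability for the target of the instance at hand.
    A uniform prior over models is assumed.
    """
    logs = np.asarray(log_marginal_likelihoods, float)
    logs -= logs.max()                    # stabilise the exponentiation
    weights = np.exp(logs)
    weights /= weights.sum()              # posterior model probabilities
    return float(weights @ np.asarray(model_predictions, float))
```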

  6. Visualizing the uncertainty in the relationship between seasonal average climate and malaria risk.

    PubMed

    MacLeod, D A; Morse, A P

    2014-12-02

    Around $1.6 billion per year is spent financing anti-malaria initiatives, and though malaria morbidity is falling, the impact of annual epidemics remains significant. Whilst malaria risk may increase with climate change, projections are highly uncertain and to sidestep this intractable uncertainty, adaptation efforts should improve societal ability to anticipate and mitigate individual events. Anticipation of climate-related events is made possible by seasonal climate forecasting, from which warnings of anomalous seasonal average temperature and rainfall, months in advance are possible. Seasonal climate hindcasts have been used to drive climate-based models for malaria, showing significant skill for observed malaria incidence. However, the relationship between seasonal average climate and malaria risk remains unquantified. Here we explore this relationship, using a dynamic weather-driven malaria model. We also quantify key uncertainty in the malaria model, by introducing variability in one of the first order uncertainties in model formulation. Results are visualized as location-specific impact surfaces: easily integrated with ensemble seasonal climate forecasts, and intuitively communicating quantified uncertainty. Methods are demonstrated for two epidemic regions, and are not limited to malaria modeling; the visualization method could be applied to any climate impact.

  7. Visualizing the uncertainty in the relationship between seasonal average climate and malaria risk

    NASA Astrophysics Data System (ADS)

    MacLeod, D. A.; Morse, A. P.

    2014-12-01

    Around $1.6 billion per year is spent financing anti-malaria initiatives, and though malaria morbidity is falling, the impact of annual epidemics remains significant. Whilst malaria risk may increase with climate change, projections are highly uncertain and to sidestep this intractable uncertainty, adaptation efforts should improve societal ability to anticipate and mitigate individual events. Anticipation of climate-related events is made possible by seasonal climate forecasting, from which warnings of anomalous seasonal average temperature and rainfall, months in advance are possible. Seasonal climate hindcasts have been used to drive climate-based models for malaria, showing significant skill for observed malaria incidence. However, the relationship between seasonal average climate and malaria risk remains unquantified. Here we explore this relationship, using a dynamic weather-driven malaria model. We also quantify key uncertainty in the malaria model, by introducing variability in one of the first order uncertainties in model formulation. Results are visualized as location-specific impact surfaces: easily integrated with ensemble seasonal climate forecasts, and intuitively communicating quantified uncertainty. Methods are demonstrated for two epidemic regions, and are not limited to malaria modeling; the visualization method could be applied to any climate impact.

  8. Quantum cluster variational method and message passing algorithms revisited

    NASA Astrophysics Data System (ADS)

    Domínguez, E.; Mulet, Roberto

    2018-02-01

    We present a general framework to study quantum disordered systems in the context of Kikuchi's cluster variational method (CVM). The method relies on the solution of message-passing-like equations for single instances or on the iterative solution of complex population dynamics algorithms for an average-case scenario. We first show how a standard application of Kikuchi's CVM can be easily translated to message-passing equations for specific instances of the disordered system. We then present an "ad hoc" extension of these equations to a population dynamics algorithm representing an average-case scenario. At the Bethe level, these equations are equivalent to the population dynamics equations that can be derived from a proper cavity ansatz. However, at the plaquette approximation, the interpretation is more subtle and we discuss it while also taking into account previous results on classical disordered models. Moreover, we develop a formalism to properly deal with the average-case scenario using a replica-symmetric ansatz within this CVM for quantum disordered systems. Finally, we present and discuss numerical solutions of the different approximations for the quantum transverse Ising model and the quantum random field Ising model on two-dimensional lattices.

  9. 40 CFR 600.512-12 - Model year report.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... CFR parts 531 or 533 as applicable, and the applicable fleet average CO2 emission standards. Model... standards. Model year reports shall include a statement that the method of measuring vehicle track width... models and the applicable in-use CREE emission standard. The list of models shall include the applicable...

  10. 40 CFR 600.512-12 - Model year report.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CFR parts 531 or 533 as applicable, and the applicable fleet average CO2 emission standards. Model... standards. Model year reports shall include a statement that the method of measuring vehicle track width... models and the applicable in-use CREE emission standard. The list of models shall include the applicable...

  11. 40 CFR 600.512-12 - Model year report.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CFR parts 531 or 533 as applicable, and the applicable fleet average CO2 emission standards. Model... standards. Model year reports shall include a statement that the method of measuring vehicle track width... models and the applicable in-use CREE emission standard. The list of models shall include the applicable...

  12. Spatial averaging of a dissipative particle dynamics model for active suspensions

    NASA Astrophysics Data System (ADS)

    Panchenko, Alexander; Hinz, Denis F.; Fried, Eliot

    2018-03-01

    Starting from a fine-scale dissipative particle dynamics (DPD) model of self-motile point particles, we derive meso-scale continuum equations by applying a spatial averaging version of the Irving-Kirkwood-Noll procedure. Since the method does not rely on kinetic theory, the derivation is valid for highly concentrated particle systems. Spatial averaging yields stochastic continuum equations similar to those of Toner and Tu. However, our theory also involves a constitutive equation for the average fluctuation force. According to this equation, both the strength and the probability distribution vary with time and position through the effective mass density. The statistics of the fluctuation force also depend on the fine scale dissipative force equation, the physical temperature, and two additional parameters which characterize fluctuation strengths. Although the self-propulsion force entering our DPD model contains no explicit mechanism for aligning the velocities of neighboring particles, our averaged coarse-scale equations include the commonly encountered cubically nonlinear (internal) body force density.

  13. A new type of exact arbitrarily inhomogeneous cosmology: evolution of deceleration in the flat homogeneous-on-average case

    NASA Astrophysics Data System (ADS)

    Hellaby, Charles

    2012-01-01

    A new method for constructing exact inhomogeneous universes is presented, that allows variation in 3 dimensions. The resulting spacetime may be statistically uniform on average, or have random, non-repeating variation. The construction utilises the Darmois junction conditions to join many different component spacetime regions. In the initial simple example given, the component parts are spatially flat and uniform, but much more general combinations should be possible. Further inhomogeneity may be added via swiss cheese vacuoles and inhomogeneous metrics. This model is used to explore the proposal, that observers are located in bound, non-expanding regions, while the universe is actually in the process of becoming void dominated, and thus its average expansion rate is increasing. The model confirms qualitatively that the faster expanding components come to dominate the average, and that inhomogeneity results in average parameters which evolve differently from those of any one component, but more realistic modelling of the effect will need this construction to be generalised.

  14. APOLLO: a quality assessment service for single and multiple protein models.

    PubMed

    Wang, Zheng; Eickholt, Jesse; Cheng, Jianlin

    2011-06-15

    We built a web server named APOLLO, which can evaluate the absolute global and local qualities of a single protein model using machine learning methods or the global and local qualities of a pool of models using a pair-wise comparison approach. Based on our evaluations on 107 CASP9 (Critical Assessment of Techniques for Protein Structure Prediction) targets, the predicted quality scores generated from our machine learning and pair-wise methods have an average per-target correlation of 0.671 and 0.917, respectively, with the true model quality scores. Based on our test on 92 CASP9 targets, our predicted absolute local qualities have an average difference of 2.60 Å with the actual distances to native structure. http://sysbio.rnet.missouri.edu/apollo/. Single and pair-wise global quality assessment software is also available at the site.

  15. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequal weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
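
    A minimal sketch of unequal, metric-based weighting follows: each model's error on a process-based metric (e.g. the outgoing-longwave-radiation versus surface-temperature relationship evaluated against CERES observations) is converted into a weight, and the projections are combined with those weights. The inverse-error-power weighting function below is an illustrative assumption, not the framework's prescribed choice.

```python
import numpy as np

def metric_weights(metric_errors, sharpness=2.0):
    """Turn per-model errors on a process-based metric into ensemble
    weights.  Smaller error -> larger weight; `sharpness` controls how
    strongly the weighting penalises poorly performing models."""
    err = np.asarray(metric_errors, float)
    w = 1.0 / np.power(err, sharpness)
    return w / w.sum()

def weighted_ensemble_mean(projections, weights):
    """`projections` has shape (n_models, ...); returns the
    unequal-weight ensemble average over the model axis."""
    w = np.asarray(weights, float).reshape(-1, *([1] * (projections.ndim - 1)))
    return (w * projections).sum(axis=0)
```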

  16. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods.

  17. Evaluation of Observation-Fused Regional Air Quality Model Results for Population Air Pollution Exposure Estimation

    PubMed Central

    Chen, Gang; Li, Jingyi; Ying, Qi; Sherman, Seth; Perkins, Neil; Rajeshwari, Sundaram; Mendola, Pauline

    2014-01-01

    In this study, the Community Multiscale Air Quality (CMAQ) model was applied to predict ambient gaseous and particulate concentrations during 2001 to 2010 in 15 hospital referral regions (HRRs) using a 36-km horizontal resolution domain. An inverse distance weighting based method was applied to produce exposure estimates based on observation-fused regional pollutant concentration fields, using the differences between observations and predictions at grid cells where air quality monitors were located. Although the raw CMAQ model is capable of producing satisfying results for O3 and PM2.5 based on EPA guidelines, using the observation data fusing technique to correct CMAQ predictions leads to significant improvement of model performance for all gaseous and particulate pollutants. Regional average concentrations were calculated using five different methods: 1) inverse distance weighting of observation data alone, 2) raw CMAQ results, 3) observation-fused CMAQ results, 4) population-averaged raw CMAQ results and 5) population-averaged fused CMAQ results. It shows that while the O3 (as well as NOx) monitoring networks in the HRR regions are dense enough to provide consistent regional average exposure estimation based on monitoring data alone, PM2.5 observation sites (as well as monitors for CO, SO2, PM10 and PM2.5 components) are usually sparse, and the average concentrations estimated from the inverse-distance-interpolated observations, the raw CMAQ results and the fused CMAQ results can differ significantly. Population-weighted averages should be used to account for spatial variation in pollutant concentration and population density. Using raw CMAQ results or observations alone might lead to significant biases in health outcome analyses. PMID:24747248
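
    The observation-fusion step can be sketched as an inverse-distance-weighted correction: at each grid cell, add a distance-weighted average of the monitor-minus-model differences to the raw model value. The power of 2 and the absence of a cutoff radius below are illustrative assumptions rather than the study's exact configuration.

```python
import numpy as np

def idw_fused_field(model_field, grid_x, grid_y,
                    monitor_x, monitor_y, monitor_bias, power=2.0):
    """Fuse observations into a gridded model field.

    `monitor_bias[i]` = observation - model prediction at monitor i.
    `grid_x`/`grid_y` are 2-D coordinate arrays matching `model_field`;
    the monitor arrays are 1-D.
    """
    mx = np.asarray(monitor_x, float)
    my = np.asarray(monitor_y, float)
    bias = np.asarray(monitor_bias, float)
    fused = np.array(model_field, dtype=float)
    for j in np.ndindex(fused.shape):
        d = np.hypot(grid_x[j] - mx, grid_y[j] - my)
        if np.any(d < 1e-9):                 # cell contains a monitor
            correction = bias[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            correction = np.sum(w * bias) / np.sum(w)
        fused[j] += correction
    return fused
```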

  18. Mass Function of Galaxy Clusters in Relativistic Inhomogeneous Cosmology

    NASA Astrophysics Data System (ADS)

    Ostrowski, Jan J.; Buchert, Thomas; Roukema, Boudewijn F.

    The current cosmological model (ΛCDM) with the underlying FLRW metric relies on the assumption of local isotropy, hence homogeneity of the Universe. Difficulties arise when one attempts to justify this model as an average description of the Universe from first principles of general relativity, since in general, the Einstein tensor built from the averaged metric is not equal to the averaged stress-energy tensor. In this context, the discrepancy between these quantities is called "cosmological backreaction" and has been the subject of scientific debate among cosmologists and relativists for more than 20 years. Here we present one of the methods to tackle this problem, i.e. averaging the scalar parts of the Einstein equations, together with its application, the cosmological mass function of galaxy clusters.

  19. The uses and limitations of the square‐root‐impedance method for computing site amplification

    USGS Publications Warehouse

    Boore, David

    2013-01-01

    The square‐root‐impedance (SRI) method is a fast way of computing approximate site amplification that does not depend on the details of velocity models. The SRI method underestimates the peak response of models with large impedance contrasts near their base, but the amplifications for those models are often close to or equal to the root mean square of the theoretical full resonant (FR) response of the higher modes. On the other hand, for velocity models made up of gradients, with no significant impedance changes across small ranges of depth, the SRI method systematically underestimates the theoretical FR response over a wide frequency range. For commonly used gradient models for generic rock sites, the SRI method underestimates the FR response by about 20%–30%. Notwithstanding the persistent underestimation relative to theoretical FR calculations, amplifications from the SRI method may often provide more useful estimates of amplifications than the FR method, because the SRI amplifications are not sensitive to details of the models and will not exhibit the many peaks and valleys characteristic of theoretical full resonant amplifications (jaggedness sometimes not seen in amplifications based on averages of site response from multiple recordings at a given site). The lack of sensitivity to details of the velocity models also makes the SRI method useful in comparing the response of various velocity models, in spite of any systematic underestimation of the response. The quarter‐wavelength average velocity, which is fundamental to the SRI method, is useful by itself in site characterization, and as such, is the fundamental parameter used to characterize the site response in a number of recent ground‐motion prediction equations.
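
    The quarter‐wavelength averaging that underlies the SRI method can be sketched as follows, under the usual assumption that the amplification is the square root of the ratio of the reference (source-region) seismic impedance to the impedance averaged down to the quarter-wavelength depth. The layered profile, reference velocity and densities below are placeholders, not values from the paper.

```python
import numpy as np

def sri_amplification(thick, vel, rho, freqs, v_ref=3500.0, rho_ref=2800.0):
    """Square-root-impedance (quarter-wavelength) site amplification for a layered
    model. thick/vel/rho describe layers from the surface down (the last layer is
    extended as a halfspace); v_ref and rho_ref are the reference values."""
    amps = []
    for f in freqs:
        target_tt = 1.0 / (4.0 * f)          # quarter-wavelength travel time
        tt = depth = mass = 0.0              # accumulated travel time, depth, rho*thickness
        for h, v, r in zip(thick, vel, rho):
            dt = h / v
            if tt + dt >= target_tt:         # quarter-wavelength depth lies in this layer
                dh = (target_tt - tt) * v
                depth += dh
                mass += r * dh
                break
            tt, depth, mass = tt + dt, depth + h, mass + r * h
        else:                                # deeper than the profile: extend the last layer
            dh = (target_tt - tt) * vel[-1]
            depth += dh
            mass += rho[-1] * dh
        v_avg = depth / target_tt            # quarter-wavelength average velocity
        rho_avg = mass / depth
        amps.append(np.sqrt((rho_ref * v_ref) / (rho_avg * v_avg)))
    return np.array(amps)

# Hypothetical three-layer rock profile (m, m/s, kg/m3)
print(sri_amplification(thick=[30.0, 200.0, 1000.0],
                        vel=[600.0, 1500.0, 2500.0],
                        rho=[2000.0, 2300.0, 2600.0],
                        freqs=[0.5, 1.0, 5.0]))
```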

  20. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study

    PubMed Central

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang

    2016-01-01

    Objective To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. Design A time-series study using a regional death registry between 2009 and 2010. Setting 8 districts in a large metropolitan area in Northern China. Participants 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Main outcome measures Per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. Results The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR, that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increase, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (−1.09 to 4.28 vs −1.08 to 3.93) and the PCs-based model (−2.23 to 4.07 vs −2.03 to 3.88). The CIs of the multipollutant model from the two methods were similar, that is, −1.12 to 4.85 versus −1.11 to 4.83. Conclusions The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. PMID:27531727
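
    The full GAMM+BMA workflow of this study is not reproduced here, but the core model-averaging idea can be illustrated with a BIC-based approximation to posterior model weights and one commonly used model-averaged variance formula. All BIC values and effect estimates below are hypothetical.

```python
import numpy as np

def bic_model_weights(bic):
    """Approximate posterior model probabilities from BIC values:
    w_k proportional to exp(-0.5 * (BIC_k - min BIC))."""
    bic = np.asarray(bic, dtype=float)
    w = np.exp(-0.5 * (bic - bic.min()))
    return w / w.sum()

def averaged_effect(estimates, variances, weights):
    """Model-averaged estimate, with a variance that adds the between-model
    spread to the within-model variances (one common choice in the BMA
    literature, not necessarily the one used in the paper)."""
    estimates, variances, weights = map(np.asarray, (estimates, variances, weights))
    avg = np.sum(weights * estimates)
    var = np.sum(weights * (variances + (estimates - avg) ** 2))
    return avg, var

# Hypothetical candidate models for the PM10 effect (% increase in MR per IQR)
bic = [1012.3, 1013.1, 1015.8]
est = [1.38, 1.45, 1.20]
se2 = [0.60 ** 2, 0.62 ** 2, 0.70 ** 2]
w = bic_model_weights(bic)
print(np.round(w, 3), averaged_effect(est, se2, w))
```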

  1. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature, and proposes an improved model to estimate it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method induced positive bias in dry areas but also negative bias in coastal areas, and its average root mean square error for all evaluated stations was 0.38 kPa. The improved model assumed a bi-linear relation between the estimated vapor pressure deficit (difference between saturated vapor pressure at minimum and average temperature) and the measured vapor pressure deficit. The parameters of these relations were estimated from historical annual median values of relative humidity. This model removed the bias and reduced the root mean square error to 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
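
    A minimal sketch of the baseline dew point method (Allen et al. 1998) and of the estimated vapor pressure deficit used as the predictor in the improved model is given below. The saturation vapor pressure formula is the standard FAO-56 Tetens-type expression; the bi-linear correction itself is omitted because its fitted parameters are not given in the abstract.

```python
import math

def sat_vapor_pressure(t_c):
    """Saturation vapor pressure (kPa) at temperature t_c (deg C), FAO-56 form."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def vapor_pressure_dew_point(t_min):
    """Dew point method: assume the dew point approaches the daily minimum
    temperature, so the actual vapor pressure e_a ~ e_sat(t_min)."""
    return sat_vapor_pressure(t_min)

def estimated_vpd(t_min, t_max):
    """Estimated vapor pressure deficit used by the improved model: difference
    between saturation vapor pressure at the average and at the minimum temperature."""
    t_avg = 0.5 * (t_min + t_max)
    return sat_vapor_pressure(t_avg) - sat_vapor_pressure(t_min)

print(round(vapor_pressure_dew_point(12.0), 3), "kPa")   # baseline estimate of e_a
print(round(estimated_vpd(12.0, 28.0), 3), "kPa")        # predictor for the bi-linear model
```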

  2. Rapid calculation of accurate atomic charges for proteins via the electronegativity equalization method.

    PubMed

    Ionescu, Crina-Maria; Geidl, Stanislav; Svobodová Vařeková, Radka; Koča, Jaroslav

    2013-10-28

    We focused on the parametrization and evaluation of empirical models for fast and accurate calculation of conformationally dependent atomic charges in proteins. The models were based on the electronegativity equalization method (EEM), and the parametrization procedure was tailored to proteins. We used large protein fragments as reference structures and fitted the EEM model parameters using atomic charges computed by three population analyses (Mulliken, Natural, iterative Hirshfeld), at the Hartree-Fock level with two basis sets (6-31G*, 6-31G**) and in two environments (gas phase, implicit solvation). We parametrized and successfully validated 24 EEM models. When tested on insulin and ubiquitin, all models reproduced quantum mechanics level charges well and were consistent with respect to population analysis and basis set. Specifically, the models showed on average a correlation of 0.961, RMSD 0.097 e, and average absolute error per atom 0.072 e. The EEM models can be used with the freely available EEM implementation EEM_SOLVER.
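
    The EEM reduces to a single linear system in the atomic charges and the common equalized electronegativity. The sketch below shows only that structure; the per-atom parameters, the Coulomb scaling constant and the toy geometry are placeholders, not the fitted values from the 24 models reported here.

```python
import numpy as np

def eem_charges(coords, A, B, total_charge=0.0, kappa=0.529):
    """Solve the EEM linear system: for every atom i,
    A_i + B_i*q_i + kappa*sum_j(q_j/R_ij) = chi_bar, with sum_i q_i = total_charge."""
    coords = np.asarray(coords, float)
    n = len(coords)
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    m = np.zeros((n + 1, n + 1))
    off_diag = kappa / np.where(r == 0.0, 1.0, r)          # guard the diagonal before overwrite
    m[:n, :n] = np.where(np.eye(n, dtype=bool), np.asarray(B, float), off_diag)
    m[:n, n] = -1.0                                        # unknown equalized electronegativity
    m[n, :n] = 1.0                                         # total-charge constraint row
    rhs = np.concatenate([-np.asarray(A, float), [total_charge]])
    sol = np.linalg.solve(m, rhs)
    return sol[:n]                                         # atomic charges; sol[n] is chi_bar

# Placeholder parameters for a water-like three-atom fragment (not fitted EEM values)
coords = [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]
A = [8.5, 7.2, 7.2]     # electronegativity-like terms for O, H, H (hypothetical)
B = [11.0, 13.8, 13.8]  # hardness-like terms (hypothetical)
print(eem_charges(coords, A, B))
```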

  3. Development of a comprehensive screening method for more than 300 organic chemicals in water samples using a combination of solid-phase extraction and liquid chromatography-time-of-flight-mass spectrometry.

    PubMed

    Chau, Hong Thi Cam; Kadokami, Kiwao; Ifuku, Tomomi; Yoshida, Yusuke

    2017-12-01

    A comprehensive screening method for 311 organic compounds with a wide range of physicochemical properties (log Pow -2.2 to 8.53) in water samples was developed by combining solid-phase extraction with liquid chromatography-high-resolution time-of-flight mass spectrometry. Method optimization using 128 pesticides revealed that tandem extraction with styrene-divinylbenzene polymer and activated carbon solid-phase extraction cartridges at pH 7.0 was optimal. The developed screening method was able to extract 190 model compounds with an average recovery of 80.8% and an average relative standard deviation (RSD) of 13.5% from spiked reagent water at 0.20 μg L-1, and with 87.1% recovery and 10.8% RSD at 0.05 μg L-1. Spike-recovery testing (0.20 μg L-1) using real sewage treatment plant effluents resulted in an average recovery and average RSD for the 190 model compounds of 77.4% and 13.1%, respectively. The method was applied to the influent and effluent of five sewage treatment plants in Kitakyushu, Japan, with 29 out of 311 analytes being observed at least once. The results showed that this method can screen for a large number of chemicals with a wide range of physicochemical properties quickly and at low operational cost, something that is difficult to achieve using conventional analytical methods. This method will find utility in target screening of hazardous chemicals with a high risk in environmental waters, and for confirming the safety of water after environmental incidents.

  4. Calm water resistance prediction of a bulk carrier using Reynolds averaged Navier-Stokes based solver

    NASA Astrophysics Data System (ADS)

    Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan

    2017-12-01

    Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model (JBC) in calm water using two computational fluid dynamics solvers, SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RaNS) method and solve a structured grid using the finite volume method (FVM). This paper compares the numerical results of the calm water test for the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for the pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while using minimal computational resources.

  5. Laplace-Fourier-domain dispersion analysis of an average derivative optimal scheme for scalar-wave equation

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Bo

    2014-06-01

    By using low-frequency components of the damped wavefield, Laplace-Fourier-domain full waveform inversion (FWI) can recover a long-wavelength velocity model from the original undamped seismic data lacking low-frequency information. Laplace-Fourier-domain modelling is an important foundation of Laplace-Fourier-domain FWI. Based on the numerical phase velocity and the numerical attenuation propagation velocity, a method for performing Laplace-Fourier-domain numerical dispersion analysis is developed in this paper. This method is applied to an average-derivative optimal scheme. The results show that within the relative error of 1 per cent, the Laplace-Fourier-domain average-derivative optimal scheme requires seven gridpoints per smallest wavelength and smallest pseudo-wavelength for both equal and unequal directional sampling intervals. In contrast, the classical five-point scheme requires 23 gridpoints per smallest wavelength and smallest pseudo-wavelength to achieve the same accuracy. Numerical experiments demonstrate the theoretical analysis.

  6. Evaluation of Four Methods for Predicting Carbon Stocks of Korean Pine Plantations in Heilongjiang Province, China

    PubMed Central

    Gao, Huilin; Dong, Lihu; Li, Fengri; Zhang, Lianjun

    2015-01-01

    A total of 89 trees of Korean pine (Pinus koraiensis) were destructively sampled from the plantations in Heilongjiang Province, P.R. China. The sample trees were measured, and the biomass and carbon stocks of the tree components (i.e., stem, branch, foliage and root) were calculated. Both compatible biomass and carbon stock models were developed with the total biomass and total carbon stocks as the constraints, respectively. Four methods were used to evaluate the carbon stocks of tree components. The first method predicted carbon stocks directly by the compatible carbon stock models (Method 1). The other three methods indirectly predicted the carbon stocks in two steps: (1) estimating the biomass by the compatible biomass models, and (2) multiplying the estimated biomass by three different carbon conversion factors (i.e., carbon conversion factor 0.5 (Method 2), average carbon concentration of the sample trees (Method 3), and average carbon concentration of each tree component (Method 4)). The prediction errors of estimating the carbon stocks were compared and tested for differences between the four methods. The results showed that the compatible biomass and carbon models with tree diameter (D) as the sole independent variable performed well, and Method 1 was the best method for predicting the carbon stocks of the tree components and the total. There were significant differences among the four methods for the carbon stock of the stem. Method 2 produced the largest error, especially for the stem and the total. Method 3 and Method 4 were slightly worse than Method 1, but the differences were not statistically significant. In practice, the indirect method using the mean carbon concentration of individual trees is sufficient to obtain accurate carbon stock estimates if carbon stock models are not available. PMID:26659257

  7. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter

    PubMed Central

    Huang, Lei

    2015-01-01

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy; thus, the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409

  8. A semi-analytic theory for the motion of a close-earth artificial satellite with drag

    NASA Technical Reports Server (NTRS)

    Liu, J. J. F.; Alford, R. L.

    1979-01-01

    A semi-analytic method is used to estimate the decay history/lifetime and to generate orbital ephemerides for close-earth satellites perturbed by atmospheric drag and earth oblateness due to the spherical harmonics J2, J3, and J4. The theory maintains efficiency through the application of the method of averaging, while retaining sufficient numerical treatment to include a rather sophisticated atmospheric density model. The averaged drag effects with respect to mean anomaly are evaluated by a Gauss-Legendre quadrature, while the averaged variational equations of motion are integrated numerically with automatic step size and error control.
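
    The averaging of the drag effects over mean anomaly by Gauss-Legendre quadrature can be sketched as below. The quadrature mapping is standard; the "drag rate" being averaged is a made-up stand-in for the actual perturbation evaluated with an atmospheric density model.

```python
import numpy as np

def average_over_mean_anomaly(f, n_nodes=16):
    """Average a perturbation f(M) over one orbit in mean anomaly M in [0, 2*pi]
    using Gauss-Legendre quadrature (the 1/(2*pi) factor makes it an average)."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)   # nodes/weights on [-1, 1]
    m = np.pi * (x + 1.0)                             # map nodes to [0, 2*pi]
    return 0.5 * np.sum(w * f(m))                     # (pi / (2*pi)) * sum(w * f)

# Hypothetical drag-like decay rate that peaks near perigee (M = 0)
ecc = 0.05
rate = lambda m: -1.0e-7 * (1.0 + ecc * np.cos(m)) ** 2
print(average_over_mean_anomaly(rate))
```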

  9. On-line algorithms for forecasting hourly loads of an electric utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vemuri, S.; Huang, W.L.; Nelson, D.J.

    A method that lends itself to on-line forecasting of hourly electric loads is presented, and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model which, in turn, is used to obtain a parsimonious autoregressive-moving average model. The method presented has several advantages in comparison with the Box-Jenkins method, including much less human intervention, improved model identification, and better results. The method is also more robust in that greater confidence can be placed in the accuracy of models based upon the various measures available at the identification stage.
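
    A sequential (recursive) least-squares identification of a finite-order autoregressive model, the first stage of the approach described above, might look like the sketch below. The model order, forgetting factor and synthetic load series are illustrative choices, not the authors' settings.

```python
import numpy as np

def rls_ar(y, order, lam=1.0, delta=1000.0):
    """Recursive least-squares estimation of AR coefficients from a series y.
    lam is a forgetting factor (1.0 reduces to ordinary least squares)."""
    theta = np.zeros(order)                  # AR coefficient estimates
    P = delta * np.eye(order)                # large initial "covariance"
    for t in range(order, len(y)):
        phi = y[t - order:t][::-1]           # regressor: most recent samples first
        err = y[t] - phi @ theta             # one-step-ahead prediction error
        k = P @ phi / (lam + phi @ P @ phi)  # gain
        theta = theta + k * err
        P = (P - np.outer(k, phi @ P)) / lam
    return theta

# Synthetic hourly-load-like series generated by an AR(2) process
rng = np.random.default_rng(0)
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + rng.normal(scale=0.1)
print(rls_ar(y, order=2))   # estimates should be close to [1.2, -0.5]
```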

  10. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit

    PubMed Central

    Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert

    2016-01-01

    Summary Background Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, using the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. Methods We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. Results The census showed a slightly increasing linear trend. Best fitting models included a non-seasonal model, ARIMA(1,0,0), seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14, as well as a seasonal linear regression model. The proposed forecasting models resulted, on average, in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Conclusions Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040
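
    A seasonal ARIMA census forecast of the kind evaluated here can be sketched with statsmodels and compared against a fixed-average baseline. The synthetic census series, the chosen model order and the baseline window below are placeholders for the actual hospital data and model selection.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily census series standing in for the NICU data (trend + weekly cycle)
rng = np.random.default_rng(1)
dates = pd.date_range("2008-01-01", periods=1827, freq="D")
census = pd.Series(40.0 + 0.002 * np.arange(1827)
                   + 2.0 * np.sin(2 * np.pi * np.arange(1827) / 7.0)
                   + rng.normal(scale=2.0, size=1827), index=dates)

# Seasonal ARIMA of the form mentioned above, e.g. ARIMA(1,0,0)x(1,1,2)7
fitted = ARIMA(census, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7)).fit()

# 7-day-ahead forecast versus a fixed-average baseline (previous year's mean)
forecast = fitted.forecast(steps=7)
fixed_average = np.full(7, census.iloc[-365:].mean())
print(np.round(forecast.values, 1))
print(np.round(fixed_average, 1))
```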

  11. An Interactive Multi-Model for Consensus on Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocarev, Ljupco

    This project aims to develop a new scheme for forming consensus among alternative climate models that give widely divergent projections as to the details of climate change, a scheme that is more intelligent than simply averaging the model outputs or averaging with ex post facto weighting factors. The method under development effectively allows models to assimilate data from one another at run time, with weights chosen in an adaptive training phase using 20th century data, so that the models synchronize with one another as well as with reality. An alternate approach being explored in parallel is the automated combination of equations from different models in an expert-system-like framework.

  12. Hydrological modelling of the Mara River Basin, Kenya: Dealing with uncertain data quality and calibrating using river stage

    NASA Astrophysics Data System (ADS)

    Hulsman, P.; Bogaard, T.; Savenije, H. H. G.

    2016-12-01

    In hydrology and water resources management, discharge is the main time series for model calibration. Rating curves are needed to derive discharge from continuously measured water levels. However, assuring their quality is demanding due to dynamic changes and problems in accurately deriving discharge at high flows. This holds everywhere, but even more so in the African socio-economic context. To cope with these uncertainties, this study proposes to use water levels instead of discharge data for calibration. Uncertainties in rainfall measurements, especially their spatial heterogeneity, also need to be considered. In this study, the semi-distributed rainfall runoff model FLEX-Topo was applied to the Mara River Basin. In this model, seven sub-basins were distinguished, along with four hydrological response units, each with a unique model structure based on the expected dominant flow processes. Parameter and process constraints were applied to exclude unrealistic results. To calibrate the model, the water levels were back-calculated from modelled discharges, using cross-section data and the Strickler formula with k·S^1/2 as the calibrating parameter, and compared to measured water levels. The model simulated the water depths well for the entire basin and the Nyangores sub-basin in the north. However, the calibrated and observed rating curves differed significantly at the basin outlet, probably due to uncertainties in the measured discharge, but at Nyangores they were almost identical. To assess the effect of rainfall uncertainties on the hydrological model, the representative rainfall in each sub-basin was estimated with three different methods: 1) single station, 2) average precipitation, 3) areal sub-division using Thiessen polygons. All three methods gave, on average, similar results, but method 1 resulted in flashier responses, method 2 dampened the water levels because of rainfall averaging, and method 3 was a combination of both. In conclusion, in the case of unreliable rating curves, water level data can be used instead and a new rating curve can be calibrated. The effect of rainfall uncertainties on the hydrological model was insignificant.
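
    Back-calculating a water level from modelled discharge with the Strickler formula, as done for calibration here, can be sketched for a rectangular cross-section as below. The channel width, the calibrated parameter k·S^1/2 and the discharge values are invented; the real study used surveyed cross-section data.

```python
from scipy.optimize import brentq

def water_level_from_discharge(q, width, c):
    """Invert the Strickler relation q = c * A * R**(2/3) for the depth h of a
    rectangular section, where c = k * sqrt(S) is the calibrated parameter,
    A = width*h is the wetted area and R the hydraulic radius."""
    def residual(h):
        area = width * h
        radius = area / (width + 2.0 * h)
        return c * area * radius ** (2.0 / 3.0) - q

    return brentq(residual, 1e-6, 50.0)    # depth bracketed between ~0 and 50 m

# Hypothetical 40 m wide section with calibrated c = k*sqrt(S) = 0.35
for q in [5.0, 50.0, 200.0]:
    print(q, "m3/s ->", round(water_level_from_discharge(q, width=40.0, c=0.35), 2), "m")
```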

  13. A physics-based fractional order model and state of energy estimation for lithium ion batteries. Part II: Parameter identification and state of energy estimation for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Li, Xiaoyu; Pan, Ke; Fan, Guodong; Lu, Rengui; Zhu, Chunbo; Rizzoni, Giorgio; Canova, Marcello

    2017-11-01

    State of energy (SOE) is an important index for the electrochemical energy storage system in electric vehicles. In this paper, a robust state of energy estimation method in combination with a physical model parameter identification method is proposed to achieve accurate battery state estimation at different operating conditions and different aging stages. A physics-based fractional order model with variable solid-state diffusivity (FOM-VSSD) is used to characterize the dynamic performance of a LiFePO4/graphite battery. In order to update the model parameter automatically at different aging stages, a multi-step model parameter identification method based on the lexicographic optimization is especially designed for the electric vehicle operating conditions. As the battery available energy changes with different applied load current profiles, the relationship between the remaining energy loss and the state of charge, the average current as well as the average squared current is modeled. The SOE with different operating conditions and different aging stages are estimated based on an adaptive fractional order extended Kalman filter (AFEKF). Validation results show that the overall SOE estimation error is within ±5%. The proposed method is suitable for the electric vehicle online applications.

  14. Evaluation of the accuracy of an offline seasonally-varying matrix transport model for simulating ideal age

    DOE PAGES

    Bardin, Ann; Primeau, Francois; Lindsay, Keith; ...

    2016-07-21

    Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton’s method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
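
    The Newton-Krylov idea of solving directly for a tracer state that repeats under one annual cycle can be sketched on a toy three-box "ocean". The monthly mixing matrices, the surface-reset rule and the box count below stand in for the offline CESM transport matrices and are not the actual model.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy monthly transport matrices for a 3-box ocean (columns sum to 1 to conserve tracer)
rng = np.random.default_rng(2)
def random_mixing(n=3, strength=0.1):
    m = np.eye(n) + strength * rng.random((n, n))
    return m / m.sum(axis=0)

monthly_T = [random_mixing() for _ in range(12)]
dt_years = 1.0 / 12.0
is_surface = np.array([True, False, False])     # box 0 plays the role of the surface

def annual_cycle(age):
    """Propagate ideal age through one year: mix, age by dt, reset the surface box."""
    for T in monthly_T:
        age = T @ age + dt_years
        age = np.where(is_surface, 0.0, age)
    return age

# Newton-Krylov finds the state that is exactly periodic from year to year
equilibrium_age = newton_krylov(lambda age: annual_cycle(age) - age, np.zeros(3), f_tol=1e-10)
print(equilibrium_age)    # equilibrium ideal age (years) in each box
```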

  15. Forecasting Daily Patient Outflow From a Ward Having No Real-Time Clinical Data

    PubMed Central

    Tran, Truyen; Luo, Wei; Phung, Dinh; Venkatesh, Svetha

    2016-01-01

    Background: Modeling patient flow is crucial in understanding resource demand and prioritization. We study patient outflow from an open ward in an Australian hospital, where currently bed allocation is carried out by a manager relying on past experiences and looking at demand. Automatic methods that provide a reasonable estimate of total next-day discharges can aid in efficient bed management. The challenges in building such methods lie in dealing with large amounts of discharge noise introduced by the nonlinear nature of hospital procedures, and the nonavailability of real-time clinical information in wards. Objective Our study investigates different models to forecast the total number of next-day discharges from an open ward having no real-time clinical data. Methods We compared 5 popular regression algorithms to model total next-day discharges: (1) autoregressive integrated moving average (ARIMA), (2) the autoregressive moving average with exogenous variables (ARMAX), (3) k-nearest neighbor regression, (4) random forest regression, and (5) support vector regression. Although the autoregressive integrated moving average model relied on past 3-month discharges, nearest neighbor forecasting used median of similar discharges in the past in estimating next-day discharge. In addition, the ARMAX model used the day of the week and number of patients currently in ward as exogenous variables. For the random forest and support vector regression models, we designed a predictor set of 20 patient features and 88 ward-level features. Results Our data consisted of 12,141 patient visits over 1826 days. Forecasting quality was measured using mean forecast error, mean absolute error, symmetric mean absolute percentage error, and root mean square error. When compared with a moving average prediction model, all 5 models demonstrated superior performance with the random forests achieving 22.7% improvement in mean absolute error, for all days in the year 2014. Conclusions In the absence of clinical information, our study recommends using patient-level and ward-level data in predicting next-day discharges. Random forest and support vector regression models are able to use all available features from such data, resulting in superior performance over traditional autoregressive methods. An intelligent estimate of available beds in wards plays a crucial role in relieving access block in emergency departments. PMID:27444059

  16. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.

  17. A partial least squares based spectrum normalization method for uncertainty reduction for laser-induced breakdown spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou

    2013-10-01

    A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo; Craig, Tim

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR weight prediction methodologies perform comparably to the LR model and can produce clinical quality treatment plans by simultaneously predicting multiple weights that capture trade-offs associated with sparing multiple OARs.

  19. Indirect and direct methods for measuring a dynamic throat diameter in a solid rocket motor

    NASA Astrophysics Data System (ADS)

    Colbaugh, Lauren

    In a solid rocket motor, nozzle throat erosion is dictated by propellant composition, throat material properties, and operating conditions. Throat erosion has a significant effect on motor performance, so it must be accurately characterized to produce a good motor design. In order to correlate throat erosion rate to other parameters, it is first necessary to know what the throat diameter is throughout a motor burn. Thus, an indirect method and a direct method for determining throat diameter in a solid rocket motor are investigated in this thesis. The indirect method looks at the use of pressure and thrust data to solve for throat diameter as a function of time. The indirect method's proof of concept was shown by the good agreement between the ballistics model and the test data from a static motor firing. The ballistics model was within 10% of all measured and calculated performance parameters (e.g. average pressure, specific impulse, maximum thrust, etc.) for tests with throat erosion and within 6% of all measured and calculated performance parameters for tests without throat erosion. The direct method involves the use of x-rays to directly observe a simulated nozzle throat erode in a dynamic environment; this is achieved with a dynamic calibration standard. An image processing algorithm is developed for extracting the diameter dimensions from the x-ray intensity digital images. Static and dynamic tests were conducted. The measured diameter was compared to the known diameter in the calibration standard. All dynamic test results were within +6% / -7% of the actual diameter. Part of the edge detection method consists of dividing the entire x-ray image by an average pixel value, calculated from a set of pixels in the x-ray image. It was found that the accuracy of the edge detection method depends upon the selection of the average pixel value area and subsequently the average pixel value. An average pixel value sensitivity analysis is presented. Both the indirect method and the direct method prove to be viable approaches to determining throat diameter during solid rocket motor operation.

  20. Energy diffusion controlled reaction rate of reacting particle driven by broad-band noise

    NASA Astrophysics Data System (ADS)

    Deng, M. L.; Zhu, W. Q.

    2007-10-01

    The energy diffusion controlled reaction rate of a reacting particle with linear weak damping and broad-band noise excitation is studied by using the stochastic averaging method. First, the stochastic averaging method for strongly nonlinear oscillators under broad-band noise excitation using generalized harmonic functions is briefly introduced. Then, the reaction rate of the classical Kramers' reacting model with linear weak damping and broad-band noise excitation is investigated by using the stochastic averaging method. The averaged Itô stochastic differential equation describing the energy diffusion and the Pontryagin equation governing the mean first-passage time (MFPT) are established. The energy diffusion controlled reaction rate is obtained as the inverse of the MFPT by solving the Pontryagin equation. The results for two special cases of broad-band noise, i.e. the harmonic noise and the exponentially corrected noise, are discussed in detail. It is demonstrated that the general expression of the reaction rate derived by the authors can be reduced to the classical ones via the linear approximation and the high potential barrier approximation. The good agreement with the results of the Monte Carlo simulation verifies that the reaction rate can be well predicted using the stochastic averaging method.

  1. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty in both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox which incorporates the possibility of simulating tooth faults, as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements recorded from a shaft-mounted encoder validates this finding. Comparison between experiments and theory highlights the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
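
    Synchronous averaging of a motor current with respect to shaft rotation, the core signal-processing step used here, can be sketched as follows. The shaft speed, the buried once-per-revolution signature and the noise level are invented for illustration.

```python
import numpy as np

def synchronous_average(signal, shaft_angle, samples_per_rev=512):
    """Time-synchronous average: resample each shaft revolution onto a common
    angular grid (using the measured, monotonically increasing shaft angle in
    radians) and average across revolutions to suppress asynchronous content."""
    two_pi = 2.0 * np.pi
    n_revs = int((shaft_angle[-1] - shaft_angle[0]) // two_pi)
    grid = np.linspace(0.0, two_pi, samples_per_rev, endpoint=False)
    revs = [np.interp(shaft_angle[0] + k * two_pi + grid, shaft_angle, signal)
            for k in range(n_revs)]
    return grid, np.mean(revs, axis=0)

# Hypothetical phase current: 50 Hz supply component, weak shaft-synchronous signature, noise
rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 100_000)
angle = 2.0 * np.pi * 25.0 * t                      # 25 Hz shaft speed
current = (np.sin(2 * np.pi * 50.0 * t)
           + 0.05 * np.cos(angle)                   # once-per-revolution component
           + rng.normal(scale=0.5, size=t.size))
grid, avg = synchronous_average(current, angle)
print(avg[:5])
```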

  2. Hill Problem Analytical Theory to the Order Four. Application to the Computation of Frozen Orbits around Planetary Satellites

    NASA Technical Reports Server (NTRS)

    Lara, Martin; Palacian, Jesus F.

    2007-01-01

    Frozen orbits of the Hill problem are determined in the double-averaged problem, where short- and long-period terms are removed by means of Lie transforms. The computation of initial conditions of the corresponding quasi-periodic solutions in the non-averaged problem is straightforward, because the perturbation method used provides the explicit equations of the transformation that connects the averaged and non-averaged models. A fourth-order analytical theory proves necessary for the accurate computation of quasi-periodic frozen orbits.

  3. Estimation of open water evaporation using land-based meteorological data

    NASA Astrophysics Data System (ADS)

    Li, Fawen; Zhao, Yong

    2017-10-01

    Water surface evaporation is an important process in the hydrologic and energy cycles. Accurate simulation of water evaporation is important for the evaluation of water resources. In this paper, using meteorological data from the Aixinzhuang reservoir, the main factors affecting water surface evaporation were determined by the principal component analysis method. To illustrate the influence of these factors on water surface evaporation, the paper first adopted the Dalton model to simulate water surface evaporation. The results showed that the simulation precision was poor for the peak value zone. To improve the model simulation's precision, a modified Dalton model considering relative humidity was proposed. The results show that the 10-day average relative error is 17.2%, assessed as qualified; the monthly average relative error is 12.5%, assessed as qualified; and the yearly average relative error is 3.4%, assessed as excellent. To validate its applicability, the meteorological data of Kuancheng station in the Luan River basin were selected to test the modified model. The results show that the 10-day average relative error is 15.4%, assessed as qualified; the monthly average relative error is 13.3%, assessed as qualified; and the yearly average relative error is 6.0%, assessed as good. These results showed that the modified model had good applicability and versatility. The research results can provide technical support for the calculation of water surface evaporation in northern China or similar regions.

  4. An efficient computational method for characterizing the effects of random surface errors on the average power pattern of reflectors

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1983-01-01

    Based on the works of Ruze (1966) and Vu (1969), a novel mathematical model has been developed to determine efficiently the average power pattern degradations caused by random surface errors. In this model, both nonuniform root mean square (rms) surface errors and nonuniform illumination functions are employed. In addition, the model incorporates the dependence on F/D in the construction of the solution. The mathematical foundation of the model rests on the assumption that in each prescribed annular region of the antenna, the geometrical rms surface value is known. It is shown that closed-form expressions can then be derived, which result in a very efficient computational method for the average power pattern. Detailed parametric studies are performed with these expressions to determine the effects of different random errors and illumination tapers on parameters such as gain loss and sidelobe levels. The results clearly demonstrate that as sidelobe levels decrease, their dependence on the surface rms/wavelength becomes much stronger and, for a specified tolerance level, a considerably smaller rms/wavelength is required to maintain the low sidelobes within the required bounds.
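
    For the special case of a uniform rms surface error and uniform illumination, the average gain degradation reduces to the classical Ruze expression; the short sketch below evaluates only that special case, not the nonuniform-error, F/D-dependent model developed in the paper.

```python
import numpy as np

def ruze_gain_loss_db(rms_error, wavelength):
    """Classical Ruze gain degradation: G/G0 = exp(-(4*pi*eps/lambda)**2), in dB."""
    ratio = np.exp(-(4.0 * np.pi * rms_error / wavelength) ** 2)
    return 10.0 * np.log10(ratio)

# Gain loss for surface rms expressed as a fraction of the wavelength
for eps_over_lambda in [0.01, 0.02, 0.05]:
    print(eps_over_lambda, round(ruze_gain_loss_db(eps_over_lambda, 1.0), 2), "dB")
```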

  5. Relevance analysis and short-term prediction of PM2.5 concentrations in Beijing based on multi-source data

    NASA Astrophysics Data System (ADS)

    Ni, X. Y.; Huang, H.; Du, W. P.

    2017-02-01

    The PM2.5 problem is proving to be a major public crisis and is of great public concern, requiring an urgent response. Information about, and prediction of, PM2.5 from the perspective of atmospheric dynamic theory is still limited due to the complexity of the formation and development of PM2.5. In this paper, we attempted to realize the relevance analysis and short-term prediction of PM2.5 concentrations in Beijing, China, using multi-source data mining. A correlation analysis model relating PM2.5 to physical data (meteorological data, including regional average rainfall, daily mean temperature, average relative humidity, average wind speed, maximum wind speed, and other pollutant concentration data, including CO, NO2, SO2, PM10) and social media data (microblog data) was proposed, based on multivariate statistical analysis. The study found that, among these factors, the average wind speed, the concentrations of CO, NO2 and PM10, and the daily number of microblog entries with the key words 'Beijing; Air pollution' show a high correlation with PM2.5 concentrations. The correlation analysis was further studied using a machine learning model, the back propagation neural network (hereinafter referred to as BPNN). It was found that the BPNN method performs better in correlation mining. Finally, an autoregressive integrated moving average (hereinafter referred to as ARIMA) time series model was applied to explore short-term prediction of PM2.5. The predicted results were in good agreement with the observed data. This study supports real-time monitoring, analysis and early warning of PM2.5, and it also helps to broaden the application of big data and multi-source data mining methods.

  6. Tidal and tidally averaged circulation characteristics of Suisun Bay, California

    USGS Publications Warehouse

    Smith, Lawrence H.; Cheng, Ralph T.

    1987-01-01

    Availability of extensive field data permitted realistic calibration and validation of a hydrodynamic model of tidal circulation and salt transport for Suisun Bay, California. Suisun Bay is a partially mixed embayment of northern San Francisco Bay located just seaward of the Sacramento-San Joaquin Delta. The model employs a variant of an alternating direction implicit finite-difference method to solve the hydrodynamic equations and an Eulerian-Lagrangian method to solve the salt transport equation. An upwind formulation of the advective acceleration terms of the momentum equations was employed to avoid oscillations in the tidally averaged velocity field produced by central spatial differencing of these terms. Simulation results of tidal circulation and salt transport demonstrate that tides and the complex bathymetry determine the patterns of tidal velocities and that net changes in the salinity distribution over a few tidal cycles are small despite large changes during each tidal cycle. Computations of tidally averaged circulation suggest that baroclinic and wind effects are important influences on tidally averaged circulation during low freshwater-inflow conditions. Exclusion of baroclinic effects would lead to overestimation of freshwater inflow by several hundred m3/s for a fixed set of model boundary conditions. Likewise, exclusion of wind would cause an underestimation of flux rates between shoals and channels by 70–100%.

  7. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, M.

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.

  8. Correlation between average tissue depth data and quantitative accuracy of forensic craniofacial reconstructions measured by geometric surface comparison method.

    PubMed

    Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi

    2015-05-01

    Accuracy, judged against the corresponding actual face, is the most important factor supporting the reliability of forensic facial reconstruction (FFR). A number of methods have been employed to evaluate the objective accuracy of FFR. Recently, attempts have been made to measure the degree of resemblance between a computer-generated FFR and the actual face by a geometric surface comparison method. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between each FFR and the head CT scan of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology as this study except for the average facial soft tissue depth dataset. The three FFRs of this study, which applied the updated dataset, demonstrated smaller deviation errors between the facial surfaces of the FFR and the corresponding subject than those from the previous study. The results suggest that appropriate average tissue depth data are important to increase the quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter J.

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.

  10. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    PubMed

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
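
    An L1-penalised (LASSO-type) logistic regression with cross-validation, of the kind compared in this study, can be sketched with scikit-learn. The synthetic predictors, outcome model and penalty strength below are placeholders, not the xerostomia dataset or the authors' tuning.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic NTCP-style data: 15 candidate predictors, only two of which matter
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 15))
logit = 0.8 * X[:, 0] + 0.5 * X[:, 3] - 1.0
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# LASSO-type logistic regression; C controls the strength of the L1 penalty
lasso_ntcp = make_pipeline(StandardScaler(),
                           LogisticRegression(penalty="l1", solver="liblinear", C=0.5))
auc = cross_val_score(lasso_ntcp, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", round(auc.mean(), 3))

lasso_ntcp.fit(X, y)
coefs = lasso_ntcp.named_steps["logisticregression"].coef_.ravel()
print("predictors kept by the L1 penalty:", np.flatnonzero(coefs != 0.0))
```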

  11. Natural Language-based Machine Learning Models for the Annotation of Clinical Radiology Reports.

    PubMed

    Zech, John; Pain, Margaret; Titano, Joseph; Badgeley, Marcus; Schefflein, Javin; Su, Andres; Costa, Anthony; Bederson, Joshua; Lehar, Joseph; Oermann, Eric Karl

    2018-05-01

    Purpose To compare different methods for generating features from radiology reports and to develop a method to automatically identify findings in these reports. Materials and Methods In this study, 96 303 head computed tomography (CT) reports were obtained. The linguistic complexity of these reports was compared with that of alternative corpora. Head CT reports were preprocessed, and machine-analyzable features were constructed by using bag-of-words (BOW), word embedding, and Latent Dirichlet allocation-based approaches. Ultimately, 1004 head CT reports were manually labeled for findings of interest by physicians, and a subset of these were deemed critical findings. Lasso logistic regression was used to train models for physician-assigned labels on 602 of 1004 head CT reports (60%) using the constructed features, and the performance of these models was validated on a held-out 402 of 1004 reports (40%). Models were scored by area under the receiver operating characteristic curve (AUC), and aggregate AUC statistics were reported for (a) all labels, (b) critical labels, and (c) the presence of any critical finding in a report. Sensitivity, specificity, accuracy, and F1 score were reported for the best performing model's (a) predictions of all labels and (b) identification of reports containing critical findings. Results The best-performing model (BOW with unigrams, bigrams, and trigrams plus average word embeddings vector) had a held-out AUC of 0.966 for identifying the presence of any critical head CT finding and an average 0.957 AUC across all head CT findings. Sensitivity and specificity for identifying the presence of any critical finding were 92.59% (175 of 189) and 89.67% (191 of 213), respectively. Average sensitivity and specificity across all findings were 90.25% (1898 of 2103) and 91.72% (18 351 of 20 007), respectively. Simpler BOW methods achieved results competitive with those of more sophisticated approaches, with an average AUC for presence of any critical finding of 0.951 for unigram BOW versus 0.966 for the best-performing model. The Yule I of the head CT corpus was 34, markedly lower than that of the Reuters corpus (at 103) or I2B2 discharge summaries (at 271), indicating lower linguistic complexity. Conclusion Automated methods can be used to identify findings in radiology reports. The success of this approach benefits from the standardized language of these reports. With this method, a large labeled corpus can be generated for applications such as deep learning. © RSNA, 2018 Online supplemental material is available for this article.

  12. Average focal length and power of a section of any defined surface.

    PubMed

    Kaye, Stephen B

    2010-04-01

    To provide a method to allow calculation of the average focal length and power of a lens through a specified meridian of any defined surface, not limited to the paraxial approximations. University of Liverpool, Liverpool, United Kingdom. Functions were derived to model back-vertex focal length and representative power through a meridian containing any defined surface. Average back-vertex focal length was based on the definition of the average of a function, using the angle of incidence as an independent variable. Univariate functions allowed determination of average focal length and power through a section of any defined or topographically measured surface of a known refractive index. These functions incorporated aberrations confined to the section. The proposed method closely approximates the average focal length, and by inference power, of a section (meridian) of a surface to a single or scalar value. It is not dependent on the paraxial and other nonconstant approximations and includes aberrations confined to that meridian. A generalization of this method to include all orthogonal and oblique meridians is needed before a comparison with measured wavefront values can be made. Copyright (c) 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  13. Retina Image Vessel Segmentation Using a Hybrid CGLI Level Set Method

    PubMed Central

    Chen, Meizhu; Li, Jichun; Zhang, Encai

    2017-01-01

    As a nonintrusive method, the retina imaging provides us with a better way for the diagnosis of ophthalmologic diseases. Extracting the vessel profile automatically from the retina image is an important step in analyzing retina images. A novel hybrid active contour model is proposed to segment the fundus image automatically in this paper. It combines the signed pressure force function introduced by the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) model with the local intensity property introduced by the Local Binary fitting (LBF) model to overcome the difficulty of the low contrast in segmentation process. It is more robust to the initial condition than the traditional methods and is easily implemented compared to the supervised vessel extraction methods. Proposed segmentation method was evaluated on two public datasets, DRIVE (Digital Retinal Images for Vessel Extraction) and STARE (Structured Analysis of the Retina) (the average accuracy of 0.9390 with 0.7358 sensitivity and 0.9680 specificity on DRIVE datasets and average accuracy of 0.9409 with 0.7449 sensitivity and 0.9690 specificity on STARE datasets). The experimental results show that our method is effective and our method is also robust to some kinds of pathology images compared with the traditional level set methods. PMID:28840122

  14. Models of convection-driven tectonic plates - A comparison of methods and results

    NASA Technical Reports Server (NTRS)

    King, Scott D.; Gable, Carl W.; Weinstein, Stuart A.

    1992-01-01

    Recent numerical studies of convection in the earth's mantle have included various features of plate tectonics. This paper describes three methods of modeling plates: through material properties, through force balance, and through a thin power-law sheet approximation. The results obtained with each method are compared on a series of simple calculations. From these results, scaling relations between the different parameterizations are developed. While each method produces different degrees of deformation within the surface plate, the surface heat flux and average plate velocity agree to within a few percent. The main results are not dependent upon the plate modeling method and therefore are representative of the physical system modeled.

  15. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

    A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for this is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) from both left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) present an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and are an essential step for the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  16. Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.

    PubMed

    Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel

    2018-06-05

    In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. With the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved by up to 11 times at a sampling frequency of 4.6 Hz and by up to 22 times at a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
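
    The following Python sketch illustrates the general idea of an adaptive moving average whose window length varies with migration time; the window-size rule and the toy electropherogram are assumptions for illustration and do not reproduce the algorithm's actual tuning.

    ```python
    """Sketch of a migration-velocity-adaptive moving average: the smoothing
    window is chosen per data point, so early, narrow, high-mobility peaks are
    smoothed less than late, broad peaks. Window rule and data are invented."""
    import numpy as np

    def adaptive_moving_average(signal, window_sizes):
        """Smooth `signal` with a boxcar whose length varies point by point."""
        out = np.empty_like(signal, dtype=float)
        n = len(signal)
        for i, w in enumerate(window_sizes):
            half = int(w) // 2
            lo, hi = max(0, i - half), min(n, i + half + 1)
            out[i] = signal[lo:hi].mean()
        return out

    # Toy electropherogram: early sharp peak, late broad peak, plus noise.
    t = np.linspace(0, 10, 2000)
    clean = np.exp(-((t - 2) / 0.05) ** 2) + np.exp(-((t - 8) / 0.3) ** 2)
    noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(t.size)

    # Assumed rule: window length grows linearly with migration time (kept odd).
    windows = np.clip((5 + 4 * t).astype(int) | 1, 3, None)
    smoothed = adaptive_moving_average(noisy, windows)
    ```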

  17. Personalized Pseudophakic Model for Refractive Assessment

    PubMed Central

    Ribeiro, Filomena J.; Castanheira-Dinis, António; Dias, João M.

    2012-01-01

    Purpose To test a pseudophakic eye model that allows for intraocular lens (IOL) power calculation, both in normal eyes and in extreme conditions such as post-LASIK. Methods Participants: The model's efficacy was tested in 54 participants (104 eyes) who underwent LASIK and were assessed before and after surgery, thus allowing the same method to be tested in the same eye after only changing the corneal topography. Modelling: The Liou-Brennan eye model was used as a starting point, and biometric values were replaced by individual measurements. Detailed corneal surface data were obtained from topography (Orbscan®), and a grid of elevation values was used to define the corneal surfaces in optical ray-tracing software (Zemax®). To determine IOL power, optimization criteria based on values of the modulation transfer function (MTF) weighted according to the contrast sensitivity function (CSF) were applied. Results Pre-operative refractive assessment calculated by our eye model correlated very strongly with SRK/T (r = 0.959, p<0.001), with no difference in average values (16.9±2.9 vs 17.1±2.9 D, p>0.05). Comparison of the post-operative refractive assessment obtained using our eye model with the average of currently used formulas showed a strong correlation (r = 0.778, p<0.001), with no difference in average values (21.5±1.7 vs 21.8±1.6 D, p>0.05). Conclusions Results suggest that personalized pseudophakic eye models and ray-tracing allow the same methodology to be used regardless of previous LASIK, independent of population averages and commonly used regression correction factors, which represents a clinical advantage. PMID:23056450

  18. Generalized self-adjustment method for statistical mechanics of composite materials

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-03-01

    A new method is developed for the statistical mechanics of composite materials, the generalized self-adjustment method, which makes it possible to reduce the problem of predicting the effective elastic properties of composites with random structures to the solution of two simpler "averaged" problems of an inclusion with transitional layers in a medium with the desired effective elastic properties. The inhomogeneous elastic properties and dimensions of the transitional layers take into account both the "approximate" order of mutual positioning and the variation in the dimensions and elastic properties of inclusions, through appropriate special averaged indicator functions of the random structure of the composite. A numerical calculation of averaged indicator functions and effective elastic characteristics is performed by the generalized self-adjustment method for a unidirectional fiberglass on the basis of various models of actual random structures in the plane of isotropy.

  19. Numerical investigation of airflow in an idealised human extra-thoracic airway: a comparison study

    PubMed Central

    Chen, Jie; Gutmark, Ephraim

    2013-01-01

    The large eddy simulation (LES) technique is employed to numerically investigate the airflow through an idealised human extra-thoracic airway under different breathing conditions: 10 l/min, 30 l/min, and 120 l/min. The computational results are compared with single and cross hot-wire measurements, and with the time-averaged flow field computed by standard k-ω and k-ω-SST Reynolds-averaged Navier-Stokes (RANS) models and the Lattice-Boltzmann method (LBM). The LES results are also compared to the root-mean-square (RMS) flow field computed by the Reynolds stress model (RSM) and the LBM. LES generally gives a better prediction of the time-averaged flow field than the RANS models and the LBM. LES also provides a better estimation of the RMS flow field than both the RSM and the LBM. PMID:23619907

  20. [A method for rapidly extracting a three-dimensional root model of an in vivo tooth from cone beam computed tomography data based on the anatomical characteristics of the periodontal ligament].

    PubMed

    Zhao, Y J; Wang, S W; Liu, Y; Wang, Y

    2017-02-18

    To explore a new method for rapidly extracting and rebuilding a three-dimensional (3D) digital root model of an in vivo tooth from cone beam computed tomography (CBCT) data based on the anatomical characteristics of the periodontal ligament, and to evaluate the extraction accuracy of the method. In the study, 15 extracted teeth (11 with a single root, 4 with double roots) were collected from the oral clinic, and 3D digital root models of each tooth were obtained in STL format by a 3D dental scanner with a high accuracy of 0.02 mm. CBCT data for each patient were acquired before tooth extraction, and DICOM data with a voxel size of 0.3 mm were input into Mimics 18.0 software. The segmentation, morphology operations, Boolean operations and smart expand function in Mimics software were used to edit the tooth, bone and periodontal ligament threshold masks, and the root threshold mask was acquired automatically after a series of mask operations. The 3D digital root models were finally extracted in STL format. The 3D morphology deviation between the extracted root models and the corresponding in vivo root models was compared in Geomagic Studio 2012 software. The 3D size errors along the long axis and in the bucco-lingual and mesio-distal directions were also calculated. The average 3D morphology deviation for the 15 roots, calculated as the root mean square (RMS) value, was 0.22 mm; the average size errors in the mesio-distal direction, the bucco-lingual direction and the long axis were 0.46 mm, 0.36 mm and -0.68 mm, respectively. The average time of this new method for extracting a single root was about 2-3 min. It could meet the accuracy requirement of 3D root reconstruction for oral clinical use. This study established a new method for rapidly extracting a 3D root model of an in vivo tooth from CBCT data. It could simplify the traditional manual operation and improve the efficiency and automation of single-root extraction. The strategy of this method for complete dentition extraction needs further research.

  1. Analysis of vegetation effect on waves using a vertical 2-D RANS model

    USDA-ARS?s Scientific Manuscript database

    A vertical two-dimensional (2-D) model has been applied in the simulation of wave propagation through vegetated water bodies. The model is based on an existing model SOLA-VOF which solves the Reynolds-Averaged Navier-Stokes (RANS) equations with the finite difference method on a staggered rectangula...

  2. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis that ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimating the average treatment effect; the efficiency gain is substantial relative to the usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator that is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
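
    A minimal sketch of the inverse probability weighting estimator of the average treatment effect that the paper analyzes, run on simulated data and without any measurement-error correction; the data-generating model below is invented for illustration.

    ```python
    """Sketch of inverse probability weighting (IPW) for the average treatment
    effect. Confounders, treatment and outcome are simulated; true ATE = 1.0."""
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.normal(size=(n, 2))                          # confounders
    p = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.3 * x[:, 1])))
    a = rng.binomial(1, p)                               # treatment assignment
    y = 1.0 * a + x[:, 0] + rng.normal(size=n)           # outcome

    # Estimate propensity scores, then weight treated by 1/e(x), controls by 1/(1-e(x)).
    e = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
    ate_ipw = np.mean(a * y / e) - np.mean((1 - a) * y / (1 - e))
    print("IPW ATE estimate:", round(ate_ipw, 3))
    ```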

  3. Generalized nonequilibrium vertex correction method in coherent medium theory for quantum transport simulation of disordered nanoelectronics

    NASA Astrophysics Data System (ADS)

    Yan, Jiawei; Ke, Youqi

    2016-07-01

    Electron transport properties of nanoelectronics can be significantly influenced by the inevitable and randomly distributed impurities/defects. For theoretical simulation of disordered nanoscale electronics, one is interested in both the configurationally averaged transport property and its statistical fluctuation, which quantifies the device-to-device variability induced by disorder. However, due to the lack of an effective method for disorder averaging under nonequilibrium conditions, the important effects of disorder on electron transport remain largely unexplored or poorly understood. In this work, we report a general formalism of a Green's-function-based nonequilibrium effective medium theory for calculating disordered nanoelectronics. In this method, based on a generalized coherent potential approximation for the Keldysh nonequilibrium Green's function, we developed a generalized nonequilibrium vertex correction method to calculate the average of a two-Keldysh-Green's-function correlator. We obtain nine nonequilibrium vertex correction terms, as a complete family, to express the average of any two-Green's-function correlator and find that they can be solved by a set of linear equations. As an important result, the averaged nonequilibrium density matrix, averaged current, disorder-induced current fluctuation, and averaged shot noise, which involve different two-Green's-function correlators, can all be derived and computed in an effective and unified way. To test the general applicability of this method, we applied it to compute the transmission coefficient and its fluctuation with a square-lattice tight-binding model and compared the results with exact results and other previously proposed approximations. Our results show very good agreement with the exact results for a wide range of disorder concentrations and energies. In addition, to combine this method with density functional theory and realize first-principles quantum transport simulation, we have also derived a general form of the conditionally averaged nonequilibrium Green's function for multicomponent disorders.

  4. Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.

    PubMed

    Kis, Maria

    2005-01-01

    In this paper we demonstrate the application of time series models to medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. Mortality data may be analysed by time series methods such as ARIMA modelling. This is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis. The relationships between time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, the estimation based on White's theory, and the continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model yielded much narrower confidence intervals than the other estimation methods. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia, in which the time series is decomposed into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons. This indicates the seasonal occurrence of childhood leukaemia in Hungary.
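
    A minimal sketch of ARIMA fitting for a monthly rate series with statsmodels; the simulated series and the (1, 1, 1) order are illustrative choices, not the orders identified in the paper.

    ```python
    """Sketch: fit an ARIMA model to a toy monthly rate series and forecast."""
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    rates = np.cumsum(rng.normal(0, 0.5, size=240)) + 100  # simulated monthly series

    model = ARIMA(rates, order=(1, 1, 1)).fit()
    print(model.summary())            # includes confidence intervals for AR/MA terms
    print(model.forecast(steps=12))   # 12-month-ahead forecast
    ```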

  5. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with the non-uniqueness problem of parameterization, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability for each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method. The adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
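
    A minimal sketch of the model-averaging step itself: given per-parameterization conditional means, within-parameterization variances, and (assumed) posterior probabilities such as those an NLSE would supply, the BMA mean and the within- plus between-parameterization variance follow directly; all numbers below are placeholders.

    ```python
    """Sketch of BMA combination of several parameterization-specific estimates."""
    import numpy as np

    # Hypothetical per-parameterization results at one estimation point:
    means = np.array([12.0, 15.5, 13.2])       # conditional means (e.g. of log-K)
    variances = np.array([1.0, 1.8, 1.2])      # within-parameterization variances
    log_post = np.array([-3.1, -2.5, -2.8])    # unnormalized log posterior probabilities

    w = np.exp(log_post - log_post.max())
    w /= w.sum()                               # posterior model weights

    bma_mean = np.sum(w * means)
    within = np.sum(w * variances)                      # within-parameterization part
    between = np.sum(w * (means - bma_mean) ** 2)       # between-parameterization part
    bma_variance = within + between
    print("BMA mean:", bma_mean, "BMA variance:", bma_variance)
    ```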

  6. Rethinking CMB foregrounds: systematic extension of foreground parametrizations

    NASA Astrophysics Data System (ADS)

    Chluba, Jens; Hill, James Colin; Abitbol, Maximilian H.

    2017-11-01

    Future high-sensitivity measurements of the cosmic microwave background (CMB) anisotropies and energy spectrum will be limited by our understanding and modelling of foregrounds. Not only does more information need to be gathered and combined, but also novel approaches for the modelling of foregrounds, commensurate with the vast improvements in sensitivity, have to be explored. Here, we study the inevitable effects of spatial averaging on the spectral shapes of typical foreground components, introducing a moment approach, which naturally extends the list of foreground parameters that have to be determined through measurements or constrained by theoretical models. Foregrounds are thought of as a superposition of individual emitting volume elements along the line of sight and across the sky, which then are observed through an instrumental beam. The beam and line-of-sight averages are inevitable. Instead of assuming a specific model for the distributions of physical parameters, our method identifies natural new spectral shapes for each foreground component that can be used to extract parameter moments (e.g. mean, dispersion, cross terms, etc.). The method is illustrated for the superposition of power laws, free-free spectra, grey-body and modified blackbody spectra, but can be applied to more complicated fundamental spectral energy distributions. Here, we focus on intensity signals but the method can be extended to the case of polarized emission. The averaging process automatically produces scale-dependent spectral shapes and the moment method can be used to propagate the required information across scales in power spectrum estimates. The approach is not limited to applications to CMB foregrounds, but could also be useful for the modelling of X-ray emission in clusters of galaxies.

  7. Protein model quality assessment prediction by combining fragment comparisons and a consensus Cα contact potential

    PubMed Central

    Zhou, Hongyi; Skolnick, Jeffrey

    2009-01-01

    In this work, we develop a fully automated method for the quality assessment prediction of protein structural models generated by structure prediction approaches such as fold recognition servers, or ab initio methods. The approach is based on fragment comparisons and a consensus Cα contact potential derived from the set of models to be assessed and was tested on CASP7 server models. The average Pearson linear correlation coefficient between predicted quality and model GDT-score per target is 0.83 for the 98 targets which is better than those of other quality assessment methods that participated in CASP7. Our method also outperforms the other methods by about 3% as assessed by the total GDT-score of the selected top models. PMID:18004783

  8. Upscaling the Navier-Stokes Equation for Turbulent Flows in Porous Media Using a Volume Averaging Method

    NASA Astrophysics Data System (ADS)

    Wood, Brian; He, Xiaoliang; Apte, Sourabh

    2017-11-01

    Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equation for this type of flow have been made, for example using RANS models and double averaging. On the other hand, Whitaker (1996) applied the volume averaging theorem to close the macroscopic N-S equation for low-Re flow. In this work, the volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatial fluctuating velocity and pressure, respectively. The coefficients (one 1st-order, two 2nd-order and one 3rd-order tensor) of the linear functions depend on the averaged velocity and its gradient. With the data set from DNS, performed for inertial and turbulent flows (pore Re of 300, 500 and 1000) through a periodic face-centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantity calculated from the averaging is then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.

  9. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
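
    A minimal sketch of category-based importance sampling in the spirit described: samples are drawn with prescribed densities per category and reweighted so the grid-box average stays unbiased; the two-category setup and the process-rate function are invented for illustration (the actual SILHS categories and rates differ).

    ```python
    """Sketch of importance sampling over prescribed categories of a grid box."""
    import numpy as np

    rng = np.random.default_rng(2)

    # Grid box split into two categories with known area fractions p_k (placeholder)
    p = np.array([0.2, 0.8])          # e.g. "rainy" vs "non-rainy" portion
    q = np.array([0.6, 0.4])          # prescribed sampling densities (oversample rain)

    def process_rate(k, x):
        """Toy physical process rate in category k for a subgrid sample x."""
        return (5.0 if k == 0 else 0.5) * np.exp(-x)

    n_samples = 1000
    cats = rng.choice(2, size=n_samples, p=q)        # biased category sampling
    xs = rng.exponential(1.0, size=n_samples)        # subgrid variability samples

    # Importance weights p_k / q_k correct for the biased category sampling.
    weights = p[cats] / q[cats]
    rates = np.array([process_rate(k, x) for k, x in zip(cats, xs)])
    print("grid-box-averaged rate estimate:", np.mean(weights * rates))
    ```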

  10. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.

  11. Automatic construction of subject-specific human airway geometry including trifurcations based on a CT-segmented airway skeleton and surface

    PubMed Central

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Wenzel, Sally E.; Lin, Ching-Long

    2016-01-01

    We propose a method to construct three-dimensional airway geometric models based on airway skeletons, or centerlines (CLs). Given a CT-segmented airway skeleton and surface, the proposed CL-based method automatically constructs subject-specific models that contain anatomical information regarding branches, include bifurcations and trifurcations, and extend from the trachea to terminal bronchioles. The resulting model can be anatomically realistic with the assistance of an image-based surface; alternatively a model with an idealized skeleton and/or branch diameters is also possible. This method systematically identifies and classifies trifurcations to successfully construct the models, which also provides the number and type of trifurcations for the analysis of the airways from an anatomical point of view. We applied this method to 16 normal and 16 severe asthmatic subjects using their computed tomography images. The average distance between the surface of the model and the image-based surface was 11% of the average voxel size of the image. The four most frequent locations of trifurcations were the left upper division bronchus, left lower lobar bronchus, right upper lobar bronchus, and right intermediate bronchus. The proposed method automatically constructed accurate subject-specific three-dimensional airway geometric models that contain anatomical information regarding branches using airway skeleton, diameters, and image-based surface geometry. The proposed method can construct (i) geometry automatically for population-based studies, (ii) trifurcations to retain the original airway topology, (iii) geometry that can be used for automatic generation of computational fluid dynamics meshes, and (iv) geometry based only on a skeleton and diameters for idealized branches. PMID:27704229

  12. Test techniques for model development of repetitive service energy storage capacitors

    NASA Astrophysics Data System (ADS)

    Thompson, M. C.; Mauldin, G. H.

    1984-03-01

    The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.

  13. A Response to Holster and Lake Regarding Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Stewart, Jeffrey; McLean, Stuart; Kramer, Brandon

    2017-01-01

    Stewart questioned vocabulary size estimation methods proposed by Beglar and Nation for the Vocabulary Size Test, further arguing Rasch mean square (MSQ) fit statistics cannot determine the proportion of random guesses contained in the average learner's raw score, because the average value will be near 1 by design. He illustrated this by…

  14. Micromechanical models for textile structural composites

    NASA Technical Reports Server (NTRS)

    Marrey, Ramesh V.; Sankar, Bhavani V.

    1995-01-01

    The objective is to develop micromechanical models for predicting the stiffness and strength properties of textile composite materials. Two models are presented to predict the homogeneous elastic constants and coefficients of thermal expansion of a textile composite. The first model is based on rigorous finite element analysis of the textile composite unit-cell. Periodic boundary conditions are enforced between opposite faces of the unit-cell to simulate deformations accurately. The second model implements the selective averaging method (SAM), which is based on a judicious combination of stiffness and compliance averaging. For thin textile composites, both models can predict the plate stiffness coefficients and plate thermal coefficients. The finite element procedure is extended to compute the thermal residual microstresses, and to estimate the initial failure envelope for textile composites.

  15. Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.

    PubMed

    Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S

    2010-03-01

    This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  16. Improving consensus structure by eliminating averaging artifacts

    PubMed Central

    KC, Dukka B

    2009-01-01

    Background Common structural biology methods (e.g., NMR and molecular dynamics) often produce ensembles of molecular structures. Consequently, averaging of 3D coordinates of molecular structures (proteins and RNA) is a frequent approach to obtain a consensus structure that is representative of the ensemble. However, when the structures are averaged, artifacts can result in unrealistic local geometries, including unphysical bond lengths and angles. Results Herein, we describe a method to derive representative structures while limiting the number of artifacts. Our approach is based on a Monte Carlo simulation technique that drives a starting structure (an extended or a 'close-by' structure) towards the 'averaged structure' using a harmonic pseudo energy function. To assess the performance of the algorithm, we applied our approach to Cα models of 1364 proteins generated by the TASSER structure prediction algorithm. The average RMSD of the refined model from the native structure for the set becomes worse by a mere 0.08 Å compared to the average RMSD of the averaged structures from the native structure (3.28 Å for refined structures and 3.36 Å for the averaged structures). However, the percentage of atoms involved in clashes is greatly reduced (from 63% to 1%); in fact, the majority of the refined proteins had zero clashes. Moreover, a small number (38) of refined structures resulted in a lower RMSD to the native protein than the averaged structure. Finally, compared to PULCHRA [1], our approach produces representative structures of similar RMSD quality, but with far fewer clashes. Conclusion The benchmarking results demonstrate that our approach for removing averaging artifacts can be very beneficial for the structural biology community. Furthermore, the same approach can be applied to almost any problem where averaging of 3D coordinates is performed. Namely, structure averaging is also commonly performed in RNA secondary structure prediction [2], which could also benefit from our approach. PMID:19267905

  17. Visualization of scoliotic spine using ultrasound-accessible skeletal landmarks

    NASA Astrophysics Data System (ADS)

    Church, Ben; Lasso, Andras; Schlenger, Christopher; Borschneck, Daniel P.; Mousavi, Parvin; Fichtinger, Gabor; Ungi, Tamas

    2017-03-01

    PURPOSE: Ultrasound imaging is an attractive alternative to X-ray for scoliosis diagnosis and monitoring due to its safety and inexpensiveness. The transverse processes, as skeletal landmarks, are accessible by means of ultrasound and are sufficient for quantifying scoliosis, but do not provide an informative visualization of the spine. METHODS: We created a method for visualization of the scoliotic spine using a 3D transform field resulting from thin-spline interpolation of a landmark-based registration between the transverse processes that we localized in both the patient's ultrasound and an average healthy spine model. Additional anchor points were computationally generated to control the thin-spline interpolation, in order to obtain a transform field that accurately represents the deformation of the patient's spine. The transform field is applied to the average spine model, resulting in a 3D surface model depicting the patient's spine. We used ground-truth CT scans from pediatric scoliosis patients, in which we reconstructed the bone surface and localized the transverse processes. We warped the average spine model and analyzed the match between the patient's bone surface and the warped spine. RESULTS: Visual inspection revealed accurate rendering of the scoliotic spine. Notable misalignments occurred mainly in the anterior-posterior direction and at the first and last vertebrae, which is immaterial for scoliosis quantification. The average Hausdorff distance computed for 4 patients was 2.6 mm. CONCLUSIONS: We achieved qualitatively accurate and intuitive visualization depicting the 3D deformation of the patient's spine when compared to ground-truth CT.

  18. A new market risk model for cogeneration project financing---combined heat and power development without a power purchase agreement

    NASA Astrophysics Data System (ADS)

    Lockwood, Timothy A.

    Federal legislative changes in 2006 no longer entitle cogeneration project financings by law to receive the benefit of a power purchase agreement underwritten by an investment-grade investor-owned utility. Consequently, this research explored the need for a new market-risk model for future cogeneration and combined heat and power (CHP) project financing. CHP project investment represents a potentially enormous energy efficiency benefit through its application by reducing fossil fuel use up to 55% when compared to traditional energy generation, and concurrently eliminates constituent air emissions up to 50%, including global warming gases. As a supplemental approach to a comprehensive technical analysis, a quantitative multivariate modeling was also used to test the statistical validity and reliability of host facility energy demand and CHP supply ratios in predicting the economic performance of CHP project financing. The resulting analytical models, although not statistically reliable at this time, suggest a radically simplified CHP design method for future profitable CHP investments using four easily attainable energy ratios. This design method shows that financially successful CHP adoption occurs when the average system heat-to-power-ratio supply is less than or equal to the average host-convertible-energy-ratio, and when the average nominally-rated capacity is less than average host facility-load-factor demands. New CHP investments can play a role in solving the world-wide problem of accommodating growing energy demand while preserving our precious and irreplaceable air quality for future generations.

  19. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.

  20. Automatic extraction of three-dimensional thoracic aorta geometric model from phase contrast MRI for morphometric and hemodynamic characterization.

    PubMed

    Volonghi, Paola; Tresoldi, Daniele; Cadioli, Marcello; Usuelli, Antonio M; Ponzini, Raffaele; Morbiducci, Umberto; Esposito, Antonio; Rizzo, Giovanna

    2016-02-01

    To propose and assess a new method that automatically extracts a three-dimensional (3D) geometric model of the thoracic aorta (TA) from 3D cine phase contrast MRI (PCMRI) acquisitions. The proposed method is composed of two steps: segmentation of the TA and creation of the 3D geometric model. The segmentation algorithm, based on Level Set, was set and applied to healthy subjects acquired in three different modalities (with and without SENSE reduction factors). Accuracy was evaluated using standard quality indices. The 3D model is characterized by the vessel surface mesh and its centerline; the comparison of models obtained from the three different datasets was also carried out in terms of radius of curvature (RC) and average tortuosity (AT). In all datasets, the segmentation quality indices confirmed very good agreement between manual and automatic contours (average symmetric distance < 1.44 mm, DICE Similarity Coefficient > 0.88). The 3D models extracted from the three datasets were found to be comparable, with differences of less than 10% for RC and 11% for AT. Our method was found effective on PCMRI data to provide a 3D geometric model of the TA, to support morphometric and hemodynamic characterization of the aorta. © 2015 Wiley Periodicals, Inc.

  1. Optimal weighted averaging of event related activity from acquisitions with artifacts.

    PubMed

    Vollero, Luca; Petrichella, Sara; Innello, Giulio

    2016-08-01

    In several biomedical applications that require signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non-time-locked background activities. The averaging aims at estimating the ERA under a very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well-established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by adding a trial classification and removal stage. In this paper we propose, model and evaluate a new approach that avoids trial removal, handling trials classified as artifact-free and artifact-prone with two different weights. Based on the model, the weights can be tuned, and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
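
    A minimal sketch of the weighted trial averaging idea: artifact-prone trials are down-weighted rather than discarded; the classification rule and the weight values below are placeholders, not the paper's optimal configuration.

    ```python
    """Sketch: weighted ensemble average with separate weights for clean and
    artifact-prone trials. Data, classification rule and weights are invented."""
    import numpy as np

    rng = np.random.default_rng(3)
    n_trials, n_samples = 100, 512
    era = np.sin(2 * np.pi * 5 * np.linspace(0, 1, n_samples))       # toy ERA
    trials = era + rng.normal(0, 2.0, size=(n_trials, n_samples))    # background noise
    trials[::10] += rng.normal(0, 20.0, size=(n_trials // 10, n_samples))  # artifacts

    # Classify trials by a simple amplitude criterion (placeholder rule).
    artifact_prone = np.abs(trials).max(axis=1) > 10.0

    # Weighted average: full weight for clean trials, reduced weight otherwise.
    w = np.where(artifact_prone, 0.1, 1.0)
    weighted_avg = (w[:, None] * trials).sum(axis=0) / w.sum()
    ```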

  2. A stochastic post-processing method for solar irradiance forecasts derived from NWPs models

    NASA Astrophysics Data System (ADS)

    Lara-Fanego, V.; Pozo-Vazquez, D.; Ruiz-Arias, J. A.; Santos-Alamillos, F. J.; Tovar-Pescador, J.

    2010-09-01

    Solar irradiance forecasting is an important area of research for the future of solar-based renewable energy systems. Numerical Weather Prediction (NWP) models have proved to be a valuable tool for solar irradiance forecasting with lead times of up to a few days. Nevertheless, these models show low skill in forecasting solar irradiance under cloudy conditions. Additionally, climatic (seasonally averaged) aerosol loadings are usually considered in these models, leading to considerable errors in the Direct Normal Irradiance (DNI) forecasts during high aerosol load conditions. In this work we propose a post-processing method for the Global Irradiance (GHI) and DNI forecasts derived from NWP models. In particular, the method is based on the use of Autoregressive Moving Average with External Explanatory Variables (ARMAX) stochastic models. These models are applied to the residuals of the NWP forecasts and use as external variables the measured cloud fraction and aerosol loading of the day previous to the forecast. The method is evaluated on a one-month set of three-days-ahead forecasts of GHI and DNI, obtained with the WRF mesoscale atmospheric model, for several locations in Andalusia (Southern Spain). The cloud fraction is derived from MSG satellite estimates and the aerosol loading from MODIS platform estimates. Both sources of information are readily available at the time of the forecast. Results showed a considerable improvement in the forecasting skill of the WRF model using the proposed post-processing method. In particular, the relative improvement (in terms of RMSE) for DNI during summer is about 20%. A similar value is obtained for GHI during winter.
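
    A minimal sketch of the residual post-processing idea using an ARMA model with exogenous regressors (statsmodels SARIMAX); the simulated residuals, the exogenous variables and the (1, 0, 1) order are illustrative assumptions, not the study's configuration.

    ```python
    """Sketch: ARMAX model fitted to NWP forecast residuals with previous-day
    cloud fraction and aerosol load as exogenous variables. Data are simulated."""
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(4)
    n = 300
    cloud = rng.uniform(0, 1, n)             # previous-day cloud fraction
    aerosol = rng.uniform(0, 0.5, n)         # previous-day aerosol optical depth
    residuals = 50 * cloud + 80 * aerosol + rng.normal(0, 10, n)  # toy GHI errors

    exog = np.column_stack([cloud, aerosol])
    model = SARIMAX(residuals, exog=exog, order=(1, 0, 1)).fit(disp=False)

    # Predicted residual for the next step; subtracting it from the raw NWP
    # forecast gives the corrected forecast (sign convention assumed here).
    predicted_residual = model.forecast(steps=1, exog=exog[-1:])
    print(predicted_residual)
    ```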

  3. Novel method for the determination of average molecular weight of natural polymers based on 2D DOSY NMR and chemometrics: Example of heparin.

    PubMed

    Monakhova, Yulia B; Diehl, Bernd W K; Do, Tung X; Schulze, Margit; Witzleben, Steffen

    2018-02-05

    Apart from the characterization of impurities, the full characterization of heparin and low molecular weight heparin (LMWH) also requires the determination of average molecular weight, which is closely related to the pharmaceutical properties of anticoagulant drugs. To determine the average molecular weight of these animal-derived polymer products, partial least squares (PLS) regression was utilized for modelling diffusion-ordered spectroscopy (DOSY) NMR data of a representative set of heparin (n=32) and LMWH (n=30) samples. The same sets of samples were measured by gel permeation chromatography (GPC) to obtain reference data. The application of PLS to the data led to calibration models with root mean square errors of prediction of 498 Da and 179 Da for heparin and LMWH, respectively. The average coefficients of variation (CVs) did not exceed 2.1% excluding sample preparation (obtained by successively measuring one solution, n=5) and 2.5% including sample preparation (obtained by preparing and analyzing separate samples, n=5). An advantage of the method is that the sample used for standard 1D NMR characterization can be used for the molecular weight determination without further manipulation. The accuracy of the multivariate models is better than previous results for other matrices employing internal standards. Therefore, the DOSY experiment is recommended for the calculation of the molecular weight of heparin products as a complementary measurement to standard 1D NMR quality control. The method can be easily transferred to other matrices as well. Copyright © 2017 Elsevier B.V. All rights reserved.
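
    A minimal sketch of the PLS calibration step with scikit-learn: NMR-derived feature vectors are regressed against GPC reference molecular weights and assessed by cross-validation; the feature matrix, response values and number of components are placeholders.

    ```python
    """Sketch: PLS regression mapping spectral feature vectors to reference
    molecular weights, with a cross-validated error estimate. Data are invented."""
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(5)
    X = rng.normal(size=(32, 200))           # e.g. binned diffusion profiles (placeholder)
    y = 15000 + X[:, :5].sum(axis=1) * 800 + rng.normal(0, 300, 32)  # toy reference Mw

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=8)
    rmsep = np.sqrt(mean_squared_error(y, y_cv))
    print("cross-validated RMSEP (Da):", round(rmsep, 1))
    ```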

  4. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
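
    For context, the underlying Hargreaves-Samani form estimates global radiation from extraterrestrial radiation and the daily temperature range; the sketch below uses the commonly quoted default coefficient, not a value calibrated in the paper.

    ```python
    """Sketch of a Hargreaves-Samani-type radiation estimate from temperature range."""
    import math

    def hs_solar_radiation(t_max, t_min, ra, k_rs=0.17):
        """Estimated global radiation (same units as `ra`, the extraterrestrial
        radiation); k_rs is the empirical HS coefficient (default is a common
        textbook value, not a calibrated one)."""
        return k_rs * math.sqrt(t_max - t_min) * ra

    # Example: Ra = 35 MJ m-2 day-1, Tmax = 30 C, Tmin = 18 C
    print(hs_solar_radiation(30.0, 18.0, 35.0))
    ```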

  5. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
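
    A minimal sketch of how a parameter's sensitivity can be averaged over models and scenarios: variance contributions per model-scenario combination are weighted by model and scenario probabilities. The variance values and weights are placeholders, and the study's actual method uses a formal variance-based decomposition rather than this simplified ratio.

    ```python
    """Sketch: model- and scenario-averaged sensitivity index for one parameter."""
    import numpy as np

    # Variance contribution of one parameter under 4 models x 6 scenarios (placeholder)
    rng = np.random.default_rng(6)
    v_param = np.abs(rng.normal(1.0, 0.5, size=(4, 6)))
    v_total = v_param + 2.0                      # total predictive variance per combination

    p_model = np.full(4, 0.25)                   # model probabilities (assumed equal)
    p_scen = np.full(6, 1 / 6)                   # scenario probabilities (assumed equal)
    w = np.outer(p_model, p_scen)

    # Averaged index: weighted mean contribution over weighted mean total variance
    s_avg = np.sum(w * v_param) / np.sum(w * v_total)
    print("model- and scenario-averaged sensitivity index:", round(s_avg, 3))
    ```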

  6. A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms

    PubMed Central

    2014-01-01

    Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be “well balanced” and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments. PMID:24563581
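
    In the standard notation for such balance laws and the f-wave splitting (a sketch using common notation, not reproduced from the paper), the scheme decomposes

    \[ q_t + f(q)_x = \psi(q,x), \qquad f(Q_i) - f(Q_{i-1}) - \Delta x\,\bar{\Psi}_{i-1/2} \;=\; \sum_p \mathcal{Z}^p_{i-1/2}, \]

    into f-waves \(\mathcal{Z}^p_{i-1/2}\); the well-balanced property amounts to choosing the interface-averaged source \(\bar{\Psi}_{i-1/2}\) so that the left-hand side vanishes for data in steady balance, generating no spurious waves.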

  7. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to the surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. First, we will consider the northeastern China/Korean Peninsula region, where the average plane-layered structure is well known and relatively laterally homogeneous. Second, we will consider the Middle East, where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows, we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of the seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.

  8. Excellent amino acid racemization results from Holocene sand dollars

    NASA Astrophysics Data System (ADS)

    Kosnik, M.; Kaufman, D. S.; Kowalewski, M.; Whitacre, K.

    2015-12-01

    Amino acid racemization (AAR) is widely used as a cost-effective method to date molluscs in time-averaging and taphonomic studies, but it has not been attempted for echinoderms despite their paleobiological importance. Here we demonstrate the feasibility of AAR geochronology in Holocene-aged Peronella peronii (Echinodermata: Echinoidea) collected from Sydney Harbour (Australia). Using standard HPLC methods we determined the extent of AAR in 74 Peronella tests and performed replicate analyses on 18 tests. We sampled multiple areas of two individuals and identified the outer edge as a good sampling location. Multiple replicate analyses from the outer edge of 18 tests spanning the observed range of D/Ls yielded median coefficients of variation < 4% for Asp, Phe, Ala, and Glu D/L values, which overlaps with the analytical precision. Correlations between D/L values across 155 HPLC injections sampled from 74 individuals are also very high (Pearson r2 > 0.95) for these four amino acids. The ages of 11 individuals spanning the observed range of D/L values were determined using 14C analyses, and Bayesian model averaging was used to determine the best AAR age model. The averaged age model was mainly composed of time-dependent reaction kinetics models (TDK, 71%) based on phenylalanine (Phe, 94%). Modelled ages ranged from 14 to 5539 yrs, and the median 95% confidence interval for the 74 analysed individuals is ±28% of the modelled age. In comparison, the median 95% confidence interval for the 11 calibrated 14C ages was ±9% of the median age estimate. Overall, Peronella yields exceptionally high-quality AAR D/L values and appears to be an excellent substrate for AAR geochronology. This work opens the way for time-averaging and taphonomic studies of echinoderms similar to those in molluscs.

  9. A novel approach to detect respiratory phases from pulmonary acoustic signals using normalised power spectral density and fuzzy inference system.

    PubMed

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S

    2016-07-01

    Monitoring respiration is important in several medical applications. One such application is respiratory rate monitoring in patients with sleep apnoea. The respiratory rate in patients with sleep apnoea disorder is irregular compared with controls. Respiratory phase detection is required for proper monitoring of respiration in patients with sleep apnoea. The aim was to develop a model to detect the respiratory phases present in pulmonary acoustic signals and to evaluate its performance in detecting these phases. The normalised averaged power spectral density for each frame and the change in normalised averaged power spectral density between adjacent frames were fuzzified, and fuzzy rules were formulated. The fuzzy inference system (FIS) was developed with both the Mamdani and Sugeno methods. To evaluate the performance of the two methods, the correlation coefficient and root mean square error (RMSE) were calculated. In the correlation analysis evaluating the fuzzy models based on the Mamdani and Sugeno methods, the strength of the correlation was found to be r = 0.9892 and r = 0.9964, respectively. The RMSE values for the Mamdani and Sugeno methods are 0.0853 and 0.0817, respectively. The correlation coefficients and the RMSE of the proposed fuzzy models in detecting the respiratory phases reveal that the Sugeno method performs better than the Mamdani method. © 2014 John Wiley & Sons Ltd.

  10. Latent Growth and Dynamic Structural Equation Models.

    PubMed

    Grimm, Kevin J; Ram, Nilam

    2018-05-07

    Latent growth models make up a class of methods to study within-person change: how it progresses, how it differs across individuals, what its determinants are, and what its consequences are. Latent growth methods have been applied in many domains to examine average and differential responses to interventions and treatments. In this review, we introduce the growth modeling approach to studying change by presenting different models of change and interpretations of their model parameters. We then apply these methods to examining sex differences in the development of binge drinking behavior through adolescence and into adulthood. Advances in growth modeling methods are then discussed, including inherently nonlinear growth models, derivative specification of growth models, and latent change score models to study stochastic change processes. We conclude with relevant design issues of longitudinal studies and considerations for the analysis of longitudinal data.
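
    A linear growth curve of the kind reviewed here can be approximated by a mixed-effects regression with random intercepts and slopes; the sketch below is only illustrative of this class of models, uses simulated data with made-up variable names, and relies on statsmodels rather than a dedicated SEM package.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Simulated longitudinal data: 100 persons observed at 5 waves (hypothetical example)
      rng = np.random.default_rng(1)
      n, waves = 100, 5
      person = np.repeat(np.arange(n), waves)
      time = np.tile(np.arange(waves), n)
      intercepts = rng.normal(10, 2, n)[person]      # person-specific starting levels
      slopes = rng.normal(0.5, 0.3, n)[person]       # person-specific rates of change
      y = intercepts + slopes * time + rng.normal(0, 1, n * waves)
      df = pd.DataFrame({"y": y, "time": time, "person": person})

      # Random-intercept, random-slope growth model; the fixed part is the average trajectory
      model = smf.mixedlm("y ~ time", df, groups=df["person"], re_formula="~time")
      result = model.fit()
      print(result.summary())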

  11. Comparing Satellite Rainfall Estimates with Rain-Gauge Data: Optimal Strategies Suggested by a Spectral Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Validation of satellite remote-sensing methods for estimating rainfall against rain-gauge data is attractive because of the direct nature of the rain-gauge measurements. Comparisons of satellite estimates to rain-gauge data are difficult, however, because of the extreme variability of rain and the fact that satellites view large areas over a short time while rain gauges monitor small areas continuously. In this paper, a statistical model of rainfall variability developed for studies of sampling error in averages of satellite data is used to examine the impact of spatial and temporal averaging of satellite and gauge data on intercomparison results. The model parameters were derived from radar observations of rain, but the model appears to capture many of the characteristics of rain-gauge data as well. The model predicts that many months of data from areas containing a few gauges are required to validate satellite estimates over the areas, and that the areas should be of the order of several hundred km in diameter. Over gauge arrays of sufficiently high density, the optimal areas and averaging times are reduced. The possibility of using time-weighted averages of gauge data is explored.

  12. Forecast of Frost Days Based on Monthly Temperatures

    NASA Astrophysics Data System (ADS)

    Castellanos, M. T.; Tarquis, A. M.; Morató, M. C.; Saa-Requejo, A.

    2009-04-01

    Although frost can cause considerable crop damage and mitigation practices against forecasted frost exist, frost forecasting technologies have not changed for many years. The paper reports a new method to forecast the monthly number of frost days (FD) for several meteorological stations in the Community of Madrid (Spain) based on the successive application of two models. The first is a stochastic model, an autoregressive integrated moving average (ARIMA), that forecasts the monthly minimum absolute temperature (tmin) and the monthly average of minimum temperature (tminav) following the Box-Jenkins methodology. The second model relates these monthly temperatures to the distribution of minimum daily temperature within a month. Three ARIMA models were identified for the time series analyzed, with a seasonal period corresponding to one year. They share the same seasonal behavior (a differenced moving average model) and differ in the non-seasonal part: an autoregressive model (Model 1), a differenced moving average model (Model 2), and a mixed autoregressive moving average model (Model 3). The results also show that the minimum daily temperature (tdmin) at the meteorological stations studied followed a normal distribution each month, with a very similar standard deviation across years. This standard deviation, obtained for each station and each month, could be used as a risk index for cold months. Applying Model 1 to predict minimum monthly temperatures gave the best FD forecast. The procedure provides a tool for crop managers and crop insurance companies to assess the risk of frost frequency and intensity, so that they can take steps to mitigate frost damage and estimate the losses that frost would cause. This research was supported by Comunidad de Madrid Research Project 076/92. The cooperation of the Spanish National Meteorological Institute and the Spanish Ministerio de Agricultura, Pesca y Alimentación (MAPA) is gratefully acknowledged.
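
    The second step of the procedure, converting a forecast monthly mean minimum temperature into an expected number of frost days under a normal distribution of daily minima, can be written compactly; the forecast mean and monthly standard deviation below are illustrative numbers, not values from the paper.

      import numpy as np
      from scipy.stats import norm

      def expected_frost_days(tminav_forecast, sigma, days_in_month=31, threshold=0.0):
          """Expected number of frost days, assuming daily minimum temperature
          ~ Normal(tminav_forecast, sigma), as reported for the stations studied."""
          p_frost = norm.cdf(threshold, loc=tminav_forecast, scale=sigma)
          return days_in_month * p_frost

      # Illustrative values (not taken from the paper): an ARIMA forecast of the monthly
      # average minimum temperature and the station's monthly standard deviation.
      print(expected_frost_days(tminav_forecast=1.5, sigma=3.2, days_in_month=31))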

  13. Using a GIS to link digital spatial data and the precipitation-runoff modeling system, Gunnison River Basin, Colorado

    USGS Publications Warehouse

    Battaglin, William A.; Kuhn, Gerhard; Parker, Randolph S.

    1993-01-01

    The U.S. Geological Survey Precipitation-Runoff Modeling System, a modular, distributed-parameter, watershed-modeling system, is being applied to 20 smaller watersheds within the Gunnison River basin. The model is used to derive a daily water balance for subareas in a watershed, ultimately producing simulated streamflows that can be input into routing and accounting models used to assess downstream water availability under current conditions, and to assess the sensitivity of water resources in the basin to alterations in climate. A geographic information system (GIS) is used to automate a method for extracting physically based hydrologic response unit (HRU) distributed-parameter values from digital data sources, and for placing those estimates into GIS spatial data layers. The HRU parameters extracted are: area, mean elevation, average land-surface slope, predominant aspect, predominant land-cover type, predominant soil type, average total soil water-holding capacity, and average water-holding capacity of the root zone.

  14. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    NASA Astrophysics Data System (ADS)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for both the monthly average maximum and minimum temperature series based on the minimum Bayesian information criterion. The model parameters are obtained using the maximum-likelihood method together with the standard errors of the residuals. The adequacy of the selected model is assessed using correlation diagnostics (ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals) and normality diagnostics (kernel and normal density curves of the histogram and the Q-Q plot). Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the selected model.
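
    A sketch of the model-fitting and diagnostic-checking workflow described above, using statsmodels on a synthetic monthly series; the log transform and the (1, 0, 0) × (0, 1, 1)12 order follow the abstract, everything else is a placeholder.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX
      from statsmodels.stats.diagnostic import acorr_ljungbox

      # Synthetic monthly temperature-like series standing in for the 1981-2015 data
      rng = np.random.default_rng(2)
      months = pd.date_range("1981-01", periods=420, freq="MS")
      seasonal = 10 * np.sin(2 * np.pi * months.month / 12)
      temps = 25 + seasonal + rng.normal(0, 1.5, len(months))
      series = pd.Series(np.log(temps), index=months)   # log transform as in the study

      # SARIMA(1,0,0)x(0,1,1)12, the order selected by minimum BIC in the paper
      model = SARIMAX(series, order=(1, 0, 0), seasonal_order=(0, 1, 1, 12))
      fit = model.fit(disp=False)
      print(fit.bic)

      # Residual diagnostics: Ljung-Box p-values should be non-significant
      print(acorr_ljungbox(fit.resid, lags=[12, 24]))

      # 3-year (36-month) forecast as in the abstract
      forecast = fit.get_forecast(steps=36).predicted_mean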

  15. Highly accurate prediction of protein self-interactions by incorporating the average block and PSSM information into the general PseAAC.

    PubMed

    Zhai, Jing-Xuan; Cao, Tian-Jie; An, Ji-Yong; Bian, Yong-Tao

    2017-11-07

    Whether proteins can interact with their partners is a challenging question in fundamental research. Protein self-interaction (SIP) is a special case of PPIs that plays a key role in the regulation of cellular functions. Because of the limitations of experimental self-interaction identification, it is important to develop an effective computational tool for predicting SIPs from protein sequences. In this study, we developed a novel computational method called RVM-AB that combines the Relevance Vector Machine (RVM) model and Average Blocks (AB) for detecting SIPs from protein sequences. First, the Average Blocks (AB) feature extraction method is employed to represent protein sequences on a Position Specific Scoring Matrix (PSSM). Second, Principal Component Analysis (PCA) is used to reduce the dimension of the AB vector and thereby the influence of noise. Then, employing the Relevance Vector Machine (RVM) algorithm, the performance of RVM-AB is assessed and compared with the state-of-the-art support vector machine (SVM) classifier and other existing methods on yeast and human datasets, respectively. In fivefold cross-validation experiments, the RVM-AB model achieved very high accuracies of 93.01% and 97.72% on the yeast and human datasets, respectively, which are significantly better than the SVM-based method and other previous methods. The experimental results show that the RVM-AB prediction model is efficient and robust and can serve as an automatic decision support tool for detecting SIPs. To facilitate future proteomics research, the RVM-AB server is freely available for academic use at http://219.219.62.123:8888/SIP_AB. Copyright © 2017 Elsevier Ltd. All rights reserved.
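
    The Average Blocks descriptor splits a PSSM row-wise into a fixed number of blocks and averages within each block; a minimal sketch of that step followed by PCA dimension reduction is shown below, with an assumed block count and random placeholder matrices (the RVM classifier itself is not included).

      import numpy as np
      from sklearn.decomposition import PCA

      def average_blocks(pssm, n_blocks=20):
          """Average Blocks (AB) feature vector from an L x 20 PSSM.
          The sequence is split into n_blocks segments and each segment's 20
          substitution-score columns are averaged, giving n_blocks * 20 features."""
          segments = np.array_split(pssm, n_blocks, axis=0)
          return np.concatenate([seg.mean(axis=0) for seg in segments])

      # Hypothetical data: 200 proteins with random-length placeholder PSSMs
      rng = np.random.default_rng(3)
      features = np.vstack([
          average_blocks(rng.normal(size=(rng.integers(60, 400), 20)))
          for _ in range(200)
      ])

      # PCA to suppress noise before feeding a classifier (an RVM in the paper;
      # any classifier could stand in here)
      reduced = PCA(n_components=50).fit_transform(features)
      print(reduced.shape)   # (200, 50)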

  16. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values observed sequentially through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modelling such series, the difficulty lies in choosing the correct model order at the identification stage and in obtaining accurate parameter estimates. This paper presents the development of a Genetic Algorithm heuristic for solving the identification and estimation problems in the Box-Jenkins framework. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed the single traditional Box-Jenkins model.
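
    A toy illustration (not the authors' implementation) of a genetic-style search over ARIMA orders, scoring candidates by AIC with statsmodels; the population size, mutation scheme, and search ranges are arbitrary choices.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(4)

      def aic_of(order, series):
          try:
              return ARIMA(series, order=order).fit().aic
          except Exception:
              return np.inf   # penalise orders that fail to estimate

      def ga_arima(series, pop_size=10, generations=5, p_max=3, d_max=2, q_max=3):
          """Very small genetic-style search over (p, d, q) minimising AIC."""
          pop = [(rng.integers(0, p_max + 1), rng.integers(0, d_max + 1),
                  rng.integers(0, q_max + 1)) for _ in range(pop_size)]
          for _ in range(generations):
              scored = sorted(pop, key=lambda o: aic_of(o, series))
              parents = scored[: pop_size // 2]            # selection of the fittest half
              children = []
              for p, d, q in parents:                      # mutation of each parent
                  children.append((min(p_max, max(0, p + rng.integers(-1, 2))),
                                   min(d_max, max(0, d + rng.integers(-1, 2))),
                                   min(q_max, max(0, q + rng.integers(-1, 2)))))
              pop = parents + children
          return min(pop, key=lambda o: aic_of(o, series))

      # Synthetic stand-in for the tourist-arrival series
      y = np.cumsum(rng.normal(0.2, 1.0, 120))
      print(ga_arima(y))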

  17. Bayesian model averaging method for evaluating associations between air pollution and respiratory mortality: a time-series study.

    PubMed

    Fang, Xin; Li, Runkui; Kan, Haidong; Bottai, Matteo; Fang, Fang; Cao, Yang

    2016-08-16

    To demonstrate an application of Bayesian model averaging (BMA) with generalised additive mixed models (GAMM) and provide a novel modelling technique to assess the association between inhalable coarse particles (PM10) and respiratory mortality in time-series studies. A time-series study using a regional death registry between 2009 and 2010 in 8 districts of a large metropolitan area in Northern China, covering 9559 permanent residents of the 8 districts who died of respiratory diseases between 2009 and 2010. Outcomes were the per cent increase in daily respiratory mortality rate (MR) per interquartile range (IQR) increase of PM10 concentration and the corresponding 95% confidence interval (CI) in single-pollutant and multipollutant (including NOx, CO) models. The Bayesian model averaged GAMM (GAMM+BMA) and the optimal GAMM of PM10, multipollutants and principal components (PCs) of multipollutants showed comparable results for the effect of PM10 on daily respiratory MR; that is, one IQR increase in PM10 concentration corresponded to 1.38% vs 1.39%, 1.81% vs 1.83% and 0.87% vs 0.88% increases, respectively, in daily respiratory MR. However, GAMM+BMA gave slightly but noticeably wider CIs for the single-pollutant model (-1.09 to 4.28 vs -1.08 to 3.93) and the PCs-based model (-2.23 to 4.07 vs -2.03 to 3.88). The CIs of the multipollutant model from the two methods are similar: -1.12 to 4.85 versus -1.11 to 4.83. The BMA method may represent a useful tool for modelling uncertainty in time-series studies when evaluating the effect of air pollution on fatal health outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
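
    One common way to approximate BMA weights is from BIC differences across candidate models; the sketch below applies that idea with plain Poisson GLMs standing in for the GAMMs of the study, on made-up data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical daily table: respiratory death counts, pollutants, temperature
      rng = np.random.default_rng(5)
      n = 730
      df = pd.DataFrame({
          "deaths": rng.poisson(13, n),
          "pm10":   rng.gamma(4, 20, n),
          "nox":    rng.gamma(3, 15, n),
          "co":     rng.gamma(2, 0.5, n),
          "temp":   rng.normal(12, 8, n),
      })

      # Candidate single- and multipollutant models; GAMM smooth terms are replaced
      # here by a simple linear temperature adjustment.
      formulas = [
          "deaths ~ pm10 + temp",
          "deaths ~ pm10 + nox + temp",
          "deaths ~ pm10 + nox + co + temp",
      ]
      fits = [smf.glm(f, df, family=sm.families.Poisson()).fit() for f in formulas]

      # Approximate BMA posterior model weights from BIC differences
      bic = np.array([fit.bic for fit in fits])
      w = np.exp(-0.5 * (bic - bic.min()))
      weights = w / w.sum()

      # Model-averaged PM10 coefficient (log rate ratio per unit PM10)
      beta_avg = sum(wt * fit.params["pm10"] for wt, fit in zip(weights, fits))
      print(weights, beta_avg)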

  18. Image-Guided Rendering with an Evolutionary Algorithm Based on Cloud Model

    PubMed Central

    2018-01-01

    The process of creating nonphotorealistic rendering images and animations can be enjoyable if a useful method is involved. We use an evolutionary algorithm to generate painterly styles of images. Given an input image as the reference target, a cloud model-based evolutionary algorithm is evolved to rerender the target image with nonphotorealistic effects. The resulting animations have an interesting characteristic in which the target slowly emerges from a set of strokes. A number of experiments are performed, as well as visual comparisons, quantitative comparisons, and user studies. The average scores in normalized feature similarity of standard pixel-wise peak signal-to-noise ratio, mean structural similarity, feature similarity, and gradient-similarity-based metric are 0.486, 0.628, 0.579, and 0.640, respectively. The average scores in normalized aesthetic measures of Benford's law, fractal dimension, global contrast factor, and Shannon's entropy are 0.630, 0.397, 0.418, and 0.708, respectively. Compared with a similar method, the average scores of the proposed method, except for peak signal-to-noise ratio, are higher by approximately 10%. The results suggest that the proposed method can generate appealing images and animations with different styles by choosing different strokes, and it could inspire graphic designers interested in computer-based evolutionary art. PMID:29805440

  19. Time is Money

    NASA Astrophysics Data System (ADS)

    Ausloos, Marcel; Vandewalle, Nicolas; Ivanova, Kristinka

    Specialized topics in financial data analysis, from a numerical and physical point of view, are discussed as they pertain to the analysis of coherent and random sequences in financial fluctuations within (i) the extended detrended fluctuation analysis method, (ii) the multi-affine analysis technique, (iii) mobile average intersection rules and distributions, (iv) sandpile avalanche models for crash prediction, (v) the (m,k)-Zipf method and (vi) the i-variability diagram technique for sorting out short-range correlations. The most baffling result, which needs further thought from mathematicians and physicists, is recalled: the crossing of two mobile averages is an original method for measuring the "signal" roughness exponent, but why this is so is not yet understood.
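
    The mobile-average intersection idea in item (iii) can be illustrated in a few lines: two moving averages with different windows are computed and their crossings counted, the quantity whose statistics the authors relate to the roughness exponent; the windows and the random-walk stand-in series below are arbitrary.

      import numpy as np

      def moving_average(x, window):
          return np.convolve(x, np.ones(window) / window, mode="valid")

      def crossing_count(x, w1=5, w2=50):
          """Count the crossings of two moving averages with windows w1 < w2."""
          m1 = moving_average(x, w1)
          m2 = moving_average(x, w2)
          n = min(len(m1), len(m2))              # align the two series at their ends
          diff = m1[-n:] - m2[-n:]
          return np.sum(np.sign(diff[:-1]) != np.sign(diff[1:]))

      # Random-walk stand-in for a price series
      rng = np.random.default_rng(6)
      price = np.cumsum(rng.normal(0, 1, 5000))
      print(crossing_count(price))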

  20. Deformation of a plate with periodically changing parameters

    NASA Astrophysics Data System (ADS)

    Naumova, Natalia V.; Ivanov, Denis; Voloshinova, Tatiana

    2018-05-01

    The deformation of a reinforced square plate under external pressure is considered. The averaged fourth-order partial differential equation for the plate deflection w is obtained, and a new mathematical model of the plate is proposed. Asymptotic averaging and the finite element method (ANSYS) are used to obtain the normal deflections of the plate surface. A comparison of the numerical and asymptotic results is performed.

  1. Atomic structure data based on average-atom model for opacity calculations in astrophysical plasmas

    NASA Astrophysics Data System (ADS)

    Trzhaskovskaya, M. B.; Nikulin, V. K.

    2018-03-01

    The influence of plasma parameters on the electron structure of ions in astrophysical plasmas is studied on the basis of the average-atom model in the local thermodynamic equilibrium approximation. The relativistic Dirac-Slater method is used to estimate the electron density. The emphasis is on investigating the impact of the plasma temperature and density on the ionization stages required for calculations of plasma opacities. The level population distributions and level energy spectra are calculated and analyzed for all ions with 6 ≤ Z ≤ 32 occurring in astrophysical plasmas. The plasma temperature range 2-200 eV and the density range 2-100 mg/cm3 are considered. The validity of the method is supported by good agreement between our values of ionization stages for a number of ions, from oxygen up to uranium, and results obtained earlier by various methods, among which are more complicated procedures.
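
    In the average-atom picture, bound-level populations follow Fermi-Dirac statistics with a chemical potential fixed by the number of bound electrons; the sketch below shows only that step, with made-up level energies and degeneracies, not the self-consistent Dirac-Slater calculation.

      import numpy as np
      from scipy.optimize import brentq

      def populations(energies, degeneracies, mu, T):
          """Fermi-Dirac average occupations of bound levels (energies and T in eV)."""
          x = np.clip((energies - mu) / T, -700, 700)   # avoid overflow in exp
          return degeneracies / (1.0 + np.exp(x))

      def solve_mu(energies, degeneracies, n_bound, T):
          """Chemical potential such that the level populations sum to n_bound."""
          f = lambda mu: populations(energies, degeneracies, mu, T).sum() - n_bound
          return brentq(f, -1e4, 1e4)

      # Illustrative hydrogen-like level energies and degeneracies (made-up values)
      energies = np.array([-700.0, -180.0, -80.0, -45.0])   # eV
      degeneracies = np.array([2.0, 8.0, 18.0, 32.0])
      T, n_bound = 50.0, 14.0   # plasma temperature (eV) and number of bound electrons

      mu = solve_mu(energies, degeneracies, n_bound, T)
      print(mu, populations(energies, degeneracies, mu, T))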

  2. Simplified approach to the mixed time-averaging semiclassical initial value representation for the calculation of dense vibrational spectra

    NASA Astrophysics Data System (ADS)

    Buchholz, Max; Grossmann, Frank; Ceotto, Michele

    2018-03-01

    We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.

  3. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

    We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models analytically. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by the volume harmonic averaging of elastic moduli and the volume arithmetic averaging of density in grid cells. This scheme makes it possible to place a material discontinuity at an arbitrary position within the spatial grid. Most of the methods used for synthetic seismogram calculation today rely on standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional forms. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is available for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show numerical examples displaying the accuracy of FD synthetics simulated with the analytical effective parameters.
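
    The effective-parameter recipe (volume harmonic average of the modulus and volume arithmetic average of the density over a grid cell) can be stated directly; the sketch below uses numerical quadrature over a spherical shell with simple placeholder profiles rather than the analytical partial-fraction results derived in the study.

      import numpy as np
      from scipy.integrate import quad

      def effective_parameters(r1, r2, mu_of_r, rho_of_r):
          """Effective modulus (volume harmonic mean) and density (volume arithmetic
          mean) over the spherical-shell grid cell r1 <= r <= r2."""
          volume = quad(lambda r: 4 * np.pi * r**2, r1, r2)[0]
          inv_mu_avg = quad(lambda r: 4 * np.pi * r**2 / mu_of_r(r), r1, r2)[0] / volume
          rho_avg = quad(lambda r: 4 * np.pi * r**2 * rho_of_r(r), r1, r2)[0] / volume
          return 1.0 / inv_mu_avg, rho_avg

      # Placeholder radial profiles with a discontinuity inside the cell (not PREM values)
      mu = lambda r: 100.0 if r < 0.55 else 140.0    # GPa
      rho = lambda r: 4.5 if r < 0.55 else 5.0       # g/cm^3

      print(effective_parameters(0.5, 0.6, mu, rho))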

  4. Constraints on Average Radial Anisotropy in the Lower Mantle

    NASA Astrophysics Data System (ADS)

    Trampert, J.; De Wit, R. W. L.; Kaeufl, P.; Valentine, A. P.

    2014-12-01

    Quantifying uncertainties in seismological models is challenging, yet ideally quality assessment is an integral part of the inverse method. We invert centre frequencies for spheroidal and toroidal modes for three parameters of average radial anisotropy, density, and P- and S-wave velocities in the lower mantle. We adopt a Bayesian machine learning approach to extract the information on the earth model that is available in the normal mode data. The method is flexible and allows us to infer probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual earth model parameters. The parameters describing shear- and P-wave anisotropy show little deviation from isotropy, but the intermediate parameter η carries robust information on negative anisotropy of ~1% below 1900 km depth. The mass density in the deep mantle (below 1900 km) shows clear positive deviations from existing models. Other parameters (P- and shear-wave velocities) are close to PREM. Our results require that the average mantle is about 150 K colder than commonly assumed adiabats and consists of a mixture of about 60% perovskite and 40% ferropericlase containing 10-15% iron. The anisotropy favours a specific orientation of the two minerals. This observation has important consequences for the nature of mantle flow.

  5. A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.

    PubMed

    Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan

    2017-12-01

    A method was developed to automatically recognize the anatomical site and image acquisition view in 2D X-ray images used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization of image processing tasks, including 2D-2D image registration, 2D image contrast enhancement, and independent treatment site confirmation. X-ray images from 180 patients across six disease sites (brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two images in orthogonal views per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
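
    Each node of the hierarchy pairs PCA feature extraction with a binary SVM; a generic sketch of one such node with scikit-learn, on stand-in data, is given below.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import f1_score

      # Stand-in for flattened 2D X-ray images (e.g. 64x64 pixels) of two classes
      rng = np.random.default_rng(7)
      X = rng.normal(size=(360, 64 * 64))
      y = rng.integers(0, 2, 360)          # e.g. brain vs. non-brain at the root node

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # One node of the hierarchy: PCA feature extraction + binary SVM classification
      node = make_pipeline(PCA(n_components=40), SVC(kernel="rbf", C=1.0))
      node.fit(X_tr, y_tr)
      print(f1_score(y_te, node.predict(X_te)))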

  6. Short-term forecasts gain in accuracy. [Regression technique using ''Box-Jenkins'' analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Box-Jenkins time-series models offer accuracy for short-term forecasts that compare with large-scale macroeconomic forecasts. Utilities need to be able to forecast peak demand in order to plan their generating, transmitting, and distribution systems. This new method differs from conventional models by not assuming specific data patterns, but by fitting available data into a tentative pattern on the basis of auto-correlations. Three types of models (autoregressive, moving average, or mixed autoregressive/moving average) can be used according to which provides the most appropriate combination of autocorrelations and related derivatives. Major steps in choosing a model are identifying potential models, estimating the parameters of the problem, and running a diagnostic check to see if the model fits the parameters. The Box-Jenkins technique is well suited for seasonal patterns, which makes it possible to have as short as hourly forecasts of load demand. With accuracy up to two years, the method will allow electricity price-elasticity forecasting that can be applied to facility planning and rate design. (DCK)

  7. Nonlinear data assimilation for the regional modeling of maximum ozone values.

    PubMed

    Božnar, Marija Zlata; Grašič, Boštjan; Mlakar, Primož; Gradišar, Dejan; Kocijan, Juš

    2017-11-01

    We present a new method of data assimilation with the aim of correcting the forecast of the maximum values of ozone in regional photo-chemical models for areas over complex terrain using multilayer perceptron artificial neural networks. Up until now, these types of models have been used as a single model for one location when forecasting concentrations of air pollutants. We propose a method for constructing a more ambitious model: a single model, which can be used at several locations because the model is spatially transferable and is valid for the whole 2D domain. To achieve this goal, we introduce three novel ideas. The new method improves correlation at measurement station locations by 10% on average and improves by approximately 5% elsewhere.
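
    A generic multilayer-perceptron correction step of the kind described (modelled ozone maximum plus auxiliary predictors in, corrected maximum out) can be sketched with scikit-learn; the predictors, network size, and data below are assumptions, not the authors' configuration.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      # Hypothetical training table: modelled ozone maximum, meteorology, station coordinates
      rng = np.random.default_rng(8)
      n = 2000
      X = np.column_stack([
          rng.normal(90, 25, n),    # photochemical-model daily ozone maximum
          rng.normal(20, 6, n),     # temperature
          rng.normal(3, 1.5, n),    # wind speed
          rng.uniform(0, 50, n),    # x coordinate in the 2D domain
          rng.uniform(0, 50, n),    # y coordinate in the 2D domain
      ])
      y = X[:, 0] * 0.9 + X[:, 1] * 1.2 + rng.normal(0, 8, n)   # "measured" maximum

      corrector = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=2000, random_state=0),
      )
      corrector.fit(X, y)
      corrected = corrector.predict(X[:5])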

  8. Estimating direct, diffuse, and global solar radiation for various cities in Iran by two methods and their comparison with the measured data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashjaee, M.; Roomina, M.R.; Ghafouri-Azar, R.

    1993-05-01

    Two computational methods for calculating hourly, daily, and monthly average values of direct, diffuse, and global solar radiation on horizontal collectors are presented in this article for locations with different latitudes, altitudes, and atmospheric conditions in Iran. These methods were developed using two independent sets of measured data from the Iranian Meteorological Organization (IMO) for two cities in Iran (Tehran and Isfahan), covering 14 years of measurement for Tehran and 4 years for Isfahan. Comparison of the calculated monthly average global solar radiation, using the two models for Tehran and Isfahan, with the measured data from the IMO has indicated good agreement between them. The developed methods were then extended to another location (the city of Bandar-Abbas), where measured data are not available but whose monthly global radiation is predicted by the work of Daneshyar. A maximum discrepancy of 7% between the developed models and the work of Daneshyar was observed.

  9. Rapid determination of thermodynamic parameters from one-dimensional programmed-temperature gas chromatography for use in retention time prediction in comprehensive multidimensional chromatography.

    PubMed

    McGinitie, Teague M; Ebrahimi-Najafabadi, Heshmatollah; Harynuk, James J

    2014-01-17

    A new method for estimating the thermodynamic parameters ΔH(T0), ΔS(T0), and ΔCP for use in thermodynamic modeling of GC×GC separations has been developed. The method is an alternative to the traditional isothermal separations required to fit a three-parameter thermodynamic model to retention data. Herein, a non-linear optimization technique is used to estimate the parameters from a series of temperature-programmed separations using the Nelder-Mead simplex algorithm. With this method, the time required to obtain estimates of thermodynamic parameters for a series of analytes is significantly reduced. The new method allows precise predictions of retention time, with an average error of only 0.2 s for 1D separations. Predictions for GC×GC separations were also in agreement with experimental measurements, having an average relative error of 0.37% for the first-dimension and 2.1% for the second-dimension retention times. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part III: reference evapotranspiration

    NASA Astrophysics Data System (ADS)

    Lobit, P.; Gómez Tagle, A.; Bautista, F.; Lhomme, J. P.

    2017-07-01

    We evaluated two methods to estimate evapotranspiration (ETo) from minimal weather records (daily maximum and minimum temperatures) in Mexico: a modified reduced set FAO-Penman-Monteith method (Allen et al. 1998, Rome, Italy) and the Hargreaves and Samani (Appl Eng Agric 1(2): 96-99, 1985) method. In the reduced set method, the FAO-Penman-Monteith equation was applied with vapor pressure and radiation estimated from temperature data using two new models (see first and second articles in this series): mean temperature as the average of maximum and minimum temperature corrected for a constant bias and constant wind speed. The Hargreaves-Samani method combines two empirical relationships: one between diurnal temperature range ΔT and shortwave radiation Rs, and another one between average temperature and the ratio ETo/Rs: both relationships were evaluated and calibrated for Mexico. After performing a sensitivity analysis to evaluate the impact of different approximations on the estimation of Rs and ETo, several model combinations were tested to predict ETo from daily maximum and minimum temperature alone. The quality of fit of these models was evaluated on 786 weather stations covering most of the territory of Mexico. The best method was found to be a combination of the FAO-Penman-Monteith reduced set equation with the new radiation estimation and vapor pressure model. As an alternative, a recalibration of the Hargreaves-Samani equation is proposed.
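
    For reference, the Hargreaves-Samani combination of the two empirical relationships is compact enough to state in code; 0.0023 is the standard published coefficient, which studies such as this one recalibrate regionally.

      import numpy as np

      def hargreaves_samani(tmax, tmin, ra, k=0.0023):
          """Reference evapotranspiration (mm/day) from daily max/min temperature.
          ra is extraterrestrial radiation expressed in mm/day of evaporation
          equivalent; k is the Hargreaves coefficient (0.0023 in the original form,
          recalibrated regionally in studies such as this one)."""
          tmean = (tmax + tmin) / 2.0
          return k * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

      # Illustrative day: tmax = 31 C, tmin = 17 C, Ra ~ 16 mm/day equivalent
      print(hargreaves_samani(31.0, 17.0, 16.0))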

  11. Forecasting daily meteorological time series using ARIMA and regression models

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Gos, Magdalena; Krzyszczak, Jaromir

    2018-04-01

    The daily air temperature and precipitation time series recorded between January 1, 1980 and December 31, 2010 at four European sites (Jokioinen, Dikopshof, Lleida and Lublin) from different climatic zones were modeled and forecasted. For forecasting we used the Box-Jenkins and Holt-Winters seasonal autoregressive integrated moving-average methods, the autoregressive integrated moving-average model with external regressors in the form of Fourier terms, and time series regression including trend and seasonality components, implemented with R software. It was demonstrated that the obtained models are able to capture the dynamics of the time series data and to produce sensible forecasts.

  12. Isolating the cow-specific part of residual energy intake in lactating dairy cows using random regressions.

    PubMed

    Fischer, A; Friggens, N C; Berry, D P; Faverdin, P

    2018-07-01

    The ability to properly assess and accurately phenotype true differences in feed efficiency among dairy cows is key to the development of breeding programs for improving feed efficiency. The variability among individuals in feed efficiency is commonly characterised by the residual intake approach. Residual feed intake is represented by the residuals of a linear regression of intake on the corresponding quantities of the biological functions that consume (or release) energy. However, the residuals include both model-fitting and measurement errors as well as any variability in cow efficiency. The objective of this study was to isolate the individual animal variability in feed efficiency from the residual component. Two separate models were fitted. In one, the standard residual energy intake (REI) was calculated as the residual of a multiple linear regression of lactation-average net energy intake (NEI) on lactation-average milk energy output, average metabolic BW, and lactation loss and gain of body condition score. In the other, a linear mixed model was used to simultaneously fit fixed linear regressions and random cow levels on the biological traits and intercept, using fortnightly repeated measures for the variables. This method split the predicted NEI into two parts: one quantifying the population mean intercept and coefficients, and one quantifying cow-specific deviations in the intercept and coefficients. The cow-specific part of predicted NEI was assumed to isolate true differences in feed efficiency among cows. NEI and associated energy expenditure phenotypes were available for the first 17 fortnights of lactation from 119 Holstein cows, all fed a constant energy-rich diet. Mixed models fitting cow-specific intercepts and coefficients to different combinations of the aforementioned energy expenditure traits, calculated on a fortnightly basis, were compared. The variance of REI estimated with the lactation-average model represented only 8% of the variance of measured NEI. Among all compared mixed models, the variance of the cow-specific part of predicted NEI represented between 53% and 59% of the variance of REI estimated from the lactation-average model, or between 4% and 5% of the variance of measured NEI. The remaining 41% to 47% of the variance of REI estimated with the lactation-average model may therefore reflect model-fitting errors or measurement errors. In conclusion, the use of a mixed model framework with cow-specific random regressions seems to be a promising method to isolate the cow-specific component of REI in dairy cows.
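
    The contrast between the two models can be sketched with statsmodels: an ordinary regression on lactation averages versus a mixed model with cow-specific random terms on fortnightly records; the variable names and simulated data below are placeholders, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Placeholder fortnightly records for 119 cows x 17 fortnights
      rng = np.random.default_rng(9)
      cows, fortnights = 119, 17
      cow = np.repeat(np.arange(cows), fortnights)
      milk_e = rng.normal(30, 5, cows * fortnights)                 # milk energy output
      met_bw = np.repeat(rng.normal(130, 10, cows), fortnights)     # metabolic body weight
      cow_eff = np.repeat(rng.normal(0, 3, cows), fortnights)       # true cow-specific intake deviation
      nei = 20 + 1.1 * milk_e + 0.3 * met_bw + cow_eff + rng.normal(0, 4, cows * fortnights)
      df = pd.DataFrame({"nei": nei, "milk_e": milk_e, "met_bw": met_bw, "cow": cow})

      # Lactation-average model: REI = residual of an ordinary regression on cow means
      means = df.groupby("cow").mean()
      ols = smf.ols("nei ~ milk_e + met_bw", means).fit()
      rei = ols.resid

      # Mixed model: fixed population coefficients plus cow-specific random intercept
      # and slope on milk energy (a reduced version of the random-regression idea)
      mixed = smf.mixedlm("nei ~ milk_e + met_bw", df, groups=df["cow"],
                          re_formula="~milk_e").fit()
      cow_specific = pd.DataFrame(mixed.random_effects).T   # per-cow random intercept/slope
      print(rei.var(), cow_specific["Group"].var())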

  13. Improving consensus contact prediction via server correlation reduction.

    PubMed

    Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming

    2009-05-06

    Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we found that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server so as to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06; these methods demonstrate average accuracies of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction.

  14. Evaluation of the quality of the college library websites in Iranian medical Universities based on the Stover model

    PubMed Central

    Nasajpour, Mohammad Reza; Ashrafi-rizi, Hasan; Soleymani, Mohammad Reza; Shahrzadi, Leila; Hassanzadeh, Akbar

    2014-01-01

    Introduction: Today, the websites of college and university libraries play an important role in providing the necessary services for clients. These websites not only allow the users to access different collections of library resources, but also provide them with the necessary guidance in order to use the information. The goal of this study is the quality evaluation of the college library websites in Iranian Medical Universities based on the Stover model. Material and Methods: This study uses an analytical survey method and is an applied study. The data gathering tool is the standard checklist provided by Stover, which was modified by the researchers for this study. The statistical population is the college library websites of the Iranian Medical Universities (146 websites) and census method was used for investigation. The data gathering method was a direct access to each website and filling of the checklist was based on the researchers’ observations. Descriptive and analytical statistics (Analysis of Variance (ANOVA)) were used for data analysis with the help of the SPSS software. Findings: The findings showed that in the dimension of the quality of contents, the highest average belonged to type one universities (46.2%) and the lowest average belonged to type three universities (24.8%). In the search and research capabilities, the highest average belonged to type one universities (48.2%) and the lowest average belonged to type three universities. In the dimension of facilities provided for the users, type one universities again had the highest average (37.2%), while type three universities had the lowest average (15%). In general the library websites of type one universities had the highest quality (44.2%), while type three universities had the lowest quality (21.1%). Also the library websites of the College of Rehabilitation and the College of Paramedics, of the Shiraz University of Medical Science, had the highest quality scores. Discussion: The results showed that there was a meaningful difference between the quality of the college library websites and the university types, resulting in college libraries of type one universities having the highest average score and the college libraries of type three universities having the lowest score. PMID:25540794

  15. The application of time series models to cloud field morphology analysis

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.

    1987-01-01

    A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive, moving average (ARMA) process in Box and Jenkins. Cloud field properties such as directionality, clustering and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.

  16. To acquire more detailed radiation drive by use of ``quasi-steady'' approximation in atomic kinetics

    NASA Astrophysics Data System (ADS)

    Ren, Guoli; Pei, Wenbing; Lan, Ke; Gu, Peijun; Li, Xin

    2012-10-01

    In current routine 2D simulations of hohlraum physics, we adopt the principal-quantum-number (n-level) average atom model (AAM) in the NLTE plasma description. However, the detailed experimental frequency-dependent radiative drive differs from our n-level simulated drive, which points to the need for a more detailed atomic kinetics description. The orbital-quantum-number (nl-level) average atom model is a natural candidate, but the nl-level in-line calculation needs much more computational resources. By distinguishing the rapid bound-bound atomic processes from the relatively slow bound-free atomic processes, we found a method to build a more detailed bound electron distribution (nl-level or even nlm-level) using the in-line n-level calculated plasma conditions (temperature, density, and average ionization degree). We name this method the "quasi-steady approximation" in atomic kinetics. Using this method, we rebuild the nl-level bound electron distribution (Pnl) and acquire a new hohlraum radiative drive by post-processing. Comparison with the n-level post-processed hohlraum drive shows an almost identical radiation flux but with finer frequency-dependent spectral structure, which appears only in nl-level transitions with the same principal quantum number n (Δn = 0).

  17. Life-History Traits of the Model Organism Pristionchus pacificus Recorded Using the Hanging Drop Method: Comparison with Caenorhabditis elegans.

    PubMed

    Gilarte, Patricia; Kreuzinger-Janik, Bianca; Majdi, Nabil; Traunspurger, Walter

    2015-01-01

    The nematode Pristionchus pacificus is of growing interest as a model organism in evolutionary biology. However, despite multiple studies of its genetics, developmental cues, and ecology, the basic life-history traits (LHTs) of P. pacificus remain unknown. In this study, we used the hanging drop method to follow P. pacificus at the individual level and thereby quantify its LHTs. This approach allowed direct comparisons with the LHTs of Caenorhabditis elegans recently determined using this method. When provided with 5×10⁹ Escherichia coli cells ml⁻¹ at 20°C, the intrinsic rate of natural increase of P. pacificus was 1.125 (per individual, per day); mean net production was 115 juveniles produced during the lifetime of each individual, and each nematode laid an average of 270 eggs (both fertile and unfertile). The mean age of P. pacificus individuals at first reproduction was 65 h, and the average life span was 22 days. The life cycle of P. pacificus is therefore slightly longer than that of C. elegans, with a longer average life span and hatching time and the production of fewer progeny.

  18. The pitch of short-duration fundamental frequency glissandos.

    PubMed

    d'Alessandro, C; Rosset, S; Rossi, J P

    1998-10-01

    Pitch perception for short-duration fundamental frequency (F0) glissandos was studied. In the first part, new measurements using the method of adjustment are reported. Stimuli were F0 glissandos centered at 220 Hz. The parameters under study were: F0 glissando extents (0, 0.8, 1.5, 3, 6, and 12 semitones, i.e., 0, 10.17, 18.74, 38.17, 76.63, and 155.56 Hz), F0 glissando durations (50, 100, 200, and 300 ms), F0 glissando directions (rising or falling), and the extremity of F0 glissandos matched (beginning or end). In the second part, the main results are discussed: (1) perception seems to correspond to an average of the frequencies present in the vicinity of the extremity matched; (2) the higher extremities of the glissando seem more important; (3) adjustments at the end are closer to the extremities than adjustments at the beginning. In the third part, numerical models accounting for the experimental data are proposed: a time-average model and a weighted time-average model. Optimal parameters for these models are derived. The weighted time-average model achieves a 94% accurate prediction rate for the experimental data. The numerical model is successful in predicting the pitch of short-duration F0 glissandos.
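
    The two numerical models mentioned, a time average and a weighted time average of the instantaneous frequency near the matched extremity, have a simple general form; the weighting below, which emphasises samples closer to the matched extremity, is only an assumed illustration, not the fitted model of the paper.

      import numpy as np

      def glissando(f_start, f_end, duration, fs=1000):
          """Instantaneous F0 of a glissando that is linear in semitones."""
          t = np.arange(0, duration, 1 / fs)
          semitones = 12 * np.log2(f_end / f_start) * t / duration
          return t, f_start * 2 ** (semitones / 12)

      def time_average_pitch(f0, window_frac=0.3, at_end=True):
          """Plain time-average model over the portion near the matched extremity."""
          n = max(1, int(len(f0) * window_frac))
          seg = f0[-n:] if at_end else f0[:n]
          return seg.mean()

      def weighted_time_average_pitch(f0, window_frac=0.3, at_end=True):
          """Weighted variant: samples closer to the matched extremity weigh more
          (assumed weighting, for illustration only)."""
          n = max(1, int(len(f0) * window_frac))
          seg = f0[-n:] if at_end else f0[:n][::-1]
          w = np.linspace(0.2, 1.0, n)
          return np.average(seg, weights=w)

      # 6-semitone rise centred near 220 Hz, 200 ms duration
      t, f0 = glissando(220.0, 220.0 * 2 ** (6 / 12), duration=0.2)
      print(time_average_pitch(f0), weighted_time_average_pitch(f0))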

  19. The relationship between organizational trust and nurse administrators’ productivity in hospitals

    PubMed Central

    Bahrami, Susan; Hasanpour, Marzieh; Rajaeepour, Saeed; Aghahosseni, Taghi; Hodhodineghad, Nilofar

    2012-01-01

    Context: Management of health care organizations based on employees' mutual trust will increase the improvement in functions and tasks. Aims: The present study was performed to investigate the relationship between organizational trust and nurse administrators' productivity in the health-education centers of Isfahan University of Medical Sciences. Settings and Design: This research was a descriptive and correlational study. Materials and Methods: The population included all nurse administrators. In this research, 165 nurses were selected through a random sampling method. Data collection instruments were an organizational trust questionnaire based on Robbins's model and a productivity questionnaire based on Hersey and Blanchard's model. Validity of these questionnaires was determined through content validity, and their reliability was calculated through Cronbach's alpha. Statistical Analysis Used: The data analysis was done using the SPSS (18) statistical software. Results: The indicators of organizational trust such as loyalty, competence, honesty, and stability were above the average level, but the explicitness indicator was at the average level. The components of productivity such as ability, job knowledge, environmental compatibility, performance feedback, and validity were above the average level, but the motivation factor was at the average level and organizational support was below the average level. There was a significant multiple correlation between organizational trust and productivity. Beta coefficients between organizational trust and productivity were significant, no autocorrelation existed, and the regression model was significant. Conclusions: Committed employees, timely performance of tasks, and developing a sense of responsibility among employees can enhance production and productivity in health care organizations. PMID:23922588

  20. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    NASA Astrophysics Data System (ADS)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.

    2014-06-01

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m2 and a mass flow rate of about 0.5 kg/s. In general, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R2) and the root mean square error (RMSE). The candidate exponential drying models were fitted to the raw data, and the results showed that the Two-Term model best describes the drying behaviour. In addition, the drying rate smoothed using CS proves to be a good estimator for the moisture-time curves as well as for missing moisture content data of the seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.
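
    The cubic-spline smoothing step can be reproduced with scipy: a smoothing spline is fitted to moisture-content-versus-time data and its analytical derivative gives the instantaneous drying rate; the data points below are synthetic placeholders.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      # Synthetic moisture-content (%) vs drying time (h) data, exponential-like decay
      t = np.linspace(0, 96, 25)                     # 4 days of drying
      rng = np.random.default_rng(10)
      moisture = 8.2 + (93.4 - 8.2) * np.exp(-t / 20) + rng.normal(0, 0.8, t.size)

      # Smoothing cubic spline; s controls the trade-off between fit and smoothness
      spline = UnivariateSpline(t, moisture, k=3, s=len(t) * 0.8)

      # Instantaneous drying rate = analytical first derivative of the spline
      drying_rate = -spline.derivative()(t)

      print(spline(t)[:5])
      print(drying_rate[:5])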

  1. Bayesian random local clocks, or one rate to rule them all

    PubMed Central

    2010-01-01

    Background: Relaxed molecular clock models allow divergence time dating and "relaxed phylogenetic" inference, in which a time tree is estimated in the face of unequal rates across lineages. We present a new method for relaxing the assumption of a strict molecular clock using Markov chain Monte Carlo to implement Bayesian model averaging over random local molecular clocks. The new method approaches the problem of rate variation among lineages by proposing a series of local molecular clocks, each extending over a subregion of the full phylogeny. Each branch in a phylogeny (subtending a clade) is a possible location for a change of rate from one local clock to a new one. Thus, including both the global molecular clock and the unconstrained model results in a total of 2^(2n-2) possible rate models available for averaging, with 1, 2, ..., 2n - 2 different rate categories. Results: We propose an efficient method to sample this model space while simultaneously estimating the phylogeny. The new method conveniently allows a direct test of the strict molecular clock, in which one rate rules them all, against a large array of alternative local molecular clock models. We illustrate the method's utility on three example data sets involving mammal, primate and influenza evolution. Finally, we explore methods to visualize the complex posterior distribution that results from inference under such models. Conclusions: The examples suggest that large sequence datasets may only require a small number of local molecular clocks to reconcile their branch lengths with a time scale. All of the analyses described here are implemented in the open access software package BEAST 1.5.4 (http://beast-mcmc.googlecode.com/). PMID:20807414

  2. Mathematical modelling for the drying method and smoothing drying rate using cubic spline for seaweed Kappaphycus Striatum variety Durian in a solar dryer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.

    2014-06-19

    The solar drying experiment of seaweed using the Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the meteorological conditions of Malaysia. Drying of the sample seaweed in the GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at an average solar radiation of about 600 W/m² and a mass flow rate of about 0.5 kg/s. In general, the plots of drying rate need more smoothing than the moisture content data, and special care is needed at low drying rates and moisture contents. The cubic spline (CS) has been found to be effective for moisture-time curves. The idea of this method consists of an approximation of the data by a CS regression having first and second derivatives; the analytical differentiation of the spline regression permits the determination of the instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem, and it permits the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin-layer drying models, using the coefficient of determination (R²) and the root mean square error (RMSE). The candidate exponential drying models were fitted to the raw data, and the results showed that the Two-Term model best describes the drying behaviour. In addition, the drying rate smoothed using CS proves to be a good estimator for the moisture-time curves as well as for missing moisture content data of the seaweed Kappaphycus Striatum variety Durian in the solar dryer under the conditions tested.

  3. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-05-18

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
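
    A generic exponential-attenuation depth estimate of the kind this family of models builds on can be sketched as follows; the attenuation coefficient and reference count rate are illustrative assumptions, not the calibrated values of the paper.

      import numpy as np

      def estimate_depth(count_rate, surface_count_rate, mu_sand):
          """Depth (cm) of a buried source from the measured count rate, assuming
          simple exponential attenuation I = I0 * exp(-mu * d) in the covering sand."""
          return np.log(surface_count_rate / count_rate) / mu_sand

      # Illustrative numbers: 662 keV photons in sand with an assumed linear attenuation
      # coefficient of 0.12 cm^-1, 100 cps with no cover, 14 cps through the sand layer
      print(estimate_depth(count_rate=14.0, surface_count_rate=100.0, mu_sand=0.12))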

  4. Estimating Watershed-Averaged Precipitation and Evapotranspiration Fluxes using Streamflow Measurements in a Semi-Arid, High Altitude Montane Catchment

    NASA Astrophysics Data System (ADS)

    Herrington, C.; Gonzalez-Pinzon, R.

    2014-12-01

    Streamflow through the Middle Rio Grande Valley is largely driven by snowmelt pulses and monsoonal precipitation events originating in the mountain highlands of New Mexico (NM) and Colorado. Water managers rely on results from storage/runoff models to distribute this resource statewide and to allocate compact deliveries to Texas under the Rio Grande Compact agreement. Prevalent drought conditions and the added uncertainty of climate change effects in the American Southwest have led to a greater call for accuracy in storage model parameter inputs. While precipitation and evapotranspiration measurements are subject to scaling and representativeness errors, streamflow readings remain relatively dependable and allow watershed-average water budget estimates. Our study seeks to show that by "Doing Hydrology Backwards" we can effectively estimate watershed-average precipitation and evapotranspiration fluxes in semi-arid landscapes of NM using fluctuations in streamflow data alone. We tested this method in the Valles Caldera National Preserve (VCNP) in the Jemez Mountains of central NM, and we will further verify it using existing weather stations and eddy-covariance towers within the VCNP to obtain measured values to compare against our model results. This study contributes to further validating the technique in semi-arid catchments; it has already been verified as effective in humid settings.

  5. Data Pre-Processing Method to Remove Interference of Gas Bubbles and Cell Clusters During Anaerobic and Aerobic Yeast Fermentations in a Stirred Tank Bioreactor

    NASA Astrophysics Data System (ADS)

    Princz, S.; Wenzel, U.; Miller, R.; Hessling, M.

    2014-11-01

    One aerobic and four anaerobic batch fermentations of the yeast Saccharomyces cerevisiae were conducted in a stirred bioreactor and monitored inline by NIR spectroscopy and a transflectance dip probe. From the acquired NIR spectra, chemometric partial least squares regression (PLSR) models for predicting biomass, glucose, and ethanol were constructed. The spectra were measured directly in the fermentation broth and successfully inspected for adulteration using our novel data pre-processing method. These adulterations manifested as strong fluctuations in the shape and offset of the absorption spectra. They resulted from cells, cell clusters, or gas bubbles intercepting the optical path of the dip probe. In the proposed data pre-processing method, adulterated signals are removed by passing the time-scanned non-averaged spectra through two filter algorithms with a 5% quantile cutoff. The filtered spectra containing meaningful data are then averaged. A second step checks whether the whole time scan is analyzable; if true, the average is calculated and used to prepare the PLSR models. This new method distinctly improved the prediction results. To dissociate possible correlations between analyte concentrations, such as glucose and ethanol, the feeding analytes were alternately supplied at different concentrations (spiking) at the end of the four anaerobic fermentations. This procedure yielded low-error (anaerobic) PLSR models, with prediction errors of 0.31 g/l for biomass, 3.41 g/l for glucose, and 2.17 g/l for ethanol. The maximum concentrations were 14 g/l biomass, 167 g/l glucose, and 80 g/l ethanol. Data from the aerobic fermentation, carried out under high agitation and high aeration, were incorporated to realize combined PLSR models, which have not been previously reported to our knowledge.
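
    The pre-processing idea (discard spectra in a time scan whose offset or shape deviation falls beyond a quantile cutoff, average the rest, then fit PLSR) can be sketched as follows; the deviation measures, cutoff handling, and data are simplified placeholders.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def filter_and_average(scan, q=0.95):
          """scan: (n_spectra, n_wavelengths) spectra from one time scan.
          Drop spectra whose offset or shape deviation falls in the top 5% tail,
          then average the remaining ones (None if too few survive)."""
          offset = scan.mean(axis=1)                              # baseline offset
          shape_dev = np.abs(scan - np.median(scan, axis=0)).mean(axis=1)
          keep = (offset <= np.quantile(offset, q)) & (shape_dev <= np.quantile(shape_dev, q))
          if keep.sum() < scan.shape[0] // 2:
              return None                                         # whole scan unusable
          return scan[keep].mean(axis=0)

      # Placeholder spectra: 50 time scans of 32 spectra x 200 wavelengths each
      rng = np.random.default_rng(11)
      scans = rng.normal(0.5, 0.02, size=(50, 32, 200))
      scans[:, ::7, :] += rng.normal(0.3, 0.1, size=(50, 5, 200))  # simulated bubble spikes

      X = np.vstack([filter_and_average(s) for s in scans])
      y = rng.uniform(0, 80, 50)                                   # e.g. ethanol concentration (g/l)

      pls = PLSRegression(n_components=5).fit(X, y)
      print(pls.predict(X[:3]).ravel())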

  6. Robust estimation of event-related potentials via particle filter.

    PubMed

    Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito

    2016-03-01

    In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
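
    A minimal bootstrap particle filter for a random-walk (trend-model) component observed in additive noise is sketched below; it illustrates only the weighting and resampling machinery and is far simpler than the three-component EEG model of the paper.

      import numpy as np

      def particle_filter(observations, n_particles=400, process_std=0.5, obs_std=2.0):
          """Bootstrap particle filter for a scalar random-walk state observed in noise."""
          rng = np.random.default_rng(12)
          particles = rng.normal(0.0, 1.0, n_particles)
          estimates = []
          for y in observations:
              # Propagate through the trend (random-walk) model
              particles = particles + rng.normal(0.0, process_std, n_particles)
              # Weight by the likelihood of the observation
              weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
              weights /= weights.sum()
              estimates.append(np.sum(weights * particles))
              # Multinomial resampling
              particles = rng.choice(particles, size=n_particles, p=weights)
          return np.array(estimates)

      # Synthetic single-trial "ERP + background EEG" observation
      rng = np.random.default_rng(13)
      t = np.arange(600)
      erp = 8.0 * np.exp(-0.5 * ((t - 300) / 40.0) ** 2)     # P300-like bump
      obs = erp + rng.normal(0, 2.0, t.size)                 # background activity + noise
      estimate = particle_filter(obs)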

  7. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    PubMed

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

    Landsat optical images have sufficient spatial and spectral resolution to analyze vegetation growth characteristics, but clouds and water vapor frequently degrade image quality, which limits the availability of usable images for time-series measurement of vegetation vitality. To overcome this shortcoming, simulated images are used as an alternative. In this study, a weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and a multilinear regression analysis method were tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that input images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date, but careful regional and seasonal consideration is required in selecting input images; during the summer season, clouds make it very difficult to obtain images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
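
    A minimal sketch of the weighted average idea follows; the pixel-wise inverse-time-distance weighting is our assumption about how the scheme works, and the 2x2 NDVI patches are made up for illustration.

      import numpy as np

      def simulated_ndvi(ndvi_before, ndvi_after, days_before, days_after):
          """Pixel-wise weighted average of the nearest usable images before and after
          the target date, weighted by inverse time distance (an assumed weighting)."""
          w_b, w_a = 1.0 / days_before, 1.0 / days_after
          return (w_b * ndvi_before + w_a * ndvi_after) / (w_b + w_a)

      before = np.array([[0.42, 0.55], [0.61, 0.30]])   # NDVI patch 20 days before the target date
      after = np.array([[0.48, 0.59], [0.58, 0.36]])    # NDVI patch 10 days after the target date
      print(simulated_ndvi(before, after, days_before=20, days_after=10))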

  8. A new method to generate large order low temperature expansions for discrete spin models

    NASA Astrophysics Data System (ADS)

    Bhanot, Gyan

    1993-03-01

    I describe work done in collaboration with Michael Creutz at BNL and Jan Lacki at IAS Princeton. We have developed a method to generate very high order low temperature (weak coupling) expansions for discrete spin systems. For the 3-d and 4-d Ising model, we give results for the low temperature expansion of the average free energy to 50 and 44 excited bonds respectively.

  9. [The trial of business data analysis at the Department of Radiology by constructing the auto-regressive integrated moving-average (ARIMA) model].

    PubMed

    Tani, Yuji; Ogasawara, Katsuhiko

    2012-01-01

    This study aimed to contribute to the management of a healthcare organization by providing management information through time-series analysis of business data accumulated in the hospital information system, data that had not been utilized thus far. We examined the performance of a prediction method based on the auto-regressive integrated moving-average (ARIMA) model, using business data obtained at the Radiology Department. The model was built from the number of radiological examinations in the past 9 years and used to predict the number of radiological examinations in the final year; the forecast values were then compared with the actual values. The prediction method proved simple and cost-effective, since it relied only on free software, and removing the trend components from the data during pre-processing allowed a simple model to be built. The difference between predicted and actual values was about 10%; however, understanding the chronological change was more important than the individual time-series values. Furthermore, because the method operates on general time-series data, it is highly versatile and adaptable, and other healthcare organizations can use it for the analysis and forecasting of their own business data.
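
    The workflow can indeed be reproduced with free software; a minimal Python sketch using statsmodels is shown below, with synthetic monthly counts and an assumed ARIMA(1,1,1) order standing in for the department's actual data and model order.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      # hypothetical monthly examination counts: 9 years to fit, 1 year held out
      rng = np.random.default_rng(1)
      counts = pd.Series(1000.0 + 5.0 * np.arange(120) + 50.0 * rng.standard_normal(120),
                         index=pd.date_range("2001-01-01", periods=120, freq="MS"))
      train, test = counts.iloc[:108], counts.iloc[108:]

      # first-order differencing removes the trend component, as in the study
      fit = ARIMA(train, order=(1, 1, 1)).fit()
      forecast = fit.forecast(steps=12)

      mape = float(np.mean(np.abs((forecast.values - test.values) / test.values))) * 100.0
      print(f"mean absolute percentage error: {mape:.1f}%")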

  10. PAB3D: Its History in the Use of Turbulence Models in the Simulation of Jet and Nozzle Flows

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Pao, S. Paul; Hunter, Craig A.; Deere, Karen A.; Massey, Steven J.; Elmiligui, Alaa

    2006-01-01

    This is a review paper on PAB3D's history in the implementation of turbulence models for simulating jet and nozzle flows. We describe different turbulence models used in the simulation of subsonic and supersonic jet and nozzle flows. The time-averaged simulations use modified linear or nonlinear two-equation models to account for supersonic flow as well as high-temperature mixing. Two multiscale-type turbulence models are used for unsteady flow simulations. These models require modifications to the Reynolds Averaged Navier-Stokes (RANS) equations. The first scheme is a hybrid RANS/LES model utilizing the two-equation (k-epsilon) model with a RANS/LES transition function, dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes (PANS) formulation. All of these models are implemented in the three-dimensional Navier-Stokes code PAB3D. This paper discusses computational methods, code implementation, computed results for a wide range of nozzle configurations at various operating conditions, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions.

  11. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory model for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by vertical dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be the most important of the meteorological inputs when calculations are made for near-field receptors, and its importance increases as stack height increases.
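
    For reference, the ground-level, sector-averaged form of the straight-line Gaussian plume model can be written down in a few lines; the σz power law below is a generic Briggs-type rural class-D curve and the inputs are arbitrary, so this is only a sketch of the kind of calculation driven by each joint-frequency-distribution bin, not the parameterization used in the paper.

      import numpy as np

      def sector_averaged_concentration(Q, u, x, H, n_sectors=16):
          """Ground-level, sector-averaged Gaussian plume concentration:
          chi = sqrt(2/pi) * Q * exp(-H^2 / (2 sigma_z^2)) / (u * sigma_z * x * dtheta)."""
          sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)   # vertical dispersion (m), generic curve
          dtheta = 2.0 * np.pi / n_sectors                 # sector width (rad), 22.5 deg for 16 sectors
          return (np.sqrt(2.0 / np.pi) * Q * np.exp(-H**2 / (2.0 * sigma_z**2))
                  / (u * sigma_z * x * dtheta))

      # unit release (1 g/s), 3 m/s wind, 50 m effective stack height, 1 km downwind
      print(sector_averaged_concentration(Q=1.0, u=3.0, x=1000.0, H=50.0), "g/m^3")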

  12. Averaged head phantoms from magnetic resonance images of Korean children and young adults

    NASA Astrophysics Data System (ADS)

    Han, Miran; Lee, Ae-Kyoung; Choi, Hyung-Do; Jung, Yong Wook; Park, Jin Seo

    2018-02-01

    Increased use of mobile phones raises concerns about the health risks of electromagnetic radiation. Phantom heads are routinely used for radiofrequency dosimetry simulations, and the purpose of this study was to construct averaged phantom heads for children and young adults. Using magnetic resonance images (MRI), sectioned cadaver images, and a hybrid approach, we initially built template phantoms representing 6-, 9-, 12-, 15-year-old children and young adults. Our subsequent approach revised the template phantoms using 29 averaged items that were identified by averaging the MRI data from 500 children and young adults. In females, the brain size and cranium thickness peaked in the early teens and then decreased. This is contrary to what was observed in males, where brain size and cranium thicknesses either plateaued or grew continuously. The overall shape of brains was spherical in children and became ellipsoidal by adulthood. In this study, we devised a method to build averaged phantom heads by constructing surface and voxel models. The surface model could be used for phantom manipulation, whereas the voxel model could be used for compliance test of specific absorption rate (SAR) for users of mobile phones or other electronic devices.

  13. The effect of inquiry-flipped classroom model toward students' achievement on chemical reaction rate

    NASA Astrophysics Data System (ADS)

    Paristiowati, Maria; Fitriani, Ella; Aldi, Nurul Hanifah

    2017-08-01

    The aim of this research was to determine the effect of the inquiry-flipped classroom model on students' achievement on the chemical reaction rate topic. The study was conducted with eleventh graders at SMA Negeri 3 Tangerang using a quasi-experimental method with a non-equivalent control group design. A sample of 72 students was selected by purposive sampling. Students in the experimental group learned through the inquiry-flipped classroom model, while students in the control group learned through a guided inquiry learning model. The data analysis showed a significant difference in the students' average achievement: 83.44 with the inquiry-flipped classroom model versus 74.06 with the guided inquiry learning model. It can be concluded that students' achievement with the inquiry-flipped classroom was better than with guided inquiry. The difference in students' achievement was significant according to a t-test, with tobs 3.056 > ttable 1.994 (α = 0.005).

  14. A modelling tool for capacity planning in acute and community stroke services.

    PubMed

    Monks, Thomas; Worthington, David; Allen, Michael; Pitt, Martin; Stein, Ken; James, Martin A

    2016-09-29

    Mathematical capacity planning methods that can take account of variations in patient complexity, admission rates and delayed discharges have long been available, but their implementation in complex pathways such as stroke care remains limited. Instead, simple average-based estimates are commonplace, and these often substantially underestimate capacity requirements. We analyse the capacity requirements for acute and community stroke services in a pathway with over 630 admissions per year. We sought to identify current capacity bottlenecks affecting patient flow, future capacity requirements in the presence of increased admissions, the impact of co-location and pooling of the acute and rehabilitation units, and the impact of patient subgroups on capacity requirements. We contrast these results with the often-used method of planning by average occupancy, often with arbitrary uplifts to cater for variability. We developed a discrete-event simulation model using aggregate parameter values derived from routine administrative data on over 2000 anonymised admission and discharge timestamps. The model mimicked the flow of stroke, high-risk TIA and complex neurological patients from admission to an acute ward through to community rehabilitation and early supported discharge, and predicted the probability of admission delays. An increase from 10 to 14 acute beds reduces the number of patients experiencing a delay to the acute stroke unit from 1 in every 7 to 1 in 50. Co-location of the acute and rehabilitation units and pooling eight beds out of a total bed stock of 26 reduce the number of delayed acute admissions to 1 in every 29 and the number of delayed rehabilitation admissions to 1 in every 20. Planning by average occupancy would have resulted in delays for one in every five patients in the acute stroke unit. Planning by average occupancy fails to provide appropriate reserve capacity to manage the variations seen in stroke pathways to desired service levels, and an appropriate uplift from the average cannot be based simply on occupancy figures. Our method draws on long-available, intuitive, but underused mathematical techniques for capacity planning. Implementation via simulation at our study hospital provided valuable decision support for planners to assess future bed numbers and the organisation of the acute and rehabilitation services.
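
    A stripped-down Monte Carlo version of such a bed-occupancy model illustrates why average-occupancy planning misleads: with Poisson arrivals and exponential stays (both assumptions of this sketch, not the study's fitted distributions), average occupancy is about 12 beds, yet a 10-bed unit delays a large share of admissions and even a 14-bed unit still sees occasional delays.

      import heapq
      import numpy as np

      def simulate_delays(n_beds, arrivals_per_year=630, mean_los_days=7.0,
                          years=50, seed=0):
          """Fraction of arrivals that find every bed occupied (i.e. are delayed).
          Poisson arrivals and exponential lengths of stay are illustrative assumptions."""
          rng = np.random.default_rng(seed)
          t, horizon = 0.0, years * 365.0
          discharges = []                   # min-heap of scheduled discharge times
          delayed = admitted = 0
          while t < horizon:
              t += rng.exponential(365.0 / arrivals_per_year)   # next arrival time
              while discharges and discharges[0] <= t:          # free beds vacated by now
                  heapq.heappop(discharges)
              admitted += 1
              if len(discharges) >= n_beds:                     # no free bed on arrival
                  delayed += 1
              else:
                  heapq.heappush(discharges, t + rng.exponential(mean_los_days))
          return delayed / admitted

      # average occupancy is ~12 beds (630 * 7 / 365), yet delays differ sharply:
      for beds in (10, 14):
          print(beds, "beds -> fraction of delayed admissions:", round(simulate_delays(beds), 3))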

  15. Trunk density profile estimates from dual X-ray absorptiometry.

    PubMed

    Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A

    2008-01-01

    Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models, but the accuracy of geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two of the participants (one female and one male) and the averages for their sex; it is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed, although further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models, thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
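
    The Fourier-series idea can be sketched as a least-squares fit of a truncated series to a sampled density profile; the number of harmonics, the sampling, and the synthetic profile below are illustrative assumptions, not the study's fitted coefficients.

      import numpy as np

      def fourier_density_function(z, density, n_harmonics=4):
          """Least-squares fit of a truncated Fourier series
          rho(z) ~ a0 + sum_k [a_k cos(2 pi k z) + b_k sin(2 pi k z)]
          to a trunk density profile sampled at relative heights z in [0, 1]."""
          cols = [np.ones_like(z)]
          for k in range(1, n_harmonics + 1):
              cols += [np.cos(2 * np.pi * k * z), np.sin(2 * np.pi * k * z)]
          coef, *_ = np.linalg.lstsq(np.column_stack(cols), density, rcond=None)

          def rho(zq):
              zq = np.asarray(zq, dtype=float)
              out = np.full_like(zq, coef[0])
              for k in range(1, n_harmonics + 1):
                  out += (coef[2 * k - 1] * np.cos(2 * np.pi * k * zq)
                          + coef[2 * k] * np.sin(2 * np.pi * k * zq))
              return out

          return rho

      z = np.linspace(0.0, 1.0, 40)                                  # relative trunk height
      profile = (1.05 + 0.08 * np.cos(2 * np.pi * z)
                 + 0.02 * np.random.default_rng(2).standard_normal(40))
      rho = fourier_density_function(z, profile)
      print(rho([0.25, 0.5, 0.75]))                                  # density (g/cm^3) at three levels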

  16. The Value of Hydrograph Partitioning Curves for Calibrating Hydrological Models in Glacierized Basins

    NASA Astrophysics Data System (ADS)

    He, Zhihua; Vorogushyn, Sergiy; Unger-Shayesteh, Katy; Gafurov, Abror; Kalashnikova, Olga; Omorova, Elvira; Merz, Bruno

    2018-03-01

    This study refines the method for calibrating a glacio-hydrological model based on Hydrograph Partitioning Curves (HPCs), and evaluates its value in comparison to multidata set optimization approaches which use glacier mass balance, satellite snow cover images, and discharge. The HPCs are extracted from the observed flow hydrograph using catchment precipitation and temperature gradients. They indicate the periods when the various runoff processes, such as glacier melt or snow melt, dominate the basin hydrograph. The annual cumulative curve of the difference between average daily temperature and melt threshold temperature over the basin, as well as the annual cumulative curve of average daily snowfall on the glacierized areas are used to identify the starting and end dates of snow and glacier ablation periods. Model parameters characterizing different runoff processes are calibrated on different HPCs in a stepwise and iterative way. Results show that the HPC-based method (1) delivers model-internal consistency comparably to the tri-data set calibration method; (2) improves the stability of calibrated parameter values across various calibration periods; and (3) estimates the contributions of runoff components similarly to the tri-data set calibration method. Our findings indicate the potential of the HPC-based approach as an alternative for hydrological model calibration in glacierized basins where other calibration data sets than discharge are often not available or very costly to obtain.

  17. Transient Macroscopic Chemistry in the DSMC Method

    NASA Astrophysics Data System (ADS)

    Goldsworthy, M. J.; Macrossan, M. N.; Abdel-Jawad, M.

    2008-12-01

    In the Direct Simulation Monte Carlo method, a combination of statistical and deterministic procedures applied to a finite number of `simulator' particles are used to model rarefied gas-kinetic processes. Traditionally, chemical reactions are modelled using information from specific colliding particle pairs. In the Macroscopic Chemistry Method (MCM), the reactions are decoupled from the specific particle pairs selected for collisions. Information from all of the particles within a cell is used to determine a reaction rate coefficient for that cell. MCM has previously been applied to steady flow DSMC simulations. Here we show how MCM can be used to model chemical kinetics in DSMC simulations of unsteady flow. Results are compared with a collision-based chemistry procedure for two binary reactions in a 1-D unsteady shock-expansion tube simulation and during the unsteady development of 2-D flow through a cavity. For the shock tube simulation, close agreement is demonstrated between the two methods for instantaneous, ensemble-averaged profiles of temperature and species mole fractions. For the cavity flow, a high degree of thermal non-equilibrium is present and non-equilibrium reaction rate correction factors are employed in MCM. Very close agreement is demonstrated for ensemble averaged mole fraction contours predicted by the particle and macroscopic methods at three different flow-times. A comparison of the accumulated number of net reactions per cell shows that both methods compute identical numbers of reaction events. For the 2-D flow, MCM required similar CPU and memory resources to the particle chemistry method. The Macroscopic Chemistry Method is applicable to any general DSMC code using any viscosity or non-reacting collision models and any non-reacting energy exchange models. MCM can be used to implement any reaction rate formulations, whether these be from experimental or theoretical studies.

  18. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters, and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver-operating-curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82; for the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance (~50%) for all models of invertebrate abundance when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. We conclude that where data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting that ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
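
    The ensemble step, averaging predicted presence probabilities across model types, is easy to prototype; the sketch below uses scikit-learn stand-ins (logistic regression for the GLM, gradient boosting for boosted regression trees, and no GAM, which scikit-learn lacks) on synthetic data, so the AUC values it prints are not comparable to the study's.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # synthetic presence/absence data standing in for one invertebrate taxon
      X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      models = {
          "GLM (logistic)": LogisticRegression(max_iter=1000),
          "boosted trees": GradientBoostingClassifier(random_state=0),
          "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
      }
      probs = {}
      for name, model in models.items():
          probs[name] = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
          print(f"{name:15s} AUC = {roc_auc_score(y_te, probs[name]):.3f}")

      # unweighted ensemble: average the predicted presence probabilities across models
      ensemble = np.mean(list(probs.values()), axis=0)
      print(f"{'ensemble':15s} AUC = {roc_auc_score(y_te, ensemble):.3f}")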

  19. A Comparison of Evaluation Metrics for Biomedical Journals, Articles, and Websites in Terms of Sensitivity to Topic

    PubMed Central

    Fu, Lawrence D.; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F.

    2011-01-01

    Evaluating the quality of the biomedical literature and of health-related websites is a challenging information retrieval task. Current commonly used methods include impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics, while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations. PMID:21419864

  20. Policy improvement by a model-free Dyna architecture.

    PubMed

    Hwang, Kao-Shing; Lo, Chia-Yue

    2013-05-01

    The objective of this paper is to accelerate the process of policy improvement in reinforcement learning. The proposed Dyna-style system combines two learning schemes, one of which utilizes a temporal difference method for direct learning; the other uses relative values for indirect learning in planning between two successive direct learning cycles. Instead of establishing a complicated world model, the approach introduces a simple predictor of average rewards into the actor-critic architecture in the simulation (planning) mode. The relative value of a state, defined as the accumulated differences between immediate reward and average reward, is used to steer the improvement process in the right direction. The proposed learning scheme is applied to control a pendulum system tracking a desired trajectory to demonstrate its adaptability and robustness. Through reinforcement signals from the environment, the system takes appropriate actions to drive an unknown dynamic system to track desired outputs in a few learning cycles. Comparisons are made between the proposed model-free method, a connectionist adaptive heuristic critic, and an advanced Dyna-Q learning method in experiments on labyrinth exploration. The proposed method outperforms its counterparts in terms of elapsed time and convergence rate.
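
    The relative-value bookkeeping can be illustrated with a generic average-reward temporal-difference update, in which a state's relative value accumulates the difference between the immediate reward and a running average-reward estimate. This is only a sketch of that idea, not the paper's exact actor-critic and planning equations, and the two-state chain is a toy example.

      import numpy as np

      def relative_value_update(V, rho, s, r, s_next, alpha=0.1, beta=0.01):
          """One tabular average-reward TD update (illustrative sketch)."""
          delta = r - rho + V[s_next] - V[s]   # TD error relative to the average reward
          V[s] += alpha * delta                # update relative state value
          rho += beta * delta                  # update average-reward estimate
          return V, rho

      # toy two-state chain: the next state is random, reward 1 only in state 1
      rng = np.random.default_rng(0)
      V, rho, s = np.zeros(2), 0.0, 0
      for _ in range(5000):
          s_next = int(rng.integers(2))
          V, rho = relative_value_update(V, rho, s, float(s_next == 1), s_next)
          s = s_next
      print("relative values:", V, "estimated average reward:", round(rho, 2))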

  1. Response of MDOF strongly nonlinear systems to fractional Gaussian noises.

    PubMed

    Deng, Mao-Lin; Zhu, Wei-Qiu

    2016-08-01

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  2. Response of MDOF strongly nonlinear systems to fractional Gaussian noises

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Mao-Lin; Zhu, Wei-Qiu, E-mail: wqzhu@zju.edu.cn

    2016-08-15

    In the present paper, multi-degree-of-freedom strongly nonlinear systems are modeled as quasi-Hamiltonian systems and the stochastic averaging method for quasi-Hamiltonian systems (including quasi-non-integrable, completely integrable and non-resonant, completely integrable and resonant, partially integrable and non-resonant, and partially integrable and resonant Hamiltonian systems) driven by fractional Gaussian noise is introduced. The averaged fractional stochastic differential equations (SDEs) are derived. The simulation results for some examples show that the averaged SDEs can be used to predict the response of the original systems and the simulation time for the averaged SDEs is less than that for the original systems.

  3. Numerical stress analysis of the iris tissue induced by pupil expansion: Comparison of commercial devices

    PubMed Central

    Wang, Xiaofei; Perera, Shamira A.; Girard, Michaël J. A.

    2018-01-01

    Purpose (1) To use finite element (FE) modelling to estimate local iris stresses (i.e. internal forces) as a result of mechanical pupil expansion; and (2) to compare such stresses as generated from several commercially available expanders (Iris hooks, APX dilator and Malyugin ring) to determine which design and deployment method are most likely to cause iris damage. Methods We used a biofidelic 3-part iris FE model that consisted of the stroma, sphincter and dilator muscles. Our FE model simulated expansion of the pupil from 3 mm to a maximum of 6 mm using the aforementioned pupil expanders, with uniform circular expansion used for baseline comparison. FE-derived stresses, resultant forces and area of final pupil opening were compared across devices for analysis. Results Our FE models demonstrated that the APX dilator generated the highest stresses on the sphincter muscles (max: 6.446 MPa; average: 5.112 MPa), followed by the iris hooks (max: 5.680 MPa; average: 5.219 MPa), and the Malyugin ring (max: 2.144 MPa; average: 1.575 MPa). Uniform expansion generated the lowest stresses (max: 0.435 MPa; average: 0.377 MPa). For pupil expansion, the APX dilator required the highest force (41.22 mN), followed by iris hooks (40.82 mN) and the Malyugin ring (18.56 mN). Conclusion Our study predicted that current pupil expanders exert significantly higher stresses and forces than required during pupil expansion. Our work may serve as a guide for the development and design of next-generation pupil expanders. PMID:29538452

  4. Large deviations of a long-time average in the Ehrenfest urn model

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution P that the average number of balls in one urn over time T, n̄ = (1/T) ∫₀ᵀ n(t) dt, takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, T → ∞, a Donsker–Varadhan large deviation principle holds: −ln P ≃ T I(a, …), where the ellipsis denotes additional parameters of the model. We calculate the rate function I exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for I is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for N ≫ 1. The WKB method also uncovers the (very simple) time history of the system which dominates the contribution of different time histories to P.

  5. Comparing methods for modelling spreading cell fronts.

    PubMed

    Markham, Deborah C; Simpson, Matthew J; Maini, Philip K; Gaffney, Eamonn A; Baker, Ruth E

    2014-07-21

    Spreading cell fronts play an essential role in many physiological processes. Classically, models of this process are based on the Fisher-Kolmogorov equation; however, such continuum representations are not always suitable as they do not explicitly represent behaviour at the level of individual cells. Additionally, many models examine only the large time asymptotic behaviour, where a travelling wave front with a constant speed has been established. Many experiments, such as a scratch assay, never display this asymptotic behaviour, and in these cases the transient behaviour must be taken into account. We examine the transient and the asymptotic behaviour of moving cell fronts using techniques that go beyond the continuum approximation via a volume-excluding birth-migration process on a regular one-dimensional lattice. We approximate the averaged discrete results using three methods: (i) mean-field, (ii) pair-wise, and (iii) one-hole approximations. We discuss the performance of these methods, in comparison to the averaged discrete results, for a range of parameter space, examining both the transient and asymptotic behaviours. The one-hole approximation, based on techniques from statistical physics, is not capable of predicting transient behaviour but provides excellent agreement with the asymptotic behaviour of the averaged discrete results, provided that cells are proliferating fast enough relative to their rate of migration. The mean-field and pair-wise approximations give indistinguishable asymptotic results, which agree with the averaged discrete results when cells are migrating much more rapidly than they are proliferating. The pair-wise approximation performs better in the transient region than does the mean-field, despite having the same asymptotic behaviour. Our results show that each approximation only works in specific situations, thus we must be careful to use a suitable approximation for a given system, otherwise inaccurate predictions could be made. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Comparison of up-scaling methods in poroelasticity and its generalizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J G

    2003-12-13

    Four methods of up-scaling coupled equations at the microscale to equations valid at the mesoscale and/or macroscale for fluid-saturated and partially saturated porous media will be discussed, compared, and contrasted. The four methods are: (1) effective medium theory, (2) mixture theory, (3) two-scale and multiscale homogenization, and (4) volume averaging. All these methods have advantages for some applications and disadvantages for others. For example, effective medium theory, mixture theory, and homogenization methods can all give formulas for coefficients in the up-scaled equations, whereas volume averaging methods give the form of the up-scaled equations but generally must be supplemented with physical arguments and/or data in order to determine the coefficients. Homogenization theory requires a great deal of mathematical insight from the user in order to choose appropriate scalings for use in the resulting power-law expansions, while volume averaging requires more physical insight to motivate the steps needed to find coefficients. Homogenization often is performed on periodic models, while volume averaging does not require any assumption of periodicity and can therefore be related very directly to laboratory and/or field measurements. Validity of the homogenization process is often limited to specific ranges of frequency - in order to justify the scaling hypotheses that must be made - and therefore cannot be used easily over wide ranges of frequency. However, volume averaging methods can quite easily be used for wide band data analysis. So, we learn from these comparisons that a researcher in the theory of poroelasticity and its generalizations needs to be conversant with two or more of these methods to solve problems generally.

  7. Method to Rapidly Collect Thousands of Velocity Observations to Validate Million-Element 2D Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Barker, J. R.; Pasternack, G. B.; Bratovich, P.; Massa, D.; Reedy, G.; Johnson, T.

    2010-12-01

    Two-dimensional (depth-averaged) hydrodynamic models have existed for decades and are used to study a variety of hydrogeomorphic processes as well as to design river rehabilitation projects. Rapid computer and coding advances are revolutionizing the size and detail of 2D models. Meanwhile, advances in topographic mapping and environmental informatics are providing the data inputs to drive large, detailed simulations. Million-element computational meshes are in hand. With simulations of this size and detail, the primary challenge has shifted to finding rapid and inexpensive means for testing model predictions against observations. Standard methods for collecting velocity data include boat-mounted ADCP and point-based sensors on boats or wading rods. These methods are labor intensive and often limited to a narrow flow range. Also, they generate small datasets at a few cross-sections, which is inadequate to characterize the statistical structure of the relation between predictions and observations. Drawing on the long-standing oceanographic method of using drogues to track water currents, previous studies have demonstrated the potential of small dGPS units to obtain surface velocity in rivers. However, dGPS is too inaccurate to test 2D models, and there is financial risk in losing drogues in rough currents. In this study, an RTK GPS unit was mounted onto a manned whitewater kayak. The boater positioned himself in the current and used floating debris to maintain a speed and heading consistent with the ambient surface flow field. RTK GPS measurements were taken every 5 s. From these positions, 2D velocity vectors were obtained. The method was tested over ~20 km of the lower Yuba River in California in flows ranging from 500 to 5000 cfs, yielding 5816 observations. To compare velocity magnitude against the 2D model-predicted depth-averaged value, kayak-based surface values were scaled down by an optimized constant (0.72), which had no negative effect on regression analysis. The r2 value for speed was 0.78 by this method, compared with 0.57 based on 199 points from traditional measurements. The r2 value for velocity direction was 0.77. Although it is not ideal to rely on observed surface velocity to evaluate depth-averaged velocity predictions, all available velocity-measurement methods have a suite of assumptions and complications. Using this method, the availability of 10-100x more data was so beneficial that the outcome was among the highest model performance outcomes reported in the literature.

  8. Automatic load forecasting. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, D.J.; Vemuri, S.

    A method which lends itself to on-line forecasting of hourly electric loads is presented and the results of its use are compared to models developed using the Box-Jenkins method. The method consists of processing the historical hourly loads with a sequential least-squares estimator to identify a finite-order autoregressive model, which in turn is used to obtain a parsimonious autoregressive-moving average model. A procedure is also defined for incorporating temperature as a variable to improve forecasts where loads are temperature dependent. The method presented has several advantages in comparison to the Box-Jenkins method, including much less human intervention and improved model identification. The method has been tested using three-hourly data from the Lincoln Electric System, Lincoln, Nebraska. In the exhaustive analyses performed on this data base, this method produced significantly better results than the Box-Jenkins method. The method also proved to be more robust in that greater confidence could be placed in the accuracy of models based upon the various measures available at the identification stage.

  9. The average motion of a charged particle in a dipole field

    NASA Technical Reports Server (NTRS)

    Chen, A. J.; Stern, D. P.

    1974-01-01

    The numerical representation of the average motion of a charged particle trapped in a geomagnetic field is developed. An assumption is made of the conservation of the first two adiabatic invariants where integration is along a field line between mirror points. The averaged motion also involved the parameters defining the magnetic field line to which the particle is attached. Methods involved in obtaining the motion in the equatorial plane of model magnetospheres are based on Hamiltonian functions. The restrictions imposed by the special nature of the dipole field are defined.

  10. Rapid model building of beta-sheets in electron-density maps.

    PubMed

    Terwilliger, Thomas C

    2010-03-01

    A method for rapidly building β-sheets into electron-density maps is presented. β-strands are identified as tubes of high density adjacent to and nearly parallel to other tubes of density. The alignment and direction of each strand are identified from the pattern of high density corresponding to carbonyl and Cβ atoms along the strand, averaged over all repeats present in the strand. The β-strands obtained are then assembled into a single atomic model of the β-sheet regions. The method was tested on a set of 42 experimental electron-density maps at resolutions ranging from 1.5 to 3.8 Å. The β-sheet regions were nearly completely built in all but two cases, the exceptions being one structure at 2.5 Å resolution in which a third of the residues in β-sheets were built and a structure at 3.8 Å in which under 10% were built. The overall average r.m.s.d. of main-chain atoms in the residues built using this method, compared with refined models of the structures, was 1.5 Å.

  11. Stresses and elastic constants of crystalline sodium, from molecular dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiferl, S.K.

    1985-02-01

    The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures up to T = 340 K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry. 45 refs., 10 figs., 4 tabs.

  12. Simulation of streamflow and estimation of recharge to the Edwards aquifer in the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds, south-central Texas, 1951-2003

    USGS Publications Warehouse

    Ockerman, Darwin J.

    2005-01-01

    The U.S. Geological Survey, in cooperation with the San Antonio Water System, constructed three watershed models using the Hydrological Simulation Program—FORTRAN (HSPF) to simulate streamflow and estimate recharge to the Edwards aquifer in the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds in south-central Texas. The three models were calibrated and tested with available data collected during 1992–2003. Simulations of streamflow and recharge were done for 1951–2003. The approach to construct the models was to first calibrate the Hondo Creek model (with an hourly time step) using 1992–99 data and test the model using 2000–2003 data. The Hondo Creek model parameters then were applied to the Verde Creek and San Geronimo Creek watersheds to construct the Verde Creek and San Geronimo Creek models. The simulated streamflows for Hondo Creek are considered acceptable. Annual, monthly, and daily simulated streamflows adequately match measured values, but simulated hourly streamflows do not. The accuracy of streamflow simulations for Verde Creek is uncertain. For San Geronimo Creek, the match of measured and simulated annual and monthly streamflows is acceptable (or nearly so); but for daily and hourly streamflows, the calibration is relatively poor. Simulated average annual total streamflow for 1951–2003 for Hondo Creek, Verde Creek, and San Geronimo Creek is 45,400; 32,400; and 11,100 acre-feet, respectively. Simulated average annual streamflow at the respective watershed outlets is 13,000; 16,200; and 6,920 acre-feet. The difference between total streamflow and streamflow at the watershed outlet is streamflow lost to channel infiltration. Estimated average annual Edwards aquifer recharge for the Hondo Creek, Verde Creek, and San Geronimo Creek watersheds for 1951–2003 is 37,900 acre-feet (5.04 inches), 26,000 acre-feet (3.36 inches), and 5,940 acre-feet (1.97 inches), respectively. Most of the recharge (about 77 percent for the three watersheds together) occurs as streamflow channel infiltration. Diffuse recharge (direct infiltration of rainfall to the aquifer) accounts for the remaining 23 percent of recharge. For the Hondo Creek watershed, the HSPF recharge estimates for 1992–2003 averaged about 22 percent less than those estimated by the Puente method, a method the U.S. Geological Survey has used to compute annual recharge to the Edwards aquifer since 1978. HSPF recharge estimates for the Verde Creek watershed average about 40 percent less than those estimated by the Puente method.

  13. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    NASA Astrophysics Data System (ADS)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.

  14. A Bayesian model averaging approach for estimating the relative risk of mortality associated with heat waves in 105 U.S. cities.

    PubMed

    Bobb, Jennifer F; Dominici, Francesca; Peng, Roger D

    2011-12-01

    Estimating the risks heat waves pose to human health is a critical part of assessing the future impact of climate change. In this article, we propose a flexible class of time series models to estimate the relative risk of mortality associated with heat waves and conduct Bayesian model averaging (BMA) to account for the multiplicity of potential models. Applying these methods to data from 105 U.S. cities for the period 1987-2005, we identify those cities having a high posterior probability of increased mortality risk during heat waves, examine the heterogeneity of the posterior distributions of mortality risk across cities, assess sensitivity of the results to the selection of prior distributions, and compare our BMA results to a model selection approach. Our results show that no single model best predicts risk across the majority of cities, and that for some cities heat-wave risk estimation is sensitive to model choice. Although model averaging leads to posterior distributions with increased variance as compared to statistical inference conditional on a model obtained through model selection, we find that the posterior mean of heat wave mortality risk is robust to accounting for model uncertainty over a broad class of models. © 2011, The International Biometric Society.
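
    For intuition, a common shortcut to Bayesian model averaging weights uses the BIC approximation to the posterior model probabilities. The sketch below, with made-up BIC and relative-risk values for four hypothetical candidate models, shows the weighting and averaging steps; it is not the full hierarchical BMA used in the paper.

      import numpy as np

      def bma_weights(bic):
          """BIC-approximated Bayesian model averaging weights,
          w_k proportional to exp(-0.5 * delta_BIC_k)."""
          delta = np.asarray(bic, dtype=float) - np.min(bic)
          w = np.exp(-0.5 * delta)
          return w / w.sum()

      # hypothetical BIC scores and relative-risk estimates for four candidate models
      bic = [1012.4, 1010.9, 1015.2, 1011.6]
      rr = np.array([1.05, 1.08, 1.02, 1.07])
      w = bma_weights(bic)
      print("model weights:", np.round(w, 3))
      print("model-averaged relative risk:", round(float(np.sum(w * rr)), 3))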

  15. Statistical Approaches for Spatiotemporal Prediction of Low Flows

    NASA Astrophysics Data System (ADS)

    Fangmann, A.; Haberlandt, U.

    2017-12-01

    An adequate assessment of regional climate change impacts on streamflow requires the integration of various sources of information and modeling approaches. This study proposes simple statistical tools for inclusion in model ensembles, which are fast and straightforward in their application, yet able to yield accurate streamflow predictions in time and space. Target variables for all approaches are annual low flow indices derived from a data set of 51 records of average daily discharge for northwestern Germany. The models require input of climatic data in the form of meteorological drought indices, derived from observed daily climatic variables averaged over the streamflow gauges' catchment areas. Four different modeling approaches are analyzed. The basis for all of them is multiple linear regression models that estimate low flows as a function of a set of meteorological indices and/or physiographic and climatic catchment descriptors. For the first method, individual regression models are fitted at each station, predicting annual low flow values from a set of annual meteorological indices, which are subsequently regionalized using a set of catchment characteristics. The second method combines temporal and spatial prediction within a single panel data regression model, allowing estimation of annual low flow values from input of both annual meteorological indices and catchment descriptors. The third and fourth methods represent non-stationary low flow frequency analyses and require fitting of regional distribution functions. Method three is subject to a spatiotemporal prediction of an index value, method four to estimation of L-moments that adapt the regional frequency distribution to the at-site conditions. The results show that method two outperforms successive prediction in time and space. Method three also shows high performance in the near-future period, but since it relies on a stationary distribution, its application for prediction of far-future changes may be problematic. Spatiotemporal prediction of L-moments appeared highly uncertain for higher-order moments, resulting in unrealistic future low flow values. All in all, the results promote the inclusion of simple statistical methods in climate change impact assessment.

  16. Climate Change Impacts at Department of Defense Installations

    DTIC Science & Technology

    2017-06-16

    ... The ease of use of this method and its flexibility have led to a wide variety of applications for assessing impacts of climate change ... versions of these statistical methods provide the basis for regional climate assessments for various states, regions, and government agencies ... the reliability ensemble averaging (REA) method proposed by Giorgi and Mearns (2002) assigns reliability classifications for the multi-model ensemble simulation ...

  17. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.

  18. Multi-Component Profiling of Trace Volatiles in Blood by Gas Chromatography/Mass Spectrometry with Dynamic Headspace Extraction

    PubMed Central

    Kakuta, Shoji; Yamashita, Toshiyuki; Nishiumi, Shin; Yoshida, Masaru; Fukusaki, Eiichiro; Bamba, Takeshi

    2015-01-01

    A dynamic headspace extraction (DHS) method with high-pressure injection is described. This dynamic extraction method has superior sensitivity to solid-phase microextraction (SPME) and is capable of extracting the entire gas phase by purging the headspace of a vial. Optimization of the DHS parameters resulted in a highly sensitive volatile profiling system with the ability to detect various volatile components, including alcohols, at nanogram levels. The average LOD for a standard volatile mixture was 0.50 ng mL−1, and the average LOD for alcohols was 0.66 ng mL−1. This method was used for the analysis of volatile components from biological samples, and acute and chronic inflammation models were compared. The method permitted the identification of volatiles with the same profile pattern as in vitro oxidized lipid-derived volatiles. In addition, the concentrations of alcohols and aldehydes from the acute inflammation model samples were significantly higher than those from the chronic inflammation model samples. The different profiles between these samples could also be identified by this method. Finally, it was possible to analyze, with high sensitivity, alcohols and low-molecular-weight volatiles that are difficult to analyze by SPME, and to show volatile profiling based on multi-volatile simultaneous analysis. PMID:26819905

  19. Multi-Component Profiling of Trace Volatiles in Blood by Gas Chromatography/Mass Spectrometry with Dynamic Headspace Extraction.

    PubMed

    Kakuta, Shoji; Yamashita, Toshiyuki; Nishiumi, Shin; Yoshida, Masaru; Fukusaki, Eiichiro; Bamba, Takeshi

    2015-01-01

    A dynamic headspace extraction (DHS) method with high-pressure injection is described. This dynamic extraction method has superior sensitivity to solid-phase microextraction (SPME) and is capable of extracting the entire gas phase by purging the headspace of a vial. Optimization of the DHS parameters resulted in a highly sensitive volatile profiling system with the ability to detect various volatile components, including alcohols, at nanogram levels. The average LOD for a standard volatile mixture was 0.50 ng mL(-1), and the average LOD for alcohols was 0.66 ng mL(-1). This method was used for the analysis of volatile components from biological samples, and acute and chronic inflammation models were compared. The method permitted the identification of volatiles with the same profile pattern as in vitro oxidized lipid-derived volatiles. In addition, the concentrations of alcohols and aldehydes from the acute inflammation model samples were significantly higher than those from the chronic inflammation model samples. The different profiles between these samples could also be identified by this method. Finally, it was possible to analyze, with high sensitivity, alcohols and low-molecular-weight volatiles that are difficult to analyze by SPME, and to show volatile profiling based on multi-volatile simultaneous analysis.

  20. Some effects of quiet geomagnetic field changes upon values used for main field modeling

    USGS Publications Warehouse

    Campbell, W.H.

    1987-01-01

    The effects of three methods of data selection upon the assumed main field levels for geomagnetic observatory records used in main field modeling were investigated for a year of very low solar-terrestrial activity. The first method concerned the differences between the year's average of quiet-day field values and the average of all values during the year. For H these differences were 2-3 gammas, for D they were -0.04 to -0.12 arcmin, and for Z the differences were negligible. The second method of selection concerned the effects of the daytime internal Sq variations upon the daily mean values of the field. The midnight field levels, when the Sq currents were at a minimum, deviated from the daily mean levels by as much as 4-7 gammas in H and Z but negligibly in D. The third method of selection was designed to avoid the annual and semi-annual quiet-level changes of the field caused by the seasonal changes in the magnetosphere. Contributions from these changes were found to be as much as 4-7 gammas in quiet years and are expected to be greater than 10 gammas in active years. Suggestions for improved methods of data selection in main field modeling are given. © 1987.

  1. A mathematical model of Clostridium difficile transmission in medical wards and a cost-effectiveness analysis comparing different strategies for laboratory diagnosis and patient isolation

    PubMed Central

    Carmeli, Yehuda; Leshno, Moshe

    2017-01-01

    Background Clostridium difficile infection (CDI) is a common and potentially fatal healthcare-associated infection. Improving diagnostic tests and infection control measures may prevent transmission. We aimed to determine, in resource-limited settings, whether it is more effective and cost-effective to allocate resources to isolation or to diagnostics. Methods We constructed a mathematical model of CDI transmission based on hospital data (9 medical wards, 350 beds) between March 2010 and February 2013. The model consisted of three compartments: susceptible patients, asymptomatic carriers and CDI patients. We used our model results to perform a cost-effectiveness analysis, comparing four strategies that were different combinations of 2 test methods (the two-step test and uniform PCR) and 2 infection control measures (contact isolation in multiple-bed rooms or single-bed rooms/cohorting). For each strategy, we calculated the annual cost (of CDI diagnosis and isolation) for a decrease of 1 in the average daily number of CDI patients; the strategy of the two-step test and contact isolation in multiple-bed rooms was the reference strategy. Results Our model showed that the average number of CDI patients increased exponentially as the transmission rate increased. Improving diagnosis by adopting uniform PCR assay reduced the average number of CDI cases per day per 350 beds from 9.4 to 8.5, while improving isolation by using single-bed rooms reduced the number to about 1; the latter was cost saving. Conclusions CDI can be decreased by better isolation and more sensitive laboratory methods. From the hospital perspective, improving isolation is more cost-effective than improving diagnostics. PMID:28187144
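
    A toy version of such a three-compartment ward model can be written as a small ODE system; the flows and all parameter values below are illustrative assumptions, not the calibrated hospital model, and serve only to show how the average daily number of CDI patients emerges from the transmission parameters.

      import numpy as np
      from scipy.integrate import solve_ivp

      def cdi_ward(t, y, beta_c, beta_i, theta, gamma, mu, N):
          """Toy three-compartment ward model: susceptible S, asymptomatic carriers C,
          CDI patients I.  Flows and parameter values are illustrative assumptions."""
          S, C, I = y
          infection = (beta_c * C + beta_i * I) * S / N   # acquisition from carriers and cases
          dS = mu * (N - S) - infection                   # admissions replace discharged patients
          dC = infection - (theta + mu) * C               # carriers progress to CDI at rate theta
          dI = theta * C - (gamma + mu) * I               # CDI patients recover at rate gamma
          return [dS, dC, dI]

      N = 350.0                                           # bed census of the nine wards
      sol = solve_ivp(cdi_ward, (0.0, 365.0), [N - 20.0, 15.0, 5.0],
                      args=(0.12, 0.20, 0.03, 0.10, 0.10, N), max_step=1.0)
      print("rough average daily number of CDI patients:", round(float(sol.y[2].mean()), 2))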

  2. Electron scattering intensities and Patterson functions of Skyrmions

    NASA Astrophysics Data System (ADS)

    Karliner, M.; King, C.; Manton, N. S.

    2016-06-01

    The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.

  3. Estimation of Biomass and Canopy Height in Bermudagrass, Alfalfa, and Wheat Using Ultrasonic, Laser, and Spectral Sensors

    PubMed Central

    Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.

    2015-01-01

    Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to examine sensor biomass estimation (laser, ultrasonic, and spectral) as compared to physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression, and modeled estimates were compared to the physically measured biomass. Mean estimates separated by least significant difference were examined to evaluate differences between the physical measurements and sensor estimates for canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and machine- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average percent error of 89% for harvester- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average percent error of 18% for harvester- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that using mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415
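
    As a rough illustration of the modelling step, the sketch below fits a partial least squares regression relating a few sensor-derived predictors to harvested biomass; the feature names, data and coefficients are invented placeholders rather than the study's measurements.

```python
# Illustrative sketch of biomass prediction from sensor features via partial
# least squares regression; the features and data are synthetic placeholders,
# not the study's measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.uniform(5, 60, n),      # ultrasonic canopy height (cm), placeholder
    rng.uniform(5, 60, n),      # laser canopy height (cm), placeholder
    rng.uniform(0.2, 0.9, n),   # spectral vegetation index, placeholder
])
# invented "true" biomass relationship (t/ha) plus noise
y = 1.0 + 0.08 * X[:, 0] + 0.05 * X[:, 1] + 3.0 * X[:, 2] + rng.normal(0, 0.5, n)

pls = PLSRegression(n_components=2)
y_hat = cross_val_predict(pls, X, y, cv=5).ravel()
print("mean absolute percent error: %.1f%%" % (100 * np.mean(np.abs(y_hat - y) / y)))
```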

  4. The use of wavenumber normalization in computing spatially averaged coherencies (KRSPAC) of microtremor data from asymmetric arrays

    USGS Publications Warehouse

    Asten, M.W.; Stephenson, William J.; Hartzell, Stephen

    2015-01-01

    The SPAC method of processing microtremor noise observations for estimation of Vs profiles has the limitation that the array must have circular or triangular symmetry in order to allow spatial (azimuthal) averaging of inter-station coherencies over a constant station separation. Common processing methods allow station separations to vary by typically ±10% in the azimuthal averaging before degradation of the SPAC spectrum is excessive. A limitation on the use of high wavenumbers in inversions of SPAC spectra to Vs profiles has been the requirement for exact array symmetry to avoid loss of information in the azimuthal averaging step. In this paper we develop a new wavenumber-normalised SPAC method (KRSPAC) in which, instead of averaging sets of coherency versus frequency spectra and then fitting to a model SPAC spectrum, we interpolate each spectrum to coherency versus k.r, where k and r are wavenumber and station separation respectively, and r may be different for each pair of stations. For fundamental-mode Rayleigh-wave energy the model SPAC spectrum to be fitted reduces to J0(kr). The normalization changes with each iteration, since k is a function of frequency and phase velocity and hence is updated at each iteration. The method proves robust and is demonstrated on data acquired in the Santa Clara Valley, CA (Site STGA), where an asymmetric array having station separations varying by a factor of 2 is compared with a conventional triangular array; a 300-m-deep borehole with a downhole Vs log provides nearby ground truth. The method is also demonstrated on data from the Pleasanton array, CA, where station spacings are irregular and vary from 400 to 1200 m. The KRSPAC method allows inversion of data using kr (unitless) values routinely up to 30, and occasionally up to 60. Thus, despite the large and irregular station spacings, this array permits resolution of Vs as fine as 15 m for the near-surface sediments, and down to a maximum depth of 2.5 km.
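
    The core of the wavenumber-normalisation idea, mapping each pair's coherency spectrum onto a common k·r axis and comparing the pooled curve with J0(kr), can be sketched as below; the trial dispersion curve, station separations and synthetic coherencies are assumptions for illustration only.

```python
# Sketch of the wavenumber-normalisation idea: map each station pair's coherency
# from frequency to k*r using a trial phase-velocity curve, then compare the
# pooled curve against J0(kr). Dispersion and coherencies are synthetic placeholders.
import numpy as np
from scipy.special import j0

rng = np.random.default_rng(0)
freqs = np.linspace(0.5, 10.0, 200)                # Hz
phase_velocity = 800.0 - 40.0 * np.log(freqs)      # assumed trial dispersion curve (m/s)
k = 2 * np.pi * freqs / phase_velocity             # wavenumber (rad/m)

separations = [250.0, 400.0, 730.0]                # irregular station spacings (m), invented
kr_axis = np.linspace(0.1, 30.0, 300)

pooled = []
for r in separations:
    coh = j0(k * r) + rng.normal(0, 0.05, freqs.size)   # synthetic observed coherency
    pooled.append(np.interp(kr_axis, k * r, coh))       # interpolate onto common k*r axis

misfit = np.mean((np.mean(pooled, axis=0) - j0(kr_axis)) ** 2)
print("RMS misfit to J0(kr): %.3f" % np.sqrt(misfit))
```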

  5. Quantifying O3 Impacts in Urban Areas Due to Wildfires Using a Generalized Additive Model.

    PubMed

    Gong, Xi; Kaulfus, Aaron; Nair, Udaysankar; Jaffe, Daniel A

    2017-11-21

    Wildfires emit O3 precursors but there are large variations in emissions, plume heights, and photochemical processing. These factors make it challenging to model O3 production from wildfires using Eulerian models. Here we describe a statistical approach to characterize the maximum daily 8-h average O3 (MDA8) for 8 cities in the U.S. for typical, nonfire, conditions. The statistical model represents between 35% and 81% of the variance in MDA8 for each city. We then examine the residual from the model under conditions with elevated particulate matter (PM) and satellite observed smoke ("smoke days"). For these days, the residuals are elevated by an average of 3-8 ppb (MDA8) compared to nonsmoke days. We found that while smoke days are only 4.1% of all days (May-Sept) they are 19% of days with an MDA8 greater than 75 ppb. We also show that a published method that does not account for transport patterns gives rise to large overestimates in the amount of O3 from fires, particularly for coastal cities. Finally, we apply this method to a case study from August 2015, and show that the method gives results that are directly applicable to the EPA guidance on excluding data due to an uncontrollable source.

  6. Increased performance in the short-term water demand forecasting through the use of a parallel adaptive weighting strategy

    NASA Astrophysics Data System (ADS)

    Sardinha-Lourenço, A.; Andrade-Campos, A.; Antunes, A.; Oliveira, M. S.

    2018-03-01

    Recent research on short-term water demand forecasting has shown that models using univariate time series based on historical data are useful and can be combined with other prediction methods to reduce errors. Water demand in drinking water distribution networks is highly repetitive and, under similar meteorological conditions and consumer behavior, allows the development of a heuristic forecast model that, combined with other autoregressive models, can provide reliable forecasts. In this study, a parallel adaptive weighting strategy for forecasting water consumption over the next 24-48 h, using univariate time series of potable water consumption, is proposed. Two Portuguese potable water distribution networks are used as case studies, where the only input data are water consumption and the national calendar. For the development of the strategy, the Autoregressive Integrated Moving Average (ARIMA) method and a short-term forecast heuristic algorithm are used. Simulations with the model showed that, when using a parallel adaptive weighting strategy, the prediction error can be reduced by 15.96% and the average error by 9.20%. This reduction is important in the control and management of water supply systems. The proposed methodology can be extended to other forecast methods, especially when multiple forecast models are available.
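
    A minimal sketch of the weighting idea, combining an ARIMA forecast with a simple heuristic using weights derived from their recent errors, is given below; the adaptation rule, synthetic demand series and model order are illustrative assumptions rather than the authors' scheme.

```python
# Minimal sketch of adaptively weighting two 24-h-ahead forecasters (an ARIMA
# model and a simple "same hour last week" heuristic) by their recent inverse
# errors. The weighting rule and data are illustrative, not the paper's scheme.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                          # 60 days of hourly data
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

train, test = demand[:-24], demand[-24:]
arima_fc = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=24)
heuristic_fc = demand[-24 * 8:-24 * 7]              # same 24 h one week earlier

# weights from inverse mean absolute error over the previous day
prev_truth = demand[-48:-24]
e_arima = np.mean(np.abs(ARIMA(demand[:-48], order=(2, 0, 1)).fit().forecast(24) - prev_truth))
e_heur = np.mean(np.abs(demand[-24 * 9:-24 * 8] - prev_truth))
w = np.array([1 / e_arima, 1 / e_heur])
w /= w.sum()

combined = w[0] * arima_fc + w[1] * heuristic_fc
print("MAE of combined forecast: %.2f" % np.mean(np.abs(combined - test)))
```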

  7. Modeling transit bus fuel consumption on the basis of cycle properties.

    PubMed

    Delgado, Oscar F; Clark, Nigel N; Thompson, Gregory J

    2011-04-01

    A method exists to predict heavy-duty vehicle fuel economy and emissions over an "unseen" cycle or during unseen on-road activity on the basis of fuel consumption and emissions data from measured chassis dynamometer test cycles and properties (statistical parameters) of those cycles. No regression is required for the method, which relies solely on the linear association of vehicle performance with cycle properties. This method has been advanced and examined using previously published heavy-duty truck data gathered using the West Virginia University heavy-duty chassis dynamometer with the trucks exercised over limited test cycles. In this study, data were available from a Washington Metropolitan Area Transit Authority emission testing program conducted in 2006. Chassis dynamometer data from two conventional diesel buses, two compressed natural gas buses, and one hybrid diesel bus were evaluated using an expanded driving cycle set of 16 or 17 different driving cycles. Cycle properties and vehicle fuel consumption measurements from three baseline cycles were selected to generate a linear model and then to predict unseen fuel consumption over the remaining 13 or 14 cycles. Average velocity, average positive acceleration, and number of stops per distance were found to be the desired cycle properties for use in the model. The methodology allowed for the prediction of fuel consumption with an average error of 8.5% from vehicles operating on a diverse set of chassis dynamometer cycles on the basis of relatively few experimental measurements. It was found that the data used for prediction should be acquired from a set that must include an idle cycle along with a relatively slow transient cycle and a relatively high speed cycle. The method was also applied to oxides of nitrogen prediction and was found to have less predictive capability than for fuel consumption with an average error of 20.4%.
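
    The prediction step amounts to a small linear system in the cycle properties; a sketch under invented numbers (the vehicle data and property values are not from the study) is shown below.

```python
# Sketch of the cycle-property approach: fuel consumption is expressed as a linear
# combination of average velocity, average positive acceleration and stops per
# distance, fitted from three baseline cycles and applied to an unseen cycle.
# All numbers are invented for illustration.
import numpy as np

# columns: [avg velocity (km/h), avg positive accel (m/s^2), stops per km]
baseline_props = np.array([
    [ 8.0, 0.55, 3.2],   # idle-heavy, slow transient cycle
    [25.0, 0.40, 1.1],   # moderate transient cycle
    [60.0, 0.20, 0.1],   # high-speed cycle
])
baseline_fuel = np.array([62.0, 45.0, 30.0])           # measured L/100 km (invented)

coeffs, *_ = np.linalg.lstsq(baseline_props, baseline_fuel, rcond=None)

unseen_cycle = np.array([18.0, 0.45, 1.8])             # properties of an unseen cycle
print("predicted fuel consumption: %.1f L/100 km" % (unseen_cycle @ coeffs))
```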

  8. Studies in astronomical time series analysis. I - Modeling random processes in the time domain

    NASA Technical Reports Server (NTRS)

    Scargle, J. D.

    1981-01-01

    Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.

  9. Multi-Aperture-Based Probabilistic Noise Reduction of Random Telegraph Signal Noise and Photon Shot Noise in Semi-Photon-Counting Complementary-Metal-Oxide-Semiconductor Image Sensor

    PubMed Central

    Ishida, Haruki; Kagawa, Keiichiro; Komuro, Takashi; Zhang, Bo; Seo, Min-Woong; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    A probabilistic method to remove the random telegraph signal (RTS) noise and to increase the signal level is proposed, and was verified by simulation based on measured real sensor noise. Although semi-photon-counting-level (SPCL) ultra-low-noise complementary-metal-oxide-semiconductor (CMOS) image sensors (CISs) with high conversion gain pixels have emerged, they still suffer from large RTS noise, which is inherent to CISs. The proposed method utilizes a multi-aperture (MA) camera that is composed of multiple sets of an SPCL CIS and a moderately fast and compact imaging lens to emulate a very fast single lens. Due to the redundancy of the MA camera, the RTS noise is removed by maximum likelihood estimation, where the noise characteristics are modeled by a probability density distribution. In the proposed method, the photon shot noise is also relatively reduced because of the averaging effect, where the pixel values of all the multiple apertures are considered. An extremely low-light condition, in which the maximum number of electrons per aperture was only 2 e−, was simulated. PSNRs of a test image for simple averaging, selective averaging (our previous method), and the proposed method were 11.92 dB, 11.61 dB, and 13.14 dB, respectively. The selective averaging, which can remove RTS noise, was worse than the simple averaging because it ignores the pixels with RTS noise and the photon shot noise was less improved. The simulation results showed that the proposed method provided the best noise reduction performance. PMID:29587424

  10. Instrumental Variable Analysis with a Nonlinear Exposure–Outcome Relationship

    PubMed Central

    Davies, Neil M.; Thompson, Simon G.

    2014-01-01

    Background: Instrumental variable methods can estimate the causal effect of an exposure on an outcome using observational data. Many instrumental variable methods assume that the exposure–outcome relation is linear, but in practice this assumption is often in doubt, or perhaps the shape of the relation is a target for investigation. We investigate this issue in the context of Mendelian randomization, the use of genetic variants as instrumental variables. Methods: Using simulations, we demonstrate the performance of a simple linear instrumental variable method when the true shape of the exposure–outcome relation is not linear. We also present a novel method for estimating the effect of the exposure on the outcome within strata of the exposure distribution. This enables the estimation of localized average causal effects within quantile groups of the exposure or as a continuous function of the exposure using a sliding window approach. Results: Our simulations suggest that linear instrumental variable estimates approximate a population-averaged causal effect. This is the average difference in the outcome if the exposure for every individual in the population is increased by a fixed amount. Estimates of localized average causal effects reveal the shape of the exposure–outcome relation for a variety of models. These methods are used to investigate the relations between body mass index and a range of cardiovascular risk factors. Conclusions: Nonlinear exposure–outcome relations should not be a barrier to instrumental variable analyses. When the exposure–outcome relation is not linear, either a population-averaged causal effect or the shape of the exposure–outcome relation can be estimated. PMID:25166881

  11. Multi-criteria, personalized route planning using quantifier-guided ordered weighted averaging operators

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Delavar, M. R.

    2011-06-01

    This paper presents a generic model for using different decision strategies in multi-criteria, personalized route planning. Some researchers have considered user preferences in navigation systems. However, these prior studies typically employed a high tradeoff decision strategy, which used a weighted linear aggregation rule, and neglected other decision strategies. The proposed model integrates a pairwise comparison method and quantifier-guided ordered weighted averaging (OWA) aggregation operators to form a personalized route planning method that incorporates different decision strategies. The model can be used to calculate the impedance of each link regarding user preferences in terms of the route criteria, criteria importance and the selected decision strategy. Regarding the decision strategy, the calculated impedance lies between aggregations that use a logical "and" (which requires all the criteria to be satisfied) and a logical "or" (which requires at least one criterion to be satisfied). The calculated impedance also includes taking the average of the criteria scores. The model results in multiple alternative routes, which apply different decision strategies and provide users with the flexibility to select one of them en-route based on the real world situation. The model also defines the robust personalized route under different decision strategies. The influence of different decision strategies on the results are investigated in an illustrative example. This model is implemented in a web-based geographical information system (GIS) for Isfahan in Iran and verified in a tourist routing scenario. The results demonstrated, in real world situations, the validity of the route planning carried out in the model.
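
    As a concrete illustration of quantifier-guided OWA aggregation, the sketch below derives weights from a regular increasing monotone quantifier Q(r) = r^alpha and applies them to a link's criterion scores; the criteria and scores are hypothetical.

```python
# Sketch of quantifier-guided ordered weighted averaging (OWA) for one road link.
# Weights come from a regular increasing monotone quantifier Q(r) = r**alpha:
# alpha > 1 leans toward an "and-like" (pessimistic) strategy, alpha < 1 toward
# "or-like" (optimistic), and alpha = 1 gives the plain average. Scores are invented.
import numpy as np

def owa(scores, alpha):
    n = len(scores)
    ordered = np.sort(scores)[::-1]                       # scores in descending order
    i = np.arange(1, n + 1)
    weights = (i / n) ** alpha - ((i - 1) / n) ** alpha   # w_i = Q(i/n) - Q((i-1)/n)
    return float(np.dot(weights, ordered))

link_scores = np.array([0.9, 0.4, 0.7])   # e.g. scenery, travel time, safety (normalised)
for alpha in (0.5, 1.0, 3.0):
    print(f"alpha={alpha}: OWA score = {owa(link_scores, alpha):.3f}")
```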

  12. Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting

    NASA Astrophysics Data System (ADS)

    Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.

    2018-04-01

    Unpredictable rainfall changes can affect human activities, such as agriculture, aviation, and shipping, which depend on weather forecasts. Therefore, we need forecasting tools with high accuracy in predicting future rainfall. This research focuses on local forecasting of rainfall at Jember from 2005 to 2016, using data from 77 rainfall stations. Rainfall at a station was related not only to the previous occurrences at that station, but also to those at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are spatial correlations between stations. The GSTAR model is an extension of the space-time model that combines time-related effects, location (station) effects within the time series, and the locations themselves. The GSTAR model is also compared to the ARIMA model, which ignores the spatial effects. The forecast values of the ARIMA and GSTAR models were then combined using ensemble forecasting techniques. The averaging and stacking ensemble methods provide the best model, with higher accuracy and a smaller RMSE (root mean square error). Finally, the best model offers improved local rainfall forecasting for Jember in the future.
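
    A compact sketch of the averaging and stacking steps for two member forecasts (standing in for the ARIMA and GSTAR outputs) is given below; the series and the least-squares stacking rule are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch of ensemble averaging vs. stacking for two member forecasts (standing in
# for ARIMA and GSTAR forecasts). Stacking weights are learned by least squares
# on a training window; all series here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
obs = 100 + 30 * np.sin(np.linspace(0, 8 * np.pi, 120)) + rng.normal(0, 5, 120)
fc1 = obs + rng.normal(0, 8, 120)          # stand-in for ARIMA forecasts
fc2 = 0.9 * obs + rng.normal(0, 6, 120)    # stand-in for GSTAR forecasts

train, test = slice(0, 96), slice(96, 120)
X_train = np.column_stack([np.ones(96), fc1[train], fc2[train]])
beta, *_ = np.linalg.lstsq(X_train, obs[train], rcond=None)   # stacking weights

avg_fc = 0.5 * (fc1[test] + fc2[test])
stack_fc = np.column_stack([np.ones(24), fc1[test], fc2[test]]) @ beta

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print("RMSE of averaging:", rmse(avg_fc - obs[test]))
print("RMSE of stacking: ", rmse(stack_fc - obs[test]))
```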

  13. Time Series Modelling of Syphilis Incidence in China from 2005 to 2012

    PubMed Central

    Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau

    2016-01-01

    Background The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results The syphilis incidence rates have increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit results with the ARIMA(0,0,1)×(0,1,1) model. Conclusion Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682

  14. Incorporating wind availability into land use regression modelling of air quality in mountainous high-density urban environment.

    PubMed

    Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward

    2017-08-01

    Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments, but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven air pollutants (the gaseous air pollutants CO, NO2, NOx, O3, SO2 and the particulate air pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross validation has been performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Earthquakes Magnitude Predication Using Artificial Neural Network in Northern Red Sea Area

    NASA Astrophysics Data System (ADS)

    Alarifi, A. S.; Alarifi, N. S.

    2009-12-01

    Earthquakes are natural hazards that do not happen very often, however they may cause huge losses in life and property. Early preparation for these hazards is a key factor to reduce their damage and consequences. Since early ages, people have tried to predict earthquakes using simple observations such as strange or atypical animal behavior. In this paper, we study data collected from an existing earthquake catalogue to give better forecasting of future earthquakes. The 16,000 events cover a time span of 1970 to 2009; the magnitudes range from greater than 0 to less than 7.2, while the depths range from greater than 0 to less than 100 km. We propose a new artificial intelligence prediction system based on an artificial neural network, which can be used to predict the magnitude of future earthquakes in the northern Red Sea area, including the Sinai Peninsula, the Gulf of Aqaba, and the Gulf of Suez. We propose a new feed-forward neural network model with multiple hidden layers to predict earthquake occurrences and magnitudes in the northern Red Sea area. Although similar models have been published before for different areas, to the best of our knowledge this is the first neural network model to predict earthquakes in the northern Red Sea area. Furthermore, we present other forecasting methods such as moving averages over different intervals, a normally distributed random predictor, and a uniformly distributed random predictor. In addition, we present different statistical methods and data fitting techniques such as linear, quadratic, and cubic regression. We present a detailed performance analysis of the proposed methods for different evaluation metrics. The results show that the neural network model provides higher forecast accuracy than the other proposed methods, achieving an average absolute error of 2.6%, compared with 3.8%, 7.3% and 6.17% for the moving average, linear regression and cubic regression, respectively. In this work, we also show an analysis of the earthquake data in the northern Red Sea area for different statistical parameters such as correlation, mean, and standard deviation. This analysis provides a deeper understanding of the seismicity of the area and its existing patterns.

  16. Mean field treatment of heterogeneous steady state kinetics

    NASA Astrophysics Data System (ADS)

    Geva, Nadav; Vaissier, Valerie; Shepherd, James; Van Voorhis, Troy

    2017-10-01

    We propose a method to quickly compute steady state populations of species undergoing a set of chemical reactions whose rate constants are heterogeneous. Using an average environment in place of an explicit nearest neighbor configuration, we obtain a set of equations describing a single fluctuating active site in the presence of an averaged bath. We apply this Mean Field Steady State (MFSS) method to a model of H2 production on a disordered surface for which the activation energy for the reaction varies from site to site. The MFSS populations quantitatively reproduce the kinetic Monte Carlo (KMC) results across the range of rate parameters considered.

  17. Study of CdTe quantum dots grown using a two-step annealing method

    NASA Astrophysics Data System (ADS)

    Sharma, Kriti; Pandey, Praveen K.; Nagpal, Swati; Bhatnagar, P. K.; Mathur, P. C.

    2006-02-01

    High size dispersion, large average quantum dot radius and low volume ratio have been major hurdles in the development of quantum dot based devices. In the present paper, we have grown CdTe quantum dots in a borosilicate glass matrix using a two-step annealing method. Results of optical characterization and a theoretical model of the absorption spectra show that quantum dots grown using two-step annealing have a smaller average radius, lower size dispersion, higher volume ratio and a larger decrease in bulk free energy compared with quantum dots grown conventionally.

  18. Long-term predictive capability of erosion models

    NASA Technical Reports Server (NTRS)

    Veerabhadra, P.; Buckley, D. H.

    1983-01-01

    A brief overview of long-term cavitation and liquid impingement erosion and of the modeling methods proposed by different investigators, including the curve-fit approach, is presented. A table was prepared to highlight the number of variables necessary for each model in order to compute the erosion-versus-time curves. A power law relation based on the average erosion rate is suggested, which may solve several modeling problems.

  19. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
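
    To make the weighting step concrete, here is a simplified non-local-means sketch in which raw patch intensities stand in for the NMI features extracted from the SCM pulse outputs; the frames, patch sizes and decay parameter are synthetic assumptions, not the paper's configuration.

```python
# Simplified sketch of the non-local-means weighting step for one pixel in the
# considered frame: similarity weights are computed from Euclidean distances
# between patch features and used for weighted averaging across three frames.
# Raw patch intensities stand in here for the paper's NMI features; data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
frames = rng.normal(0.5, 0.1, (3, 64, 64))          # three neighbouring OCT frames (synthetic)
half_patch, half_search, h = 2, 5, 0.2              # patch radius, search radius, decay

def patch(img, r, c, hp=half_patch):
    return img[r - hp:r + hp + 1, c - hp:c + hp + 1].ravel()

r0, c0 = 32, 32
ref = patch(frames[1], r0, c0)                      # considered frame is the middle one
num, den = 0.0, 0.0
for f in frames:
    for r in range(r0 - half_search, r0 + half_search + 1):
        for c in range(c0 - half_search, c0 + half_search + 1):
            d2 = np.mean((patch(f, r, c) - ref) ** 2)
            w = np.exp(-d2 / h ** 2)                # similarity weight
            num += w * f[r, c]
            den += w
print("restored pixel value: %.3f" % (num / den))
```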

  20. Estimation of dynamic time activity curves from dynamic cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Hossain, J.; Du, Y.; Links, J.; Rahmim, A.; Karakatsanis, N.; Akhbardeh, A.; Lyons, J.; Frey, E. C.

    2015-04-01

    Whole-heart coronary flow reserve (CFR) may be useful as an early predictor of cardiovascular disease or heart failure. Here we propose a simple method to extract the time-activity curve, an essential component needed for estimating the CFR, for a small number of compartments in the body, such as normal myocardium, blood pool, and ischemic myocardial regions, from SPECT data acquired with conventional cameras using slow rotation. We evaluated the method using a realistic simulation of 99mTc-teboroxime imaging. Uptake of 99mTc-teboroxime was modeled based on data from the literature. Data were simulated using the anatomically realistic 3D NCAT phantom and an analytic projection code that realistically models attenuation, scatter, and the collimator-detector response. The proposed method was then applied to estimate time-activity curves (TACs) for a set of 3D volumes of interest (VOIs) directly from the projections. We evaluated the accuracy and precision of the estimated TACs and studied the effects of the presence of perfusion defects that were and were not modeled in the estimation procedure. The method produced good estimates of the myocardial and blood-pool TACs for the organ VOIs, with average weighted absolute biases of less than 5% for the myocardium and 10% for the blood pool when the true organ boundaries were known and the activity distributions in the organs were uniform. In the presence of unknown perfusion defects, the myocardial TAC was still estimated well (average weighted absolute bias <10%) when the total reduction in myocardial uptake (product of defect extent and severity) was ≤5%. This indicates that the method was robust to modest model mismatch such as the presence of moderate perfusion defects and uptake nonuniformities. With larger defects where the defect VOI was included in the estimation procedure, the estimated normal myocardial and defect TACs were accurate (average weighted absolute bias ≈5% for a defect with 25% extent and 100% severity).

  1. Fused methods for visual saliency estimation

    NASA Astrophysics Data System (ADS)

    Danko, Amanda S.; Lyu, Siwei

    2015-02-01

    In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements in recall and f-measure metrics with comparable precision. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than the searches using the original methods alone.

  2. The consideration of atmospheric stability within wind farm AEP calculations

    NASA Astrophysics Data System (ADS)

    Schmidt, Jonas; Chang, Chi-Yao; Dörenkämper, Martin; Salimi, Milad; Teichmann, Tim; Stoevesandt, Bernhard

    2016-09-01

    The annual energy production of an existing wind farm, including thermal stratification, is calculated with two different methods and compared to the average of three years of SCADA data. The first method is based on steady-state computational fluid dynamics simulations and the assumption of Reynolds similarity at hub height. The second method is a wake modelling calculation, where a new stratification transformation model was imposed on the Jensen and Ainslie wake models. The inflow states for both approaches were obtained from one year of WRF simulation data for the site. Although all models underestimate the mean wind speed and wake effects, the results from the phenomenological wake transformation are compatible with the high-fidelity simulation results.

  3. Spatiotemporal distributions of ambient oxides of nitrogen, with implications for exposure inequality and urban design.

    PubMed

    Yu, Haofei; Stuart, Amy L

    2013-08-01

    Intra-urban differences in concentrations of oxides of nitrogen (NO(x)) and exposure disparities in the Tampa area were investigated across temporal scales through emissions estimation, dispersion modeling, and analysis of residential subpopulation exposures. A hybrid estimation method was applied to provide link-level hourly on-road mobile source emissions. Ambient concentrations in 2002 at 1 km resolution were estimated using the CALPUFF dispersion model. Results were combined with residential demographic data at the block-group level, to investigate exposures and inequality for select racioethnic, age, and income population subgroups. Results indicate that on-road mobile sources contributed disproportionately to ground-level concentrations and dominated the spatial footprint across temporal scales (annual average to maximum hour). The black, lower income (less than $40K annually), and Hispanic subgroups had higher estimated exposures than the county average; the white and higher income (greater than $60K) subgroups had lower than average exposures. As annual average concentration increased, the disparity between groups generally increased. However for the highest 1-hr concentrations, reverse disparities were also found. Current studies of air pollution exposure inequality have not fully considered differences by time scale and are often limited in spatial resolution. The modeling methods and the results presented here can be used to improve understanding of potential impacts of urban growth form on health and to improve urban sustainability. Results suggest focusing urban design interventions on reducing on-road mobile source emissions in areas with high densities of minority and low income groups.

  4. Effect of the quartic gradient terms on the critical exponents of the Wilson-Fisher fixed point in O(N) models

    NASA Astrophysics Data System (ADS)

    Péli, Zoltán; Nagy, Sándor; Sailer, Kornel

    2018-02-01

    The effect of the O(∂⁴) terms of the gradient expansion on the anomalous dimension η and the correlation length's critical exponent ν of the Wilson-Fisher fixed point has been determined for the Euclidean 3-dimensional O(N) models with N ≥ 2. Wetterich's effective average action renormalization group method is used with field-independent derivative couplings and Litim's optimized regulator. It is shown that the critical theory is well approximated by the effective average action preserving O(N) symmetry with an accuracy of O(η).

  5. Single-Trial Normalization for Event-Related Spectral Decomposition Reduces Sensitivity to Noisy Trials

    PubMed Central

    Grandchamp, Romain; Delorme, Arnaud

    2011-01-01

    In electroencephalography, the classical event-related potential model often proves to be a limited method for studying complex brain dynamics. For this reason, spectral techniques adapted from signal processing, such as event-related spectral perturbation (ERSP) – and its variants event-related synchronization and event-related desynchronization – have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is no strong consensus on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimates of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We eventually present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time–frequency responses and behavior compared to classical ERSP methods. PMID:21994498
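
    The difference between the classical and single-trial corrections can be sketched on synthetic time-frequency power as below; the data, dB conversion and baseline window are illustrative, and the paper's full set of correction and normalization variants is richer than this.

```python
# Sketch of classical vs. single-trial ERSP baseline correction on synthetic
# time-frequency power (trials x freqs x times). The single-trial variant divides
# each trial by its own pre-stimulus baseline before averaging; details of the
# paper's correction methods may differ.
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_freqs, n_times, base_len = 50, 20, 100, 30
power = rng.gamma(shape=2.0, scale=1.0, size=(n_trials, n_freqs, n_times))
power[:, 5:8, 50:70] *= 1.5                         # simulated post-stimulus increase
power[3] *= 20                                      # one noisy, high-amplitude trial

# classical: average spectral power over trials, then divide by the baseline mean
avg = power.mean(axis=0)
ersp_classical = 10 * np.log10(avg / avg[:, :base_len].mean(axis=1, keepdims=True))

# single-trial: divide each trial by its own baseline mean, then average
base = power[:, :, :base_len].mean(axis=2, keepdims=True)
ersp_single = 10 * np.log10((power / base).mean(axis=0))

print("classical ERSP in boosted band:    %.2f dB" % ersp_classical[5:8, 50:70].mean())
print("single-trial ERSP in boosted band: %.2f dB" % ersp_single[5:8, 50:70].mean())
```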

  6. Local and average structure of Mn- and La-substituted BiFeO3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Bo; Selbach, Sverre M., E-mail: selbach@ntnu.no

    2017-06-15

    The local and average structure of solid solutions of the multiferroic perovskite BiFeO3 is investigated by synchrotron X-ray diffraction (XRD) and electron density functional theory (DFT) calculations. The average experimental structure is determined by Rietveld refinement and the local structure by total scattering data analyzed in real space with the pair distribution function (PDF) method. With equal concentrations of La on the Bi site or Mn on the Fe site, La causes larger structural distortions than Mn. Structural models based on DFT relaxed geometry give an improved fit to experimental PDFs compared to models constrained by the space group symmetry. Berry phase calculations predict a higher ferroelectric polarization than the experimental literature values, reflecting that structural disorder is not captured in either average-structure space group models or DFT calculations with artificial long range order imposed by periodic boundary conditions. Only by including point defects in a supercell, here Bi vacancies, can DFT calculations reproduce the literature results on the structure and ferroelectric polarization of Mn-substituted BiFeO3. The combination of local and average structure sensitive experimental methods with DFT calculations is useful for illuminating the structure-property-composition relationships in complex functional oxides with local structural distortions. - Graphical abstract: The experimental and simulated partial pair distribution functions (PDF) for BiFeO3, BiFe0.875Mn0.125O3, BiFe0.75Mn0.25O3 and Bi0.9La0.1FeO3.

  7. Dependence of the average spatial and energy characteristics of the hadron-lepton cascade on the strong interaction parameters at superhigh energies

    NASA Technical Reports Server (NTRS)

    Boyadjian, N. G.; Dallakyan, P. Y.; Garyaka, A. P.; Mamidjanian, E. A.

    1985-01-01

    A method for calculating the average spatial and energy characteristics of hadron-lepton cascades in the atmosphere is described. The results of calculations for various strong interaction models of primary protons and nuclei are presented. The sensitivity of the experimentally observed extensive air showers (EAS) characteristics to variations of the elementary act parameters is analyzed.

  8. Modeling change in potential landscape vulnerability to forest insect and pathogen disturbances: methods for forested subwatersheds sampled in the midscale interior Columbia River basin assessment.

    Treesearch

    Paul F. Hessburg; Bradley G. Smith; Craig A. Miller; Scott D. Kreiter; R. Brion Salter

    1999-01-01

    In the interior Columbia River basin midscale ecological assessment, including portions of the Klamath and Great Basins, we mapped and characterized historical and current vegetation composition and structure of 337 randomly sampled subwatersheds (9500 ha average size) in 43 subbasins (404 000 ha average size). We compared landscape patterns, vegetation structure and...

  9. Modal identification of structures by a novel approach based on FDD-wavelet method

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2014-02-01

    An important application of system identification in structural dynamics is the determination of natural frequencies, mode shapes and damping ratios during operation, which can then be used for calibrating numerical models. In this paper, the combination of two advanced methods of Operational Modal Analysis (OMA), Frequency Domain Decomposition (FDD) and the Continuous Wavelet Transform (CWT), based on a novel cyclic averaging of correlation functions (CACF) technique, is used for identification of dynamic properties. With this technique, the autocorrelation of averaged correlation functions is used instead of the original signals. The integration of the FDD and CWT methods overcomes their individual deficiencies and takes advantage of the unique capabilities of each method. The FDD method is able to accurately estimate the natural frequencies and mode shapes of structures in the frequency domain. The CWT method, on the other hand, works in the time-frequency domain, decomposing a signal at different frequencies, and determines the damping coefficients. In this paper, a new formulation applied to the wavelet transform of the averaged correlation function of an ambient response is proposed. This enables accurate estimation of damping ratios from weak (noise) or strong (earthquake) vibrations and from long or short duration records. For this purpose, the modified Morlet wavelet having two free parameters is used. The optimum values of these two parameters are obtained by employing a technique which minimizes the entropy of the wavelet coefficients matrix. The capabilities of the novel FDD-Wavelet method in the system identification of various dynamic systems with regular or irregular distribution of mass and stiffness are illustrated. This combined approach is superior to classic methods and yields results that agree well with the exact solutions of the numerical models.

  10. Comparing Families of Dynamic Causal Models

    PubMed Central

    Penny, Will D.; Stephan, Klaas E.; Daunizeau, Jean; Rosa, Maria J.; Friston, Karl J.; Schofield, Thomas M.; Leff, Alex P.

    2010-01-01

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data. PMID:20300649
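
    A minimal fixed-effects sketch of family-level inference and within-family Bayesian model averaging from per-model log evidences is shown below; the evidences, families and parameter values are invented, and the paper's random-effects treatment across subjects goes beyond this.

```python
# Minimal sketch of family-level inference and Bayesian model averaging within a
# family, using per-model log evidences and a fixed-effects (softmax) rule. The
# numbers are invented; the paper's random-effects treatment is more elaborate.
import numpy as np

log_evidence = np.array([-120.3, -118.9, -119.5, -125.0])   # four candidate models
families = {"serial": [0, 1], "parallel": [2, 3]}            # model indices per family
theta = np.array([0.35, 0.42, 0.55, 0.30])                   # a parameter of interest per model

post = np.exp(log_evidence - log_evidence.max())
post /= post.sum()                                           # posterior model probabilities

for name, idx in families.items():
    fam_prob = post[idx].sum()                               # family-level inference
    within = post[idx] / fam_prob                            # re-normalise within family
    theta_bma = np.dot(within, theta[idx])                   # BMA of the parameter
    print(f"{name}: P(family)={fam_prob:.2f}, averaged parameter={theta_bma:.3f}")
```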

  11. [Localization of perforators in the lower leg by digital anatomy imaging methods].

    PubMed

    Wei, Peng; Ma, Liang-Liang; Fang, Ye-Dong; Xia, Wei-Zhi; Ding, Mao-Chao; Mei, Jin

    2012-03-01

    To provide accurate three-dimensional anatomical information and the algorithmic morphology of perforators in the lower leg for perforator flap design. The cadaver was injected with a modified lead oxide-gelatin mixture. Radiography was first performed and the images were analyzed using the software Photoshop and Scion Image. Spiral CT scanning was then performed and three-dimensional images were reconstructed with MIMICS 10.01 software. There are 27 ± 4 perforators with an outer diameter ≥ 0.5 mm (average 0.8 ± 0.2 mm). The average pedicle length within the superficial fascia is 37.3 ± 18.6 mm. The average area supplied by each perforator is 49.5 ± 25.5 cm2. The three-dimensional model displayed the accurate morphological structure and three-dimensional distribution of the perforator-to-perforator and perforator-to-source-artery connections. The 3D reconstruction model can clearly show the geometry, local details and three-dimensional distribution. It is a valuable method for studying the morphological characteristics of individual perforators in the human calf and for preoperative planning of perforator flaps.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooker, A.; Gonder, J.; Lopp, S.

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy's Vehicle Technologies Office. It estimates the impacts of technology improvements on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range and usable volume. The average importance of several attributes changes nonlinearly across its range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures the key aspects needed for summing petroleum use and greenhouse gas emissions. These include the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, limits on the rate of introduction of new vehicle options, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.

  13. CPHmodels-3.0--remote homology modeling using structure-guided sequence profiles.

    PubMed

    Nielsen, Morten; Lundegaard, Claus; Lund, Ole; Petersen, Thomas Nordahl

    2010-07-01

    CPHmodels-3.0 is a web server predicting protein 3D structure by use of single-template homology modeling. The server employs a hybrid of the scoring functions of CPHmodels-2.0 and a novel remote homology-modeling algorithm. A query sequence is first modeled using the fast CPHmodels-2.0 profile-profile scoring function suitable for close homology modeling. The new, computationally costly remote homology-modeling algorithm is only engaged provided that no suitable PDB template is identified in the initial search. CPHmodels-3.0 was benchmarked in the CASP8 competition and produced models for 94% of the targets (117 out of 128); 74% were predicted as high-reliability models (87 out of 117). These achieved an average RMSD of 4.6 Å when superimposed on the 3D structure. The remaining 26% were low-reliability models (30 out of 117), which could be superimposed on the true 3D structure with an average RMSD of 9.3 Å. These performance values place the CPHmodels-3.0 method in the group of high-performing 3D prediction tools. Besides its accuracy, one of the important features of the method is its speed. For most queries, the response time of the server is <20 min. The web server is available at http://www.cbs.dtu.dk/services/CPHmodels/.

  14. Work-related accidents among the Iranian population: a time series analysis, 2000–2011

    PubMed Central

    Karimlou, Masoud; Imani, Mehdi; Hosseini, Agha-Fatemeh; Dehnad, Afsaneh; Vahabi, Nasim; Bakhtiyari, Mahmood

    2015-01-01

    Background Work-related accidents result in human suffering and economic losses and are considered a major health problem worldwide, especially in the economically developing world. Objectives To introduce seasonal autoregressive integrated moving average (ARIMA) models for time series analysis of work-related accident data for workers insured by the Iranian Social Security Organization (ISSO) between 2000 and 2011. Methods In this retrospective study, all insured people experiencing at least one work-related accident during a 10-year period were included in the analyses. We used Box–Jenkins modeling to develop a time series model of the total number of accidents. Results There was an average of 1476 accidents per month (1476.05 ± 458.77, mean ± SD). The final ARIMA(p,d,q)(P,D,Q)s model fitted to the data was ARIMA(1,1,1)×(0,1,1)12, consisting of first-order autoregressive, moving average and seasonal moving average parameters, with a mean absolute percentage error (MAPE) of 20.942. Conclusions The final model showed that time series analysis with ARIMA models is useful for forecasting the number of work-related accidents in Iran. In addition, the forecasted number of work-related accidents for 2011 reflected the stability of the occurrence of these accidents in recent years, indicating a need for preventive occupational health and safety policies such as safety inspection. PMID:26119774

  15. A rapid radiative transfer model for reflection of solar radiation

    NASA Technical Reports Server (NTRS)

    Xiang, X.; Smith, E. A.; Justus, C. G.

    1994-01-01

    A rapid analytical radiative transfer model for the reflection of solar radiation in plane-parallel atmospheres is developed based on the Sobolev approach and the delta function transformation technique. A distinct advantage of this model over alternative two-stream solutions is that, in addition to yielding the irradiance components, which turn out to be mathematically equivalent to the delta-Eddington approximation, the radiance field can also be expanded in a mathematically consistent fashion. Tests of the model against a more precise multistream discrete ordinate model over a wide range of input parameters demonstrate that the new approximate method typically produces average radiance differences of less than 5%, with worst-case average differences of approximately 10%-15%. By the same token, the computational speed of the new model is tens to thousands of times faster than that of the more precise model when its stream resolution is set to generate precise calculations.

  16. Model selection bias and Freedman's paradox

    USGS Publications Warehouse

    Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.

    2010-01-01

    In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
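
    For readers unfamiliar with AIC-based averaging, the sketch below computes Akaike weights and a model-averaged coefficient for a small candidate set; the AIC values and per-model estimates are invented and do not reproduce the paper's estimator in detail.

```python
# Sketch of Akaike-weight model averaging for a regression coefficient across a
# small candidate set; AIC values and per-model estimates are invented for
# illustration only.
import numpy as np

aic = np.array([210.4, 212.1, 209.8, 215.6])     # candidate models' AIC
beta = np.array([0.42, 0.00, 0.51, 0.10])        # coefficient estimate in each model
                                                 # (0.00 where the variable is excluded)

delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                                     # Akaike weights

beta_avg = np.dot(w, beta)                       # model-averaged estimate
print("Akaike weights:", np.round(w, 3))
print("model-averaged coefficient: %.3f" % beta_avg)
```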

  17. FootSpring: A Compliance Model for the ATHLETE Family of Robots

    NASA Technical Reports Server (NTRS)

    Wheeler, Dawn Deborah; Chavez-Clemente, Daniel; Sunspiral, Vytas K.

    2010-01-01

    This paper describes and evaluates one method of modeling compliance in a wheel-on-leg walking robot. This method assumes that all of the robot's compliance takes place at the ground contact points, specifically the tires and legs, and that the rest of the robot is rigid. Optimization is used to solve for the displacement of the feet and of the center of gravity. This method was tested on both robots of the ATHLETE family, which have different compliance. For both robots, the model predicts the sag of points on the robot chassis with an average error of about one percent of the height of the robot.

  18. Resolving Isotropic Components from Regional Waves using Grid Search and Moment Tensor Inversion Methods

    NASA Astrophysics Data System (ADS)

    Ichinose, G. A.; Saikia, C. K.

    2007-12-01

    We applied the moment tensor (MT) analysis scheme to identify seismic sources using regional seismograms, based on the representation theorem for the elastic wave displacement field. This method is applied to estimate the isotropic (ISO) and deviatoric MT components of earthquake, volcanic, and isotropic sources within the Basin and Range Province (BRP) and western US. The ISO components from Hoya, Bexar, Montello and Junction were compared to recent, well-recorded earthquakes near Little Skull Mountain, Scotty's Junction, Eureka Valley, and Fish Lake Valley within southern Nevada. We also examined "dilatational" sources near Mammoth Lakes Caldera and two mine collapses, including the August 2007 event in Utah recorded by USArray. Using our formulation, we first implemented the full MT inversion method on long-period-filtered regional data. We also applied a grid-search technique to solve for the percent deviatoric and percent ISO moments. By using the grid-search technique, high-frequency waveforms can be used with calibrated velocity models. We modeled the ISO and deviatoric components (spall and tectonic release) as separate events delayed in time or offset in space. Calibrated velocity models helped resolve the ISO components and decreased the variance relative to the average, initial or background velocity models. The centroid location and time shifts are velocity-model dependent. Models can be improved, as was done in previously published work in which we used an iterative waveform inversion method with regional seismograms from four well-recorded and well-constrained earthquakes. The resulting velocity models reduced the variance between the data and predicted synthetics by about 50 to 80% for frequencies up to 0.5 Hz. Tests indicate that the individual path-specific models perform better at recovering the earthquake MT solutions, even when using a sparser distribution of stations than the average or initial models.

  19. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and to forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition is better than Holt's exponential smoothing. The hotspot models obtained using the Box-Jenkins method were the autoregressive integrated moving average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the root mean squared error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that the hotspot pattern tends to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but is stationary in East Kutai.

  20. Analysis Monthly Import of Palm Oil Products Using Box-Jenkins Model

    NASA Astrophysics Data System (ADS)

    Ahmad, Nurul F. Y.; Khalid, Kamil; Saifullah Rusiman, Mohd; Ghazali Kamardan, M.; Roslan, Rozaini; Che-Him, Norziha

    2018-04-01

    The palm oil industry has been an important component of the national economy, especially the agriculture sector. The aims of this study are to identify the pattern of imports of palm oil products, to model the time series using Box-Jenkins models and to forecast the monthly imports of palm oil products. The approach includes statistical tests for verifying model adequacy and statistical measures for three candidate models, namely the autoregressive (AR) model, the moving average (MA) model and the autoregressive moving average (ARMA) model. Model identification differed between products: AR(1) was found to be the best model for imports of palm oil, MA(3) for imports of palm kernel oil, and MA(4) for palm kernel. The forecasts for the next four months for imports of palm oil, palm kernel oil and palm kernel showed a significant decrease compared to the actual data.

  1. Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors

    NASA Technical Reports Server (NTRS)

    VanOverbeke, Thomas J.

    1998-01-01

    The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a need to increase the accuracy of multi-dimensional combustion predictions; because turbulence affects reaction rates, this interaction must be evaluated more accurately. In this work a more physically correct way of handling the effect of turbulence on combustion is further developed and tested. Because turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf shape but solves for the evolution of the pdf using a Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, accurate thermodynamics, and a more accurate treatment of transport. The stochastic method is coupled with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE), which is used to solve for velocity, pressure, turbulent kinetic energy, and dissipation; the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared with benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays, and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, for which the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation; grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method takes longer and requires more memory, but it has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.
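
    For illustration, a small sketch of the generic "relax to mean" mixing step used in Monte Carlo pdf methods of the kind described above; the particle values, time step, mixing time scale, and relaxation constant are assumptions, not the thesis code.

        import numpy as np

        # Notional particle ensemble carrying one scalar (e.g., mixture fraction in [0, 1])
        rng = np.random.default_rng(0)
        phi = rng.uniform(0.0, 1.0, size=5000)

        dt, tau_mix = 1.0e-4, 1.0e-3   # time step and assumed turbulent mixing time scale

        def relax_to_mean(phi, dt, tau_mix):
            """Relax each particle's scalar toward the ensemble mean (IEM-type closure)."""
            return phi - 0.5 * (dt / tau_mix) * (phi - phi.mean())

        for _ in range(200):
            phi = relax_to_mean(phi, dt, tau_mix)

        # Mixing leaves the mean unchanged while reducing the scalar variance
        print(round(phi.mean(), 3), round(phi.var(), 4))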

  2. Intra-reach headwater fish assemblage structure

    USGS Publications Warehouse

    McKenna, James E.

    2017-01-01

    Large-scale conservation efforts can take advantage of modern large databases and regional modeling and assessment methods. However, these broad-scale efforts often assume uniform average habitat conditions and/or species assemblages within stream reaches.

  3. Evaluation of global climate model on performances of precipitation simulation and prediction in the Huaihe River basin

    NASA Astrophysics Data System (ADS)

    Wu, Yenan; Zhong, Ping-an; Xu, Bin; Zhu, Feilin; Fu, Jisi

    2017-06-01

    Using climate models with high performance to project future climate change can increase the reliability of the results. In this paper, six global climate models selected from the Coupled Model Intercomparison Project Phase 5 (CMIP5) under the Representative Concentration Pathway (RCP) 4.5 scenario were compared with measured data for the baseline period (1960-2000) to evaluate their precipitation simulation performance. Because the results of single climate models are often biased and highly uncertain, we examined a back-propagation (BP) neural network and the arithmetic mean method for assembling the precipitation of multiple models. The delta method was used to calibrate the results of the single models and of the multimodel ensemble formed by the arithmetic mean method (MME-AM) during the validation period (2001-2010) and the prediction period (2011-2100). We then used the single models and multimodel ensembles to predict the future precipitation process and its spatial distribution. The results show that the BNU-ESM model performs best among the single models. The multimodel ensemble assembled by the BP neural network (MME-BP) simulates the annual average precipitation process well, with a deterministic coefficient of 0.814 during the validation period. The simulation capability for the spatial distribution of precipitation ranks as: calibrated MME-AM > MME-BP > calibrated BNU-ESM. The future precipitation predicted by all models tends to increase as the time period advances. The average increase amplitude by season ranks as: winter > spring > summer > autumn. These findings can provide useful information for decision makers preparing climate-related disaster mitigation plans.
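
    A minimal sketch of one common form of the delta correction mentioned above (the multiplicative change-factor form, which is an assumption here since the paper does not spell out its variant), applied to hypothetical monthly precipitation climatologies:

        import numpy as np

        # Hypothetical monthly precipitation climatologies (mm/month), one value per calendar month
        obs_baseline = np.array([20, 25, 40, 60, 90, 120, 150, 140, 100, 70, 40, 25], float)
        gcm_baseline = np.array([30, 32, 50, 65, 80, 100, 130, 125, 95, 75, 50, 35], float)
        gcm_future   = np.array([28, 35, 55, 70, 95, 115, 145, 150, 110, 80, 55, 30], float)

        # Multiplicative delta: scale observations by the simulated relative change,
        # so that systematic baseline biases of the model largely cancel.
        future_corrected = obs_baseline * (gcm_future / gcm_baseline)
        print(np.round(future_corrected, 1))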

  4. Finite Element Methods and Multiphase Continuum Theory for Modeling 3D Air-Water-Sediment Interactions

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Miller, C. T.; Dimakopoulos, A.; Farthing, M.

    2016-12-01

    The last decade has seen an expansion in the development and application of 3D free surface flow models in the context of environmental simulation. These models are based primarily on the combination of effective algorithms, namely level set and volume-of-fluid methods, with high-performance, parallel computing. They remain computationally expensive and are suitable primarily when high-fidelity modeling near structures is required. While most research on algorithms and implementations has been conducted in the context of finite volume methods, recent work has extended a class of level set schemes to finite element methods on unstructured meshes. This work considers models of three-phase flow in domains containing air, water, and granular phases. These multi-phase continuum mechanical formulations show great promise for applications such as analysis of coastal and riverine structures. This work will consider formulations proposed in the literature over the last decade as well as new formulations derived using the thermodynamically constrained averaging theory, an approach to deriving and closing macroscale continuum models for multi-phase and multi-component processes. The target applications require the ability to simulate wave breaking and structure over-topping, particularly the fully three-dimensional, non-hydrostatic flows that drive these phenomena. A conservative level set scheme suitable for higher-order finite element methods is used to describe the air/water phase interaction. The interaction of these air/water flows with granular materials, such as sand and rubble, must also be modeled. The range of granular media dynamics targeted includes flow and wave transmission through the solid media as well as erosion and deposition of granular media and moving-bed dynamics. For the granular phase we consider volume- and time-averaged continuum mechanical formulations that are discretized with the finite element method and coupled to the underlying air/water flow via operator splitting (fractional step) schemes. Particular attention will be given to verification and validation of the numerical model and to important qualitative features of the numerical methods, including phase conservation, wave energy dissipation, and computational efficiency in regimes of interest.

  5. Quantification of leachate discharged to groundwater using the water balance method and the hydrologic evaluation of landfill performance (HELP) model.

    PubMed

    Alslaibi, Tamer M; Abustan, Ismail; Mogheir, Yunes K; Afifi, Samir

    2013-01-01

    Landfills are a source of groundwater pollution in the Gaza Strip. This study focused on the Deir Al Balah landfill, which is a unique sanitary landfill site in the Gaza Strip (i.e., it has a lining system and a leachate recirculation system). The objective of this article is to assess the quantity of leachate generated and its percolation to the groundwater aquifer at this site, using (i) the hydrologic evaluation of landfill performance (HELP) model and (ii) the water balance method (WBM). The results show that, using the HELP model, the average volume of leachate discharged from the Deir Al Balah landfill during the period 1997 to 2007 was around 6800 m3/year, while the average volume of leachate percolating through the clay layer was 550 m3/year, about 8% of the generated leachate. The WBM indicated that the average volume of leachate discharged from the landfill during the same period was around 7660 m3/year, about half of which comes from the moisture content of the waste, while the remainder comes from the infiltration of precipitation and re-circulated leachate. The leachate quantities estimated by the two methods were therefore very close. However, compared with the measured leachate quantity, these results were overestimates and indicate a serious threat to the groundwater aquifer, as there was no separation between municipal, hazardous, and industrial wastes in the area.

  6. Multiscale Modeling of Damage Processes in fcc Aluminum: From Atoms to Grains

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Saether, E.; Yamakov, V.

    2008-01-01

    Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, current analysis is limited to small domains and increasing the size of the MD domain quickly presents intractable computational demands. A preferred approach to surmount this computational limitation has been to combine continuum mechanics-based modeling procedures, such as the finite element method (FEM), with MD analyses thereby reducing the region of atomic scale refinement. Such multiscale modeling strategies can be divided into two broad classifications: concurrent multiscale methods that directly incorporate an atomistic domain within a continuum domain and sequential multiscale methods that extract an averaged response from the atomistic simulation for later use as a constitutive model in a continuum analysis.

  7. A new approach for turbulent simulations in complex geometries

    NASA Astrophysics Data System (ADS)

    Israel, Daniel M.

    Historically, turbulence modeling has been sharply divided into Reynolds-averaged Navier-Stokes (RANS), in which all the turbulent scales of motion are modeled, and large-eddy simulation (LES), in which only a portion of the turbulent spectrum is modeled. In recent years there have been numerous attempts to couple these two approaches, either by patching RANS and LES calculations together (zonal methods) or by blending the two sets of equations. In order to create a proper bridging model, that is, a single set of equations which captures both RANS- and LES-like behavior, it is necessary to place both RANS and LES in a more general framework. The goal of the current work is threefold: to provide such a framework, to demonstrate how the Flow Simulation Methodology (FSM) fits into this framework, and to evaluate the strengths and weaknesses of the current version of the FSM. To do this, a set of filtered Navier-Stokes (FNS) equations is first introduced in terms of an arbitrary generalized filter. Additional exact equations are given for the second-order moments and the generalized subfilter dissipation rate tensor. This is followed by a discussion of the role of implicit and explicit filters in turbulence modeling. The FSM is then described, with particular attention to its role as a bridging model. In order to evaluate the method, a specific implementation of the FSM approach is proposed. Simulations are presented using this model for the case of a separating flow over a "hump" with and without flow control. Careful attention is paid to error estimation and, in particular, to how the use of flow statistics and time series affects the error analysis. Both mean flow and Reynolds stress profiles are presented, as well as the phase-averaged turbulent structures and wall pressure spectra. Using the phase-averaged data it is possible to examine how the FSM partitions the energy between the coherent resolved-scale motions, the random resolved-scale fluctuations, and the subfilter quantities. The method proves to be qualitatively successful at reproducing large turbulent structures. However, like other hybrid methods, it has difficulty in the region where the model behavior transitions from RANS to LES. The phase-averaged structures nevertheless reproduce the experiments quite well, and the forcing does significantly reduce the length of the separated region; even so, the recirculation length is significantly too large for all the cases. Overall, the current results demonstrate the promise of bridging models in general and the FSM in particular. However, current bridging techniques are still in their infancy. There is still important progress to be made, and it is hoped that this work points out the more important avenues for exploration.

  8. Electrostatically Embedded Many-Body Expansion for Neutral and Charged Metalloenzyme Model Systems.

    PubMed

    Kurbanov, Elbek K; Leverentz, Hannah R; Truhlar, Donald G; Amin, Elizabeth A

    2012-01-10

    The electrostatically embedded many-body (EE-MB) method has proven accurate for calculating cohesive and conformational energies in clusters, and it has recently been extended to obtain bond dissociation energies for metal-ligand bonds in positively charged inorganic coordination complexes. In the present paper, we present four key guidelines that maximize the accuracy and efficiency of EE-MB calculations for metal centers. Then, following these guidelines, we show that the EE-MB method can also perform well for bond dissociation energies in a variety of neutral and negatively charged inorganic coordination systems representing metalloenzyme active sites, including a model of the catalytic site of the zinc-bearing anthrax toxin lethal factor, a popular target for drug development. In particular, we find that the electrostatically embedded three-body (EE-3B) method is able to reproduce conventionally calculated bond-breaking energies in a series of pentacoordinate and hexacoordinate zinc-containing systems with an average absolute error (averaged over 25 cases) of only 0.98 kcal/mol.
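
    A small sketch of how a truncated many-body expansion such as EE-3B assembles a total energy from fragment energies; the fragment energies passed in are assumed to have been computed in the embedding charges of the remaining fragments (that electrostatic-embedding step is outside this sketch), and the example dictionaries are hypothetical.

        from itertools import combinations

        def many_body_energy(E1, E2, E3=None):
            """Assemble the 2-body (and optionally 3-body) truncated expansion.
            E1: {i: E_i}, E2: {(i, j): E_ij}, E3: {(i, j, k): E_ijk}, with i < j < k."""
            frags = sorted(E1)
            total = sum(E1.values())                      # 1-body term
            d2 = {}
            for i, j in combinations(frags, 2):           # pairwise corrections
                d2[(i, j)] = E2[(i, j)] - E1[i] - E1[j]
                total += d2[(i, j)]
            if E3 is not None:
                for i, j, k in combinations(frags, 3):    # three-body corrections
                    total += (E3[(i, j, k)]
                              - d2[(i, j)] - d2[(i, k)] - d2[(j, k)]
                              - E1[i] - E1[j] - E1[k])
            return total

        # Hypothetical fragment energies (hartree) for a three-fragment cluster
        E1 = {1: -76.40, 2: -76.41, 3: -76.39}
        E2 = {(1, 2): -152.82, (1, 3): -152.80, (2, 3): -152.81}
        E3 = {(1, 2, 3): -229.23}
        print(many_body_energy(E1, E2), many_body_energy(E1, E2, E3))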

  9. Judgmental Standard Setting Using a Cognitive Components Model.

    ERIC Educational Resources Information Center

    McGinty, Dixie; Neel, John H.

    A new standard setting approach is introduced, called the cognitive components approach. Like the Angoff method, the cognitive components method generates minimum pass levels (MPLs) for each item. In both approaches, the item MPLs are summed for each judge, then averaged across judges to yield the standard. In the cognitive components approach,…

  10. Mean motion resonances. [of asteroid belt structure

    NASA Technical Reports Server (NTRS)

    Froeschle, CL.; Greenberg, R.

    1989-01-01

    Recent research on the resonant structure of the asteroid belt is reviewed. The resonant mechanism is discussed, and analytical models for the study of mean motion resonances are examined. Numerical averaging methods and mapping methods are considered. It is shown how fresh insight can be obtained by means of a new semianalytical approach.

  11. Bi-ventricular finite element model of right ventricle overload in the healthy rat heart.

    PubMed

    Masithulela, Fulufhelo

    2016-11-25

    Recognition of right ventricular (RV) overpressure is critical, as it may signify morbidity and mortality. RV dysfunction is understood to affect the performance of the left ventricle (LV), but the mechanisms remain poorly understood. Ventricular compliance is known to affect cardiac performance. In this study, a bi-ventricular model of the rat heart was used in preference to other, single-ventricle models. Finite element analysis (FEA) of the bi-ventricular model provides important information on the function of the healthy heart. The passive myocardium was modelled as a nearly incompressible, hyperelastic, transversely isotropic material using finite element (FE) methods. Bi-ventricular geometries of healthy rat hearts reconstructed from magnetic resonance images were imported into Abaqus©. To simulate normal passive filling of the rat heart, pressures of 4.8 kPa and 0.0098 kPa were applied to the inner walls of the LV and RV, respectively. To simulate overpressure of the RV, pressures of 2.4 kPa and 4.8 kPa were applied to the endocardial walls of the LV and RV, respectively. As boundary conditions, the circumferential and longitudinal displacements at the base were set to zero, while the radial displacements at the base were left free. The results show that the average circumferential stress at the mid-wall in the overloaded model increased from 2.8 kPa to 18.2 kPa, the average longitudinal stress increased from 1.5 kPa to 9.7 kPa, and, in the radial direction, the average mid-wall stress increased from 0.1 kPa to 0.6 kPa. The average circumferential strain on the endocardium was 0.138 in the overpressured model and 0.100 in the healthy model. The average circumferential stress at the epicardium, mid-wall, and endocardium in the normal heart is 10 times lower than in the overloaded heart model. The finite element analysis method is able to provide insights into the behaviour of the overpressured myocardium. In the overloaded model, high stresses and strains were observed on the septal wall. The bi-ventricular model was shown to provide useful information about the overpressured ventricle; the possible heart dysfunction may be attributable to high stress and strain in the overpressured heart.

  12. The use of expressive methods for developing empathic skills.

    PubMed

    Ozcan, Neslihan Keser; Bilgin, Hülya; Eracar, Nevin

    2011-01-01

    Empathy is one of the fundamental concepts in nursing, and it is an ability that can be learned. Various education models have been tested for improving empathic skills. Research has focused on oral presentations, videos, modeling, practiced negotiation based on experiences, and psychodrama methods such as role playing as ways to improve empathy in participants. This study examined the use of expressive arts to improve the empathic skills of nursing students. The study was conducted with 48 students who were divided into five groups; each group met in two-hour sessions over 12 weeks. Expressive art and psychodrama methods were used in the group work. The Scale of Empathic Skill was administered to participants before and after the group studies. The average empathic skill score was 127.97 (SD = 21.26) before the group study and increased to 138.87 (SD = 20.40) afterward, a significant increase (t = 3.996, p < .001). The results suggest that expressive methods offer an accessible, effective, and enjoyable approach in nursing training.

  13. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, such as wind speed forecasting, most of the predictive information is contained in one variable of the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method to estimate the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to using wind speed only. On average the improvement was about 5%, mainly for moderate to strong wind situations; for weak wind speeds adding wind direction had a more or less neutral impact.
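
    A minimal sketch of how a BMA predictive distribution is assembled once weights and kernel parameters are known, using gamma kernels (often used for wind speed) and hypothetical member forecasts; the study's actual contribution, estimating the kernel means by a flexible regression on wind speed and direction and fitting the weights from training data, is not reproduced here.

        import numpy as np
        from scipy.stats import gamma

        # Hypothetical bias-corrected member forecasts (m/s), BMA weights, and gamma shape
        member_means = np.array([6.2, 5.4, 7.1])
        weights = np.array([0.5, 0.2, 0.3])
        shape = 8.0  # assumed common shape parameter; real BMA fits this from training data

        def bma_pdf(y, means, w, shape):
            """Predictive density: weighted mixture of gamma kernels centred on member forecasts."""
            scales = means / shape   # each kernel then has mean equal to its member forecast
            return sum(wi * gamma.pdf(y, a=shape, scale=si) for wi, si in zip(w, scales))

        y = np.linspace(0.1, 15.0, 300)
        pdf = bma_pdf(y, member_means, weights, shape)
        print("BMA predictive mean:", float(np.sum(weights * member_means)))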

  14. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than others. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics, but which metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly between weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble; since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics that advise better methods for ensemble averaging of models and lead to better climate predictions.
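
    As a rough illustration of unequal ensemble weighting of the kind discussed above, with hypothetical metric scores and projections and a simple inverse-error weighting rule (the rule itself is an assumption, not the paper's scheme):

        import numpy as np

        # Hypothetical per-model error in a process-based metric and projected regional change
        metric_rmse = np.array([0.8, 1.2, 0.6, 1.5, 1.0])
        projection = np.array([2.1, 2.9, 1.8, 3.4, 2.5])   # e.g., precipitation change (mm/day)

        weights = 1.0 / metric_rmse        # better (lower-error) models get larger weights
        weights /= weights.sum()

        print("equal-weight mean:   ", projection.mean())
        print("metric-weighted mean:", float(np.sum(weights * projection)))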

  15. The cost of conservative synchronization in parallel discrete event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that, as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems (those for which parallel processing is ideally suited) there is often enough parallel workload that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.

  16. The value of model averaging and dynamical climate model predictions for improving statistical seasonal streamflow forecasts over Australia

    NASA Astrophysics Data System (ADS)

    Pokhrel, Prafulla; Wang, Q. J.; Robertson, David E.

    2013-10-01

    Seasonal streamflow forecasts are valuable for planning and allocation of water resources. In Australia, the Bureau of Meteorology employs a statistical method to forecast seasonal streamflows. The method uses predictors that are related to catchment wetness at the start of a forecast period and to climate during the forecast period. For the latter, a predictor is selected from a number of candidate lagged climate indices to give the "best" model in terms of performance in cross validation. This study investigates two strategies for further improving seasonal streamflow forecasts. The first is to combine, through Bayesian model averaging, multiple candidate models with different lagged climate indices as predictors, to take advantage of the different predictive strengths of the multiple models. The second strategy is to introduce additional candidate models that use rainfall and sea surface temperature predictions from a global climate model as predictors, to take advantage of its direct simulations of various dynamic processes. The results show that combining forecasts from multiple statistical models generally yields more skillful forecasts than using only the best model and appears to moderate the worst forecast errors. The use of rainfall predictions from the dynamical climate model marginally improves the streamflow forecasts when viewed over all the study catchments and seasons, but the use of sea surface temperature predictions provides little additional benefit.

  17. A stacking ensemble learning framework for annual river ice breakup dates

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Trevor, Bernard

    2018-06-01

    River ice breakup dates (BDs) are not merely a proxy indicator of climate variability and change, but a direct concern in the management of local ice-caused flooding. A stacking ensemble learning framework for annual river ice BDs was developed, with two levels of components: member models and combining models. The member models described the relations between BD and its affecting indicators; the combining models linked the BDs predicted by the member models with the observed BD. Specifically, Bayesian regularization back-propagation artificial neural networks (BRANN) and adaptive neuro-fuzzy inference systems (ANFIS) were employed as both member and combining models; the candidate combining models also included the simple average method (SAM). The input variables for the member models were selected by a hybrid filter-and-wrapper method. The performances of these models were examined using leave-one-out cross validation. As the largest unregulated river in Alberta, Canada, with ice jams frequently occurring in the vicinity of Fort McMurray, the Athabasca River at Fort McMurray was selected as the study area, and breakup dates and candidate affecting indicators for 1980-2015 were collected. The results showed that the BRANN member models generally outperformed the ANFIS member models, with better performance and simpler structures. The difference between the R and MI rankings of inputs in the optimal member models may imply that the linear-correlation-based filter method is feasible for generating a range of candidate inputs for further screening by other wrapper or embedded IVS methods. The SAM and BRANN combining models generally outperformed all member models; the optimal SAM combining model combined two BRANN member models and improved upon them in average squared error by 14.6% and 18.1%, respectively. In this study, stacking ensemble learning was applied to forecasting river ice breakup dates for the first time, and it appears promising for other river ice forecasting problems.
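
    A minimal sketch of the two-level stacking idea described above, using scikit-learn with small neural-network member models standing in for BRANN/ANFIS, a linear combining model, and invented predictor and breakup-date data; leave-one-out cross validation scores the stack as in the study.

        import numpy as np
        from sklearn.ensemble import StackingRegressor
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # Hypothetical affecting indicators and observed breakup day-of-year for 36 years
        rng = np.random.default_rng(1)
        X = rng.normal(size=(36, 4))
        y = 110 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2.0, size=36)

        members = [
            ("ann_a", MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)),
            ("ann_b", MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=1)),
        ]
        # Combining model trained on out-of-fold member predictions (the "stacking" step)
        stack = StackingRegressor(estimators=members, final_estimator=LinearRegression(), cv=5)

        scores = cross_val_score(stack, X, y, cv=LeaveOneOut(), scoring="neg_mean_squared_error")
        print("leave-one-out mean squared error:", -scores.mean())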

  18. Cochlear Modeling Using "Time-Averaged Lagrangian" Method: Comparison with VBM, PST, and ZC Measurements

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Kim, N.; Puria, S.; Steele, C. R.

    2009-02-01

    In this work, basilar membrane velocity (VBM), scala tympani intracochlear pressure (PST), and cochlear input impedance (Zc) for gerbil and chinchilla are computed using a three-dimensional hydrodynamic cochlear model that incorporates 1) a time-averaged Lagrangian formulation, 2) a push-pull mechanism for the active case, and 3) the complex anatomy of the cochlear scalae obtained by micro computed tomography (μCT) scanning and 3-D reconstruction of gerbil and chinchilla temporal bones. The objective of this work is to compare the model calculations with physiological measurements of VBM (Ren and Nuttall [1]), PST (Olson [2]), and Zc (Decraemer et al. [3], Songer and Rosowski [4], Ruggero et al. [5]) in the gerbil and chinchilla cochleae. A WKB asymptotic method combined with Fourier series expansions is used to provide an efficient simulation. The VBM and PST simulation results for the gerbil cochlea show good agreement with the physiological measurements in both magnitude and phase, without large phase excursions. The Zc simulations from the gerbil and chinchilla models show reasonably good agreement with the measurements.

  19. Identification of multivariable nonlinear systems in the presence of colored noises using iterative hierarchical least squares algorithm.

    PubMed

    Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam

    2014-07-01

    This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memoryless block is approximated by arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous inputs (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained that includes two different kinds of unknown parameters, a vector and a matrix, so the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  20. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector

    PubMed Central

    2018-01-01

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644

  1. A new method to estimate average hourly global solar radiation on the horizontal surface

    NASA Astrophysics Data System (ADS)

    Pandey, Pramod K.; Soupir, Michelle L.

    2012-10-01

    A new model, Global Solar Radiation on Horizontal Surface (GSRHS), was developed to estimate the average hourly global solar radiation on the horizontal surface (Gh). The GSRHS model uses a transmission function (Tf,ij), developed to control hourly global solar radiation, for prediction. The inputs of the model are: hour of day, day of year (Julian), optimized parameter values, solar constant (H0), and the latitude and longitude of the location of interest. The parameter values used in the model were optimized at one location (Albuquerque, NM) and then applied to predict average hourly global solar radiation at four other locations (Austin, TX; El Paso, TX; Desert Rock, NV; Seattle, WA) in the United States. Model performance was assessed using the correlation coefficient (r), Mean Absolute Bias Error (MABE), Root Mean Square Error (RMSE), and coefficient of determination (R2), and the sensitivities of the parameters to the predictions were estimated. Results show that the model performed very well: correlation coefficients (r) range from 0.96 to 0.99, and coefficients of determination (R2) range from 0.92 to 0.98. For daily and monthly predictions, error percentages (i.e., MABE and RMSE) were less than 20%. The approach proposed here can be useful for predicting average hourly global solar radiation on the horizontal surface at different locations, using readily available data (i.e., the latitude and longitude of the location) as inputs.
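
    A small sketch of the evaluation statistics named above (r, MABE, RMSE, R2) for hypothetical observed and predicted hourly radiation; expressing MABE and RMSE as percentages of the mean observation is an assumption about the paper's normalisation.

        import numpy as np

        def evaluation_metrics(obs, pred):
            """Return (r, MABE %, RMSE %, R2) for observed vs. predicted hourly radiation."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            resid = pred - obs
            r = np.corrcoef(obs, pred)[0, 1]
            mabe = 100.0 * np.mean(np.abs(resid)) / np.mean(obs)        # mean absolute bias error, %
            rmse = 100.0 * np.sqrt(np.mean(resid ** 2)) / np.mean(obs)  # root mean square error, %
            r2 = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
            return r, mabe, rmse, r2

        # Hypothetical hourly global radiation values (W/m2) over one day
        obs  = [0, 50, 180, 420, 610, 700, 650, 480, 250, 80, 5, 0]
        pred = [0, 60, 170, 400, 630, 690, 660, 460, 240, 90, 10, 0]
        print(evaluation_metrics(obs, pred))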

  2. Estimating the Probability of Rare Events Occurring Using a Local Model Averaging.

    PubMed

    Chen, Jin-Hua; Chen, Chun-Shu; Huang, Meng-Fan; Lin, Hung-Chih

    2016-10-01

    In statistical applications, logistic regression is a popular method for analyzing binary data accompanied by explanatory variables. But when one of the two outcomes is rare, the estimation of model parameters has been shown to be severely biased and hence estimating the probability of rare events occurring based on a logistic regression model would be inaccurate. In this article, we focus on estimating the probability of rare events occurring based on logistic regression models. Instead of selecting a best model, we propose a local model averaging procedure based on a data perturbation technique applied to different information criteria to obtain different probability estimates of rare events occurring. Then an approximately unbiased estimator of Kullback-Leibler loss is used to choose the best one among them. We design complete simulations to show the effectiveness of our approach. For illustration, a necrotizing enterocolitis (NEC) data set is analyzed. © 2016 Society for Risk Analysis.
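
    A minimal sketch of averaging rare-event probability estimates across candidate logistic models using information-criterion weights; the data, the two candidate models, and the Akaike-type weighting are illustrative assumptions, and the paper's local, perturbation-based weighting and Kullback-Leibler selection step are not reproduced here.

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical data with a rare binary outcome and two candidate logistic models
        rng = np.random.default_rng(2)
        n = 500
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-3.5 + 1.2 * x1))))

        designs = {
            "x1": sm.add_constant(np.column_stack([x1])),
            "x1+x2": sm.add_constant(np.column_stack([x1, x2])),
        }
        fits = {name: sm.Logit(y, X).fit(disp=0) for name, X in designs.items()}

        # Akaike-type model-averaging weights
        aic = np.array([fits[name].aic for name in designs])
        w = np.exp(-0.5 * (aic - aic.min()))
        w /= w.sum()

        # Model-averaged event probability at a new covariate point (x1 = 2, x2 = 0)
        x_new = {"x1": [[1.0, 2.0]], "x1+x2": [[1.0, 2.0, 0.0]]}
        p_avg = sum(wi * fits[name].predict(x_new[name])[0] for wi, name in zip(w, designs))
        print("model-averaged event probability:", p_avg)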

  3. Production model in the conditions of unstable demand taking into account the influence of trading infrastructure: Ergodicity and its application

    NASA Astrophysics Data System (ADS)

    Obrosova, N. K.; Shananin, A. A.

    2015-04-01

    A production model with allowance for a working capital deficit and a restricted maximum possible sales volume is proposed and analyzed. The study is motivated by an attempt to analyze the problems of functioning of macroeconomic structures with low competitiveness. The model is formalized in the form of a Bellman equation, for which a closed-form solution is found. The stochastic process of product stock variations is proved to be ergodic, and its final probability distribution is found. Expressions for the average production load and the average product stock are obtained by analyzing the stochastic process. A system of model equations relating the model variables to official statistical parameters is derived. The model is identified using data from the Fiat and KAMAZ companies. The influence of the credit interest rate on the firm's market value assessment and on the production load level is analyzed using comparative statics methods.

  4. Computational technique and performance of Transient Inundation Model for Rivers--2 Dimensional (TRIM2RD) : a depth-averaged two-dimensional flow model

    USGS Publications Warehouse

    Fulford, Janice M.

    2003-01-01

    A numerical computer model, Transient Inundation Model for Rivers -- 2 Dimensional (TrimR2D), that solves the two-dimensional depth-averaged flow equations is documented and discussed. The model uses a semi-implicit, semi-Lagrangian finite-difference method. It is a variant of the Trim model and has been used successfully in estuarine environments such as San Francisco Bay. The abilities of the model are documented for three scenarios: uniform depth flows, laboratory dam-break flows, and large-scale riverine flows. The model can start computations from a "dry" bed and converge to accurate solutions. Inflows are expressed as source terms, which limits the use of the model to sufficiently long reaches where the flow reaches equilibrium with the channel. The data sets used by the investigation demonstrate that the model accurately propagates flood waves through long river reaches and simulates dam breaks with abrupt water-surface changes.

  5. An assessment of air pollutant exposure methods in Mexico City, Mexico.

    PubMed

    Rivera-González, Luis O; Zhang, Zhenzhen; Sánchez, Brisa N; Zhang, Kai; Brown, Daniel G; Rojas-Bracho, Leonora; Osornio-Vargas, Alvaro; Vadillo-Ortega, Felipe; O'Neill, Marie S

    2015-05-01

    Geostatistical interpolation methods to estimate individual exposure to outdoor air pollutants can be used in pregnancy cohorts where personal exposure data are not collected. Our objectives were to a) develop four assessment methods (citywide average (CWA); nearest monitor (NM); inverse distance weighting (IDW); and ordinary Kriging (OK)), and b) compare daily metrics and cross-validations of interpolation models. We obtained 2008 hourly data from Mexico City's outdoor air monitoring network for PM10, PM2.5, O3, CO, NO2, and SO2 and constructed daily exposure metrics for 1,000 simulated individual locations across five populated geographic zones. Descriptive statistics from all methods were calculated for dry and wet seasons, and by zone. We also evaluated IDW and OK methods' ability to predict measured concentrations at monitors using cross validation and a coefficient of variation (COV). All methods were performed using SAS 9.3, except ordinary Kriging which was modeled using R's gstat package. Overall, mean concentrations and standard deviations were similar among the different methods for each pollutant. Correlations between methods were generally high (r=0.77 to 0.99). However, ranges of estimated concentrations determined by NM, IDW, and OK were wider than the ranges for CWA. Root mean square errors for OK were consistently equal to or lower than for the IDW method. OK standard errors varied considerably between pollutants and the computed COVs ranged from 0.46 (least error) for SO2 and PM10 to 3.91 (most error) for PM2.5. OK predicted concentrations measured at the monitors better than IDW and NM. Given the similarity in results for the exposure methods, OK is preferred because this method alone provides predicted standard errors which can be incorporated in statistical models. The daily estimated exposures calculated using these different exposure methods provide flexibility to evaluate multiple windows of exposure during pregnancy, not just trimester or pregnancy-long exposures. Many studies evaluating associations between outdoor air pollution and adverse pregnancy outcomes rely on outdoor air pollution monitoring data linked to information gathered from large birth registries, and often lack residence location information needed to estimate individual exposure. This study simulated 1,000 residential locations to evaluate four air pollution exposure assessment methods, and describes possible exposure misclassification from using spatial averaging versus geostatistical interpolation models. An implication of this work is that policies to reduce air pollution and exposure among pregnant women based on epidemiologic literature should take into account possible error in estimates of effect when spatial averages alone are evaluated.
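
    A minimal sketch of two of the exposure assignment methods compared above (nearest monitor and inverse distance weighting) at a single hypothetical residence, alongside the citywide average; the coordinates and concentrations are invented, and kriging is left to a geostatistics library.

        import numpy as np

        def idw(monitor_xy, values, target_xy, power=2.0):
            """Inverse-distance-weighted estimate at one target location."""
            d = np.linalg.norm(monitor_xy - target_xy, axis=1)
            if np.any(d == 0):                      # target coincides with a monitor
                return float(values[np.argmin(d)])
            w = 1.0 / d ** power
            return float(np.sum(w * values) / np.sum(w))

        # Hypothetical monitor coordinates (km) and one day of PM2.5 concentrations (ug/m3)
        monitors = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 6.0], [8.0, 7.0]])
        pm25 = np.array([31.0, 27.0, 40.0, 22.0])
        home = np.array([3.0, 3.0])

        print("citywide average (CWA):", pm25.mean())
        print("nearest monitor (NM):  ", pm25[np.argmin(np.linalg.norm(monitors - home, axis=1))])
        print("inverse distance (IDW):", idw(monitors, pm25, home))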

  6. Effects of Turbulence Model on Prediction of Hot-Gas Lateral Jet Interaction in a Supersonic Crossflow

    DTIC Science & Technology

    2015-07-01

    The three-dimensional, compressible, Reynolds-averaged Navier-Stokes (RANS) equations are solved using a finite volume method with a point-implicit time-integration scheme. Computing time was provided by the US Department of Defense (DOD) High Performance Computing Modernization Program at the US Army Research Laboratory.

  7. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    Model rule emission limitations for incinerators are expressed in parts per million by dry volume as 3-run averages (1-hour minimum sample time per run) and are demonstrated by performance tests using the methods of appendix A of this part (e.g., Method 6 or 6C) and appendix A-4; for example, oxides of nitrogen are limited to 388 parts per million by dry volume.

  8. 40 CFR Table 2 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators Before [Date to be specified in state...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Model rule emission limitations for incinerators are expressed in parts per million by dry volume as 3-run averages (1-hour minimum sample time per run) and are demonstrated by performance tests using the methods of appendix A of this part (e.g., Method 6 or 6C) and appendix A-4; for example, oxides of nitrogen are limited to 388 parts per million by dry volume.

  9. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yields building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
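
    A minimal sketch of the Bayesian estimation step once a source-receptor relationship is available: a grid of candidate source cells and release rates is scored by a Gaussian likelihood of the sensor readings. The sensitivity matrix, sensor noise, and flat prior are all assumptions; in the paper the source-receptor relationship comes from adjoint equations driven by time-averaged LES flow fields.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sensors, n_cells = 3, 50

        # Hypothetical source-receptor matrix: concentration at sensor i per unit release from cell j
        A = rng.uniform(0.0, 1.0, size=(n_sensors, n_cells))

        true_cell, true_q, sigma = 17, 2.0, 0.05
        c_obs = A[:, true_cell] * true_q + rng.normal(0.0, sigma, n_sensors)

        # Grid over candidate source cells and release strengths; flat prior assumed
        q_grid = np.linspace(0.1, 5.0, 200)
        log_post = np.empty((n_cells, q_grid.size))
        for j in range(n_cells):
            resid = c_obs[:, None] - A[:, j, None] * q_grid[None, :]
            log_post[j] = -0.5 * np.sum(resid ** 2, axis=0) / sigma ** 2  # Gaussian log-likelihood

        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        j_hat, k_hat = np.unravel_index(np.argmax(post), post.shape)
        print("MAP source cell:", j_hat, "MAP release rate:", round(q_grid[k_hat], 2))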

  10. A Comparison of Rule-based Analysis with Regression Methods in Understanding the Risk Factors for Study Withdrawal in a Pediatric Study.

    PubMed

    Haghighi, Mona; Johnson, Suzanne Bennett; Qian, Xiaoning; Lynch, Kristian F; Vehik, Kendra; Huang, Shuai

    2016-08-26

    Regression models are extensively used in many epidemiological studies to understand the linkage between specific outcomes of interest and their risk factors. However, regression models in general examine the average effects of the risk factors and ignore subgroups with different risk profiles. As a result, interventions are often geared towards the average member of the population, without consideration of the special health needs of different subgroups within the population. This paper demonstrates the value of using rule-based analysis methods that can identify subgroups with heterogeneous risk profiles in a population without imposing assumptions on the subgroups or method. The rules define the risk pattern of subsets of individuals by not only considering the interactions between the risk factors but also their ranges. We compared the rule-based analysis results with the results from a logistic regression model in The Environmental Determinants of Diabetes in the Young (TEDDY) study. Both methods detected a similar suite of risk factors, but the rule-based analysis was superior at detecting multiple interactions between the risk factors that characterize the subgroups. A further investigation of the particular characteristics of each subgroup may detect the special health needs of the subgroup and lead to tailored interventions.

  11. Direct Numerical Simulation of Pebble Bed Flows: Database Development and Investigation of Low-Frequency Temporal Instabilities

    DOE PAGES

    Fick, Lambert H.; Merzari, Elia; Hassan, Yassin A.

    2017-02-20

    Computational analyses of fluid flow through packed pebble bed domains using the Reynolds-averaged Navier-Stokes framework have had limited success in the past. Because of a lack of high-fidelity experimental or computational data, optimization of Reynolds-averaged closure models for these geometries has not been extensively developed. In the present study, direct numerical simulation was employed to develop a high-fidelity database that can be used for optimizing Reynolds-averaged closure models for pebble bed flows. A face-centered cubic domain with periodic boundaries was used. Flow was simulated at a Reynolds number of 9308 and cross-verified by using available quasi-DNS data. During the simulations, low-frequency instability modes were observed that affected the stationary solution. Furthermore, these instabilities were investigated by using the method of proper orthogonal decomposition, and a correlation was found between the time-dependent asymmetry of the averaged velocity profile data and the behavior of the highest energy eigenmodes.

  13. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast-enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)^-1; cardiac output = 3, 5, 8 L min^-1). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features, including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. Acquisitions that reduce radiation exposure were implemented by varying both the temporal sampling (1, 2, and 3 s sampling intervals) and the tube current (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11,000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow, by 47.5% on average, while the quantitative models provided estimates with less than 6.5% average bias and variance that increased with increasing dose reduction. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods across the range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor to performing dose reduction via tube current reduction rather than via reduced temporal sampling. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.

  14. Estimating current and future streamflow characteristics at ungaged sites, central and eastern Montana, with application to evaluating effects of climate change on fish populations

    USGS Publications Warehouse

    Sando, Roy; Chase, Katherine J.

    2017-03-23

    A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides a nonparametric alternative for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982-99) conditions and three future periods (water years 2021-38, 2046-63, and 2071-88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and the three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
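
    A minimal sketch of the random-forest step described above using scikit-learn, with invented basin characteristics and one streamflow statistic; the real models were trained on Precipitation-Runoff Modeling System simulations at 179 sites.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        # Hypothetical basin characteristics (e.g., drainage area, elevation, precipitation,
        # forest fraction) and a simulated streamflow statistic at gaged basins
        rng = np.random.default_rng(4)
        X = rng.uniform(size=(179, 4))
        y = 10 * X[:, 0] + 5 * X[:, 2] + rng.normal(scale=1.0, size=179)

        rf = RandomForestRegressor(n_estimators=500, random_state=0)
        cv_rmse = np.sqrt(-cross_val_score(rf, X, y, cv=5, scoring="neg_mean_squared_error"))
        print("cross-validated RMSE:", cv_rmse.mean())

        # Fit on all gaged basins, then predict the statistic at ungaged (e.g., fish sampling) sites
        rf.fit(X, y)
        X_ungaged = rng.uniform(size=(10, 4))
        print(rf.predict(X_ungaged))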

  15. Bayesian block-diagonal variable selection and model averaging

    PubMed Central

    Papaspiliopoulos, O.; Rossell, D.

    2018-01-01

    Summary We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering it most adequate in situations where predictors are organized in many moderately-sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore efficiently the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501

  16. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can offer benefits, while for other problems an Eulerian framework has advantages. Here we propose and test a novel hybrid model that attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.

  17. Moisture Damage Modeling in Lime and Chemically Modified Asphalt at Nanolevel Using Ensemble Computational Intelligence

    PubMed Central

    2018-01-01

    This paper measures the adhesion/cohesion force among asphalt molecules at the nanoscale using Atomic Force Microscopy (AFM) and models the moisture damage by applying state-of-the-art Computational Intelligence (CI) techniques (e.g., an artificial neural network (ANN), support vector regression (SVR), and an Adaptive Neuro Fuzzy Inference System (ANFIS)). Various combinations of lime and chemicals as well as dry and wet environments are used to produce different asphalt samples. The parameters that were varied to generate the samples and measure the corresponding adhesion/cohesion forces are the percentage of antistripping agents (e.g., Lime and Unichem), the AFM tip K values, and the AFM tip types. The CI methods are trained to model the adhesion/cohesion forces given the variation in the values of the above parameters. To achieve enhanced performance, statistical combinations such as the average, weighted average, and regression of the outputs generated by the CI techniques are used. The experimental results show that, of the three individual CI methods, ANN models moisture damage to lime- and chemically modified asphalt better than the other two CI techniques for both wet and dry conditions. Moreover, the ensemble of CI techniques combined with these statistical measures provides better accuracy than any of the individual CI techniques. PMID:29849551
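
    A small sketch of combining two trained regressors by simple average, weighted average, and a regression of their outputs, in the spirit of the ensemble step above; scikit-learn models stand in for the ANN/SVR/ANFIS trio, the data are invented, and the chosen weights are assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.svm import SVR

        # Hypothetical inputs (antistripping agent %, tip K value, tip type code) and forces (nN)
        rng = np.random.default_rng(5)
        X = rng.uniform(size=(60, 3))
        y = 4 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.2, size=60)
        X_tr, X_te, y_tr, y_te = X[:45], X[45:], y[:45], y[45:]

        ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
        svr = SVR(C=10.0).fit(X_tr, y_tr)
        preds = np.column_stack([ann.predict(X_te), svr.predict(X_te)])

        simple_avg = preds.mean(axis=1)                          # simple average ensemble
        weighted_avg = preds @ np.array([0.6, 0.4])              # assumed validation-based weights
        combiner = LinearRegression().fit(                       # regression-based combination
            np.column_stack([ann.predict(X_tr), svr.predict(X_tr)]), y_tr)
        regression_comb = combiner.predict(preds)

        rmse = lambda p: float(np.sqrt(np.mean((y_te - p) ** 2)))
        print(rmse(simple_avg), rmse(weighted_avg), rmse(regression_comb))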

  18. Forecasting Natural Rubber Price in Malaysia Using ARIMA

    NASA Astrophysics Data System (ADS)

    Zahari, Fatin Z.; Khalid, Kamil; Roslan, Rozaini; Sufahani, Suliadi; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Ali, Maselan

    2018-04-01

    The high volatility of the price of natural rubber poses a significant risk to the producers, traders, consumers, and other parties involved in the production of natural rubber. To help them make decisions, forecasting is needed to predict the price of natural rubber. The main objective of this research is to forecast the upcoming price of natural rubber using a reliable statistical method. The data, covering January 2000 to December 2015, were gathered from the Malaysian Rubber Board. In this research, the average monthly price of Standard Malaysian Rubber 20 (SMR20) is forecast using the Box-Jenkins approach. A time series plot is used to determine the pattern of the data. The data show a trend, which indicates that the series is non-stationary and needs to be transformed. Using the Box-Jenkins method, the best-fitting model for the time series is ARIMA (1, 1, 0), which satisfies all the required criteria. Hence, ARIMA (1, 1, 0) is used to forecast the average monthly price of SMR20 for twelve months ahead.
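
    The fitting and forecasting step can be sketched with statsmodels; the series below is synthetic, standing in for the SMR20 monthly prices, and the ARIMA(1, 1, 0) order follows the abstract.

        # Sketch: fit ARIMA(1, 1, 0) to a monthly series and forecast 12 months ahead.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(0)
        idx = pd.date_range("2000-01", periods=192, freq="MS")      # Jan 2000 - Dec 2015
        price = pd.Series(500 + np.cumsum(rng.normal(0, 10, 192)), index=idx)

        fit = ARIMA(price, order=(1, 1, 0)).fit()    # d = 1 handles the trend (non-stationarity)
        print(fit.summary().tables[1])
        print(fit.forecast(steps=12))                # twelve-month-ahead forecast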

  19. A Stochastic Model of Space-Time Variability of Tropical Rainfall: I. Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Global maps of rainfall are of great importance in connection with modeling of the Earth's climate. Comparison of the maps of rainfall predicted by computer-generated climate models with observation provides a sensitive test for these models. To make such a comparison, one typically needs the total precipitation amount over a large area, which could be hundreds of kilometers in size, over extended periods of time of order days or months. This presents a difficult problem since rain varies greatly from place to place as well as in time. Remote sensing methods using ground radar or satellites detect rain over a large area by essentially taking a series of snapshots at infrequent intervals and indirectly deriving the average rain intensity within a collection of pixels, usually several kilometers in size. They measure the area average of rain at a particular instant. Rain gauges, on the other hand, record rain accumulation continuously in time but only over a very small area tens of centimeters across, say, the size of a dinner plate. They measure only a time average at a single location. In making use of either method one needs to fill in the gaps in the observation - either the gaps in the area covered or the gaps in time of observation. This involves using statistical models to obtain information about the rain that is missed from what is actually detected. This paper investigates such a statistical model and validates it with rain data collected over the tropical Western Pacific from shipborne radars during TOGA COARE (Tropical Oceans Global Atmosphere Coupled Ocean-Atmosphere Response Experiment). The model incorporates a number of commonly observed features of rain. While rain varies rapidly with location and time, the variability diminishes when averaged over larger areas or longer periods of time. Moreover, rain is patchy in nature - at any instant on average only a certain fraction of the observed pixels contain rain. The fraction of area covered by rain decreases as the size of a pixel becomes smaller. This means that within what looks like a patch of rainy area in a coarse resolution view with larger pixel size, one finds clusters of rainy and dry patches when viewed on a finer scale. The model makes definite predictions about how these and other related statistics depend on the pixel size. These predictions were found to agree well with data. In a subsequent second part of the work we plan to test the model with rain gauge data collected during the TRMM (Tropical Rainfall Measuring Mission) ground validation campaign.

  20. A comparison of evaluation metrics for biomedical journals, articles, and websites in terms of sensitivity to topic.

    PubMed

    Fu, Lawrence D; Aphinyanaphongs, Yindalon; Wang, Lily; Aliferis, Constantin F

    2011-08-01

    Evaluating the biomedical literature and health-related websites for quality is a challenging information retrieval task. Current commonly used methods include impact factor for journals, PubMed's clinical query filters and machine learning-based filter models for articles, and PageRank for websites. Previous work has focused on the average performance of these methods without considering the topic, and it is unknown how performance varies for specific topics or focused searches. Clinicians, researchers, and users should be aware when expected performance is not achieved for specific topics. The present work analyzes the behavior of these methods for a variety of topics. Impact factor, clinical query filters, and PageRank vary widely across different topics, while a topic-specific impact factor and machine learning-based filter models are more stable. The results demonstrate that a method may perform excellently on average but struggle when used on a number of narrower topics. Topic-adjusted metrics and other topic-robust methods have an advantage in such situations. Users of traditional topic-sensitive metrics should be aware of their limitations.

  1. Nonlinear System Identification for Aeroelastic Systems with Application to Experimental Data

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.

    2008-01-01

    Representation and identification of a nonlinear aeroelastic pitch-plunge system as a model of the Nonlinear AutoRegressive, Moving Average eXogenous (NARMAX) class is considered. A nonlinear difference equation describing this aircraft model is derived theoretically and shown to be of the NARMAX form. Identification methods for NARMAX models are applied to the aeroelastic dynamics, and their properties are demonstrated via continuous-time simulations of experimental conditions. Simulation results show that (1) the outputs of the NARMAX model closely match those generated using continuous-time methods, and (2) NARMAX identification methods applied to aeroelastic dynamics provide accurate discrete-time parameter estimates. Application of NARMAX identification to experimental pitch-plunge dynamics data gives a high percent fit for cross-validated data.
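
    A toy example of the identification idea (not the aeroelastic model itself): a simple polynomial NARX difference equation is simulated and its parameters are recovered by linear least squares on lagged regressors. Structure and coefficients are invented for illustration.

        # Sketch: y[k] = a1*y[k-1] + b1*u[k-1] + c1*y[k-1]**2 + noise
        import numpy as np

        rng = np.random.default_rng(2)
        N = 500
        u = rng.normal(size=N)
        y = np.zeros(N)
        for k in range(1, N):
            y[k] = 0.7 * y[k - 1] + 0.4 * u[k - 1] - 0.05 * y[k - 1] ** 2 + 0.01 * rng.normal()

        Phi = np.column_stack([y[:-1], u[:-1], y[:-1] ** 2])   # regressors at lag 1
        theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
        print("estimated [a1, b1, c1]:", theta)                 # close to [0.7, 0.4, -0.05]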

  2. Evaluation of the quality of the college library websites in Iranian medical Universities based on the Stover model.

    PubMed

    Nasajpour, Mohammad Reza; Ashrafi-Rizi, Hasan; Soleymani, Mohammad Reza; Shahrzadi, Leila; Hassanzadeh, Akbar

    2014-01-01

    Today, the websites of college and university libraries play an important role in providing the necessary services for clients. These websites not only allow the users to access different collections of library resources, but also provide them with the necessary guidance in order to use the information. The goal of this study is the quality evaluation of the college library websites in Iranian Medical Universities based on the Stover model. This study uses an analytical survey method and is an applied study. The data gathering tool is the standard checklist provided by Stover, which was modified by the researchers for this study. The statistical population is the college library websites of the Iranian Medical Universities (146 websites) and census method was used for investigation. The data gathering method was a direct access to each website and filling of the checklist was based on the researchers' observations. Descriptive and analytical statistics (Analysis of Variance (ANOVA)) were used for data analysis with the help of the SPSS software. The findings showed that in the dimension of the quality of contents, the highest average belonged to type one universities (46.2%) and the lowest average belonged to type three universities (24.8%). In the search and research capabilities, the highest average belonged to type one universities (48.2%) and the lowest average belonged to type three universities. In the dimension of facilities provided for the users, type one universities again had the highest average (37.2%), while type three universities had the lowest average (15%). In general the library websites of type one universities had the highest quality (44.2%), while type three universities had the lowest quality (21.1%). Also the library websites of the College of Rehabilitation and the College of Paramedics, of the Shiraz University of Medical Science, had the highest quality scores. The results showed that there was a meaningful difference between the quality of the college library websites and the university types, resulting in college libraries of type one universities having the highest average score and the college libraries of type three universities having the lowest score.

  3. Deep learning architecture for air quality predictions.

    PubMed

    Li, Xiang; Peng, Ling; Hu, Yuan; Shao, Jing; Chi, Tianhe

    2016-11-01

    With the rapid development of urbanization and industrialization, many developing countries are suffering from heavy air pollution. Governments and citizens have expressed increasing concern regarding air pollution because it affects human health and sustainable development worldwide. Current air quality prediction methods mainly use shallow models; however, these methods produce unsatisfactory results, which inspired us to investigate methods of predicting air quality based on deep architecture models. In this paper, a novel spatiotemporal deep learning (STDL)-based air quality prediction method that inherently considers spatial and temporal correlations is proposed. A stacked autoencoder (SAE) model is used to extract inherent air quality features, and it is trained in a greedy layer-wise manner. Compared with traditional time series prediction models, our model can predict the air quality of all stations simultaneously and shows temporal stability across all seasons. Moreover, a comparison with the spatiotemporal artificial neural network (STANN), autoregressive moving average (ARMA), and support vector regression (SVR) models demonstrates the superior performance of the proposed air quality prediction method.

  4. Finger muscle attachments for an OpenSim upper-extremity model.

    PubMed

    Lee, Jong Hwa; Asakawa, Deanna S; Dennerlein, Jack T; Jindrich, Devin L

    2015-01-01

    We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements.

  5. Finger Muscle Attachments for an OpenSim Upper-Extremity Model

    PubMed Central

    Lee, Jong Hwa; Asakawa, Deanna S.; Dennerlein, Jack T.; Jindrich, Devin L.

    2015-01-01

    We determined muscle attachment points for the index, middle, ring and little fingers in an OpenSim upper-extremity model. Attachment points were selected to match both experimentally measured locations and mechanical function (moment arms). Although experimental measurements of finger muscle attachments have been made, models differ from specimens in many respects such as bone segment ratio, joint kinematics and coordinate system. Likewise, moment arms are not available for all intrinsic finger muscles. Therefore, it was necessary to scale and translate muscle attachments from one experimental or model environment to another while preserving mechanical function. We used a two-step process. First, we estimated muscle function by calculating moment arms for all intrinsic and extrinsic muscles using the partial velocity method. Second, optimization using Simulated Annealing and Hooke-Jeeves algorithms found muscle-tendon paths that minimized root mean square (RMS) differences between experimental and modeled moment arms. The partial velocity method resulted in variance accounted for (VAF) between measured and calculated moment arms of 75.5% on average (range from 48.5% to 99.5%) for intrinsic and extrinsic index finger muscles where measured data were available. RMS error between experimental and optimized values was within one standard deviation (S.D) of measured moment arm (mean RMS error = 1.5 mm < measured S.D = 2.5 mm). Validation of both steps of the technique allowed for estimation of muscle attachment points for muscles whose moment arms have not been measured. Differences between modeled and experimentally measured muscle attachments, averaged over all finger joints, were less than 4.9 mm (within 7.1% of the average length of the muscle-tendon paths). The resulting non-proprietary musculoskeletal model of the human fingers could be useful for many applications, including better understanding of complex multi-touch and gestural movements. PMID:25853869

  6. Early Dose Response to Yttrium-90 Microsphere Treatment of Metastatic Liver Cancer by a Patient-Specific Method Using Single Photon Emission Computed Tomography and Positron Emission Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Janice M.; Department of Radiation Oncology, Wayne State University, Detroit, MI; Wong, C. Oliver

    2009-05-01

    Purpose: To evaluate a patient-specific single photon emission computed tomography (SPECT)-based method of dose calculation for treatment planning of yttrium-90 (90Y) microsphere selective internal radiotherapy (SIRT). Methods and Materials: Fourteen consecutive 90Y SIRTs for colorectal liver metastasis were retrospectively analyzed. Absorbed dose to tumor and normal liver tissue was calculated by partition methods with two different tumor/normal liver vascularity ratios: an average 3:1 and a patient-specific ratio derived from pretreatment technetium-99m macroaggregated albumin SPECT. Tumor response was quantitatively evaluated from fluorine-18 fluoro-2-deoxy-D-glucose positron emission tomography scans. Results: Positron emission tomography showed a significant decrease in total tumor standardized uptake value (average, 52%). There was a significant difference in the tumor absorbed dose between the average and specific methods (p = 0.009). Response vs. dose curves fit by linear and linear-quadratic modeling showed similar results. Linear fit r values increased for all tumor response parameters with the specific method (+0.20 for mean standardized uptake value). Conclusion: Tumor dose calculated with the patient-specific method was more predictive of response in liver-directed 90Y SIRT.

  7. A Preliminary Bayesian Analysis of Incomplete Longitudinal Data from a Small Sample: Methodological Advances in an International Comparative Study of Educational Inequality

    ERIC Educational Resources Information Center

    Hsieh, Chueh-An; Maier, Kimberly S.

    2009-01-01

    The capacity of Bayesian methods in estimating complex statistical models is undeniable. Bayesian data analysis is seen as having a range of advantages, such as an intuitive probabilistic interpretation of the parameters of interest, the efficient incorporation of prior information to empirical data analysis, model averaging and model selection.…

  8. Error Reduction Methods for Integrated-path Differential-absorption Lidar Measurements

    NASA Technical Reports Server (NTRS)

    Chen, Jeffrey R.; Numata, Kenji; Wu, Stewart T.

    2012-01-01

    We report new modeling and error reduction methods for differential-absorption optical-depth (DAOD) measurements of atmospheric constituents using direct-detection integrated-path differential-absorption lidars. Errors from laser frequency noise are quantified in terms of the line center fluctuation and spectral line shape of the laser pulses, revealing relationships verified experimentally. A significant DAOD bias is removed by introducing a correction factor. Errors from surface height and reflectance variations can be reduced to tolerable levels by incorporating altimetry knowledge and "log after averaging", or by pointing the laser and receiver to a fixed surface spot during each wavelength cycle to shorten the time of "averaging before log".
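
    The "log after averaging" point can be illustrated numerically: with noisy on/off pulse-energy ratios, averaging the ratios before taking the logarithm gives a smaller bias in the retrieved optical depth than averaging the individual log-ratios. Noise level and optical depth below are invented.

        # Sketch: bias of "averaging before log" versus "log after averaging".
        import numpy as np

        rng = np.random.default_rng(3)
        true_daod = 0.5
        ratio = np.exp(-true_daod) * (1 + 0.1 * rng.normal(size=10000))  # noisy E_on/E_off per pulse

        log_after_avg = -np.log(ratio.mean())      # average the ratios first, then take the log
        avg_before_log = -np.log(ratio).mean()     # log each pulse, then average (biased high)
        print(f"true {true_daod:.4f}  log-after-avg {log_after_avg:.4f}  "
              f"avg-before-log {avg_before_log:.4f}")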

  9. Application of the Hilbert space average method on heat conduction models.

    PubMed

    Michel, Mathias; Gemmer, Jochen; Mahler, Günter

    2006-01-01

    We analyze closed one-dimensional chains of weakly coupled many level systems, by means of the so-called Hilbert space average method (HAM). Subject to some concrete conditions on the Hamiltonian of the system, our theory predicts energy diffusion with respect to a coarse-grained description for almost all initial states. Close to the respective equilibrium, we investigate this behavior in terms of heat transport and derive the heat conduction coefficient. Thus, we are able to show that both heat (energy) diffusive behavior as well as Fourier's law follows from and is compatible with a reversible Schrödinger dynamics on the complete level of description.

  10. Assessment of a virtual functional prototyping process for the rapid manufacture of passive-dynamic ankle-foot orthoses.

    PubMed

    Schrank, Elisa S; Hitch, Lester; Wallace, Kevin; Moore, Richard; Stanhope, Steven J

    2013-10-01

    Passive-dynamic ankle-foot orthosis (PD-AFO) bending stiffness is a key functional characteristic for achieving enhanced gait function. However, current orthosis customization methods inhibit objective premanufacture tuning of the PD-AFO bending stiffness, making optimization of orthosis function challenging. We have developed a novel virtual functional prototyping (VFP) process, which harnesses the strengths of computer aided design (CAD) model parameterization and finite element analysis, to quantitatively tune and predict the functional characteristics of a PD-AFO, which is rapidly manufactured via fused deposition modeling (FDM). The purpose of this study was to assess the VFP process for PD-AFO bending stiffness. A PD-AFO CAD model was customized for a healthy subject and tuned to four bending stiffness values via VFP. Two sets of each tuned model were fabricated via FDM using medical-grade polycarbonate (PC-ISO). Dimensional accuracy of the fabricated orthoses was excellent (average 0.51 ± 0.39 mm). Manufacturing precision ranged from 0.0 to 0.74 Nm/deg (average 0.30 ± 0.36 Nm/deg). Bending stiffness prediction accuracy was within 1 Nm/deg using the manufacturer provided PC-ISO elastic modulus (average 0.48 ± 0.35 Nm/deg). Using an experimentally derived PC-ISO elastic modulus improved the optimized bending stiffness prediction accuracy (average 0.29 ± 0.57 Nm/deg). Robustness of the derived modulus was tested by carrying out the VFP process for a disparate subject, tuning the PD-AFO model to five bending stiffness values. For this disparate subject, bending stiffness prediction accuracy was strong (average 0.20 ± 0.14 Nm/deg). Overall, the VFP process had excellent dimensional accuracy, good manufacturing precision, and strong prediction accuracy with the derived modulus. Implementing VFP as part of our PD-AFO customization and manufacturing framework, which also includes fit customization, provides a novel and powerful method to predictably tune and precisely manufacture orthoses with objectively customized fit and functional characteristics.

  11. Modeling Heavy/Medium-Duty Fuel Consumption Based on Drive Cycle Properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Duran, Adam; Gonder, Jeffrey

    This paper presents multiple methods for predicting heavy/medium-duty vehicle fuel consumption based on driving cycle information. A polynomial model, a black box artificial neural net model, a polynomial neural network model, and a multivariate adaptive regression splines (MARS) model were developed and verified using data collected from chassis testing performed on a parcel delivery diesel truck operating over the Heavy Heavy-Duty Diesel Truck (HHDDT), City Suburban Heavy Vehicle Cycle (CSHVC), New York Composite Cycle (NYCC), and hydraulic hybrid vehicle (HHV) drive cycles. Each model was trained using one of four drive cycles as a training cycle and the other three as testing cycles. By comparing the training and testing results, a representative training cycle was chosen and used to further tune each method. HHDDT as the training cycle gave the best predictive results, because HHDDT contains a variety of drive characteristics, such as high speed, acceleration, idling, and deceleration. Among the four model approaches, MARS gave the best predictive performance, with an average absolute percent error of -1.84% over the four chassis dynamometer drive cycles. To further evaluate the accuracy of the predictive models, the approaches were first applied to real-world data. MARS outperformed the other three approaches, providing an average absolute percent error of -2.2% over four real-world road segments. The MARS model performance over the HHDDT, CSHVC, NYCC, and HHV drive cycles was then compared with the performance of the Future Automotive System Technology Simulator (FASTSim). The results indicated that the MARS method achieved a comparable predictive performance to FASTSim.

  12. Comparative Performance Evaluation of Rainfall-runoff Models, Six of Black-box Type and One of Conceptual Type, From The Galway Flow Forecasting System (GFFS) Package, Applied On Two Irish Catchments

    NASA Astrophysics Data System (ADS)

    Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.

    The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.

  13. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases. In the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease community. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information of the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step then uses back-propagation to tune the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs respectively. The final decision is made based on the Viterbi decoding algorithm that generates the most likely HMM sequence for each sample. A sample is labeled as a cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset that was collected from 22 patients with respiratory diseases. Patient dependent (PD) and patient independent (PI) experimental settings were used to evaluate the models. Five criteria (sensitivity, specificity, F1, macro average, and micro average) are used to depict different aspects of the models. On the overall evaluation criteria, the DNN based methods are superior to the traditional GMM-HMM based method on F1 and micro average, with maximal 14% and 11% error reductions in PD and 7% and 10% in PI, while maintaining similar performance on macro average. They also surpass the GMM-HMM model on specificity with maximal 14% error reduction on both PD and PI. In this paper, we applied pretrained deep neural networks to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the HMM-DNN achieves better overall performance on the cough classification task.

  14. Modal description—A better way of characterizing human vibration behavior

    NASA Astrophysics Data System (ADS)

    Rützel, Sebastian; Hinz, Barbara; Wölfel, Horst Peter

    2006-12-01

    Biodynamic responses to whole body vibrations are usually characterized in terms of transfer functions, such as impedance or apparent mass. Data measurements from subjects are averaged and analyzed with respect to certain attributes (anthropometrics, posture, excitation intensity, etc.). Averaging involves the risk of identifying unnatural vibration characteristics. The use of a modal description as an alternative method is presented and its contribution to biodynamic modelling is discussed. Modal description is not limited to just one biodynamic function: The method holds for all transfer functions. This is shown in terms of the apparent mass and the seat-to-head transfer function. The advantages of modal description are illustrated using apparent mass data of six male individuals of the same mass percentile. From experimental data, modal parameters such as natural frequencies, damping ratios and modal masses are identified which can easily be used to set up a mathematical model. Following the phenomenological approach, this model will provide the global vibration behavior relating to the input data. The modal description could be used for the development of hardware vibration dummies. With respect to software models such as finite element models, the validation process for these models can be supported by the modal approach. Modal parameters of computational models and of the experimental data can establish a basis for comparison.

  15. A non-asymptotic model of dynamics of honeycomb lattice-type plates

    NASA Astrophysics Data System (ADS)

    Cielecka, Iwona; Jędrysiak, Jarosław

    2006-09-01

    Lightweight structures, consisting of special composite material systems like sandwich plates, are often used in aerospace or naval engineering. In composite sandwich plates, the intermediate core is usually made of cellular structures, e.g. honeycomb micro-frames, reinforcing static and dynamic properties of these plates. Here, a new non-asymptotic continuum model of honeycomb lattice-type plates is shown and applied to the analysis of dynamic problems. The general formulation of the model for periodic lattice-type plates of an arbitrary lay-out was presented by Cielecka and Jędrysiak [Journal of Theoretical and Applied Mechanics 40 (2002) 23-46]. This model, partly based on the tolerance averaging method developed for periodic composite solids by Woźniak and Wierzbicki [Averaging techniques in thermomechanics of composite solids, Wydawnictwo Politechniki Częstochowskiej, Częstochowa, 2000], takes into account the effect of the microstructure length size on the dynamic plate behaviour. The method leads to model equations describing this effect for honeycomb lattice-type plates. These equations have a form similar to the equations for isotropic cases. The dynamic analysis of such plates exemplifies this effect, which is significant and cannot be neglected. The physical correctness of the obtained results is also discussed.

  16. Gasification Characteristics and Kinetics of Coke with Chlorine Addition

    NASA Astrophysics Data System (ADS)

    Wang, Cui; Zhang, Jianliang; Jiao, Kexin; Liu, Zhengjian; Chou, Kuochih

    2017-10-01

    The gasification process of metallurgical coke with 0, 1.122, 3.190, and 7.132 wt pct chlorine was investigated by the thermogravimetric method from ambient temperature to 1593 K (1320 °C) in a purified CO2 atmosphere. The variations in the characteristic temperatures (T_i decreased gradually with increasing chlorine content, while T_f and T_max first decreased and then increased but followed an overall downward trend) indicated that the coke gasification process was catalyzed by the chlorine addition. The kinetic model of the chlorine-containing coke gasification was then obtained through determination of the average apparent activation energy, the optimal reaction model, and the pre-exponential factor. The average apparent activation energies were 182.962, 118.525, 139.632, and 111.953 kJ/mol, respectively, which followed the same decreasing trend as the temperature parameters obtained from the thermogravimetric analysis. It was also demonstrated that the coke gasification process was catalyzed by chlorine. The optimal kinetic model to describe the gasification process of chlorine-containing coke was the Šesták-Berggren model determined using Málek's method, and the pre-exponential factors were 6.688 × 10^5, 2.786 × 10^3, 1.782 × 10^4, and 1.324 × 10^3 min^-1, respectively. The predictions of chlorine-containing coke gasification from the Šesták-Berggren model agreed well with the experimental data.

  17. Reduction of the dimension of neural network models in problems of pattern recognition and forecasting

    NASA Astrophysics Data System (ADS)

    Nasertdinova, A. D.; Bochkarev, V. V.

    2017-11-01

    Deep neural networks with a large number of parameters are a powerful tool for solving problems of pattern recognition, prediction and classification. Nevertheless, overfitting remains a serious problem in the use of such networks. A method of solving the problem of overfitting is proposed in this article. This method is based on reducing the number of independent parameters of a neural network model using principal component analysis, and can be implemented using existing libraries of neural computing. The algorithm was tested on the problem of recognition of handwritten symbols from the MNIST database, as well as on the task of predicting time series (series of the average monthly sunspot number and of the Lorenz system were used). It is shown that the application of principal component analysis enables the number of parameters of the neural network model to be reduced while maintaining good results. The average error rate for the recognition of handwritten digits from the MNIST database was 1.12% (comparable to the results obtained using deep learning methods), while the number of parameters of the neural network can be reduced by a factor of up to 130.

  18. A novel false color mapping model-based fusion method of visual and infrared images

    NASA Astrophysics Data System (ADS)

    Qi, Bin; Kun, Gao; Tian, Yue-xin; Zhu, Zhen-yu

    2013-12-01

    A fast and efficient image fusion method is presented to generate near-natural colors from panchromatic visual and thermal imaging sensors. Firstly, a set of daytime color reference images is analyzed and the false color mapping principle is proposed according to human visual and emotional habits. That is, object colors should remain invariant after color mapping operations, differences between infrared and visual images should be enhanced, and the background color should be consistent with the main scene content. A novel nonlinear color mapping model is then given by introducing the geometric mean of the input visual and infrared grey levels together with a weighted average algorithm. To determine the control parameters in the mapping model, the boundary conditions are listed according to the mapping principle above. Fusion experiments show that the new fusion method achieves a near-natural appearance of the fused image, enhances color contrast, and highlights infrared-bright objects compared with the traditional TNO algorithm. Moreover, it has low computational complexity and is easy to implement in real time, making it well suited to nighttime imaging apparatus.
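
    A rough sketch of this kind of mapping, with invented weights and channel assignments rather than the paper's actual model: the geometric mean and a weighted average of the two grey images are computed, and the infrared-minus-visual difference is used to highlight IR-bright objects.

        # Sketch: simple false-color fusion of a visual and an infrared grey image.
        import numpy as np

        rng = np.random.default_rng(5)
        vis = rng.uniform(0, 1, size=(64, 64))       # panchromatic visual image (stand-in)
        ir = rng.uniform(0, 1, size=(64, 64))        # thermal image (stand-in)

        geo = np.sqrt(vis * ir)                      # geometric mean of the two inputs
        wavg = 0.6 * vis + 0.4 * ir                  # weighted average (weights assumed)
        diff = np.clip(ir - vis, 0, 1)               # emphasizes infrared-bright objects

        fused = np.dstack([wavg, geo, 1 - diff])     # hypothetical RGB channel assignment
        print(fused.shape, float(fused.min()), float(fused.max()))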

  19. A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). It is crucial to measure SimMC for many point cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are used as measurements to calculate SimMC. Here, DistMC is defined as a weighted average of the distances between points sampled from the model and the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances between points in the point cloud and the model. To reduce the heavy computational burden of calculating DistCM in some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to traditional SimMC measuring methods that are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
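
    A sketch of the distance part of this measure, using a k-d tree; the weighting and the exact similarity definition in the paper are more elaborate, so the area value and weight function below are placeholders.

        # Sketch: nearest-neighbour distances from model samples to a partial point cloud.
        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(6)
        model_pts = rng.uniform(0, 1, size=(2000, 3))    # points sampled from the model surface
        cloud_pts = model_pts[:1500] + rng.normal(0, 0.01, size=(1500, 3))  # partial, noisy scan

        d, _ = cKDTree(cloud_pts).query(model_pts)       # nearest scan point for each model point
        weights = np.exp(-d / d.mean())                  # down-weight model parts missing from the scan
        dist_mc = np.average(d, weights=weights)         # stand-in for DistMC

        surface_area = 6.0                               # assumed weighted model surface area
        sim_mc = surface_area / dist_mc                  # larger value indicates higher similarity
        print(round(float(dist_mc), 4), round(float(sim_mc), 2))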

  20. Improved Statistical Fault Detection Technique and Application to Biological Phenomena Modeled by S-Systems.

    PubMed

    Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N

    2017-09-01

    In our previous work, we have demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window lengths. However, most real systems are nonlinear, which means that the linear PCA method is not able to tackle the issue of nonlinearity to a great extent. Thus, in this paper, first, we apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model. The KPCA is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this can further improve fault detection performance by reducing the FAR using an exponentially weighted moving average (EWMA). The developed detection method, which is called EWMA-GLRT, provides improved properties, such as smaller missed detection rates, smaller FARs, and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and utilized in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea behind a KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages brought forward by the proposed EWMA-GLRT fault detection chart with the KPCA model. Thus, it is used to enhance fault detection of the Cad System in E. coli model through monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
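
    The exponential-weighting idea alone can be sketched as a standard EWMA chart on model residuals (this is not the full KPCA-based EWMA-GLRT scheme); fault size, smoothing constant, and limits are illustrative.

        # Sketch: EWMA of residuals with an asymptotic control limit.
        import numpy as np

        rng = np.random.default_rng(7)
        residuals = rng.normal(0, 1, 300)
        residuals[200:] += 1.5                     # simulated fault: mean shift after t = 200

        lam, sigma = 0.2, 1.0
        z = np.zeros_like(residuals)
        for t in range(1, residuals.size):
            z[t] = lam * residuals[t] + (1 - lam) * z[t - 1]   # EWMA recursion

        limit = 3 * sigma * np.sqrt(lam / (2 - lam))           # asymptotic 3-sigma limit
        alarms = np.flatnonzero(np.abs(z) > limit)
        print("first alarm at t =", int(alarms[0]) if alarms.size else None)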

  1. Evaluating the compatibility of multi-functional and intensive urban land uses

    NASA Astrophysics Data System (ADS)

    Taleai, M.; Sharifi, A.; Sliuzas, R.; Mesgari, M.

    2007-12-01

    This research is aimed at developing a model for assessing land use compatibility in densely built-up urban areas. In this process, a new model was developed through the combination of a suite of existing methods and tools: geographical information systems, Delphi methods, and spatial decision support tools, namely multi-criteria evaluation analysis, the analytical hierarchy process, and the ordered weighted averaging method. The developed model has the potential to calculate land use compatibility in both horizontal and vertical directions. Furthermore, the compatibility between the use of each floor in a building and its neighboring land uses can be evaluated. The method was tested in a built-up urban area located in Tehran, the capital city of Iran. The results show that the model is robust in clarifying different levels of physical compatibility between neighboring land uses. This paper describes the various steps and processes of developing the proposed land use compatibility evaluation model (CEM).
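
    As a small illustration of the ordered weighted averaging step (with hypothetical scores and weights, not the study's calibrated values): the compatibility scores of a land-use unit with its neighbours are sorted and the OWA weights are applied to the ordered values, here emphasizing the worst cases.

        # Sketch: ordered weighted averaging (OWA) of pairwise compatibility scores.
        import numpy as np

        scores = np.array([0.9, 0.4, 0.7, 0.6])         # compatibility with four neighbouring uses
        owa_weights = np.array([0.1, 0.2, 0.3, 0.4])    # larger weights on the smallest scores

        ordered = np.sort(scores)[::-1]                 # descending: weight index 3 hits the worst score
        owa_score = float(np.dot(owa_weights, ordered))
        print("OWA compatibility:", round(owa_score, 3))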

  2. Database and new models based on a group contribution method to predict the refractive index of ionic liquids.

    PubMed

    Wang, Xinxin; Lu, Xingmei; Zhou, Qing; Zhao, Yongsheng; Li, Xiaoqian; Zhang, Suojiang

    2017-08-02

    Refractive index is an important physical property that is widely used in separation and purification. In this study, refractive index data for ILs were collected to establish a comprehensive database, which included about 2138 data points from 1996 to 2014. The Group Contribution-Artificial Neural Network (GC-ANN) model and the Group Contribution (GC) method were employed to predict the refractive index of ILs at different temperatures from 283.15 K to 368.15 K. Average absolute relative deviations (AARD) of the GC-ANN model and the GC method were 0.179% and 0.628%, respectively. The results showed that the GC-ANN model provided an effective way to estimate the refractive index of ILs, whereas the GC method was simpler and more general. In summary, both models are accurate and efficient approaches for estimating refractive indices of ILs.

  3. Aerosol Measurements in the Mid-Atlantic: Trends and Uncertainty

    NASA Astrophysics Data System (ADS)

    Hains, J. C.; Chen, L. A.; Taubman, B. F.; Dickerson, R. R.

    2006-05-01

    Elevated levels of PM2.5 are associated with cardiovascular and respiratory problems and even increased mortality rates. In 2002 we ran two commonly used PM2.5 speciation samplers (an IMPROVE sampler and an EPA sampler) in parallel at Fort Meade, Maryland (a suburban site located in the Baltimore- Washington urban corridor). The filters were analyzed at different labs. This experiment allowed us to calculate the 'real world' uncertainties associated with these instruments. The EPA method retrieved a January average PM2.5 mass of 9.3 μg/m3 with a standard deviation of 2.8 μg/m3, while the IMPROVE method retrieved an average mass of 7.3 μg/m3 with a standard deviation of 2.1 μg/m3. The EPA method retrieved a July average PM2.5 mass of 26.4 μg/m3 with a standard deviation of 14.6 μg/m3, while the IMPROVE method retrieved an average mass of 23.3 μg/m3 with a standard deviation of 13.0 μg/m3. We calculated a 5% uncertainty associated with the EPA and IMPROVE methods that accounts for uncertainties in flow control strategies and laboratory analysis. The RMS difference between the two methods in January was 2.1 μg/m3, which is about 25% of the monthly average mass and greater than the uncertainty we calculated. In July the RMS difference between the two methods was 5.2 μg/m3, about 20% of the monthly average mass, and greater than the uncertainty we calculated. The EPA methods retrieve consistently higher concentrations of PM2.5 than the IMPROVE methods on a daily basis in January and July. This suggests a systematic bias possibly resulting from contamination of either of the sampling methods. We reconstructed the mass and found that both samplers have good correlation between reconstructed and gravimetric mass, though the IMPROVE method has slightly better correlation than the EPA method. In January, organic carbon is the largest contributor to PM2.5 mass, and in July both sulfate and organic matter contribute substantially to PM2.5. Source apportionment models suggest that regional and local power plants are the major sources of sulfate, while mobile and vegetative burning factors are the major sources of organic carbon.

  4. Combining remotely sensed and other measurements for hydrologic areal averages

    NASA Technical Reports Server (NTRS)

    Johnson, E. R.; Peck, E. L.; Keefer, T. N.

    1982-01-01

    A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.

  5. Ice-sheet contributions to future sea-level change.

    PubMed

    Gregory, J M; Huybrechts, P

    2006-07-15

    Accurate simulation of ice-sheet surface mass balance requires higher spatial resolution than is afforded by typical atmosphere-ocean general circulation models (AOGCMs), owing, in particular, to the need to resolve the narrow and steep margins where the majority of precipitation and ablation occurs. We have developed a method for calculating mass-balance changes by combining ice-sheet average time-series from AOGCM projections for future centuries, both with information from high-resolution climate models run for short periods and with a 20 km ice-sheet mass-balance model. Antarctica contributes negatively to sea level on account of increased accumulation, while Greenland contributes positively because ablation increases more rapidly. The uncertainty in the results is about 20% for Antarctica and 35% for Greenland. Changes in ice-sheet topography and dynamics are not included, but we discuss their possible effects. For an annual- and area-average warming exceeding 4.5 ± 0.9 K in Greenland and 3.1 ± 0.8 K in the global average, the net surface mass balance of the Greenland ice sheet becomes negative, in which case it is likely that the ice sheet would eventually be eliminated, raising global-average sea level by 7 m.

  6. Numerical Investigation of a Model Scramjet Combustor Using DDES

    NASA Astrophysics Data System (ADS)

    Shin, Junsu; Sung, Hong-Gye

    2017-04-01

    Non-reactive flows moving through a model scramjet were investigated using a delayed detached eddy simulation (DDES), which is a hybrid scheme combining a Reynolds-averaged Navier-Stokes scheme and a large eddy simulation. The three-dimensional Navier-Stokes equations were solved numerically on a structured grid using finite volume methods. An in-house code was developed. This code used a monotonic upstream-centered scheme for conservation laws (MUSCL) with an advection upstream splitting method by pressure weight function (AUSMPW+) for spatial discretization. In addition, a 4th-order Runge-Kutta scheme with preconditioning was used for time integration. The geometries and boundary conditions of a scramjet combustor operated by DLR, the German aerospace center, were considered. The profiles of the lower wall pressure and axial velocity obtained from a time-averaged solution were compared with experimental results. The mixing efficiency and total pressure recovery factor were also provided in order to inspect the performance of the combustor.

  7. On the use of the generalized SPRT method in the equivalent hard sphere approximation for nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Noguere, Gilles; Archier, Pascal; Bouland, Olivier; Capote, Roberto; Jean, Cyrille De Saint; Kopecky, Stefan; Schillebeeckx, Peter; Sirakov, Ivan; Tamagno, Pierre

    2017-09-01

    A consistent description of the neutron cross sections from thermal energy up to the MeV region is challenging. One of the first steps consists in optimizing the optical model parameters using average resonance parameters, such as the neutron strength functions. They can be derived from a statistical analysis of the resolved resonance parameters, or calculated with the generalized form of the SPRT method by using scattering matrix elements provided by optical model calculations. One of the difficulties is to establish the contributions of the direct and compound nucleus reactions. This problem was solved by using a slightly modified average R-Matrix formula with an equivalent hard sphere radius deduced from the phase shift originating from the potential. The performances of the proposed formalism are illustrated with results obtained for the 238U+n nuclear systems.

  8. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....

  9. Mixed Estimation for a Forest Survey Sample Design

    Treesearch

    Francis A. Roesch

    1999-01-01

    Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...

  10. Fast method for reactor and feature scale coupling in ALD and CVD

    DOEpatents

    Yanguas-Gil, Angel; Elam, Jeffrey W.

    2017-08-08

    Transport and surface chemistry of certain deposition techniques are modeled. Methods provide a model of the transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). Methods provide for determination of statistical information about the trajectory of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, as well as for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
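
    A toy version of the single-particle picture (parameters invented, geometry reduced to a 1-D chain of segments): each wall collision either consumes the particle with a small reaction probability or moves it to a neighbouring segment, and trajectory statistics are averaged over many particles.

        # Sketch: single-particle discrete Markov chain transport in a feature.
        import numpy as np

        rng = np.random.default_rng(8)
        n_segments, beta, n_particles = 50, 0.01, 5000   # feature depth, sticking prob., particles
        collisions = []
        for _ in range(n_particles):
            pos, hits = 0, 0
            while 0 <= pos < n_segments:                 # particle still inside the feature
                hits += 1
                if rng.random() < beta:                  # reacted on the wall: trajectory ends
                    break
                pos += rng.choice((-1, 1))               # diffusive hop to a neighbouring segment
            collisions.append(hits)
        print("average wall collisions per particle:", np.mean(collisions))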

  11. Impacts of river segmentation strategies on reach-averaged product uncertainties for the upcoming Surface Water and Ocean Topography (SWOT) mission

    NASA Astrophysics Data System (ADS)

    Frasson, R. P. M.; Wei, R.; Minear, J. T.; Tuozzolo, S.; Domeneghetti, A.; Durand, M. T.

    2016-12-01

    Averaging is a powerful method to reduce measurement noise associated with remote sensing observation of water surfaces. However, when dealing with river measurements, the choice of which points are averaged may affect the quality of the products. We examine the effectiveness of three fully automated reach definition strategies: In the first, we break up reaches at regular intervals measured along the rivers' centerlines. The second strategy consists of identifying hydraulic controls by searching for inflection points on water surface profiles. The third strategy takes into consideration river planform features, breaking up reaches according to channel sinuosity. We employed the Jet Propulsion Laboratory's (JPL) SWOT hydrology simulator to generate 9 synthetic SWOT observations of the Sacramento River in California, USA and 14 overpasses of the Po River in northern Italy. In order to create the synthetic SWOT data, the simulator requires the true water digital elevation model (DEM), which we constructed from hydraulic models of both rivers, and the terrain DEM, which we built from LiDAR data of both basins. We processed the simulated pixel clouds using the JPL's RiverObs package, which traces the river centerline and estimates water surface height and river width on equally spaced nodes located along the centerline. Subsequently, we applied the three reach definition methodologies to the nodes and to the hydraulic models' outputs to generate simulated reach-averaged observations and the reach-averaged truth, respectively. Our results generally indicate that height, width, slope, and discharge errors decrease with increasing reach length, with most of the accuracy gains occurring when reach length increases to up to 15 km for both the narrow (Sacramento) and the wide (Po) rivers. The "smart" methods led to smaller slope, width, and discharge errors for the Sacramento River when compared to arbitrary reaches of similar length, whereas for the Po River all methods had comparable performance. Our results suggest that river segmentation strategies that take into consideration the hydraulic characteristics of rivers may lead to more meaningful reach boundaries and to better products, especially for narrower and more complex rivers.

  12. Preliminary Computational Fluid Dynamics (CFD) Simulation of EIIB Push Barge in Shallow Water

    NASA Astrophysics Data System (ADS)

    Beneš, Petr; Kollárik, Róbert

    2011-12-01

    This study presents a preliminary CFD simulation of the EIIb push barge in inland conditions using the CFD software Ansys Fluent. RANSE (Reynolds Averaged Navier-Stokes Equation) methods are used for the viscous solution of the turbulent flow around the ship hull. Different RANSE methods are compared in ship resistance calculations in order to select appropriate methods and discard inappropriate ones. This study further describes the creation of a geometrical model that considers the exact water depth to vessel draft ratio in shallow water conditions, grid generation, the setting of the mathematical model in Fluent, and the evaluation of the simulation results.

  13. The effect of Reynolds number and turbulence on airfoil aerodynamics at -90 degrees incidence

    NASA Technical Reports Server (NTRS)

    Stremel, Paul M.

    1993-01-01

    A method has been developed for calculating the viscous flow about airfoils with and without deflected flaps at -90 deg incidence. This method provides for the solution of the unsteady incompressible Navier-Stokes equations by means of an implicit technique. The solution is calculated on a body-fitted computational mesh using a staggered grid method. The vorticity is defined at the node points, and the velocity components are defined at the mesh-cell sides. The staggered-grid orientation provides for accurate representation of vorticity at the node points and the continuity equation at the mesh-cell centers. The method provides for the direct solution of the flow field and satisfies the continuity equation to machine zero at each time-step. The method is evaluated in terms of its ability to predict two-dimensional flow about an airfoil at -90 degrees incidence for varying Reynolds number and different boundary layer models. A laminar and a turbulent boundary layer model are considered in the evaluation of the method. The variation of the average loading and surface pressure distribution due to flap deflection, Reynolds number, and laminar or turbulent flow are presented and compared with experimental results. The comparisons indicate that the calculated drag and drag reduction caused by flap deflection and the calculated average surface pressure are in excellent agreement with the measured results at a similar Reynolds number.

  14. Modeling for stress-strain curve of a porous NiTi under compressive loading

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Taya, Minoru

    2005-05-01

    Two models for predicting the stress-strain curve of porous NiTi under compressive loading are presented in this paper. Porous NiTi shape memory alloy is investigated as a composite composed of solid NiTi as the matrix and pores as inclusions. Eshelby's equivalent inclusion method and Mori-Tanaka's mean-field theory are employed in both models. In the first model, the geometry of the pores is assumed to be spherical and the composite has closed cells. In the second model, two pore geometries, spherical and ellipsoidal, are investigated, and the pores are interconnected to each other, forming an open-cell microstructure. Two adjacent pores connected along an equatorial ring are investigated as a unit, and the two pores interact with each other because they are connected. The average eigenstrain of each unit is obtained by taking the average of each pore's eigenstrain. The stress-strain curves of porous shape memory alloy with spherical pores and ellipsoidal pores are compared, and it is found that the shape of the pores has a non-negligible influence on the mechanical properties of the porous NiTi. Comparison of the stress-strain curves of the two models shows that introducing the average eigenstrains in the second model brings the predictions into better agreement with the experimental results.

  15. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-09-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, we developed a new method for simulating stand-replacing disturbances that is both accurate and faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model: it derives the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the times corresponding to the patch ages. To account for temporal changes in model forcing (e.g., as a result of climate change), GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the vegetation model LPJ-GUESS and evaluated it in a series of simulations along an altitudinal transect of an inner-Alpine valley. We obtained results very similar to the output of the original LPJ-GUESS model that uses 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited for rapidly approximating LPJ-GUESS results; it opens the way for future studies over large spatial domains and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and the results of other forest models.
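
    The core of GAPPARD's postprocessing step, weighting the output of an undisturbed deterministic run by the probability that a patch has a given age, can be sketched as follows. This is a minimal illustration rather than the authors' code: the geometric age distribution used here stands in for the analytically derived distribution in the paper, and the saturating biomass curve is a hypothetical stand-in for LPJ-GUESS output.

```python
import numpy as np

def gappard_expectation(undisturbed_output, disturbance_prob):
    """Approximate the landscape-average value of a model output variable.

    undisturbed_output : 1-D array of the deterministic run's output,
                         indexed by patch age in years (age 1, 2, ...).
    disturbance_prob   : annual probability of a stand-replacing disturbance.

    Patch ages are assumed to follow a geometric distribution: a patch has
    age a if the last disturbance occurred exactly a years ago.
    """
    ages = np.arange(1, len(undisturbed_output) + 1)
    p_age = disturbance_prob * (1.0 - disturbance_prob) ** (ages - 1)
    p_age /= p_age.sum()               # renormalize over the finite age window
    return np.sum(p_age * undisturbed_output)

# Example: biomass of an undisturbed stand saturating with age, averaged over
# patches disturbed with probability 1/100 per year (illustrative numbers).
ages = np.arange(1, 301)
biomass = 200.0 * (1.0 - np.exp(-ages / 60.0))   # hypothetical saturation curve
print(round(gappard_expectation(biomass, 0.01), 1))
```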

  16. Regional patterns of future runoff changes from Earth system models constrained by observation

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Zhou, Feng; Piao, Shilong; Huang, Mengtian; Chen, Anping; Ciais, Philippe; Li, Yue; Lian, Xu; Peng, Shushi; Zeng, Zhenzhong

    2017-06-01

    In the recent Intergovernmental Panel on Climate Change assessment, multimodel ensembles (arithmetic model averaging, AMA) were constructed with equal weights given to Earth system models, without considering the performance of each model at reproducing current conditions. Here we use Bayesian model averaging (BMA) to construct a weighted model ensemble for runoff projections. Higher weights are given to models with better performance in estimating historical decadal mean runoff. Using the BMA method, we find that by the end of this century, the increase of global runoff (9.8 ± 1.5%) under Representative Concentration Pathway 8.5 is significantly lower than estimated from AMA (12.2 ± 1.3%). BMA presents a less severe runoff increase than AMA at northern high latitudes and a more severe decrease in Amazonia. The runoff decrease in Amazonia is stronger than the intermodel spread. The intermodel spread in runoff changes is caused not only by precipitation differences among models but also by evapotranspiration differences at the high northern latitudes.
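
    A minimal sketch of the weighting idea, assuming Gaussian errors against the historical decadal-mean runoff; the model values, the error scale sigma, and the projected changes below are illustrative and are not taken from the study.

```python
import numpy as np

def bma_weights(historical_sim, observed, sigma):
    """Weight each model by its Gaussian likelihood of reproducing the
    observed historical decadal-mean runoff (a simple stand-in for a full
    Bayesian model averaging calibration)."""
    sims = np.asarray(historical_sim, dtype=float)
    log_lik = -0.5 * ((sims - observed) / sigma) ** 2
    w = np.exp(log_lik - log_lik.max())      # subtract max for numerical stability
    return w / w.sum()

# Hypothetical decadal-mean runoff (mm/yr) from five models vs. an observation
sims = [290.0, 305.0, 312.0, 270.0, 301.0]
obs = 300.0
w = bma_weights(sims, obs, sigma=10.0)

projections = np.array([9.0, 11.5, 12.8, 7.2, 10.4])   # illustrative % change by 2100
print("AMA:", round(projections.mean(), 2), "BMA:", round(float(np.sum(w * projections)), 2))
```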

  17. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    PubMed

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  18. ARMA Cholesky Factor Models for the Covariance Matrix of Linear Models.

    PubMed

    Lee, Keunbaik; Baek, Changryong; Daniels, Michael J

    2017-11-01

    In longitudinal studies, serial dependence of repeated outcomes must be taken into account to make correct inferences on covariate effects. As such, care must be taken in modeling the covariance matrix. However, estimation of the covariance matrix is challenging because there are many parameters in the matrix and the estimated covariance matrix should be positive definite. To overcome these limitations, two Cholesky decomposition approaches have been proposed: the modified Cholesky decomposition for autoregressive (AR) structure and the moving average Cholesky decomposition for moving average (MA) structure. However, the correlations of repeated outcomes are often not captured parsimoniously using either approach separately. In this paper, we propose a class of flexible, nonstationary, heteroscedastic models that exploits the structure allowed by combining the AR and MA modeling of the covariance matrix, which we denote ARMACD. We analyze a recent lung cancer study to illustrate the power of our proposed methods.
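
    The modified Cholesky decomposition underlying the AR part of such models can be computed directly from a covariance matrix; the sketch below (not the authors' estimation code) recovers generalized autoregressive parameters and innovation variances from an illustrative AR(1)-type covariance.

```python
import numpy as np

def modified_cholesky(cov):
    """Return (T, D) with T unit lower triangular and T @ cov @ T.T = D (diagonal).

    The negatives of the below-diagonal entries of T are the generalized
    autoregressive parameters; the diagonal of D holds the innovation variances.
    """
    L = np.linalg.cholesky(cov)                    # cov = L @ L.T, L lower triangular
    T = np.diag(np.diag(L)) @ np.linalg.inv(L)     # unit lower triangular by construction
    D = T @ cov @ T.T
    return T, np.diag(np.diag(D))                  # drop round-off off-diagonals

# Illustrative AR(1)-type covariance for 4 repeated outcomes
rho, sigma2 = 0.6, 2.0
idx = np.arange(4)
cov = sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])
T, D = modified_cholesky(cov)
print(np.round(-np.tril(T, -1), 3))    # generalized autoregressive parameters
print(np.round(np.diag(D), 3))         # innovation variances
```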

  19. Method of model reduction and multifidelity models for solute transport in random layered porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    This work presents a hierarchical model for solute transport in bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the hierarchical model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. We observe that velocity fluctuations enhance dispersion in a non-monotonic fashion: the dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity. Maximum enhancement is obtained at a correlation length of about 0.25 times the size of the porous medium perpendicular to flow.

  20. Calculation of induced current densities for humans by magnetic fields from electronic article surveillance devices

    NASA Astrophysics Data System (ADS)

    Gandhi, Om P.; Kang, Gang

    2001-11-01

    This paper illustrates the use of the impedance method to calculate the electric fields and current densities induced in millimetre resolution anatomic models of the human body, namely an adult and 10- and 5-year-old children, for exposure to nonuniform magnetic fields typical of two assumed but representative electronic article surveillance (EAS) devices at 1 and 30 kHz, respectively. The devices assumed for the calculations are a solenoid type magnetic deactivator used at store checkouts and a pass-by panel-type EAS system consisting of two overlapping rectangular current-carrying coils used at entry and exit from a store. The impedance method code is modified to obtain induced current densities averaged over a cross section of 1 cm2 perpendicular to the direction of induced currents. This is done to compare the peak current densities with the limits or the basic restrictions given in the ICNIRP safety guidelines. Because of the stronger magnetic fields at lower heights for both the assumed devices, the peak 1 cm2 area-averaged current densities for the CNS tissues such as the brain and the spinal cord are increasingly larger for smaller models and are the highest for the model of the 5-year-old child. For both the EAS devices, the maximum 1 cm2 area-averaged current densities for the brain of the model of the adult are lower than the ICNIRP safety guideline, but may approach or exceed the ICNIRP basic restrictions for models of 10- and 5-year-old children if sufficiently strong magnetic fields are used.

  1. Calculation of induced current densities for humans by magnetic fields from electronic article surveillance devices.

    PubMed

    Gandhi, O P; Kang, G

    2001-11-01

    This paper illustrates the use of the impedance method to calculate the electric fields and current densities induced in millimetre resolution anatomic models of the human body, namely an adult and 10- and 5-year-old children, for exposure to nonuniform magnetic fields typical of two assumed but representative electronic article surveillance (EAS) devices at 1 and 30 kHz, respectively. The devices assumed for the calculations are a solenoid type magnetic deactivator used at store checkouts and a pass-by panel-type EAS system consisting of two overlapping rectangular current-carrying coils used at entry and exit from a store. The impedance method code is modified to obtain induced current densities averaged over a cross section of 1 cm2 perpendicular to the direction of induced currents. This is done to compare the peak current densities with the limits or the basic restrictions given in the ICNIRP safety guidelines. Because of the stronger magnetic fields at lower heights for both the assumed devices, the peak 1 cm2 area-averaged current densities for the CNS tissues such as the brain and the spinal cord are increasingly larger for smaller models and are the highest for the model of the 5-year-old child. For both the EAS devices, the maximum 1 cm2 area-averaged current densities for the brain of the model of the adult are lower than the ICNIRP safety guideline, but may approach or exceed the ICNIRP basic restrictions for models of 10- and 5-year-old children if sufficiently strong magnetic fields are used.

  2. Patient-specific model of a scoliotic torso for surgical planning

    NASA Astrophysics Data System (ADS)

    Harmouche, Rola; Cheriet, Farida; Labelle, Hubert; Dansereau, Jean

    2013-03-01

    A method for the construction of a patient-specific model of a scoliotic torso for surgical planning via inter-patient registration is presented. Magnetic Resonance Images (MRI) of a generic model are registered to surface topography (TP) and X-ray data of a test patient. A partial model is first obtained via thin-plate spline registration between TP and X-ray data of the test patient. The MRIs from the generic model are then fit into the test patient using articulated model registration between the vertebrae of the generic model's MRIs in prone position and the test patient's X-rays in standing position. A non-rigid deformation of the soft tissues is performed using a modified thin-plate spline constrained to maintain bone rigidity and to fit in the space between the vertebrae and the surface of the torso. Results show average Dice values of 0.975 ± 0.012 between the MRIs following inter-patient registration and the surface topography of the test patient, which is comparable to the average value of 0.976 ± 0.009 previously obtained following intra-patient registration. The results also show a significant improvement compared to rigid inter-patient registration. Future work includes validating the method on a larger cohort of patients and incorporating soft tissue stiffness constraints. The method developed can be used to obtain a geometric model of a patient including bone structures, soft tissues and the surface of the torso which can be incorporated in a surgical simulator in order to better predict the outcome of scoliosis surgery, even if MRI data cannot be acquired for the patient.

  3. Automatic abdominal multi-organ segmentation using deep convolutional neural network and time-implicit level sets.

    PubMed

    Hu, Peijun; Wu, Fa; Peng, Jialin; Bao, Yuanyuan; Chen, Feng; Kong, Dexing

    2017-03-01

    Multi-organ segmentation from CT images is an essential step for computer-aided diagnosis and surgery planning. However, manual delineation of the organs by radiologists is tedious, time-consuming and poorly reproducible. Therefore, we propose a fully automatic method for the segmentation of multiple organs from three-dimensional abdominal CT images. The proposed method employs deep fully convolutional neural networks (CNNs) for organ detection and segmentation, which is further refined by a time-implicit multi-phase evolution method. Firstly, a 3D CNN is trained to automatically localize and delineate the organs of interest with a probability prediction map. The learned probability map provides both subject-specific spatial priors and initialization for subsequent fine segmentation. Then, for the refinement of the multi-organ segmentation, image intensity models, probability priors as well as a disjoint region constraint are incorporated into a unified energy functional. Finally, a novel time-implicit multi-phase level-set algorithm is utilized to efficiently optimize the proposed energy functional model. Our method has been evaluated on 140 abdominal CT scans for the segmentation of four organs (liver, spleen and both kidneys). With respect to the ground truth, average Dice overlap ratios for the liver, spleen and both kidneys are 96.0, 94.2 and 95.4%, respectively, and the average symmetric surface distance is less than 1.3 mm for all the segmented organs. The average computation time for a CT volume is 125 s. The achieved accuracy compares well to state-of-the-art methods with much higher efficiency. A fully automatic method for multi-organ segmentation from abdominal CT images was developed and evaluated. The results demonstrated its potential in clinical usage with high effectiveness, robustness and efficiency.

  4. Fuzzy model approach for estimating time of hospitalization due to cardiovascular diseases.

    PubMed

    Coutinho, Karine Mayara Vieira; Rizol, Paloma Maria Silva Rocha; Nascimento, Luiz Fernando Costa; de Medeiros, Andréa Paula Peneluppi

    2015-08-01

    A fuzzy linguistic model based on the Mamdani method was built to predict the average hospitalization time due to cardiovascular diseases related to exposure to air pollutants in São José dos Campos, in the State of São Paulo, in 2009. The input variables, particulate matter, sulfur dioxide, temperature and wind, were obtained from CETESB, each with two membership functions. The output variable is the average length of hospitalization, obtained from DATASUS, with six membership functions. The average time given by the model was compared to actual data using lags of 0 to 4 days. The model was built using the Matlab v. 7.5 fuzzy toolbox, and its accuracy was assessed with the ROC curve. Hospitalizations with a mean stay of 7.9 days (SD = 4.9) were recorded in 1119 cases. The model output showed a significant correlation with the actual data for lags of 0 to 4 days. The pollutant that showed the greatest accuracy was sulfur dioxide. This model can be used as the basis of a specialized system to assist the city health authority in assessing the risk of hospitalization due to air pollutants.
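
    The mechanics of a Mamdani system, membership evaluation, min for rule firing, clipping of consequents, max aggregation, and centroid defuzzification, can be sketched with plain NumPy. The membership functions, breakpoints and rules below are hypothetical and much simpler than the model described above (which uses four inputs with two membership functions each and six output functions in the Matlab fuzzy toolbox).

```python
import numpy as np

def ramp_down(x, a, b):
    """Membership = 1 below a, 0 above b, linear in between."""
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def ramp_up(x, a, b):
    """Membership = 0 below a, 1 above b, linear in between."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def trimf(y, a, b, c):
    """Triangular membership function evaluated over an array y."""
    return np.maximum(np.minimum((y - a) / (b - a), (c - y) / (c - b)), 0.0)

def mamdani_mean_stay(pm25, so2):
    """Toy Mamdani inference for mean hospitalization time (days).
    Membership functions and rules are hypothetical, for illustration only."""
    y = np.linspace(0, 20, 401)                       # output universe (days)
    short, medium, long_ = trimf(y, 2, 6, 10), trimf(y, 6, 10, 14), trimf(y, 10, 14, 18)

    pm_low, pm_high = ramp_down(pm25, 20, 60), ramp_up(pm25, 20, 60)
    so2_low, so2_high = ramp_down(so2, 8, 25), ramp_up(so2, 8, 25)

    # rules: AND = min, implication clips the consequent, aggregation = max
    agg = np.maximum.reduce([
        np.minimum(min(pm_low,  so2_low),  short),
        np.minimum(min(pm_low,  so2_high), medium),
        np.minimum(min(pm_high, so2_low),  medium),
        np.minimum(min(pm_high, so2_high), long_),
    ])
    return np.sum(y * agg) / (np.sum(agg) + 1e-12)    # centroid defuzzification

print(round(mamdani_mean_stay(pm25=45.0, so2=18.0), 1))
```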

  5. Downscaled and debiased climate simulations for North America from 21,000 years ago to 2100AD

    PubMed Central

    Lorenz, David J.; Nieto-Lugilde, Diego; Blois, Jessica L.; Fitzpatrick, Matthew C.; Williams, John W.

    2016-01-01

    Increasingly, ecological modellers are integrating paleodata with future projections to understand climate-driven biodiversity dynamics from the past through the current century. Climate simulations from earth system models are necessary to this effort, but must be debiased and downscaled before they can be used by ecological models. Downscaling methods and observational baselines vary among researchers, which produces confounding biases among downscaled climate simulations. We present unified datasets of debiased and downscaled climate simulations for North America from 21 ka BP to 2100AD, at 0.5° spatial resolution. Temporal resolution is decadal averages of monthly data until 1950AD, average climates for 1950–2005 AD, and monthly data from 2010 to 2100AD, with decadal averages also provided. This downscaling includes two transient paleoclimatic simulations and 12 climate models for the IPCC AR5 (CMIP5) historical (1850–2005), RCP4.5, and RCP8.5 21st-century scenarios. Climate variables include primary variables and derived bioclimatic variables. These datasets provide a common set of climate simulations suitable for seamlessly modelling the effects of past and future climate change on species distributions and diversity. PMID:27377537

  6. Downscaled and debiased climate simulations for North America from 21,000 years ago to 2100AD.

    PubMed

    Lorenz, David J; Nieto-Lugilde, Diego; Blois, Jessica L; Fitzpatrick, Matthew C; Williams, John W

    2016-07-05

    Increasingly, ecological modellers are integrating paleodata with future projections to understand climate-driven biodiversity dynamics from the past through the current century. Climate simulations from earth system models are necessary to this effort, but must be debiased and downscaled before they can be used by ecological models. Downscaling methods and observational baselines vary among researchers, which produces confounding biases among downscaled climate simulations. We present unified datasets of debiased and downscaled climate simulations for North America from 21 ka BP to 2100AD, at 0.5° spatial resolution. Temporal resolution is decadal averages of monthly data until 1950AD, average climates for 1950-2005 AD, and monthly data from 2010 to 2100AD, with decadal averages also provided. This downscaling includes two transient paleoclimatic simulations and 12 climate models for the IPCC AR5 (CMIP5) historical (1850-2005), RCP4.5, and RCP8.5 21st-century scenarios. Climate variables include primary variables and derived bioclimatic variables. These datasets provide a common set of climate simulations suitable for seamlessly modelling the effects of past and future climate change on species distributions and diversity.

  7. A Novel Application of Machine Learning Methods to Model Microcontroller Upset Due to Intentional Electromagnetic Interference

    NASA Astrophysics Data System (ADS)

    Bilalic, Rusmir

    A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock low, clock rising edge, clock high, and clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection times with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average, the machine was able to predict MCU upset with an 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during the test with a 10.80% error.

  8. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China

    PubMed Central

    Liu, Dong-jun; Li, Li

    2015-01-01

    For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviations of each single prediction method and had better applicability, offering a new prediction method for the air quality forecasting field. PMID:26110332
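
    A minimal sketch of the entropy weighting step assumed to underlie the combination: weights are derived from the entropy of each method's (illustrative) historical accuracy series and then applied to the three forecasts; the exact evaluation matrix used in the paper may differ.

```python
import numpy as np

def entropy_weights(performance):
    """Entropy weight method.

    performance : (n_samples, n_methods) array of positive scores, e.g.
                  per-day forecast accuracies of ARIMA, ANN and ESM.
    Returns one weight per method, summing to 1.
    """
    X = np.asarray(performance, dtype=float)
    P = X / X.sum(axis=0, keepdims=True)                 # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)                   # entropy of each method
    d = 1.0 - e                                          # degree of diversification
    return d / d.sum()

# Hypothetical daily accuracy scores (higher = better) for ARIMA, ANN, ESM
scores = np.array([[0.82, 0.78, 0.74],
                   [0.85, 0.74, 0.70],
                   [0.80, 0.83, 0.69],
                   [0.88, 0.79, 0.73]])
w = entropy_weights(scores)

forecasts = np.array([58.0, 62.0, 55.0])   # illustrative next-day PM2.5 forecasts (ug/m3)
print(np.round(w, 3), round(float(np.dot(w, forecasts)), 1))
```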

  9. Application Study of Comprehensive Forecasting Model Based on Entropy Weighting Method on Trend of PM2.5 Concentration in Guangzhou, China.

    PubMed

    Liu, Dong-jun; Li, Li

    2015-06-23

    For the issue of haze-fog, PM2.5 is the main influence factor of haze-fog pollution in China. The trend of PM2.5 concentration was analyzed from a qualitative point of view based on mathematical models and simulation in this study. The comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average model (ARIMA), Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series data of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods using weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted based on the comprehensive forecasting model. The results were compared with those of the three single models, and PM2.5 concentration values in the next ten days were predicted. The comprehensive forecasting model balanced the deviations of each single prediction method and had better applicability, offering a new prediction method for the air quality forecasting field.

  10. Methods for Tier 1 Modeling within the Training Range Environmental Evaluation and Characterization System

    DTIC Science & Technology

    2009-08-01

    [Fragment of tables and text on soil properties, part b:] USLE K-factor tabulated by organic matter content and soil-texture classification; dry bulk density (g/cm3); field capacity (%); available (...). The Universal Soil Loss Equation (USLE) can be used to estimate annual average sheet and rill erosion, A (tons/acre-yr), from the factors R, K, L, S (...), with erodibility factors, K, tabulated for various soil classifications and percent organic matter content (USLE Fact Sheet 2008). [Table fragment: textural class, average, less than 2 (...)]
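
    The fragment above refers to the Universal Soil Loss Equation, which in its standard form multiplies rainfall erosivity R, soil erodibility K, slope length-steepness LS, cover management C, and support practice P. A trivial sketch with illustrative parameter values (not taken from the report):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Annual average sheet and rill erosion A (tons/acre-yr) from the
    standard Universal Soil Loss Equation A = R * K * LS * C * P."""
    return R * K * LS * C * P

# Illustrative values only: rainfall erosivity R, soil erodibility K,
# slope length-steepness LS, cover management C, support practice P.
print(round(usle_soil_loss(R=250.0, K=0.28, LS=1.2, C=0.12, P=1.0), 1))
```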

  11. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    [Table fragment, Table 6 to Subpart DDDD, Model Rule emission limitations that apply to incinerators on and after the date to be specified in the state plan:] limits are expressed in parts per million dry volume as 3-run averages (1 hour minimum sample time per run), demonstrated by performance test (Method 10 ..., appendix A-3 or appendix A-8); sulfur dioxide, 11 parts per million dry volume, 3-run average (1 hour minimum ...).

  12. 40 CFR Table 6 to Subpart Dddd of... - Model Rule-Emission Limitations That Apply to Incinerators on and After [Date to be specified in...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    [Table fragment, Table 6 to Subpart DDDD, Model Rule emission limitations that apply to incinerators on and after the date to be specified in the state plan:] limits are expressed in parts per million dry volume as 3-run averages (1 hour minimum sample time per run), demonstrated by performance test (Method 10 ..., appendix A-3 or appendix A-8); sulfur dioxide, 11 parts per million dry volume, 3-run average (1 hour minimum ...).

  13. Time averaging, ageing and delay analysis of financial time series

    NASA Astrophysics Data System (ADS)

    Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf

    2017-06-01

    We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
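
    The central observable, the time-averaged mean squared displacement of a single series, is straightforward to compute; the sketch below applies it to a simulated geometric Brownian motion as a stand-in for a price series (the ageing and delay-time variants described above are not reproduced here).

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged mean squared displacement of a single trajectory x(t):
    TAMSD(lag) = mean over t of (x[t + lag] - x[t])**2."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Illustrative: geometric Brownian motion as a stand-in for a stock index
rng = np.random.default_rng(0)
n, dt, mu, sigma = 5000, 1.0, 1e-4, 0.01
log_s = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
price = 100.0 * np.exp(log_s)

lags = [1, 10, 100, 1000]
print(dict(zip(lags, np.round(time_averaged_msd(price, lags), 3))))
```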

  14. Effect of Fuel Temperature Profile on Eigenvalue Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greifenkamp, Tom E; Clarno, Kevin T; Gehin, Jess C

    2008-01-01

    Use of an average fuel temperature is current practice when modeling fuel for eigenvalue (k-inf) calculations. This is an approximation: it is known from heat-transfer methods that a fuel pin with linear power q' will have a temperature that varies radially and reaches its maximum at the centerline [1]. This paper describes an investigation into the effects on k-inf and isotopic concentrations of modeling a fuel pin using a single average temperature versus a radially varying fuel temperature profile. The axial variation is not discussed in this paper. A single fuel pin was modeled having 1, 3, 5, 8, or 10 regions of equal volumes (areas). Fig. 1 shows a model of a 10-ring fuel pin surrounded by a gap and then cladding.

  15. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, prior work also provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
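
    For the scalar-weighted case, the optimal average quaternion is the eigenvector associated with the largest eigenvalue of the weighted sum of outer products M = sum_i w_i q_i q_i^T. A minimal sketch of that computation follows; the component ordering and the example quaternions are assumptions for illustration.

```python
import numpy as np

def average_quaternion(quats, weights=None):
    """Weighted average of unit quaternions (rows of quats; scalar-weighted case).

    The average is the unit eigenvector belonging to the largest eigenvalue of
    M = sum_i w_i * q_i q_i^T, which maximizes q^T M q over unit quaternions;
    the sign ambiguity q ~ -q is handled automatically because M is quadratic in q.
    """
    Q = np.asarray(quats, dtype=float)
    Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    w = np.ones(len(Q)) if weights is None else np.asarray(weights, dtype=float)
    M = (w[:, None] * Q).T @ Q                  # 4x4 symmetric matrix
    eigvals, eigvecs = np.linalg.eigh(M)        # eigenvalues in ascending order
    q_avg = eigvecs[:, -1]
    return q_avg / np.linalg.norm(q_avg)

# Two star-tracker quaternions (x, y, z, w component ordering assumed here)
q1 = [0.010, 0.000, 0.000, 0.99995]
q2 = [-0.005, 0.002, 0.001, 0.99998]
print(average_quaternion([q1, q2], weights=[1.0, 2.0]).round(5))
```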

  16. Application of data assimilation methods for analysis and integration of observed and modeled Arctic Sea ice motions

    NASA Astrophysics Data System (ADS)

    Meier, Walter Neil

    This thesis demonstrates the applicability of data assimilation methods to improve observed and modeled ice motion fields and to demonstrate the effects of assimilated motion on Arctic processes important to the global climate and of practical concern to human activities. Ice motions derived from 85 GHz and 37 GHz SSM/I imagery and estimated from two-dimensional dynamic-thermodynamic sea ice models are compared to buoy observations. Mean error, error standard deviation, and correlation with buoys are computed for the model domain. SSM/I motions generally have a lower bias, but higher error standard deviations and lower correlation with buoys than model motions. There are notable variations in the statistics depending on the region of the Arctic, season, and ice characteristics. Assimilation methods are investigated and blending and optimal interpolation strategies are implemented. Blending assimilation improves error statistics slightly, but the effect of the assimilation is reduced due to noise in the SSM/I motions and is thus not an effective method to improve ice motion estimates. However, optimal interpolation assimilation reduces motion errors by 25--30% over modeled motions and 40--45% over SSM/I motions. Optimal interpolation assimilation is beneficial in all regions, seasons and ice conditions, and is particularly effective in regimes where modeled and SSM/I errors are high. Assimilation alters annual average motion fields. Modeled ice products of ice thickness, ice divergence, Fram Strait ice volume export, transport across the Arctic and interannual basin averages are also influenced by assimilated motions. Assimilation improves estimates of pollutant transport and corrects synoptic-scale errors in the motion fields caused by incorrect forcings or errors in model physics. The portability of the optimal interpolation assimilation method is demonstrated by implementing the strategy in an ice thickness distribution (ITD) model. This research presents an innovative method of combining a new data set of SSM/I-derived ice motions with three different sea ice models via two data assimilation methods. The work described here is the first example of assimilating remotely-sensed data within high-resolution and detailed dynamic-thermodynamic sea ice models. The results demonstrate that assimilation is a valuable resource for determining accurate ice motion in the Arctic.
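
    A minimal sketch of a single optimal-interpolation analysis step of the kind described above, with an illustrative state vector, observation operator and error covariances rather than the thesis' actual ice-motion fields:

```python
import numpy as np

def optimal_interpolation(x_b, y_obs, H, B, R):
    """One optimal-interpolation analysis step:
    x_a = x_b + K (y - H x_b), with gain K = B H^T (H B H^T + R)^-1.

    x_b   : background (model) state, e.g. gridded ice motion
    y_obs : observations, e.g. SSM/I- or buoy-derived motion
    H     : observation operator; B, R : background/observation error covariances
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y_obs - H @ x_b)

# Tiny illustrative example: 3 grid cells, observations at cells 1 and 3
x_b = np.array([2.0, 3.0, 4.0])                        # background speeds (km/day)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
y_obs = np.array([2.6, 3.5])
idx = np.arange(3)
B = 0.5 * np.exp(-np.abs(idx[:, None] - idx[None, :]))   # correlated background errors
R = 0.3 * np.eye(2)                                      # independent observation errors
print(optimal_interpolation(x_b, y_obs, H, B, R).round(3))
```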

  17. Core shifts, magnetic fields and magnetization of extragalactic jets

    NASA Astrophysics Data System (ADS)

    Zdziarski, Andrzej A.; Sikora, Marek; Pjanka, Patryk; Tchekhovskoy, Alexander

    2015-07-01

    We study the effect of radio-jet core shift, which is a dependence of the position of the jet radio core on the observational frequency. We derive a new method of measuring the jet magnetic field based on both the value of the shift and the observed radio flux, which complements the standard method that assumes equipartition. Using both methods, we re-analyse the blazar sample of Zamaninasab et al. We find that equipartition is satisfied only if the jet opening angle in the radio core region is close to the values found observationally, ≃0.1-0.2 divided by the bulk Lorentz factor, Γj. Larger values, e.g. 1/Γj, would imply magnetic fields much above equipartition. A small jet opening angle implies in turn a magnetization parameter of ≪1. We determine the jet magnetic flux taking into account this effect. We find that the transverse-averaged jet magnetic flux is fully compatible with the model of jet formation due to black hole (BH) spin-energy extraction and the accretion being a magnetically arrested disc (MAD). We calculate the jet average mass-flow rate corresponding to this model and find it consists of a substantial fraction of the mass accretion rate. This suggests a jet composition with a large fraction of baryons. We also calculate the average jet power, and find it moderately exceeds the accretion power, Ṁc², reflecting BH spin energy extraction. We find our results for radio galaxies at low Eddington ratios are compatible with MADs but require a low radiative efficiency, as predicted by standard accretion models.

  18. Mathematics Literacy on Problem Based Learning with Indonesian Realistic Mathematics Education Approach Assisted E-Learning Edmodo

    NASA Astrophysics Data System (ADS)

    Wardono; Waluya, S. B.; Mariani, Scolastika; Candra D, S.

    2016-02-01

    This study aims to determine whether there are differences in mathematical literacy ability, in the content area Change and Relationship, among class VII students of Junior High School 19, Semarang taught with the Problem Based Learning (PBL) model with an Indonesian Realistic Mathematics Education approach (called Pendidikan Matematika Realistik Indonesia or PMRI in Indonesia) assisted by E-learning Edmodo, with PBL with a PMRI approach, and with expository teaching; to determine whether the group of students learning with the PBL model with a PMRI approach assisted by E-learning Edmodo improves its mathematics literacy; to determine whether the quality of learning with the PBL model with a PMRI approach assisted by E-learning Edmodo falls in the good category; and to describe the difficulties of students in working PISA-oriented mathematical literacy problems. This research is a mixed methods study. The population was seventh grade students of Junior High School 19, Semarang, Indonesia. Sample selection was done by random sampling, yielding experimental class 1, experimental class 2 and the control class. Data were collected by documentation, tests and interviews. The results show that the average mathematics literacy ability of students in the PBL group with a PMRI approach assisted by E-learning Edmodo was better than that of students in the PBL group with a PMRI approach, and better than that of students in the expository group. Mathematics literacy ability in the class using the PBL model with a PMRI approach assisted by E-learning Edmodo increased, and the improvement was higher than in the class using PBL with the PMRI approach and higher than in the class using the expository model. The quality of learning using the PBL model with a PMRI approach assisted by E-learning Edmodo was in the very good category.

  19. Real-time tumor motion estimation using respiratory surrogate via memory-based learning

    NASA Astrophysics Data System (ADS)

    Li, Ruijiang; Lewis, John H.; Berbeco, Ross I.; Xing, Lei

    2012-08-01

    Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, its ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace. Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95th percentile error of 3.4 mm on unseen test data. The average 3D error was further reduced to 1.4 mm when the model was tuned to its optimal setting for each respiratory trace. In one trace where a few outliers are present in the training data, the proposed method achieved an error reduction of as much as ∼50% compared with the best linear model (1.0 mm versus 2.1 mm). The memory-based learning technique is able to accurately capture the highly complex and nonlinear relations between tumor and surrogate motion in an efficient manner (a few milliseconds per estimate). Furthermore, the algorithm is particularly suitable to handle situations where the training data are contaminated by large errors or outliers. These desirable properties make it an ideal candidate for accurate and robust tumor gating/tracking using respiratory surrogates.
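
    A minimal sketch of the memory-based idea, locally weighted (affine) regression with Gaussian relevance weights, under an assumed one-dimensional surrogate and a hypothetical surrogate-to-tumor relation; the paper's feature construction and tuning are not reproduced.

```python
import numpy as np

def locally_weighted_predict(X_train, y_train, x_query, bandwidth):
    """Memory-based (locally weighted) regression sketch: store all training
    pairs, then answer a query by fitting an affine model in which training
    points close to the query receive high weight.

    X_train : (n, d) surrogate features (e.g. abdominal surface displacement)
    y_train : (n,)   internal tumor position along one axis
    """
    X = np.asarray(X_train, float)
    y = np.asarray(y_train, float)
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)              # Gaussian relevance weights
    A = np.hstack([X, np.ones((len(X), 1))])            # affine local model
    sw = np.sqrt(w)[:, None]                            # weighted least squares
    beta, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
    return float(np.append(x_query, 1.0) @ beta)

# Illustrative 1-D surrogate -> tumor position with a nonlinear relation
rng = np.random.default_rng(1)
s = rng.uniform(-1, 1, 200)[:, None]                    # surrogate samples
tumor = 7.5 * np.tanh(2.0 * s[:, 0]) + 0.1 * rng.standard_normal(200)
print(round(locally_weighted_predict(s, tumor, np.array([0.3]), bandwidth=0.2), 2))
```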

  20. Selected approaches to estimate water-budget components of the High Plains, 1940 through 1949 and 2000 through 2009

    USGS Publications Warehouse

    Stanton, Jennifer S.; Qi, Sharon L.; Ryter, Derek W.; Falk, Sarah E.; Houston, Natalie A.; Peterson, Steven M.; Westenbroek, Stephen M.; Christenson, Scott C.

    2011-01-01

    The High Plains aquifer, underlying almost 112 million acres in the central United States, is one of the largest aquifers in the Nation. It is the primary water supply for drinking water, irrigation, animal production, and industry in the region. Expansion of irrigated agriculture throughout the past 60 years has helped make the High Plains one of the most productive agricultural regions in the Nation. Extensive withdrawals of groundwater for irrigation have caused water-level declines in many parts of the aquifer and increased concerns about the long-term sustainability of the aquifer. Quantification of water-budget components is a prerequisite for effective water-resources management. Components analyzed as part of this study were precipitation, evapotranspiration, recharge, surface runoff, groundwater discharge to streams, groundwater fluxes to and from adjacent geologic units, irrigation, and groundwater in storage. These components were assessed for 1940 through 1949 (representing conditions prior to substantial groundwater development and referred to as "pregroundwater development" throughout this report) and 2000 through 2009. Because no single method can perfectly quantify the magnitude of any part of a water budget at a regional scale, results from several methods and previously published work were compiled and compared for this study when feasible. Results varied among the several methods applied, as indicated by the range of average annual volumes given for each component listed in the following paragraphs. Precipitation was derived from three sources: the Parameter-Elevation Regressions on Independent Slopes Model, data developed using Next Generation Weather Radar and measured precipitation from weather stations by the Office of Hydrologic Development at the National Weather Service for the Sacramento-Soil Moisture Accounting model, and precipitation measured at weather stations and spatially distributed using an inverse-distance-weighted interpolation method. Precipitation estimates using these sources, as a 10-year average annual total volume for the High Plains, ranged from 192 to 199 million acre-feet (acre-ft) for 1940 through 1949 and from 185 to 199 million acre-ft for 2000 through 2009. Evapotranspiration was obtained from three sources: the National Weather Service Sacramento-Soil Moisture Accounting model, the Simplified-Surface-Energy-Balance model using remotely sensed data, and the Soil-Water-Balance model. Average annual total evapotranspiration estimated using these sources was 148 million acre-ft for 1940 through 1949 and ranged from 154 to 193 million acre-ft for 2000 through 2009. The maximum amount of shallow groundwater lost to evapotranspiration was approximated for areas where the water table was within 5 feet of land surface. The average annual total volume of evapotranspiration from shallow groundwater was 9.0 million acre-ft for 1940 through 1949 and ranged from 9.6 to 12.6 million acre-ft for 2000 through 2009. Recharge was estimated using two soil-water-balance models as well as previously published studies for various locations across the High Plains region. Average annual total recharge ranged from 8.3 to 13.2 million acre-ft for 1940 through 1949 and from 15.9 to 35.0 million acre-ft for 2000 through 2009. Surface runoff and groundwater discharge to streams were determined using discharge records from streamflow-gaging stations near the edges of the High Plains and the Base-Flow Index program. 
For 1940 through 1949, the average annual net surface runoff leaving the High Plains was 1.9 million acre-ft, and the net loss from the High Plains aquifer by groundwater discharge to streams was 3.1 million acre-ft. For 2000 through 2009, the average annual net surface runoff leaving the High Plains region was 1.3 million acre-ft and the net loss by groundwater discharge to streams was 3.9 million acre-ft. For 2000 through 2009, the average annual total estimated groundwater pumpage volume from two soil-water-balance models ranged from 8.7 to 16.2 million acre-ft. Average annual irrigation application rates for the High Plains ranged from 8.4 to 16.2 inches per year. The USGS Water-Use Program published estimated total annual pumpage from the High Plains aquifer for 2000 and 2005. Those volumes were greater than those estimated from the two soil-water-balance models. Total groundwater in storage in the High Plains aquifer was estimated as 3,173 million acre-ft prior to groundwater development and 2,907 million acre-ft in 2007. The average annual decrease of groundwater in storage between 2000 and 2007 was 10 million acre-ft per year.

  1. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The key point is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with each individual ensemble member, with weights that reflect each member's relative skill. For testing we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in patients hospitalized with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convection event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations located in Poland, yielding a set of model-observation pairs. The data from the single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). To evaluate the probabilistic data, the Brier Score (BS) and the Continuous Ranked Probability Score (CRPS) are used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is presented.
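
    A simplified sketch of Gaussian BMA calibration in the style described above: member weights and a common spread are fitted by EM to forecast-observation pairs, and the predictive PDF is the weight-averaged mixture. Bias correction of the members is assumed to have been done beforehand, and all numbers below are synthetic.

```python
import numpy as np
from scipy.stats import norm

def bma_em(forecasts, obs, n_iter=200):
    """Fit Gaussian BMA weights and a common spread by EM (a simplified
    Raftery-style calibration; members assumed bias-corrected).

    forecasts : (n_times, n_members) ensemble forecasts of, e.g., temperature
    obs       : (n_times,) verifying observations
    """
    F = np.asarray(forecasts, float)
    y = np.asarray(obs, float)[:, None]
    n, K = F.shape
    w = np.full(K, 1.0 / K)
    sigma = np.std(y - F)
    for _ in range(n_iter):
        dens = w * norm.pdf(y, loc=F, scale=sigma)        # (n, K) member densities
        z = dens / dens.sum(axis=1, keepdims=True)        # E-step: responsibilities
        w = z.mean(axis=0)                                # M-step: weights
        sigma = np.sqrt(np.sum(z * (y - F) ** 2) / n)     # M-step: common spread
    return w, sigma

# Illustrative: 9 members, 300 forecast-observation pairs
rng = np.random.default_rng(2)
truth = 25 + 3 * rng.standard_normal(300)
members = truth[:, None] + rng.normal(0, np.linspace(0.8, 2.5, 9), (300, 9))
w, sigma = bma_em(members, truth)
print(np.round(w, 3), round(float(sigma), 2))
```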

  2. A two-step ionospheric modeling algorithm considering the impact of GLONASS pseudo-range inter-channel biases

    NASA Astrophysics Data System (ADS)

    Zhang, Rui; Yao, Yi-bin; Hu, Yue-ming; Song, Wei-wei

    2017-12-01

    The Global Navigation Satellite System presents a plausible and cost-effective way of computing the total electron content (TEC). However, the estimated TEC can be seriously affected by the differential code biases (DCB) of frequency-dependent satellites and receivers. Unlike GPS and other satellite systems, GLONASS adopts a frequency-division multiple access mode to distinguish different satellites. This strategy leads to different wavelengths and inter-frequency biases (IFBs) for both pseudo-range and carrier phase observations, whose impacts are rarely considered in ionospheric modeling. We obtained observations from four groups of co-stations to analyze the characteristics of the GLONASS receiver P1P2 pseudo-range IFB with a double-difference method. The results showed that the GLONASS P1P2 pseudo-range IFB remained stable over a period of time and can reach several meters, which cannot be absorbed by the receiver DCB during ionospheric modeling. Given the characteristics of the GLONASS P1P2 pseudo-range IFB, we proposed a two-step ionosphere modeling method using a priori IFB information. The experimental analysis showed that the new algorithm can effectively eliminate the adverse effects on the estimation of the ionospheric model and hardware delay parameters in different space environments. During a high solar activity period, compared to the traditional GPS + GLONASS modeling algorithm, the absolute average deviation of TEC decreased from 2.17 to 2.07 TECu (TEC unit); simultaneously, the average RMS of the GPS satellite DCBs decreased from 0.225 to 0.219 ns, and the average deviation of the GLONASS satellite DCBs decreased from 0.253 to 0.113 ns, an improvement of more than 55%.

  3. The life of a meander bend: Connecting shape and dynamics via analysis of a numerical model

    NASA Astrophysics Data System (ADS)

    Schwenk, Jon; Lanzoni, Stefano; Foufoula-Georgiou, Efi

    2015-04-01

    Analysis of bend-scale meandering river dynamics is a problem of theoretical and practical interest. This work introduces a method for extracting and analyzing the history of individual meander bends from inception until cutoff (called "atoms") by tracking backward through time the set of two cutoff nodes in numerical meander migration models. Application of this method to a simplified yet physically based model provides access to previously unavailable bend-scale meander dynamics over long times and at high temporal resolutions. We find that before cutoffs, the intrinsic model dynamics invariably simulate a prototypical cutoff atom shape we dub simple. Once perturbations from cutoffs occur, two other archetypal cutoff planform shapes emerge called long and round that are distinguished by a stretching along their long and perpendicular axes, respectively. Three measures of meander migration—growth rate, average migration rate, and centroid migration rate—are introduced to capture the dynamic lives of individual bends and reveal that similar cutoff atom geometries share similar dynamic histories. Specifically, through the lens of the three shape types, simples are seen to have the highest growth and average migration rates, followed by rounds, and finally longs. Using the maximum average migration rate as a metric describing an atom's dynamic past, we show a strong connection between it and two metrics of cutoff geometry. This result suggests both that early formative dynamics may be inferred from static cutoff planforms and that there exists a critical period early in a meander bend's life when its dynamic trajectory is most sensitive to cutoff perturbations. An example of how these results could be applied to Mississippi River oxbow lakes with unknown historic dynamics is shown. The results characterize the underlying model and provide a framework for comparisons against more complex models and observed dynamics.

  4. Choosing Models for Health Care Cost Analyses: Issues of Nonlinearity and Endogeneity

    PubMed Central

    Garrido, Melissa M; Deb, Partha; Burgess, James F; Penrod, Joan D

    2012-01-01

    Objective To compare methods of analyzing endogenous treatment effect models for nonlinear outcomes and illustrate the impact of model specification on estimates of treatment effects such as health care costs. Data Sources Secondary data on cost and utilization for inpatients hospitalized in five Veterans Affairs acute care facilities in 2005–2006. Study Design We compare results from analyses with full information maximum simulated likelihood (FIMSL); control function (CF) approaches employing different types and functional forms for the residuals, including the special case of two-stage residual inclusion; and two-stage least squares (2SLS). As an example, we examine the effect of an inpatient palliative care (PC) consultation on direct costs of care per day. Data Collection/Extraction Methods We analyzed data for 3,389 inpatients with one or more life-limiting diseases. Principal Findings The distribution of average treatment effects on the treated and local average treatment effects of a PC consultation depended on model specification. CF and FIMSL estimates were more similar to each other than to 2SLS estimates. CF estimates were sensitive to choice and functional form of residual. Conclusions When modeling cost or other nonlinear data with endogeneity, one should be aware of the impact of model specification and treatment effect choice on results. PMID:22524165

  5. Warp-averaging event-related potentials.

    PubMed

    Wang, K; Begleiter, H; Porjesz, B

    2001-10-01

    To align the repeated single trials of the event-related potential (ERP) in order to obtain an improved estimate of the ERP. A new implementation of dynamic time warping is applied to compute a warp-average of the single trials. The trilinear modeling method is applied to filter the single trials prior to alignment. Alignment is based on normalized signals and their estimated derivatives. These features reduce the misalignment caused by aligning random alpha waves, by explaining amplitude differences as latency differences, or by the seemingly small amplitudes of some components. Simulations and applications to visually evoked potentials show significant improvement over some commonly used methods. The new implementation of dynamic time warping can be used to align the major components (P1, N1, P2, N2, P3) of the repeated single trials. The average of the aligned single trials is an improved estimate of the ERP. This could lead to more accurate results in subsequent analysis.
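
    A minimal sketch of dynamic time warping and of averaging one trial onto a reference time base after alignment; the cost function, step pattern, and the two simulated trials are illustrative simplifications of the implementation described above (no trilinear filtering or derivative features are included).

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic time warping: return the optimal alignment path between 1-D
    signals a and b as a list of index pairs, using squared-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                     # backtrack from the end
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0: i, j = i - 1, j - 1
        elif step == 1: i -= 1
        else: j -= 1
    return path[::-1]

def warp_average(reference, trial):
    """Average a single trial onto the reference time base after DTW alignment:
    each reference sample is averaged with the mean of the trial samples
    aligned to it."""
    path = dtw_path(reference, trial)
    aligned = np.zeros_like(reference, dtype=float)
    counts = np.zeros_like(reference, dtype=float)
    for i, j in path:
        aligned[i] += trial[j]
        counts[i] += 1
    return 0.5 * (reference + aligned / counts)

# Illustrative: two simulated "ERP" trials with a latency-shifted peak
t = np.linspace(0, 1, 200)
trial1 = np.exp(-((t - 0.45) / 0.05) ** 2)
trial2 = np.exp(-((t - 0.55) / 0.05) ** 2)
avg = warp_average(trial1, trial2)
print(round(t[np.argmax(avg)], 2), round(float(avg.max()), 2))
```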

  6. SIMULATION OF FLOOD HYDROGRAPHS FOR GEORGIA STREAMS.

    USGS Publications Warehouse

    Inman, E.J.; Armbruster, J.T.

    1986-01-01

    Flood hydrographs are needed for the design of many highway drainage structures and embankments. A method for simulating these flood hydrographs at urban and rural ungauged sites in Georgia is presented. The O'Donnell method was used to compute unit hydrographs from 355 flood events from 80 stations. An average unit hydrograph and an average lag time were computed for each station. These average unit hydrographs were transformed to unit hydrographs having durations of one-fourth, one-third, one-half, and three-fourths lag time and then reduced to dimensionless terms by dividing the time by lag time and the discharge by peak discharge. Hydrographs were simulated for these 355 flood events and their widths were compared with the widths of the observed hydrographs at 50 and 75 percent of peak flow. For simulating hydrographs at sites larger than 500 mi², the U.S. Geological Survey computer model CONROUT can be used.

  7. Factors influencing suspended solids concentrations in activated sludge settling tanks.

    PubMed

    Kim, Y; Pipes, W O

    1999-05-31

    A significant fraction of the total mass of sludge in an activated sludge process may be in the settling tanks if the sludge has a high sludge volume index (SVI) or when a hydraulic overload occurs during a rainstorm. Under those conditions, an accurate estimate of the amount of sludge in the settling tanks is needed in order to calculate the mean cell residence time or to determine the capacity of the settling tanks to store sludge. Determination of the amount of sludge in the settling tanks requires estimation of the average concentration of suspended solids in the layer of sludge (XSB) in the bottom of the settling tanks. A widely used reference recommends averaging the concentrations of suspended solids in the mixed liquor (X) and in the underflow (Xu) from the settling tanks (XSB = 0.5(X + Xu)). This method does not take into consideration other pertinent information available to an operator. This is a report of a field study which had the objective of developing a more accurate method for estimation of XSB in the bottom of the settling tanks. By correlation analysis, it was found that only 44% of the variation in the measured XSB is related to the sum of X and Xu. XSB is also influenced by the SVI, the zone settling velocity at X, and the overflow and underflow rates of the settling tanks. The method of averaging X and Xu tends to overestimate XSB. A new empirical estimation technique for XSB was developed. The estimation technique uses dimensionless ratios, i.e., the ratio of XSB to Xu, the ratio of the overflow rate to the sum of the underflow rate and the initial settling velocity of the mixed liquor, and sludge compaction expressed as a ratio (dimensionless SVI). The empirical model is compared with the method of averaging X and Xu for the entire range of sludge depths in the settling tanks and for SVI values between 100 and 300 ml/g. Since the empirical model uses dimensionless ratios, the regression parameters are also dimensionless and the model can be readily adopted for other activated sludge processes. A simplified version of the empirical model provides an estimation of XSB as a function of X, Xu and SVI and can be used by an operator when flow conditions are normal. Copyright 1999 Elsevier Science B.V.

  8. Model averaging in linkage analysis.

    PubMed

    Matthysse, Steven

    2006-06-05

    Methods for genetic linkage analysis are traditionally divided into "model-dependent" and "model-independent," but there may be a useful place for an intermediate class, in which a broad range of possible models is considered as a parametric family. It is possible to average over model space with an empirical Bayes prior that weights models according to their goodness of fit to epidemiologic data, such as the frequency of the disease in the population and in first-degree relatives (and correlations with other traits in the pleiotropic case). For averaging over high-dimensional spaces, Markov chain Monte Carlo (MCMC) has great appeal, but it has a near-fatal flaw: it is not possible, in most cases, to provide rigorous sufficient conditions to permit the user safely to conclude that the chain has converged. A way of overcoming the convergence problem, if not of solving it, rests on a simple application of the principle of detailed balance. If the starting point of the chain has the equilibrium distribution, so will every subsequent point. The first point is chosen according to the target distribution by rejection sampling, and subsequent points by an MCMC process that has the target distribution as its equilibrium distribution. Model averaging with an empirical Bayes prior requires rapid estimation of likelihoods at many points in parameter space. Symbolic polynomials are constructed before the random walk over parameter space begins, to make the actual likelihood computations at each step of the random walk very fast. Power analysis in an illustrative case is described. (c) 2006 Wiley-Liss, Inc.
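
    A minimal sketch of the start-up idea described above, assuming a generic one-dimensional target density rather than the linkage likelihood: the chain's first point is drawn exactly from the target by rejection sampling, and a random-walk Metropolis chain with that target as its equilibrium distribution continues from there, so every subsequent point also has the equilibrium distribution.

      import numpy as np

      rng = np.random.default_rng(0)

      def target(x):                         # unnormalized, generic 1-D target density
          return np.exp(-0.5 * x ** 2) * (1.0 + 0.5 * np.sin(3 * x))

      def rejection_sample(bound=1.5, width=6.0):
          """Exact draw from the target using a uniform envelope on [-width, width]."""
          while True:
              x = rng.uniform(-width, width)
              if rng.uniform(0, bound) < target(x):
                  return x

      def metropolis(x0, n_steps=1000, step=0.5):
          """Random-walk Metropolis chain started at x0 (already at equilibrium)."""
          chain = [x0]
          for _ in range(n_steps):
              prop = chain[-1] + rng.normal(0, step)
              if rng.uniform() < min(1.0, target(prop) / target(chain[-1])):
                  chain.append(prop)
              else:
                  chain.append(chain[-1])
          return np.array(chain)

      chain = metropolis(rejection_sample())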

  9. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this limitation can be overcome by tracking distributions of linear models, using anesthesia data, and that the brain-state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the Observer's Assessment of Alertness/Sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; the difference was not statistically significant. For the same ARMA case, there was no statistical difference when medians were used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
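
    A minimal sketch of the feature-extraction step, assuming a synthetic signal, a placeholder sampling rate and window length, and statsmodels for the ARMA(2,1) fits; the per-state reference distributions and the classification itself are not reproduced.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      def sliding_arma_params(eeg, fs, win_sec=2.0, step_sec=1.0, order=(2, 0, 1)):
          """Fit ARMA(2,1) to each sliding window and collect [phi1, phi2, theta1]."""
          win, step = int(win_sec * fs), int(step_sec * fs)
          params = []
          for start in range(0, len(eeg) - win + 1, step):
              fit = ARIMA(eeg[start:start + win], order=order).fit()
              params.append(np.r_[fit.arparams, fit.maparams])
          return np.array(params)

      fs = 128.0                                                    # Hz, placeholder
      eeg = np.random.default_rng(1).standard_normal(int(30 * fs))  # synthetic signal
      features = sliding_arma_params(eeg, fs)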

  10. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues in recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters that are related to puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was underestimated by slightly more than the acceptance criteria allow. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree greater than the acceptance criteria allow, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
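
    For reference, the three validation metrics named above can be computed from paired observed and predicted ensemble-average quantities as in the short sketch below; the sample values are hypothetical, and the acceptance thresholds from the literature are not hard-coded.

      import numpy as np

      def fractional_bias(obs, pred):
          return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

      def nmse(obs, pred):
          return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

      def fac2(obs, pred):
          ratio = pred / obs
          return np.mean((ratio >= 0.5) & (ratio <= 2.0))

      obs = np.array([1.2, 0.8, 2.5, 1.9])      # hypothetical ensemble-average dosages
      pred = np.array([1.0, 1.1, 2.0, 2.4])
      print(fractional_bias(obs, pred), nmse(obs, pred), fac2(obs, pred))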

  11. Twisting short dsDNA with applied tension

    NASA Astrophysics Data System (ADS)

    Zoli, Marco

    2018-02-01

    The twisting deformation of mechanically stretched DNA molecules is studied by a coarse-grained Hamiltonian model incorporating the fundamental interactions that stabilize the double helix and accounting for the radial and angular base pair fluctuations. The latter are all the more important at short length scales, at which DNA fragments maintain an intrinsic flexibility. The presented computational method simulates a broad ensemble of possible molecule conformations characterized by a specific average twist and determines the energetically most convenient helical twist by free energy minimization. As this is done for any external load, the method yields the characteristic twist-stretch profile of the molecule and also computes the changes in the macroscopic helix parameters, i.e., the average diameter and rise distance. It is predicted that short molecules under stretching should first over-twist and then untwist as the external load increases. Moreover, applying a constant load and simulating a torsional strain which over-twists the helix, it is found that the average helix diameter shrinks while the molecule elongates, in agreement with the experimental trend observed in kilo-base long sequences. The quantitative relation between percent relative elongation and superhelical density at fixed load is derived. The proposed theoretical model and computational method offer a general approach to characterize specific DNA fragments and predict their macroscopic elastic response as a function of the effective potential parameters of the mesoscopic Hamiltonian.

  12. The value of vital sign trends for detecting clinical deterioration on the wards

    PubMed Central

    Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P

    2016-01-01

    Aim: Early detection of clinical deterioration on the wards may improve outcomes, and most early warning scores utilize only a patient’s current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Methods: Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). Results: A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC −0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Conclusion: Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. PMID:26898412
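
    A minimal sketch of the kinds of trend features compared above, for a single vital sign: the current value, the slope over a recent window, the window minimum, and the change from the previous value. The window length and example values are hypothetical; the study's exact look-back definitions are not reproduced.

      import numpy as np

      def trend_features(values, times_hr):
          """values/times_hr: recent measurements of one vital sign, oldest first."""
          slope = np.polyfit(times_hr, values, 1)[0] if len(values) > 1 else 0.0
          return {
              "current": values[-1],
              "slope_per_hr": slope,
              "window_min": np.min(values),
              "delta_from_previous": values[-1] - values[-2] if len(values) > 1 else 0.0,
          }

      sbp = np.array([138.0, 131.0, 124.0, 112.0])   # hypothetical systolic BP, mmHg
      t = np.array([0.0, 4.0, 8.0, 12.0])            # hours
      print(trend_features(sbp, t))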

  13. Modeling Of In-Vehicle Human Exposure to Ambient Fine Particulate Matter

    PubMed Central

    Liu, Xiaozhen; Frey, H. Christopher

    2012-01-01

    A method for estimating in-vehicle PM2.5 exposure as part of a scenario-based population simulation model is developed and assessed. In existing models, such as the Stochastic Exposure and Dose Simulation model for Particulate Matter (SHEDS-PM), in-vehicle exposure is estimated using linear regression based on area-wide ambient PM2.5 concentration. An alternative modeling approach is explored based on estimation of near-road PM2.5 concentration and an in-vehicle mass balance. Near-road PM2.5 concentration is estimated using a dispersion model and fixed site monitor (FSM) data. In-vehicle concentration is estimated based on air exchange rate and filter efficiency. In-vehicle concentration varies with road type, traffic flow, windspeed, stability class, and ventilation. Average in-vehicle exposure is estimated to contribute 10 to 20 percent of average daily exposure. The contribution of in-vehicle exposure to total daily exposure can be higher for some individuals. Recommendations are made for updating exposure models and implementation of the alternative approach. PMID:23101000
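
    A minimal steady-state mass-balance sketch of the alternative approach described above, in which the in-cabin concentration follows from the near-road concentration, the air exchange rate, and the filter (penetration) efficiency; the deposition term and all parameter values are illustrative assumptions, not SHEDS-PM inputs.

      def in_vehicle_pm25(c_near_road, air_exchange_per_hr, filter_efficiency,
                          deposition_per_hr=0.2):
          """Steady-state in-cabin PM2.5 (ug/m3) from near-road concentration."""
          penetration = 1.0 - filter_efficiency
          return c_near_road * penetration * air_exchange_per_hr / (
              air_exchange_per_hr + deposition_per_hr)

      # Hypothetical contrast: recirculation with a cabin filter vs. open windows.
      print(in_vehicle_pm25(35.0, air_exchange_per_hr=2.0, filter_efficiency=0.5))
      print(in_vehicle_pm25(35.0, air_exchange_per_hr=40.0, filter_efficiency=0.0))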

  14. SPATIO-TEMPORAL MODELING OF AGRICULTURAL YIELD DATA WITH AN APPLICATION TO PRICING CROP INSURANCE CONTRACTS

    PubMed Central

    Ozaki, Vitor A.; Ghosh, Sujit K.; Goodwin, Barry K.; Shirota, Ricardo

    2009-01-01

    This article presents a statistical model of agricultural yield data based on a set of hierarchical Bayesian models that allows joint modeling of temporal and spatial autocorrelation. This method captures a comprehensive range of the various uncertainties involved in predicting crop insurance premium rates as opposed to the more traditional ad hoc, two-stage methods that are typically based on independent estimation and prediction. A panel data set of county-average yield data was analyzed for 290 counties in the State of Paraná (Brazil) for the period of 1990 through 2002. Posterior predictive criteria are used to evaluate different model specifications. This article provides substantial improvements in the statistical and actuarial methods often applied to the calculation of insurance premium rates. These improvements are especially relevant to situations where data are limited. PMID:19890450

  15. Pharmacophore-Map-Pick: A Method to Generate Pharmacophore Models for All Human GPCRs.

    PubMed

    Dai, Shao-Xing; Li, Gong-Hua; Gao, Yue-Dong; Huang, Jing-Fei

    2016-02-01

    GPCR-based drug discovery is hindered by a lack of effective screening methods for most GPCRs that have neither ligands nor high-quality structures. With the aim of identifying lead molecules for these GPCRs, we developed a new method called Pharmacophore-Map-Pick to generate pharmacophore models for all human GPCRs. The model of ADRB2 generated using this method not only predicts the binding mode of ADRB2 ligands correctly but also performs well in virtual screening. Findings also demonstrate that this method is powerful for generating high-quality pharmacophore models. The average enrichment for the pharmacophore models of the 15 targets in different GPCR families reached 15-fold at a 0.5% false-positive rate. Therefore, the pharmacophore models can be applied in virtual screening directly with no requirement for any ligand information or shape constraints. A total of 2386 pharmacophore models for 819 different GPCRs (99% coverage, 819/825) were generated and are available at http://bsb.kiz.ac.cn/GPCRPMD. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Evaluation of the safety performance of highway alignments based on fault tree analysis and safety boundaries.

    PubMed

    Chen, Yikai; Wang, Kai; Xu, Chengcheng; Shi, Qin; He, Jie; Li, Peiqing; Shi, Ting

    2018-05-19

    To overcome the limitations of previous highway alignment safety evaluation methods, this article presents a highway alignment safety evaluation method based on fault tree analysis (FTA) and the characteristics of vehicle safety boundaries, within the framework of dynamic modeling of the driver-vehicle-road system. Approaches for categorizing the vehicle failure modes while driving on highways and the corresponding safety boundaries were comprehensively investigated based on vehicle system dynamics theory. Then, an overall crash probability model was formulated based on FTA considering the risks of 3 failure modes: losing steering capability, losing track-holding capability, and rear-end collision. The proposed method was implemented on a highway segment between Bengbu and Nanjing in China. A driver-vehicle-road multibody dynamics model was developed based on the 3D alignments of the Bengbu to Nanjing section of the Ning-Luo expressway using Carsim, and dynamics indices, such as sideslip angle and yaw rate, were obtained. Then, the average crash probability of each road section was calculated with a fixed-length method. Finally, the average crash probability was validated against the crash frequency per kilometer to demonstrate the accuracy of the proposed method. The results of the regression analysis and correlation analysis indicated good consistency between the safety evaluation results and the crash data, and showed that the proposed method outperformed the safety evaluation methods used in previous studies. The proposed method has the potential to be used in practical engineering applications to identify crash-prone locations and alignment deficiencies on highways in the planning and design phases, as well as those in service.
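
    As an illustration of the fault-tree combination step, the sketch below joins the three failure modes through an OR gate under an independence assumption and averages the result over fixed-length sections; the per-mode probabilities are hypothetical, and the dynamics-based estimation of those probabilities is not reproduced.

      import numpy as np

      def overall_crash_probability(p_modes):
          """OR-gate combination of independent failure-mode probabilities."""
          return 1.0 - np.prod(1.0 - np.asarray(p_modes))

      def section_average(crash_probs):
          """Average crash probability over fixed-length sections of a segment."""
          return float(np.mean(crash_probs))

      # Hypothetical: losing steering, losing track-holding, rear-end collision.
      p = overall_crash_probability([0.004, 0.010, 0.002])
      print(p, section_average([p, 0.008, 0.015]))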

  17. [Spatiotemporal variation characteristics and related affecting factors of actual evapotranspiration in the Hun-Taizi River Basin, Northeast China].

    PubMed

    Feng, Xue; Cai, Yan-Cong; Guan, De-Xin; Jin, Chang-Jie; Wang, An-Zhi; Wu, Jia-Bing; Yuan, Feng-Hui

    2014-10-01

    Based on meteorological and hydrological data from 1970 to 2006, the advection-aridity (AA) model with calibrated parameters was used to calculate evapotranspiration in the Hun-Taizi River Basin in Northeast China. The original parameter of the AA model was tuned using the water balance method, and four subbasins were then selected for validation. Spatiotemporal variation characteristics of evapotranspiration and related affecting factors were analyzed using linear trend analysis, moving averages, kriging interpolation, and sensitivity analysis. The results showed that the empirical parameter value of 0.75 in the AA model was suitable for the Hun-Taizi River Basin, with an error of 11.4%. In the Hun-Taizi River Basin, the average annual actual evapotranspiration was 347.4 mm, which had a slightly upward trend at a rate of 1.58 mm per decade, but did not change significantly. The actual evapotranspiration within the year presented a single-peaked pattern, with its peak value in July; evapotranspiration in summer was higher than in spring and autumn, and it was smallest in winter. The annual average evapotranspiration showed a decreasing trend from the northwest to the southeast of the Hun-Taizi River Basin from 1970 to 2006, with minor differences. Net radiation was largely responsible for the change in actual evapotranspiration in the Hun-Taizi River Basin.

  18. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    PubMed

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is a key for high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  19. Near-optimal protocols in complex nonequilibrium transformations

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...

    2016-08-29

    The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation. In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.

  20. 20 CFR 404.220 - Average-monthly-wage method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... average-monthly-wage method if it is to your advantage. Being eligible for either the average-monthly-wage method or the modified average-monthly-wage method does not preclude your eligibility under the old-start...

  1. Internal rotation of 13 low-mass low-luminosity red giants in the Kepler field

    NASA Astrophysics Data System (ADS)

    Triana, S. A.; Corsaro, E.; De Ridder, J.; Bonanno, A.; Pérez Hernández, F.; García, R. A.

    2017-06-01

    Context. The Kepler space telescope has provided time series of red giants of such unprecedented quality that a detailed asteroseismic analysis becomes possible. For a limited set of about a dozen red giants, the observed oscillation frequencies obtained by peak-bagging together with the most recent pulsation codes allowed us to reliably determine the core/envelope rotation ratio. The results so far show that the current models are unable to reproduce the rotation ratios, predicting higher values than what is observed and thus indicating that an efficient angular momentum transport mechanism should be at work. Here we provide an asteroseismic analysis of a sample of 13 low-luminosity low-mass red giant stars observed by Kepler during its first nominal mission. These targets form a subsample of the 19 red giants studied previously, which not only have a large number of extracted oscillation frequencies, but also unambiguous mode identifications. Aims: We aim to extend the sample of red giants for which internal rotation ratios obtained by theoretical modeling of peak-bagged frequencies are available. We also derive the rotation ratios using different methods, and compare the results of these methods with each other. Methods: We built seismic models using a grid search combined with a Nelder-Mead simplex algorithm and obtained rotation averages employing Bayesian inference and inversion methods. We compared these averages with those obtained using a previously developed model-independent method. Results: We find that the cores of the red giants in this sample are rotating 5 to 10 times faster than their envelopes, which is consistent with earlier results. The rotation rates computed from the different methods show good agreement for some targets, while some discrepancies exist for others.

  2. Computer-assisted design and finite element simulation of braces for the treatment of adolescent idiopathic scoliosis using a coronal plane radiograph and surface topography.

    PubMed

    Pea, Rany; Dansereau, Jean; Caouette, Christiane; Cobetto, Nikita; Aubin, Carl-Éric

    2018-05-01

    Orthopedic braces made by computer-aided design and manufacturing and numerical simulation were shown to improve the correction of spinal deformities in adolescent idiopathic scoliosis while using less material. Simulations with BraceSim (Rodin4D, Groupe Lagarrigue, Bordeaux, France) require a sagittal radiograph, which is not always available. The objective was to develop an innovative modeling method based on a single coronal radiograph and surface topography, and to assess the effectiveness of braces designed with this approach. With a patient coronal radiograph and a surface topography, the developed method allowed the 3D reconstruction of the spine, rib cage and pelvis using geometric models from a database and a free-form deformation technique. The resulting 3D reconstruction, converted into a finite element model, was used to design and simulate the correction of a brace. The developed method was tested with data from ten scoliosis cases. The simulated correction was compared to analogous simulations performed with a 3D reconstruction built using two radiographs and surface topography (validated gold standard reference). There was an average difference of 1.4°/1.7° for the thoracic/lumbar Cobb angle, and 2.6°/5.5° for the kyphosis/lordosis, between the developed reconstruction method and the reference. The average difference of the simulated correction was 2.8°/2.4° for the thoracic/lumbar Cobb angles and 3.5°/5.4° for the kyphosis/lordosis. This study showed the feasibility of designing and simulating brace corrections based on a new modeling method with a single coronal radiograph and surface topography. This innovative method could be used to improve brace designs at a lower radiation dose for the patient. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Model-based registration for assessment of spinal deformities in idiopathic scoliosis

    NASA Astrophysics Data System (ADS)

    Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Knutsson, Hans

    2014-01-01

    Detailed analysis of spinal deformity is important within orthopaedic healthcare, in particular for assessment of idiopathic scoliosis. This paper addresses this challenge by proposing an image analysis method, capable of providing a full three-dimensional spine characterization. The proposed method is based on the registration of a highly detailed spine model to image data from computed tomography. The registration process provides an accurate segmentation of each individual vertebra and the ability to derive various measures describing the spinal deformity. The derived measures are estimated from landmarks attached to the spine model and transferred to the patient data according to the registration result. Evaluation of the method provides an average point-to-surface error of 0.9 mm ± 0.9 (comparing segmentations), and an average target registration error of 2.3 mm ± 1.7 (comparing landmarks). Comparing automatic and manual measurements of axial vertebral rotation provides a mean absolute difference of 2.5° ± 1.8, which is on a par with other computerized methods for assessing axial vertebral rotation. A significant advantage of our method, compared to other computerized methods for rotational measurements, is that it does not rely on vertebral symmetry for computing the rotational measures. The proposed method is fully automatic and computationally efficient, only requiring three to four minutes to process an entire image volume covering vertebrae L5 to T1. Given the use of landmarks, the method can be readily adapted to estimate other measures describing a spinal deformity by changing the set of employed landmarks. In addition, the method has the potential to be utilized for accurate segmentations of the vertebrae in routine computed tomography examinations, given the relatively low point-to-surface error.

  4. Dynamic Assessment of Water Quality Based on a Variable Fuzzy Pattern Recognition Model

    PubMed Central

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-01-01

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and worse in August or September, caused by the increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results. PMID:25689998
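
    A minimal sketch of the entropy weight (EW) step used above: objective indicator weights are derived from the entropy of a normalized indicator matrix. The indicator values are hypothetical, and the AHP weights and the variable fuzzy pattern recognition step are not reproduced.

      import numpy as np

      def entropy_weights(data):
          """data: (n_samples, n_indicators) matrix of positive indicator values."""
          p = data / data.sum(axis=0)                          # column-wise proportions
          entropy = -(p * np.log(p)).sum(axis=0) / np.log(data.shape[0])
          diversity = 1.0 - entropy
          return diversity / diversity.sum()

      # Rows: monthly samples; columns: indicators (hypothetical values).
      samples = np.array([[7.9, 0.8, 0.03],
                          [6.5, 1.2, 0.05],
                          [5.1, 1.9, 0.09],
                          [6.0, 1.5, 0.06]])
      print(entropy_weights(samples))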

  5. Dynamic assessment of water quality based on a variable fuzzy pattern recognition model.

    PubMed

    Xu, Shiguo; Wang, Tianxiang; Hu, Suduan

    2015-02-16

    Water quality assessment is an important foundation of water resource protection and is affected by many indicators. The dynamic and fuzzy changes of water quality lead to problems for proper assessment. This paper explores a method which is in accordance with the water quality changes. The proposed method is based on the variable fuzzy pattern recognition (VFPR) model and combines the analytic hierarchy process (AHP) model with the entropy weight (EW) method. The proposed method was applied to dynamically assess the water quality of Biliuhe Reservoir (Dalian, China). The results show that the water quality level is between levels 2 and 3 and worse in August or September, caused by the increasing water temperature and rainfall. Weights and methods are compared and random errors of the values of indicators are analyzed. It is concluded that the proposed method has advantages of dynamism, fuzzification and stability by considering the interval influence of multiple indicators and using the average level characteristic values of four models as results.

  6. The GeoClaw software for depth-averaged flows with adaptive refinement

    USGS Publications Warehouse

    Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, Kyle T.

    2011-01-01

    Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information are available at www.clawpack.org/geoclaw. © 2011.

  7. Spectral-spatial classification using tensor modeling for cancer detection with hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Lu, Guolan; Halig, Luma; Wang, Dongsheng; Chen, Zhuo Georgia; Fei, Baowei

    2014-03-01

    As an emerging technology, hyperspectral imaging (HSI) combines both the chemical specificity of spectroscopy and the spatial resolution of imaging, which may provide a non-invasive tool for cancer detection and diagnosis. Early detection of malignant lesions could improve both survival and quality of life of cancer patients. In this paper, we introduce a tensor-based computation and modeling framework for the analysis of hyperspectral images to detect head and neck cancer. The proposed classification method can distinguish between malignant tissue and healthy tissue with an average sensitivity of 96.97% and an average specificity of 91.42% in tumor-bearing mice. The hyperspectral imaging and classification technology has been demonstrated in animal models and can have many potential applications in cancer research and management.

  8. Debris-flow runout predictions based on the average channel slope (ACS)

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Prediction of the runout distance of a debris flow is an important element in the delineation of potentially hazardous areas on alluvial fans and for the siting of mitigation structures. Existing runout estimation methods rely on input parameters that are often difficult to estimate, including volume, velocity, and frictional factors. In order to provide a simple method for preliminary estimates of debris-flow runout distances, we developed a model that provides runout predictions based on the average channel slope (ACS model) for non-volcanic debris flows that emanate from confined channels and deposit on well-defined alluvial fans. This model was developed from 20 debris-flow events in the western United States and British Columbia. Based on a runout estimation method developed for snow avalanches, this model predicts debris-flow runout as an angle of reach from a fixed point in the drainage channel to the end of the runout zone. The best fixed point was found to be the mid-point elevation of the drainage channel, measured from the apex of the alluvial fan to the top of the drainage basin. Predicted runout lengths were more consistent than those obtained from existing angle-of-reach estimation methods. Results of the model compared well with those of laboratory flume tests performed using the same range of channel slopes. The robustness of this model was tested by applying it to three debris-flow events not used in its development: predicted runout ranged from 82 to 131% of the actual runout for these three events. Prediction interval multipliers were also developed so that the user may calculate predicted runout within specified confidence limits. © 2008 Elsevier B.V. All rights reserved.
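
    As an illustration only, one way to turn a predicted angle of reach (measured from the mid-point elevation of the drainage channel) into a runout distance is to intersect the reach line with a planar fan surface of known slope, as in the sketch below; this geometric exercise is not the paper's regression model or its prediction intervals, and all numbers are hypothetical.

      import math

      def runout_beyond_apex(h_mid_above_apex, d_mid_to_apex, reach_angle_deg, fan_slope_deg):
          """Horizontal runout beyond the fan apex (same length units as the inputs)."""
          tan_a = math.tan(math.radians(reach_angle_deg))   # angle of reach
          tan_b = math.tan(math.radians(fan_slope_deg))     # fan surface slope
          # Reach line from the channel mid-point meets the fan surface where
          # h_mid - tan_a * (d_mid_to_apex + x) = -tan_b * x (elevations relative to apex).
          return (h_mid_above_apex - tan_a * d_mid_to_apex) / (tan_a - tan_b)

      print(runout_beyond_apex(h_mid_above_apex=250.0, d_mid_to_apex=900.0,
                               reach_angle_deg=14.0, fan_slope_deg=6.0))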

  9. Optimized Free Energies from Bidirectional Single-Molecule Force Spectroscopy

    NASA Astrophysics Data System (ADS)

    Minh, David D. L.; Adib, Artur B.

    2008-05-01

    An optimized method for estimating path-ensemble averages using data from processes driven in opposite directions is presented. Based on this estimator, bidirectional expressions for reconstructing free energies and potentials of mean force from single-molecule force spectroscopy—valid for biasing potentials of arbitrary stiffness—are developed. Numerical simulations on a model potential indicate that these methods perform better than unidirectional strategies.

  10. Girsanov reweighting for path ensembles and Markov state models

    NASA Astrophysics Data System (ADS)

    Donati, L.; Hartmann, C.; Keller, B. G.

    2017-06-01

    The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.

  11. BIMOS transistor solutions for ESD protection in FD-SOI UTBB CMOS technology

    NASA Astrophysics Data System (ADS)

    Galy, Philippe; Athanasiou, S.; Cristoloveanu, S.

    2016-01-01

    We evaluate the Electro-Static Discharge (ESD) protection capability of BIpolar MOS (BIMOS) transistors integrated in an ultrathin silicon film for 28 nm Fully Depleted SOI (FD-SOI) Ultra Thin Body and BOX (UTBB) high-k metal gate technology. Using as a reference our measurements in hybrid bulk-SOI structures, we extend the BIMOS design towards the ultrathin silicon film. A detailed study and pragmatic evaluations are done based on 3D TCAD simulation with standard physical models, using the Average Current Slope (ACS) method and quasi-static DC stress (the Average Voltage Slope, AVS, method). These preliminary 3D TCAD results are very encouraging in terms of ESD protection efficiency in advanced FD-SOI CMOS.

  12. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
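
    A minimal univariate sketch of the kind of refinement described above: a standard random-effects estimate whose conventional standard error is multiplied by a scaling factor built from a quadratic form in the residuals, with inference against a t distribution. The multivariate and meta-regression extension of the paper is not reproduced, and the study estimates and variances below are hypothetical.

      import numpy as np
      from scipy import stats

      def refined_random_effects(y, v):
          """y: study effect estimates; v: within-study variances."""
          k, w = len(y), 1.0 / v
          mu_fixed = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - mu_fixed) ** 2)
          tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
          w_star = 1.0 / (v + tau2)
          mu = np.sum(w_star * y) / np.sum(w_star)
          se_conventional = np.sqrt(1.0 / np.sum(w_star))
          scale = np.sqrt(np.sum(w_star * (y - mu) ** 2) / (k - 1))   # scaling factor
          half_width = stats.t.ppf(0.975, k - 1) * scale * se_conventional
          return mu, (mu - half_width, mu + half_width)

      y = np.array([0.30, 0.10, 0.45, 0.22, 0.05])    # hypothetical study effects
      v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])    # hypothetical within-study variances
      print(refined_random_effects(y, v))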

  13. Using a motion capture system for spatial localization of EEG electrodes

    PubMed Central

    Reis, Pedro M. R.; Lochmann, Matthias

    2015-01-01

    Electroencephalography (EEG) is often used in source analysis studies, in which the locations of cortex regions responsible for a signal are determined. For this to be possible, accurate positions of the electrodes at the scalp surface must be determined, otherwise errors in the source estimation will occur. Today, several methods for acquiring these positions exist, but they are often not satisfactorily accurate or take a long time to perform. Therefore, in this paper we describe a method capable of determining the positions accurately and quickly. This method uses an infrared light motion capture system (IR-MOCAP) with 8 cameras arranged around a human participant. It acquires 3D coordinates of each electrode and automatically labels them. Each electrode has a small reflector on top of it, thus allowing its detection by the cameras. We tested the accuracy of the presented method by acquiring the electrode positions on a rigid sphere model and comparing these with measurements from computed tomography (CT). The average Euclidean distance between the sphere model CT measurements and the presented method was 1.23 mm with an average standard deviation of 0.51 mm. We also tested the method with a human participant. The measurement was quickly performed and all positions were captured. These results indicate that, with this method, it is possible to acquire electrode positions with minimal error and little time effort for the study participants and investigators. PMID:25941468

  14. A Comparison of the Spatial Linear Model to Nearest Neighbor (k-NN) Methods for Forestry Applications

    Treesearch

    Jay M. Ver Hoef; Hailemariam Temesgen; Sergio Gómez

    2013-01-01

    Forest surveys provide critical information for many diverse interests. Data are often collected from samples, and from these samples, maps of resources and estimates of aerial totals or averages are required. In this paper, two approaches for mapping and estimating totals; the spatial linear model (SLM) and k-NN (k-Nearest Neighbor) are compared, theoretically,...

  15. Modeling of Academic Achievement of Primary School Students in Ethiopia Using Bayesian Multilevel Approach

    ERIC Educational Resources Information Center

    Sebro, Negusse Yohannes; Goshu, Ayele Taye

    2017-01-01

    This study aims to explore Bayesian multilevel modeling to investigate variations of average academic achievement of grade eight school students. A sample of 636 students is randomly selected from 26 private and government schools by a two-stage stratified sampling design. Bayesian method is used to estimate the fixed and random effects. Input and…

  16. Modelling and Analysis of the Excavation Phase by the Theory of Blocks Method of Tunnel 4 Kherrata Gorge, Algeria

    NASA Astrophysics Data System (ADS)

    Boukarm, Riadh; Houam, Abdelkader; Fredj, Mohammed; Boucif, Rima

    2017-12-01

    The aim of our work is to check stability during tunnel excavation in the Kherrata rock mass, on the route connecting the cities of Bejaia and Setif. Characterization using the Q system (Barton's method) and the RMR (Bieniawski classification) allowed us to conclude that the quality of the rock mass is average in limestone and poor in fractured limestone. Modelling of the excavation phase using the block theory method (UNWEDGE software), with parameters taken from the classification recommendations, then allowed us to check stability and to conclude that the use of geomechanical classifications and block theory can be considered reliable in preliminary design.

  17. Uncertainty propagation by using spectral methods: A practical application to a two-dimensional turbulence fluid model

    NASA Astrophysics Data System (ADS)

    Riva, Fabio; Milanese, Lucio; Ricci, Paolo

    2017-10-01

    To reduce the computational cost of uncertainty propagation analysis, which is used to study the impact of input parameter variations on the results of a simulation, a general and simple-to-apply methodology based on decomposing the solution to the model equations in terms of Chebyshev polynomials is discussed. This methodology, based on the work by Scheffel [Am. J. Comput. Math. 2, 173-193 (2012)], approximates the model equation solution with a semi-analytic expression that depends explicitly on time, spatial coordinates, and input parameters. By employing a weighted residual method, a set of nonlinear algebraic equations for the coefficients appearing in the Chebyshev decomposition is then obtained. The methodology is applied to a two-dimensional Braginskii model used to simulate plasma turbulence in basic plasma physics experiments and in the scrape-off layer of tokamaks, in order to study the impact on the simulation results of the input parameter that describes the parallel losses. The uncertainty that characterizes the time-averaged density gradient lengths, time-averaged densities, and fluctuation density level is evaluated. A reasonable estimate of the uncertainty of these distributions can be obtained with a single reduced-cost simulation.
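
    A minimal sketch of the basic building block, assuming a known placeholder function in place of the model solution: a quantity depending on one spatial coordinate and one input parameter is represented by a tensor-product Chebyshev expansion fitted at Chebyshev-Gauss nodes, so that it can be evaluated for any parameter value without re-solving. The weighted-residual solution of the Braginskii model itself is not attempted here.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      def cheb_points(n):
          return np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))   # Chebyshev-Gauss nodes

      def fit_chebyshev_2d(f, nx=16, npar=8):
          """coef[i, j] multiplies T_i(x) * T_j(p) for (x, p) in [-1, 1] x [-1, 1]."""
          x, p = cheb_points(nx), cheb_points(npar)
          X, P = np.meshgrid(x, p, indexing="ij")
          values = f(X, P)
          # Fit along x for each parameter node, then along p for each x-coefficient.
          coef_x = np.array([C.chebfit(x, values[:, j], nx - 1) for j in range(npar)]).T
          return np.array([C.chebfit(p, coef_x[i, :], npar - 1) for i in range(nx)])

      def evaluate(coef, x, p):
          return C.chebval(p, C.chebval(x, coef))    # contract over x, then over p

      f = lambda x, p: np.tanh(3 * x) * (1.0 + 0.5 * p)   # placeholder "solution"
      coef = fit_chebyshev_2d(f)
      print(evaluate(coef, 0.3, -0.5), f(0.3, -0.5))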

  18. Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response

    PubMed Central

    Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.

    2016-01-01

    Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420

  19. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11

    PubMed Central

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-01-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complimentary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques of Protein Structure prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods of identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671

  20. Reinterpretation of the friendship paradox

    NASA Astrophysics Data System (ADS)

    Fu, Jingcheng; Wu, Jianliang

    The friendship paradox (FP) is a sociological phenomenon stating that most people have fewer friends than their friends do. That is to say, in a social network, the number of friends that most individuals have is smaller than the average number of friends of their friends. This was verified by Feld. We call this interpretation the mean-value version. But is it the best choice for portraying the paradox? In this paper, we propose a probability method to reinterpret the paradox, and we illustrate that the explanation using our method is more persuasive. An individual satisfies the FP if his or her randomly chosen friend has more friends than he or she does with probability not less than 1/2. Comparing the ratios of nodes satisfying the FP in networks, rp, we can see that the probability version is stronger than the mean-value version in real networks, both online and offline. We also show some results about the effects of several parameters on rp in random network models. Most importantly, rp is a quadratic polynomial of the power-law exponent γ in the Price model, and rp is higher when the average clustering coefficient is between 0.4 and 0.5 in the Petter-Beom (PB) model. The introduction of the probability method to the FP can shed light on understanding network structure in complex networks, especially social networks.
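
    A short sketch of the probability version described above, using a small example network: a node satisfies the FP if a uniformly chosen neighbor has more friends than it does with probability at least 1/2, and rp is the fraction of such nodes. The karate-club graph is used only as an illustrative network.

      import networkx as nx

      def satisfies_fp_probability(G, node):
          k = G.degree(node)
          neighbors = list(G.neighbors(node))
          if not neighbors:
              return False
          return sum(G.degree(v) > k for v in neighbors) / len(neighbors) >= 0.5

      def r_p(G):
          return sum(satisfies_fp_probability(G, n) for n in G.nodes) / G.number_of_nodes()

      print(r_p(nx.karate_club_graph()))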

  1. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning

    PubMed Central

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-01-01

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including sidereal filtering algorithm and multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in observation equation to the adjustment model to improve the precision of BDS static baseline solutions. And the results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively. PMID:28387744

  2. Characteristics of the BDS Carrier Phase Multipath and Its Mitigation Methods in Relative Positioning.

    PubMed

    Dai, Wujiao; Shi, Qiang; Cai, Changsheng

    2017-04-07

    The carrier phase multipath effect is one of the most significant error sources in the precise positioning of BeiDou Navigation Satellite System (BDS). We analyzed the characteristics of BDS multipath, and found the multipath errors of geostationary earth orbit (GEO) satellite signals are systematic, whereas those of inclined geosynchronous orbit (IGSO) or medium earth orbit (MEO) satellites are both systematic and random. The modified multipath mitigation methods, including sidereal filtering algorithm and multipath hemispherical map (MHM) model, were used to improve BDS dynamic deformation monitoring. The results indicate that the sidereal filtering methods can reduce the root mean square (RMS) of positioning errors in the east, north and vertical coordinate directions by 15%, 37%, 25% and 18%, 51%, 27% in the coordinate and observation domains, respectively. By contrast, the MHM method can reduce the RMS by 22%, 52% and 27% on average. In addition, the BDS multipath errors in static baseline solutions are a few centimeters in multipath-rich environments, which is different from that of Global Positioning System (GPS) multipath. Therefore, we add a parameter representing the GEO multipath error in observation equation to the adjustment model to improve the precision of BDS static baseline solutions. And the results show that the modified model can achieve an average precision improvement of 82%, 54% and 68% in the east, north and up coordinate directions, respectively.

  3. A finite-strain homogenization model for viscoplastic porous single crystals: I - Theory

    NASA Astrophysics Data System (ADS)

    Song, Dawei; Ponte Castañeda, P.

    2017-10-01

    This paper presents a homogenization-based constitutive model for the finite-strain, macroscopic response of porous viscoplastic single crystals. The model accounts explicitly for the evolution of the average lattice orientation, as well as the porosity, average shape and orientation of the voids (and their distribution), by means of appropriate microstructural variables playing the role of internal variables and serving to characterize the evolution of both the "crystallographic" and "morphological" anisotropy of the porous single crystals. The model makes use of the fully optimized second-order variational method of Ponte Castañeda (2015), together with the iterated homogenization approach of Agoras and Ponte Castañeda (2013), to characterize the instantaneous effective response of the porous single crystals with fixed values of the microstructural variables. Consistent homogenization estimates for the average strain rate and vorticity fields in the phases are then used to derive evolution equations for the associated microstructural variables. The model is 100% predictive, requiring no fitting parameters, and applies for porous viscoplastic single crystals with general crystal anisotropy and average void shape and orientation, which are subjected to general loading conditions. In Part II of this work (Song and Ponte Castañeda, 2017a), results for both the instantaneous response and the evolution of the microstructure will be presented for porous FCC and HCP single crystals under a wide range of loading conditions, and good agreement with available FEM results will be shown.

  4. A Real-Time Method for Estimating Viscous Forebody Drag Coefficients

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Hurtado, Marco; Rivera, Jose; Naughton, Jonathan W.

    2000-01-01

    This paper develops a real-time method based on the law of the wake for estimating forebody skin-friction coefficients. The incompressible law-of-the-wake equations are numerically integrated across the boundary layer depth to develop an engineering model that relates longitudinally averaged skin-friction coefficients to local boundary layer thickness. Solutions applicable to smooth surfaces with pressure gradients and rough surfaces with negligible pressure gradients are presented. Model accuracy is evaluated by comparing model predictions with previously measured flight data. This integral law procedure is beneficial in that skin-friction coefficients can be indirectly evaluated in real-time using a single boundary layer height measurement. In this concept a reference pitot probe is inserted into the flow, well above the anticipated maximum thickness of the local boundary layer. Another probe is servomechanism-driven and floats within the boundary layer. A controller regulates the position of the floating probe. The measured servomechanism position of this second probe provides an indirect measurement of both local and longitudinally averaged skin friction. Simulation results showing the performance of the control law for a noisy boundary layer are then presented.
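
    A simplified sketch of the underlying relation, assuming conventional values for the log-law and wake constants and a smooth wall with zero pressure gradient: evaluating Coles' law of the wake at the boundary-layer edge relates a measured boundary-layer thickness and edge velocity to the friction velocity, from which a local skin-friction coefficient follows. The paper's integration across the layer depth, its rough-wall solution, and the longitudinal averaging are not reproduced.

      import numpy as np
      from scipy.optimize import brentq

      KAPPA, B, PI_WAKE = 0.41, 5.0, 0.55          # conventional constants

      def skin_friction_from_delta(delta_m, u_edge, nu=1.5e-5):
          """Return (u_tau, cf) from boundary-layer thickness and edge velocity."""
          def edge_condition(u_tau):
              # At y = delta: u_edge/u_tau = (1/kappa) ln(delta u_tau / nu) + B + 2 Pi / kappa
              return (np.log(delta_m * u_tau / nu) / KAPPA + B + 2.0 * PI_WAKE / KAPPA
                      - u_edge / u_tau)
          u_tau = brentq(edge_condition, 1e-4 * u_edge, 0.2 * u_edge)
          return u_tau, 2.0 * (u_tau / u_edge) ** 2

      print(skin_friction_from_delta(delta_m=0.02, u_edge=40.0))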

  5. Computer-assisted virtual preoperative planning in orthopedic surgery for acetabular fractures based on actual computed tomography data.

    PubMed

    Wang, Guang-Ye; Huang, Wen-Jun; Song, Qi; Qin, Yun-Tian; Liang, Jin-Feng

    2016-12-01

    Acetabular fractures have always been very challenging for orthopedic surgeons; therefore, appropriate preoperative evaluation and planning are particularly important. This study aimed to explore the application methods and clinical value of preoperative computer simulation (PCS) in treating pelvic and acetabular fractures. Spiral computed tomography (CT) was performed on 13 patients with pelvic and acetabular fractures, and Digital Imaging and Communications in Medicine (DICOM) data were then input into Mimics software to reconstruct three-dimensional (3D) models of actual pelvic and acetabular fractures for preoperative simulative reduction and fixation, and to simulate each surgical procedure. The times needed for virtual surgical modeling and reduction and fixation were also recorded. The average fracture-modeling time was 45 min (30-70 min), and the average time for bone reduction and fixation was 28 min (16-45 min). Among the surgical approaches planned for these 13 patients, 12 were finally adopted; 12 cases used the simulated surgical fixation, and only 1 case used a partial planned fixation method. PCS can provide accurate surgical plans and data support for actual surgeries.

  6. Quasi-analytical treatment of spatially averaged radiation transfer in complex terrain

    NASA Astrophysics Data System (ADS)

    Löwe, H.; Helbig, N.

    2012-04-01

    We provide a new quasi-analytical method to compute the topographic influence on the effective albedo of complex topography as required for meteorological, land-surface or climate models. We investigate radiative transfer in complex terrain via the radiosity equation on isotropic Gaussian random fields. Under controlled approximations we derive expressions for domain averages of direct, diffuse and terrain radiation and the sky view factor. Domain averaged quantities are related to a type of level-crossing probability of the random field which is approximated by longstanding results developed for acoustic scattering at ocean boundaries. This allows us to express all non-local horizon effects in terms of a local terrain parameter, namely the mean squared slope. Emerging integrals are computed numerically and fit formulas are given for practical purposes. As an implication of our approach we provide an expression for the effective albedo of complex terrain in terms of the sun elevation angle, mean squared slope, the area averaged surface albedo, and the direct-to-diffuse ratio of solar radiation. As an application, we compute the effective albedo for the Swiss Alps and discuss possible generalizations of the method.

  7. Overview of aerothermodynamic loads definition study

    NASA Technical Reports Server (NTRS)

    Gaugler, Raymond E.

    1991-01-01

    The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.

  8. A January angular momentum balance in the OSU two-level atmospheric general circulation model

    NASA Technical Reports Server (NTRS)

    Kim, J.-W.; Grady, W.

    1982-01-01

    The present investigation is concerned with an analysis of the atmospheric angular momentum balance, based on the simulation data of the Oregon State University two-level atmospheric general circulation model (AGCM). An attempt is also made to gain an understanding of the involved processes. Preliminary results on the angular momentum and mass balance in the AGCM are shown. The basic equations are examined, and questions of turbulent momentum transfer are investigated. The methods of analysis are discussed, taking into account time-averaged balance equations, time and longitude-averaged balance equations, mean meridional circulation, the mean meridional balance of relative angular momentum, and standing and transient components of motion.

  9. An ODE-Based Wall Model for Turbulent Flow Simulations

    NASA Technical Reports Server (NTRS)

    Berger, Marsha J.; Aftosmis, Michael J.

    2017-01-01

    Mesh generation for complex geometry continues to be the biggest bottleneck in the Reynolds-averaged Navier-Stokes (RANS) simulation process. Fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry, but these methods lack an obvious and robust way to achieve near-wall anisotropy. The goal is to extend these methods to RANS simulation without sacrificing automation, at an affordable cost. Nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.
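
    As a rough illustration of what an ODE-based wall model does (this is a generic equilibrium wall model, not the authors' formulation), the sketch below integrates the constant-stress thin-boundary-layer ODE with a mixing-length eddy viscosity and van Driest damping, then solves for the friction velocity that matches the outer velocity sampled at an exchange height h. The constants, the matching height, and the flow values are illustrative assumptions.

      # Sketch of a generic equilibrium (ODE-based) wall model: with constant
      # total stress, (nu + nu_t) du/dy = u_tau^2, so the velocity at the
      # exchange height follows from one quadrature; solve for u_tau.
      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import brentq

      KAPPA, A_PLUS = 0.41, 17.0   # mixing-length constant, van Driest damping constant

      def wall_shear_stress(u_match, h, nu, rho=1.0):
          """Return tau_wall given the outer velocity u_match sampled at height h."""
          def u_at_h(u_tau):
              def inv_visc(y):
                  damping = (1.0 - np.exp(-y * u_tau / (nu * A_PLUS))) ** 2
                  nu_t = KAPPA * u_tau * y * damping          # mixing-length eddy viscosity
                  return 1.0 / (nu + nu_t)
              integral, _ = quad(inv_visc, 0.0, h)
              return u_tau ** 2 * integral                    # u(h) for this u_tau guess
          u_tau = brentq(lambda ut: u_at_h(ut) - u_match, 1e-6 * u_match, u_match)
          return rho * u_tau ** 2

      # Example: outer solution gives u = 20 m/s at h = 1 mm in air
      print(wall_shear_stress(20.0, 1e-3, 1.5e-5))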

  10. Covariate selection with group lasso and doubly robust estimation of causal effects

    PubMed Central

    Koch, Brandon; Vock, David M.; Wolfson, Julian

    2017-01-01

    The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
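
    For readers unfamiliar with the estimator that GLiDeR feeds its selected covariates into, the sketch below implements a standard augmented inverse-probability-weighted (AIPW) doubly robust estimate of the ACE using plain logistic and linear regressions. The data, variable names, and nuisance models are illustrative placeholders and not the GLiDeR procedure itself.

      # Sketch of a standard doubly robust (AIPW) estimator of the average causal effect.
      import numpy as np
      from sklearn.linear_model import LogisticRegression, LinearRegression

      def aipw_ace(X, treat, y):
          """Augmented inverse-probability-weighted estimate of E[Y(1)] - E[Y(0)]."""
          # Propensity score model P(T = 1 | X)
          e = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
          # Outcome regressions fit separately in each treatment arm
          m1 = LinearRegression().fit(X[treat == 1], y[treat == 1]).predict(X)
          m0 = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)
          # Doubly robust influence-function terms
          mu1 = m1 + treat * (y - m1) / e
          mu0 = m0 + (1 - treat) * (y - m0) / (1 - e)
          return np.mean(mu1 - mu0)

      # Toy example with a known treatment effect of 2.0
      rng = np.random.default_rng(0)
      X = rng.normal(size=(2000, 5))
      treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
      y = 2.0 * treat + X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.normal(size=2000)
      print(aipw_ace(X, treat, y))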

  11. Covariate selection with group lasso and doubly robust estimation of causal effects.

    PubMed

    Koch, Brandon; Vock, David M; Wolfson, Julian

    2018-03-01

    The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.

  12. A bone marrow toxicity model for 223Ra alpha-emitter radiopharmaceutical therapy

    NASA Astrophysics Data System (ADS)

    Hobbs, Robert F.; Song, Hong; Watchman, Christopher J.; Bolch, Wesley E.; Aksnes, Anne-Kirsti; Ramdahl, Thomas; Flux, Glenn D.; Sgouros, George

    2012-05-01

    Ra-223, an α-particle-emitting bone-seeking radionuclide, has recently been used in clinical trials for osseous metastases of prostate cancer. We investigated the relationship between absorbed fraction-based red marrow dosimetry and cell-level dosimetry using a model that accounts for the expected localization of this agent relative to marrow cavity architecture. We show that cell level-based dosimetry is essential to understanding potential marrow toxicity. The GEANT4 software package was used to create simple spheres representing marrow cavities. Ra-223 was positioned on the trabecular bone surface or in the endosteal layer, and its decay, together with that of its descendants, was simulated. The interior of the sphere was divided into cell-size voxels, and the energy deposited in each voxel was tallied and interpreted as dose-cell histograms. The average absorbed dose values and absorbed fractions were also calculated in order to compare those results with previously published values. The absorbed dose was deposited predominantly near the trabecular surface. The dose-cell histogram results were used to plot the percentage of cells that received a potentially toxic absorbed dose (2 or 4 Gy) as a function of the average absorbed dose over the marrow cavity. The results show (1) a heterogeneous distribution of cellular absorbed dose, strongly dependent on the position of the cell within the marrow cavity; and (2) that increasing the average marrow cavity absorbed dose, or equivalently the administered activity, resulted in only a small increase in potential marrow toxicity (i.e. the number of cells receiving more than 2 or 4 Gy) over a range of average marrow cavity absorbed doses from 1 to 20 Gy. The results from the trabecular model differ markedly from those of a standard absorbed fraction method while giving comparable average dose values. This suggests that increasing the amount of administered radioactivity may not substantially increase the risk of marrow toxicity, a conclusion unavailable to the absorbed fraction method of dose calculation.
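
    The post-processing step described above, building dose-cell histograms and tracking the fraction of cells that exceed a toxicity threshold as the average marrow-cavity dose is scaled, can be sketched as follows. The lognormal per-cell doses are a synthetic stand-in for the surface-weighted GEANT4 output, so the numbers printed are purely illustrative.

      # Sketch of dose-cell-histogram bookkeeping (not the GEANT4 transport itself).
      import numpy as np

      def fraction_above_threshold(cell_doses, target_avg, threshold_gy):
          """Fraction of cells above threshold after scaling to a target average dose."""
          scaled = cell_doses * (target_avg / cell_doses.mean())
          return np.mean(scaled > threshold_gy)

      rng = np.random.default_rng(1)
      cell_doses = rng.lognormal(mean=0.0, sigma=1.5, size=100_000)  # heterogeneous per-cell doses

      for avg in (1, 2, 5, 10, 20):  # Gy, spanning the range discussed in the abstract
          frac2 = fraction_above_threshold(cell_doses, avg, 2.0)
          frac4 = fraction_above_threshold(cell_doses, avg, 4.0)
          print(f"avg {avg:2d} Gy: >2 Gy {frac2:.2%}, >4 Gy {frac4:.2%}")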

  13. Parameter estimation of an ARMA model for river flow forecasting using goal programming

    NASA Astrophysics Data System (ADS)

    Mohammadi, Kourosh; Eslami, H. R.; Kahawita, Rene

    2006-11-01

    River flow forecasting constitutes one of the most important applications in hydrology. Several methods have been developed for this purpose, and one of the best-known techniques is the autoregressive moving average (ARMA) model. In the research reported here, the goal was to minimize the forecast error for a specific season of the year as well as for the complete series. Goal programming (GP) was used to estimate the ARMA model parameters. The Shaloo Bridge station on the Karun River, with 68 years of observed streamflow data, was selected to evaluate the performance of the proposed method. When compared with the usual method of maximum likelihood estimation, the results favored the proposed algorithm.
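
    A minimal sketch of the underlying idea, fitting ARMA parameters against two goals at once (a season-specific error and the whole-series error) rather than by maximum likelihood, is given below. The ARMA(1,1) recursion, the toy monthly series, and the goal weights are illustrative assumptions, not the paper's goal-programming formulation.

      # Sketch: ARMA(1,1) parameters chosen to balance a seasonal goal and an
      # overall goal, instead of maximizing the likelihood.
      import numpy as np
      from scipy.optimize import minimize

      def arma11_residuals(params, x):
          phi, theta = params
          e = np.zeros_like(x)
          for t in range(1, len(x)):
              e[t] = x[t] - phi * x[t - 1] - theta * e[t - 1]
          return e

      def goal_objective(params, x, season_mask, w_season=1.0, w_total=1.0):
          e = arma11_residuals(params, x)
          return (w_season * np.sqrt(np.mean(e[season_mask] ** 2))
                  + w_total * np.sqrt(np.mean(e ** 2)))

      # Toy monthly flow series with an AR(1) signal plus noise
      rng = np.random.default_rng(2)
      n = 12 * 60
      x = np.zeros(n)
      for t in range(1, n):
          x[t] = 0.7 * x[t - 1] + rng.normal()
      season_mask = (np.arange(n) % 12) < 3          # e.g. weight the first three months

      res = minimize(goal_objective, x0=[0.5, 0.0], args=(x, season_mask),
                     bounds=[(-0.99, 0.99), (-0.99, 0.99)])
      print("phi, theta =", res.x)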

  14. Dynamics of Polarons in Organic Conjugated Polymers with Side Radicals.

    PubMed

    Liu, J J; Wei, Z J; Zhang, Y L; Meng, Y; Di, B

    2017-03-16

    Based on the one-dimensional tight-binding Su-Schrieffer-Heeger (SSH) model, and using the molecular dynamics method, we discuss the dynamics of electron and hole polarons propagating along a polymer chain as a function of the distance between side radicals and the magnitude of the transfer integrals between the main chain and the side radicals. We first discuss the average velocities of electron and hole polarons as a function of the distance between side radicals. It is found that the average velocities of the electron polarons remain almost unchanged, while the average velocities of hole polarons decrease significantly when the radical distance is comparable to the polaron width. Second, we find that the average velocities of electron polarons decrease with increasing transfer integral, whereas the average velocities of hole polarons increase. These results may provide a theoretical basis for understanding carrier transport properties in polymer chains with side radicals.
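
    For context, the sketch below builds only the static electronic part of a dimerized SSH chain and reports its Peierls gap; it omits the lattice dynamics, the polaron evolution, and the side radicals studied in the paper, and the parameter values are the standard textbook SSH constants for polyacetylene, used purely for illustration.

      # Sketch: static electronic SSH Hamiltonian with staggered dimerization.
      import numpy as np

      N = 200                          # chain length (sites)
      t0, alpha, u0 = 2.5, 4.1, 0.04   # eV, eV/Angstrom, Angstrom (typical SSH values)

      H = np.zeros((N, N))
      for n in range(N - 1):
          # Staggered displacement u_n = (-1)^n * u0 gives alternating bond lengths
          delta_bond = -2.0 * (-1) ** n * u0       # u_{n+1} - u_n
          H[n, n + 1] = H[n + 1, n] = -(t0 - alpha * delta_bond)

      energies = np.linalg.eigvalsh(H)
      gap = energies[N // 2] - energies[N // 2 - 1]
      print(f"Peierls gap ~ {gap:.2f} eV")          # roughly 1.3 eV for these parameters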

  15. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe.

    PubMed

    Aksoy, Ece; Yigini, Yusuf; Montanarella, Luca

    2016-01-01

    Accuracy in assessing the distribution of soil organic carbon (SOC) is an important issue because SOC plays key roles in the functions of both natural ecosystems and agricultural systems. Several studies in the literature have aimed to find the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of this issue by evaluating the performance of models built from aggregated soil samples coming from different studies and land uses. A total of 23,835 soil samples were used, collected from the "Land Use/Cover Area frame Statistical Survey" (LUCAS) Project (samples from agricultural soil), the BioSoil Project (samples from forest soil), and the "Soil Transformations in European Catchments" (SoilTrEC) Project (local soil data from six different critical zone observatories (CZOs) in Europe). Moreover, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, world reference base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, total precipitation and average precipitation (for the years 1960-1990 and 2000-2010)) were used as auxiliary variables in the prediction. One of the most popular geostatistical techniques, Regression-Kriging (RK), was applied to build the model and assess the distribution of SOC. This study showed that, even though the RK method was appropriate for successful SOC mapping, using the combined databases did not increase the statistical significance of the results for assessing the SOC distribution. According to our results, SOC variation at the European scale was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB and CORINE variables in our model. Moreover, the highest average SOC contents were found in wetland areas; agricultural areas have much lower soil organic carbon content than forest and semi-natural areas; and Ireland, Sweden and Finland have the highest SOC, whereas Portugal, Poland, Hungary, Spain and Italy have the lowest values, with an average of 3%.
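
    A compact illustration of the regression-kriging idea (regress SOC on environmental covariates, then interpolate the residuals geostatistically) is sketched below with synthetic data. The covariates, the exponential residual covariance, and all parameter values are assumptions for illustration, not the study's fitted model, and the residual step uses simple kriging for brevity.

      # Sketch of regression-kriging: linear trend on covariates plus
      # simple kriging of the residuals under an assumed exponential covariance.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      def exp_cov(d, sill=1.0, rng_param=50.0):
          """Exponential covariance model for residual spatial correlation."""
          return sill * np.exp(-d / rng_param)

      def regression_kriging_predict(coords, covars, soc, coords_new, covars_new):
          # 1) Trend: linear regression of SOC on covariates (elevation, slope, ...)
          trend = LinearRegression().fit(covars, soc)
          resid = soc - trend.predict(covars)
          # 2) Residuals: simple kriging with the fixed exponential covariance
          d_obs = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          C = exp_cov(d_obs) + 1e-6 * np.eye(len(soc))       # small nugget for stability
          d_new = np.linalg.norm(coords_new[:, None, :] - coords[None, :, :], axis=-1)
          weights = np.linalg.solve(C, exp_cov(d_new).T)      # kriging weights
          return trend.predict(covars_new) + weights.T @ resid

      # Toy example: 200 sample sites, 3 covariates, predictions at 5 new sites
      rng = np.random.default_rng(3)
      coords, covars = rng.uniform(0, 100, (200, 2)), rng.normal(size=(200, 3))
      soc = 3 + covars @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.5, size=200)
      print(regression_kriging_predict(coords, covars, soc,
                                       rng.uniform(0, 100, (5, 2)), rng.normal(size=(5, 3))))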

  16. Combining Soil Databases for Topsoil Organic Carbon Mapping in Europe

    PubMed Central

    Aksoy, Ece

    2016-01-01

    Accuracy in assessing the distribution of soil organic carbon (SOC) is an important issue because SOC plays key roles in the functions of both natural ecosystems and agricultural systems. Several studies in the literature have aimed to find the best method to assess and map the distribution of SOC content for Europe. This study therefore examines another aspect of this issue by evaluating the performance of models built from aggregated soil samples coming from different studies and land uses. A total of 23,835 soil samples were used, collected from the “Land Use/Cover Area frame Statistical Survey” (LUCAS) Project (samples from agricultural soil), the BioSoil Project (samples from forest soil), and the “Soil Transformations in European Catchments” (SoilTrEC) Project (local soil data from six different critical zone observatories (CZOs) in Europe). Moreover, 15 spatial indicators (slope, aspect, elevation, compound topographic index (CTI), CORINE land-cover classification, parent material, texture, world reference base (WRB) soil classification, geological formations, annual average temperature, min-max temperature, total precipitation and average precipitation (for the years 1960–1990 and 2000–2010)) were used as auxiliary variables in the prediction. One of the most popular geostatistical techniques, Regression-Kriging (RK), was applied to build the model and assess the distribution of SOC. This study showed that, even though the RK method was appropriate for successful SOC mapping, using the combined databases did not increase the statistical significance of the results for assessing the SOC distribution. According to our results, SOC variation at the European scale was mainly affected by the elevation, slope, CTI, average temperature, average and total precipitation, texture, WRB and CORINE variables in our model. Moreover, the highest average SOC contents were found in wetland areas; agricultural areas have much lower soil organic carbon content than forest and semi-natural areas; and Ireland, Sweden and Finland have the highest SOC, whereas Portugal, Poland, Hungary, Spain and Italy have the lowest values, with an average of 3%. PMID:27011357

  17. Averaged model to study long-term dynamics of a probe about Mercury

    NASA Astrophysics Data System (ADS)

    Tresaco, Eva; Carvalho, Jean Paulo S.; Prado, Antonio F. B. A.; Elipe, Antonio; de Moraes, Rodolpho Vilhena

    2018-02-01

    This paper provides a method for finding initial conditions of frozen orbits for a probe around Mercury. Frozen orbits are those whose orbital elements remain constant on average; thus, the satellite always passes at the same altitude over a given point of its orbit. This is very interesting for scientific missions that require close inspection of a celestial body. The orbital dynamics of an artificial satellite about Mercury is governed by the potential attraction of the main body. Besides the Keplerian attraction, we consider the inhomogeneities of the potential of the central body, including zonal terms of Mercury's gravity field from J_2 up to J_6 and the tesseral harmonic \overline{C}_{22}, which is of the same order of magnitude as the zonal J_2. For science missions about Mercury it is also important to consider the third-body perturbation of the Sun; the circular restricted three-body problem cannot be applied to the Mercury-Sun system because of Mercury's non-negligible orbital eccentricity. Besides the harmonic coefficients of Mercury's gravitational potential and the solar gravitational perturbation, our averaged model also includes solar radiation pressure. This simplified model captures the majority of the dynamics of low and high orbits about Mercury. In order to capture the dominant characteristics of the dynamics, short-period terms of the system are removed by applying a double-averaging technique, a two-fold process that first averages over the period of the satellite and then averages with respect to the period of the third body. This simplified Hamiltonian model is introduced into the Lagrange planetary equations. Frozen orbits are then characterized by a surface depending on three variables: the orbital semimajor axis, eccentricity and inclination. We find frozen orbits for average altitudes of 400 and 1000 km, which are the predicted values for the BepiColombo mission. Finally, the paper examines the orbital stability of the frozen orbits and the temporal evolution of their eccentricity.

  18. Modeling the effects of high-G stress on pilots in a tracking task

    NASA Technical Reports Server (NTRS)

    Korn, J.; Kleinman, D. L.

    1978-01-01

    Air-to-air tracking experiments were conducted at the Aerospace Medical Research Laboratories using both fixed- and moving-base dynamic environment simulators. The data obtained, which include the longitudinal error of a simulated air-to-air tracking task as well as other auxiliary variables, were analyzed using an ensemble-averaging method. In conjunction with these experiments, the optimal control model was applied to model a human operator under high-G stress.

  19. Co Modeling and Co Synthesis of Safety Critical Multi threaded Embedded Software for Multi Core Embedded Platforms

    DTIC Science & Technology

    2017-03-20

    Keywords: computation, prime implicates, Boolean abstraction, real-time embedded software, software synthesis, correct-by-construction software design, model [...]. Related publication: "...types for time-dependent data-flow networks", J.-P. Talpin, P. Jouvelot, S. Shukla, ACM-IEEE Conference on Methods and Models for System Design.

  20. "Intelligent Ensemble" Projections of Precipitation and Surface Radiation in Support of Agricultural Climate Change Adaptation

    NASA Technical Reports Server (NTRS)

    Taylor, Patrick C.; Baker, Noel C.

    2015-01-01

    Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distributions of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are therefore required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equally weighted average: one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble," that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equally weighted average method. The physical-process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
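
    The weighting step can be illustrated as follows. The skill score used here, the inverse of the Wasserstein distance between each model's historical precipitation distribution and observations, is an assumed stand-in for the authors' physical-process metrics, and all data are synthetic.

      # Sketch: skill-weighted ("intelligent") ensemble versus one-model-one-vote average.
      import numpy as np
      from scipy.stats import wasserstein_distance

      def intelligent_ensemble(model_hist, obs_hist, model_proj):
          """Weight models by historical PDF skill and average their projections."""
          # Skill = inverse of the distance between modeled and observed distributions
          dists = np.array([wasserstein_distance(m, obs_hist) for m in model_hist])
          weights = (1.0 / dists) / np.sum(1.0 / dists)
          weighted = np.sum(weights[:, None] * model_proj, axis=0)
          equal = model_proj.mean(axis=0)       # conventional equally weighted average
          return weighted, equal, weights

      # Toy example: 5 models, daily precipitation histories and future projections
      rng = np.random.default_rng(4)
      obs = rng.gamma(shape=0.8, scale=5.0, size=3650)
      model_hist = [rng.gamma(shape=0.8 + 0.1 * k, scale=5.0, size=3650) for k in range(5)]
      model_proj = rng.gamma(shape=0.9, scale=5.5, size=(5, 3650))
      weighted, equal, w = intelligent_ensemble(model_hist, obs, model_proj)
      print("weights:", np.round(w, 3))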
