Automated Predictive Big Data Analytics Using Ontology Based Semantics.
Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A
2015-10-01
Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm) and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as providing the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform a similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs.
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
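The adaptive Metropolis samplers referenced in this abstract (DRAM, DREAM) adapt the proposal distribution from the chain history. As a minimal sketch of that idea only, not the dissertation's DRAM or DREAM implementations, the snippet below shows a Haario-style adaptive Metropolis loop in which the proposal covariance is periodically re-estimated from the accumulated samples; the correlated-Gaussian log-posterior is a stand-in assumption.

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=5000, adapt_start=500, seed=0):
    """Minimal Haario-style adaptive Metropolis sketch (not DRAM or DREAM)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    sd = 2.4 ** 2 / d                      # classical adaptive-Metropolis scaling
    cov = 0.1 * np.eye(d)                  # initial proposal covariance
    chain = np.empty((n_iter, d))
    lp = log_post(theta)
    for i in range(n_iter):
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
        if i >= adapt_start:                       # adapt from the chain history
            cov = sd * np.cov(chain[: i + 1].T) + 1e-8 * np.eye(d)
    return chain

# Stand-in log-posterior: a correlated bivariate Gaussian (illustrative assumption).
target_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(target_cov)
samples = adaptive_metropolis(lambda th: -0.5 * th @ prec @ th, [3.0, -3.0])
print(samples[1000:].mean(axis=0))
print(np.cov(samples[1000:].T))
```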
Xu, Daolin; Lu, Fangfang
2006-12-01
We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines the implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidates of functional basis and determines the optimal model through orthogonal characteristics on data. The technique with the Adams integration algorithm makes the reconstruction available to data sampled with large time intervals. Numerical experiment on Lorenz and Rossler systems shows that the proposed strategy is effective in global vector field reconstruction from noisy time series.
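The error reduction ratio (ERR) structure selection mentioned above ranks candidate basis terms by the fraction of output variance each orthogonalized term explains. The following sketch is a simplified forward orthogonal least squares selector on a generic candidate dictionary; the dictionary, noise level and number of terms are illustrative assumptions, and the implicit Adams integration step of the paper is not reproduced.

```python
import numpy as np

def err_select(candidates, y, n_terms=3):
    """Forward orthogonal least squares: pick terms by error reduction ratio."""
    P = candidates.astype(float)
    y = y.astype(float)
    yty = y @ y
    selected, basis = [], []            # chosen column indices, orthogonal basis
    for _ in range(n_terms):
        best_err, best_j, best_w = -1.0, None, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].copy()
            for wk in basis:            # Gram-Schmidt against chosen basis
                w -= (wk @ P[:, j]) / (wk @ wk) * wk
            if w @ w < 1e-12:
                continue
            err = (w @ y) ** 2 / ((w @ w) * yty)   # error reduction ratio
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        selected.append(best_j)
        basis.append(best_w)
        print(f"selected term {best_j}, ERR = {best_err:.3f}")
    return selected

# Illustrative candidate dictionary for one state equation of a Lorenz-like model.
rng = np.random.default_rng(1)
x, ydat, z = rng.normal(size=(3, 400))
dxdt = 10.0 * (ydat - x) + 0.05 * rng.normal(size=400)    # assumed "truth"
cands = np.column_stack([x, ydat, z, x * ydat, x * z, ydat * z, x ** 2])
err_select(cands, dxdt, n_terms=2)
```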
A pilot modeling technique for handling-qualities research
NASA Technical Reports Server (NTRS)
Hess, R. A.
1980-01-01
A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.
Exploring the Components of Dynamic Modeling Techniques
ERIC Educational Resources Information Center
Turnitsa, Charles Daniel
2012-01-01
Upon defining the terms modeling and simulation, it becomes apparent that there is a wide variety of different models, using different techniques, appropriate for different levels of representation for any one system to be modeled. Selecting an appropriate conceptual modeling technique from those available is an open question for the practitioner.…
Selected Logistics Models and Techniques.
1984-09-01
Table-of-contents and catalog excerpt (partially garbled in extraction): entries include the TI-59 Programmable Calculator LCC Model and the Unmanned Spacecraft Cost Model; the TI-59 Programmable Calculator LCC Model is catalogued under "Logistics Analysis Model/Technique Data" as a cost-estimating model.
Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition
Fraley, Chris; Percival, Daniel
2014-01-01
Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics.
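As a rough sketch of the idea of treating the ℓ1 regularization path as a model space, the snippet below enumerates the distinct supports along a lasso path and weights them with BIC-based approximate posterior weights. This is a simplification of the paper's Markov chain Monte Carlo model composition; the simulated data and the BIC weighting are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, lasso_path

rng = np.random.default_rng(0)
n, p = 80, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]
y = X @ beta + rng.normal(scale=0.5, size=n)

# Treat each distinct support along the lasso path as a candidate model.
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
supports = {tuple(np.flatnonzero(coefs[:, k])) for k in range(coefs.shape[1])}
supports.discard(())

models = []
for s in supports:
    idx = list(s)
    fit = LinearRegression().fit(X[:, idx], y)
    rss = ((y - fit.predict(X[:, idx])) ** 2).sum()
    bic = n * np.log(rss / n) + (len(idx) + 1) * np.log(n)
    models.append((bic, s))

# BIC-based weights as a crude stand-in for MC3 posterior model probabilities.
bics = np.array([m[0] for m in models])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
incl = np.zeros(p)
for weight, (_, s) in zip(w, models):
    incl[list(s)] += weight
print("approximate posterior inclusion probabilities:", np.round(incl, 2))
```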
Torija, Antonio J; Ruiz, Diego P
2015-02-01
The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as the regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)).
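As an illustrative sketch of the combination reported to work best, wrapper-style feature selection around Gaussian process regression, the snippet below uses scikit-learn's sequential feature selector with a GPR estimator. The synthetic data, kernel, and number of selected features are assumptions, not the study's configuration.

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the urban sound-level data (assumption).
X, y = make_regression(n_samples=200, n_features=15, n_informative=5,
                       noise=5.0, random_state=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
selector = SequentialFeatureSelector(gpr, n_features_to_select=5,
                                     scoring="neg_mean_absolute_error", cv=3)
model = make_pipeline(StandardScaler(), selector, gpr)   # wrapper FS + GPR

mae = -cross_val_score(model, X, y, cv=5,
                       scoring="neg_mean_absolute_error").mean()
print(f"cross-validated MAE with wrapper-selected features: {mae:.2f}")
```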
Firefly as a novel swarm intelligence variable selection method in spectroscopy.
Goodarzi, Mohammad; dos Santos Coelho, Leandro
2014-12-10
A critical step in multivariate calibration is wavelength selection, which is used to build models with better prediction performance when applied to spectral data. Up to now, many feature selection techniques have been developed. Among the different types of feature selection techniques, those based on swarm intelligence optimization methodologies are particularly interesting since they are usually modelled on animal and insect behavior, e.g., finding the shortest path between a food source and the nest. Such decisions are made collectively by the swarm, leading to a more robust model that is less prone to falling into local minima during the optimization cycle. This paper presents a novel feature selection approach for spectroscopic data, leading to more robust calibration models. The performance of the firefly algorithm, a swarm intelligence paradigm, was evaluated and compared with the genetic algorithm and particle swarm optimization. All three techniques were coupled with partial least squares (PLS) and applied to three spectroscopic data sets. They demonstrate improved prediction results in comparison to when only a PLS model was built using all wavelengths. Results show that the firefly algorithm, as a novel swarm paradigm, leads to a lower number of selected wavelengths while the prediction performance of the built PLS model stays the same.
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
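To illustrate the kind of aggregation the claimed method describes, compiling process-step data and calculating process metrics from their summation, here is a minimal, hypothetical sketch; the step parameters and metric definitions are illustrative assumptions, not the patent's specification.

```python
from dataclasses import dataclass

@dataclass
class ProcessStep:
    name: str
    cycle_time_min: float      # time spent actually transforming the product
    queue_time_min: float      # waiting / batching time before the step
    defect_rate: float         # fraction of units scrapped or reworked

def process_metrics(steps):
    """Sum compiled step data and derive simple lean-style process metrics."""
    total_cycle = sum(s.cycle_time_min for s in steps)
    total_lead = total_cycle + sum(s.queue_time_min for s in steps)
    yield_rate = 1.0
    for s in steps:
        yield_rate *= (1.0 - s.defect_rate)        # rolled throughput yield
    return {
        "total_lead_time_min": total_lead,
        "value_added_ratio": total_cycle / total_lead,
        "rolled_throughput_yield": yield_rate,
    }

batch_line = [ProcessStep("cut", 4, 45, 0.02),
              ProcessStep("weld", 6, 60, 0.05),
              ProcessStep("paint", 3, 30, 0.01)]
print(process_metrics(batch_line))
```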
A study for development of aerothermodynamic test model materials and fabrication technique
NASA Technical Reports Server (NTRS)
Dean, W. G.; Connor, L. E.
1972-01-01
A literature survey, materials reformulation and tailoring, fabrication problems, and materials selection and evaluation for fabricating models to be used with the phase-change technique for obtaining quantitative aerodynamic heat transfer data are presented. The study resulted in the selection of the two best materials: Stycast 2762 FT and an alumina ceramic. Characteristics of these materials and detailed fabrication methods are presented.
Design and Evaluation of Perceptual-based Object Group Selection Techniques
NASA Astrophysics Data System (ADS)
Dehmeshki, Hoda
Selecting groups of objects is a frequent task in graphical user interfaces. It is required prior to many standard operations such as deletion, movement, or modification. Conventional selection techniques are lasso, rectangle selection, and the selection and de-selection of items through the use of modifier keys. These techniques may become time-consuming and error-prone when target objects are densely distributed or when the distances between target objects are large. Perceptual-based selection techniques can considerably improve selection tasks when targets have a perceptual structure, for example when arranged along a line. Current methods to detect such groups use ad hoc grouping algorithms that are not based on results from perception science. Moreover, these techniques do not allow selecting groups with arbitrary arrangements or permit modifying a selection. This dissertation presents two domain-independent perceptual-based systems that address these issues. Based on established group detection models from perception research, the proposed systems detect perceptual groups formed by the Gestalt principles of good continuation and proximity. The new systems provide gesture-based or click-based interaction techniques for selecting groups with curvilinear or arbitrary structures as well as clusters. Moreover, the gesture-based system is adapted for the graph domain to facilitate path selection. This dissertation includes several user studies that show the proposed systems outperform conventional selection techniques when targets form salient perceptual groups and are still competitive when targets are semi-structured.
Dierker, Lisa; Rose, Jennifer; Tan, Xianming; Li, Runze
2010-12-01
This paper describes and compares a selection of available modeling techniques for identifying homogeneous population subgroups in the interest of informing targeted substance use intervention. We present a nontechnical review of the common and unique features of three methods: (a) trajectory analysis, (b) functional hierarchical linear modeling (FHLM), and (c) decision tree methods. Differences among the techniques are described, including required data features, strengths and limitations in terms of the flexibility with which outcomes and predictors can be modeled, and the potential of each technique for helping to inform the selection of targets and timing of substance intervention programs.
Frequentist Model Averaging in Structural Equation Modelling.
Jin, Shaobo; Ankargren, Sebastian
2018-06-04
Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.
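The core of frequentist model averaging is a weighted combination of the candidate-model estimates. The abstract does not state the paper's specific weighting scheme, so the smoothed-AIC weights below are only one common, illustrative choice.

```latex
\hat{\theta}_{\mathrm{FMA}} \;=\; \sum_{k=1}^{K} w_k \,\hat{\theta}_k,
\qquad
w_k \;=\; \frac{\exp\!\left(-\tfrac{1}{2}\,\Delta_k\right)}
               {\sum_{m=1}^{K} \exp\!\left(-\tfrac{1}{2}\,\Delta_m\right)},
\qquad
\Delta_k \;=\; \mathrm{AIC}_k - \min_m \mathrm{AIC}_m,
\qquad
\sum_{k=1}^{K} w_k = 1 .
```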
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Differences in spatial overlap were even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
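One way to quantify the spatial overlap between binary habitat-suitability maps produced by the two variable-selection approaches is sketched below. The exact overlap statistic used in the study is not given in the abstract, so the intersection-over-union form and the random grids here are assumptions.

```python
import numpy as np

def spatial_overlap(map_a, map_b):
    """Intersection-over-union of two boolean presence maps (one possible
    overlap statistic; the study's exact metric may differ)."""
    a, b = np.asarray(map_a, bool), np.asarray(map_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else np.nan

# Hypothetical 100 x 100 suitability grids thresholded at 0.5.
rng = np.random.default_rng(7)
expert_map = rng.random((100, 100)) > 0.5
statistical_map = rng.random((100, 100)) > 0.5
print(f"spatial overlap = {spatial_overlap(expert_map, statistical_map):.2f}")
print("area predicted (expert, statistical):",
      int(expert_map.sum()), int(statistical_map.sum()))
```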
Automatic welding detection by an intelligent tool pipe inspection
NASA Astrophysics Data System (ADS)
Arizmendi, C. J.; Garcia, W. L.; Quintero, M. A.
2015-07-01
This work provides a model based on machine learning techniques for weld recognition, using signals obtained through an in-line inspection tool called a “smart pig” in oil and gas pipelines. The model uses a signal noise reduction phase by means of pre-processing algorithms and attribute-selection techniques. The noise reduction techniques were selected after a literature review and testing with survey data. Subsequently, the model was trained using recognition and classification algorithms, specifically artificial neural networks and support vector machines. Finally, the trained model was validated with different data sets and the performance was measured with cross validation and ROC analysis. The results show that it is possible to identify welds automatically with an efficiency between 90 and 98 percent.
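A compact sketch of the pipeline described above, preprocessing plus attribute selection feeding an SVM classifier evaluated with cross-validation and ROC analysis, is given below; the synthetic features stand in for the “smart pig” signal data, which are not available here, and the feature-selection and kernel choices are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for preprocessed in-line inspection signal features.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)

weld_clf = make_pipeline(StandardScaler(),             # noise-reduced, scaled signals
                         SelectKBest(f_classif, k=10), # attribute selection
                         SVC(kernel="rbf", C=1.0))     # support vector machine

auc = cross_val_score(weld_clf, X, y, cv=10, scoring="roc_auc")
print(f"10-fold ROC AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```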
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
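The selection criterion described above, the Kalman filter's mean squared estimation error for a candidate sensor suite, can be illustrated on a small linear system: for each sensor subset, solve the steady-state filtering Riccati equation and score the subset by the trace of the updated error covariance. The system matrices below are arbitrary stand-ins, not an engine model, and the exhaustive search is a simplification of the paper's optimization.

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

# Arbitrary stand-in discrete-time system: x_{k+1} = A x_k + w,  y_k = C x_k + v.
A = np.array([[0.95, 0.10, 0.00],
              [0.00, 0.90, 0.05],
              [0.00, 0.00, 0.85]])
C_full = np.eye(3)                      # three candidate sensors (one per state)
Q = 0.01 * np.eye(3)                    # process noise covariance
r = 0.05                                # per-sensor measurement noise variance

def steady_state_mse(sensor_idx):
    """Trace of the steady-state updated error covariance for a sensor subset."""
    C = C_full[list(sensor_idx)]
    R = r * np.eye(len(sensor_idx))
    # Prediction-error covariance from the filtering DARE (note the transposes).
    P = solve_discrete_are(A.T, C.T, Q, R)
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return np.trace(P - K @ C @ P)      # updated (posterior) covariance trace

for k in (1, 2):
    best = min(itertools.combinations(range(3), k), key=steady_state_mse)
    print(f"best {k}-sensor suite: {best}, MSE = {steady_state_mse(best):.4f}")
```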
NASA Technical Reports Server (NTRS)
Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.
1999-01-01
This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transform) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.
Network Modeling and Simulation Environment (NEMSE)
2012-07-01
The NEMSE program investigated complex emulation techniques and selected compatible emulation techniques for all OSI network stack layers. Accomplishments included work with EMULAB and completion of the selection of compatible emulation techniques that allow working with all layers of the Open Systems Interconnection (OSI) model. A network elements table (Figure 3) reconciles the various elements of NEMSE against the OSI stack and other functions.
A Systematic Comparison of Data Selection Criteria for SMT Domain Adaptation
Chao, Lidia S.; Lu, Yi; Xing, Junwen
2014-01-01
Data selection has shown significant improvements in the effective use of training data by extracting sentences from large general-domain corpora to adapt statistical machine translation (SMT) systems to in-domain data. This paper performs an in-depth analysis of three different sentence selection techniques. The first is cosine tf-idf, which comes from the realm of information retrieval (IR). The second is a perplexity-based approach, which can be found in the field of language modeling. These two data selection techniques applied to SMT have already been presented in the literature. However, edit distance is proposed for this task in this paper for the first time. After investigating the individual models, a combination of all three techniques is proposed at both the corpus level and the model level. Comparative experiments are conducted on a Hong Kong law Chinese-English corpus and the results indicate the following: (i) the constraint degree of similarity measuring is not monotonically related to domain-specific translation quality; (ii) the individual selection models fail to perform effectively and robustly; but (iii) bilingual resources and combination methods are helpful to balance out-of-vocabulary (OOV) and irrelevant data; (iv) finally, our method achieves the goal of consistently boosting the overall translation performance, ensuring optimal quality for a real-life SMT system.
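The cosine tf-idf criterion mentioned first can be sketched as ranking general-domain sentences by their similarity to the centroid of the in-domain data; the toy sentences below are placeholders for the Hong Kong law corpus, and the perplexity and edit-distance criteria are not shown.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpora (assumption); in the paper these are law-domain and
# general-domain parallel corpora.
in_domain = ["the court dismissed the appeal",
             "the ordinance amends the companies law"]
general = ["the weather is sunny today",
           "the tribunal upheld the ruling of the lower court",
           "i bought two tickets for the concert",
           "the statute governs contractual obligations"]

vec = TfidfVectorizer().fit(in_domain + general)
centroid = np.asarray(vec.transform(in_domain).mean(axis=0))  # in-domain centroid
scores = cosine_similarity(vec.transform(general), centroid).ravel()

# Keep the general-domain sentences most similar to the in-domain centroid.
for sentence, score in sorted(zip(general, scores), key=lambda t: -t[1]):
    print(f"{score:.3f}  {sentence}")
```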
Attallah, Omneya; Karthikesalingam, Alan; Holt, Peter J E; Thompson, Matthew M; Sayers, Rob; Bown, Matthew J; Choke, Eddie C; Ma, Xianghong
2017-08-03
The feature selection (FS) process is essential in the medical area as it reduces the effort and time needed for physicians to measure unnecessary features. Choosing useful variables is a difficult task in the presence of censoring, which is a characteristic unique to survival analysis. Most survival FS methods depend on Cox's proportional hazards model; however, machine learning techniques (MLT) are preferred but not commonly used due to censoring. Techniques that have been proposed to adopt MLT to perform FS with survival data cannot be used with high levels of censoring. The researchers' previous publications proposed a technique to deal with a high level of censoring and used existing FS techniques to reduce dataset dimension. In this paper, however, a new FS technique is proposed and combined with feature transformation and the proposed uncensoring approaches to select a reduced set of features and produce a stable predictive model. Specifically, an FS technique based on artificial neural network (ANN) MLT is proposed to deal with highly censored Endovascular Aortic Repair (EVAR) survival data. EVAR datasets were collected from 2004 to 2010 from two vascular centers in order to produce a final stable model; they contain almost 91% censored patients. The proposed approach used a wrapper FS method with an ANN to select a reduced subset of features that predict the risk of EVAR re-intervention after 5 years for patients from two different centers located in the United Kingdom, allowing it to be potentially applied to cross-center predictions. The proposed model is compared with two popular FS techniques, the Akaike and Bayesian information criteria (AIC, BIC), used with Cox's model. The final model outperforms the other methods in distinguishing the high- and low-risk groups, with a concordance index and estimated AUC better than those of Cox's model based on the AIC, BIC, Lasso, and SCAD approaches. These models have p-values lower than 0.05, meaning that patients in different risk groups can be separated significantly and those who would need re-intervention can be correctly predicted. The proposed approach will save the time and effort spent by physicians collecting unnecessary variables. The final reduced model was able to predict the long-term risk of aortic complications after EVAR, and this predictive model can help clinicians decide patients' future observation plans.
Execution models for mapping programs onto distributed memory parallel computers
NASA Technical Reports Server (NTRS)
Sussman, Alan
1992-01-01
The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esfahani, M. Nasr; Yilmaz, M.; Sonne, M. R.
The trend towards nanomechanical resonator sensors with increasing sensitivity raises the need to address challenges encountered in the modeling of their mechanical behavior. Selecting the best approach to mechanical response modeling amongst the various potential computational solid mechanics methods is subject to controversy. A guideline for the selection of the appropriate approach for a specific set of geometry and mechanical properties is needed. In this study, geometrical limitations in frequency response modeling of flexural nanomechanical resonators are investigated. Deviations of the Euler and Timoshenko beam theories from numerical techniques, including finite element modeling and the Surface Cauchy-Born technique, are studied. The results provide a limit beyond which the surface energy contribution dominates the mechanical behavior. Using the Surface Cauchy-Born technique as the reference, a maximum error on the order of 50% is reported for high-aspect-ratio resonators.
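For context on the analytical side of that comparison, the flexural resonance frequencies of a classical Euler-Bernoulli beam follow the standard expression below (λ_n is the mode-shape eigenvalue, e.g. λ_1 ≈ 1.875 for a cantilever); the surface-energy corrections studied in the paper are not included in this formula.

```latex
f_n \;=\; \frac{\lambda_n^{2}}{2\pi L^{2}}\sqrt{\frac{E I}{\rho A}},
\qquad \lambda_1 \approx 1.875 \ \text{(cantilever)},
```

where E is Young's modulus, I the area moment of inertia, ρ the density, A the cross-sectional area, and L the beam length.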
Hippert, Henrique S; Taylor, James W
2010-04-01
Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation.
A Simplified Technique for Scoring DSM-IV Personality Disorders with the Five-Factor Model
ERIC Educational Resources Information Center
Miller, Joshua D.; Bagby, R. Michael; Pilkonis, Paul A.; Reynolds, Sarah K.; Lynam, Donald R.
2005-01-01
The current study compares the use of two alternative methodologies for using the Five-Factor Model (FFM) to assess personality disorders (PDs). Across two clinical samples, a technique using the simple sum of selected FFM facets is compared with a previously used prototype matching technique. The results demonstrate that the more easily…
NASA Astrophysics Data System (ADS)
Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad
2015-11-01
One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and a semi-parametric splines technique for predicting stock prices in this study. The MARS model, a nonparametric method, is an adaptive regression method that suits problems with high dimensions and several variables. The semi-parametric splines technique used here is based on smoothing splines, a nonparametric regression method. In this study, we used 40 variables (30 accounting variables and 10 economic variables) for predicting stock prices with the MARS model and with the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock prices with the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS Forecast and P/E Ratio) were selected as variables effective in forecasting stock prices.
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently.
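A simplified flavor of the kD-tree interpolation idea, using stored single-model MCMC samples to build a cheap approximation to that model's posterior for proposing intermodel jumps, is sketched below with scipy's cKDTree. The target distribution and the nearest-neighbour bounding-box proposal are placeholders; the paper's actual algorithm builds the proposal from the tree's cell structure rather than this crude local box.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(3)

# Stored samples from a single-model MCMC run (stand-in: a 2-D Gaussian posterior).
stored = rng.multivariate_normal([1.0, -2.0], [[0.5, 0.2], [0.2, 0.3]], size=5000)
tree = cKDTree(stored)

def propose_jump(k=16):
    """Propose a point in the target model's space from the stored samples:
    pick a stored sample, then draw uniformly inside the bounding box of its
    k nearest neighbours (a crude local density approximation)."""
    seed_pt = stored[rng.integers(len(stored))]
    _, idx = tree.query(seed_pt, k=k)
    nbrs = stored[idx]
    lo, hi = nbrs.min(axis=0), nbrs.max(axis=0)
    proposal = rng.uniform(lo, hi)
    return proposal, 1.0 / np.prod(hi - lo)   # proposal and its local density

props = np.array([propose_jump()[0] for _ in range(2000)])
print("proposal mean:", props.mean(axis=0))   # should sit near the stored mean
```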
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
Ground Vibration Test Planning and Pre-Test Analysis for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Bedrossian, Herand; Tinker, Michael L.; Hidalgo, Homero
2000-01-01
This paper describes the results of the modal test planning and the pre-test analysis for the X-33 vehicle. The pre-test analysis included the selection of the target modes, selection of the sensor and shaker locations and the development of an accurate Test Analysis Model (TAM). For target mode selection, four techniques were considered: one based on the Modal Cost technique, one based on the Balanced Singular Value technique, a technique known as the Root Sum Squared (RSS) method, and a Modal Kinetic Energy (MKE) approach. For selecting sensor locations, four techniques were also considered: one based on the Weighted Average Kinetic Energy (WAKE), one based on Guyan Reduction (GR), one emphasizing engineering judgment, and one based on an optimum sensor selection technique using a Genetic Algorithm (GA) search technique combined with a criterion based on Hankel Singular Values (HSVs). For selecting shaker locations, four techniques were also considered: one based on the Weighted Average Driving Point Residue (WADPR), one based on engineering judgment and accessibility considerations, a frequency response method, and an optimum shaker location selection based on a GA search technique combined with a criterion based on HSVs. To evaluate the effectiveness of the proposed sensor and shaker locations for exciting the target modes, extensive numerical simulations were performed. The Multivariate Mode Indicator Function (MMIF) was used to evaluate the effectiveness of each sensor and shaker set with respect to modal parameter identification. Several TAM reduction techniques were considered, including Guyan, IRS, Modal, and Hybrid. Based on pre-test cross-orthogonality checks using various reduction techniques, a Hybrid TAM reduction technique was selected and was used for all three vehicle fuel level configurations.
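One of the checks underlying the sensor-set evaluation and TAM validation above is mode-shape correlation. A minimal Modal Assurance Criterion (MAC) computation is sketched here, with random matrices standing in for the test and analysis mode shapes; the study's actual MMIF and cross-orthogonality computations are more involved.

```python
import numpy as np

def mac(phi_test, phi_fem):
    """Modal Assurance Criterion matrix between two mode-shape sets
    (columns are modes, rows are sensor degrees of freedom)."""
    num = np.abs(phi_test.T @ phi_fem) ** 2
    den = np.outer(np.einsum("ij,ij->j", phi_test, phi_test),
                   np.einsum("ij,ij->j", phi_fem, phi_fem))
    return num / den

# Stand-in mode shapes: 40 sensor DOFs, 5 target modes, plus small test noise.
rng = np.random.default_rng(0)
phi_fem = rng.normal(size=(40, 5))
phi_test = phi_fem + 0.05 * rng.normal(size=(40, 5))
print(np.round(mac(phi_test, phi_fem), 2))   # near-identity => well-paired modes
```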
Propensity Score Estimation with Data Mining Techniques: Alternatives to Logistic Regression
ERIC Educational Resources Information Center
Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M.
2013-01-01
Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…
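As a small sketch of the comparison this work sets up, propensity scores are estimated below for the same simulated covariates with logistic regression and with one data-mining alternative (gradient boosting); the data-generating model, including its interaction term, is an assumption for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Simulated observational data: treatment assignment depends on covariates.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
true_logit = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3]
true_ps = 1 / (1 + np.exp(-true_logit))
treat = rng.binomial(1, true_ps)

# Propensity scores from logistic regression (main effects only) ...
ps_logit = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
# ... and from a data-mining technique that can pick up the interaction.
ps_gbm = GradientBoostingClassifier(random_state=0).fit(X, treat).predict_proba(X)[:, 1]

for name, ps in [("logistic regression", ps_logit), ("gradient boosting", ps_gbm)]:
    print(f"{name:20s} mean abs. error vs true propensity = "
          f"{np.abs(ps - true_ps).mean():.3f}")
```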
On using sample selection methods in estimating the price elasticity of firms' demand for insurance.
Marquis, M Susan; Louis, Thomas A
2002-01-01
We evaluate a technique based on sample selection models that has been used by health economists to estimate the price elasticity of firms' demand for insurance. We demonstrate that this technique produces inflated estimates of the price elasticity. We show that alternative methods lead to valid estimates.
Data mining and statistical inference in selective laser melting
Kamath, Chandrika
2016-01-11
Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
An Introduction to Markov Modeling: Concepts and Uses
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Lau, Sonie (Technical Monitor)
1998-01-01
Markov modeling is a modeling technique that is widely useful for dependability analysis of complex fault tolerant systems. It is very flexible in the type of systems and system behavior it can model. It is not, however, the most appropriate modeling technique for every modeling situation. The first task in obtaining a reliability or availability estimate for a system is selecting which modeling technique is most appropriate to the situation at hand. A person performing a dependability analysis must confront the question: is Markov modeling most appropriate to the system under consideration, or should another technique be used instead? The need to answer this gives rise to other more basic questions regarding Markov modeling: what are the capabilities and limitations of Markov modeling as a modeling technique? How does it relate to other modeling techniques? What kind of system behavior can it model? What kinds of software tools are available for performing dependability analyses with Markov modeling techniques? These questions and others will be addressed in this tutorial.
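A tiny, hypothetical example of the kind of dependability model discussed: a two-component redundant system as a continuous-time Markov chain whose state probabilities come from the matrix exponential of the generator. The failure and repair rates are illustrative assumptions, not values from the tutorial.

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1     # per-hour failure and repair rates (assumed values)

# States: 0 = both components up, 1 = one failed, 2 = system failed (absorbing,
# so the computed quantity is reliability rather than availability).
Q = np.array([[-2 * lam,        2 * lam,  0.0],
              [      mu, -(mu + lam),     lam],
              [     0.0,         0.0,     0.0]])

p0 = np.array([1.0, 0.0, 0.0])
for t in (100.0, 1000.0, 10000.0):
    p_t = p0 @ expm(Q * t)            # dp/dt = p Q  =>  p(t) = p(0) exp(Q t)
    print(f"t = {t:7.0f} h   reliability = {1.0 - p_t[2]:.6f}")
```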
Thermal sensing of cryogenic wind tunnel model surfaces - Evaluation of silicon diodes
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran; Ash, Robert L.; Dillon-Townes, Lawrence A.
1986-01-01
Different sensors and installation techniques for surface temperature measurement of cryogenic wind tunnel models were investigated. Silicon diodes were selected for further consideration because of their good inherent accuracy. Their average absolute temperature deviation in comparison tests with standard platinum resistance thermometers was found to be 0.2 K in the range from 125 to 273 K. Subsurface temperature measurement was selected as the installation technique in order to minimize aerodynamic interference. Temperature distortion caused by an embedded silicon diode was studied numerically.
NASA Astrophysics Data System (ADS)
Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.
2016-04-01
The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.
IT vendor selection model by using structural equation model & analytical hierarchy process
NASA Astrophysics Data System (ADS)
Maitra, Sarit; Dominic, P. D. D.
2012-11-01
Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can dwarf an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The new proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied using a real-life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation purposes.
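The AHP stage of such a hybrid model derives criterion weights from a pairwise comparison matrix via its principal eigenvector and checks consistency. A generic sketch follows; the comparison values and criteria names are hypothetical, and the SEM stage is not shown.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights (principal eigenvector) and consistency ratio."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # commonly cited Saaty random indices
    return w, ci / ri

# Hypothetical pairwise comparisons for three vendor-selection criteria:
# cost, quality, delivery (Saaty 1-9 scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(A)
print("criterion weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```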
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
Conducting field studies for testing pesticide leaching models
Smith, Charles N.; Parrish, Rudolph S.; Brown, David S.
1990-01-01
A variety of predictive models are being applied to evaluate the transport and transformation of pesticides in the environment. These include well known models such as the Pesticide Root Zone Model (PRZM), the Risk of Unsaturated-Saturated Transport and Transformation Interactions for Chemical Concentrations Model (RUSTIC) and the Groundwater Loading Effects of Agricultural Management Systems Model (GLEAMS). The potentially large impacts of using these models as tools for developing pesticide management strategies and regulatory decisions necessitates development of sound model validation protocols. This paper offers guidance on many of the theoretical and practical problems encountered in the design and implementation of field-scale model validation studies. Recommendations are provided for site selection and characterization, test compound selection, data needs, measurement techniques, statistical design considerations and sampling techniques. A strategy is provided for quantitatively testing models using field measurements.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
NASA Astrophysics Data System (ADS)
Lei, Li
1999-07-01
In this study the researcher develops and presents a new model, founded on the laws of physics, for analyzing dance technique. Based on a pilot study of four advanced dance techniques, she creates a new model for diagnosing, analyzing and describing basic, intermediate and advanced dance techniques. The name for this model is "PED," which stands for Physics of Expressive Dance. The research design consists of five phases: (1) Conduct a pilot study to analyze several advanced dance techniques chosen from Chinese dance, modern dance, and ballet; (2) Based on learning obtained from the pilot study, create the PED Model for analyzing dance technique; (3) Apply this model to eight categories of dance technique; (4) Select two advanced dance techniques from each category and analyze these sample techniques to demonstrate how the model works; (5) Develop an evaluation framework and use it to evaluate the effectiveness of the model, taking into account both scientific and artistic aspects of dance training. In this study the researcher presents new solutions to three problems highly relevant to dance education: (1) Dancers attempting to learn difficult movements often fail because they are unaware of physics laws; (2) Even those who do master difficult movements can suffer injury due to incorrect training methods; (3) Even the best dancers can waste time learning by trial and error, without scientific instruction. In addition, the researcher discusses how the application of the PED model can benefit dancers, allowing them to avoid inefficient and ineffective movements and freeing them to focus on the artistic expression of dance performance. This study is unique, presenting the first comprehensive system for analyzing dance techniques in terms of physics laws. The results of this study are useful, allowing a new level of awareness about dance techniques that dance professionals can utilize for more effective and efficient teaching and learning. The approach utilized in this study is universal, and can be applied to any dance movement and to any dance style.
Wang, Kung-Jeng; Makond, Bunjira; Wang, Kung-Min
2013-11-09
Breast cancer is one of the most critical cancers and is a major cause of cancer death among women. It is essential to know the survivability of the patients in order to ease the decision making process regarding medical treatment and financial preparation. Recently, the breast cancer data sets have been imbalanced (i.e., the number of survival patients outnumbers the number of non-survival patients) whereas the standard classifiers are not applicable to imbalanced data sets. Methods to improve the survivability prognosis of breast cancer therefore need study. Two well-known five-year prognosis models/classifiers [i.e., logistic regression (LR) and decision tree (DT)] are constructed by combining the synthetic minority over-sampling technique (SMOTE), the cost-sensitive classifier technique (CSC), under-sampling, bagging, and boosting. The feature selection method is used to select relevant variables, while the pruning technique is applied to obtain low information-burden models. These methods are applied on data obtained from the Surveillance, Epidemiology, and End Results database. The improvements of survivability prognosis of breast cancer are investigated based on the experimental results. Experimental results confirm that the DT and LR models combined with SMOTE, CSC, and under-sampling generate consistently higher predictive performance than the original ones. Most of the time, DT and LR models combined with SMOTE and CSC use less informative burden/features when a feature selection method and a pruning technique are applied. LR is found to have better statistical power than DT in predicting five-year survivability. CSC is superior to SMOTE, under-sampling, bagging, and boosting to improve the prognostic performance of DT and LR.
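A condensed sketch of two of the combinations examined, SMOTE resampling and class-weighted (cost-sensitive style) classifiers, is given below using the imbalanced-learn package (an assumed dependency) and synthetic imbalanced data in place of the SEER records; class weighting stands in for the CSC meta-technique used in the paper.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced stand-in for five-year survivability data (~90% survivors).
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

models = {
    "DT + SMOTE": Pipeline([("smote", SMOTE(random_state=0)),
                            ("dt", DecisionTreeClassifier(max_depth=5,
                                                          random_state=0))]),
    "DT + class weights": DecisionTreeClassifier(max_depth=5,
                                                 class_weight="balanced",
                                                 random_state=0),
    "LR + class weights": LogisticRegression(max_iter=1000,
                                             class_weight="balanced"),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:20s} ROC AUC = {auc:.3f}")
```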
Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene
2015-05-01
In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
Jaspers, Arne; De Beéck, Tim Op; Brink, Michel S; Frencken, Wouter G P; Staes, Filip; Davis, Jesse J; Helsen, Werner F
2018-05-01
Machine learning may contribute to understanding the relationship between the external load and internal load in professional soccer. Therefore, the relationship between external load indicators (ELIs) and the rating of perceived exertion (RPE) was examined using machine learning techniques on a group and individual level. Training data were collected from 38 professional soccer players over 2 seasons. The external load was measured using global positioning system technology and accelerometry. The internal load was obtained using the RPE. Predictive models were constructed using 2 machine learning techniques, artificial neural networks and least absolute shrinkage and selection operator (LASSO) models, and 1 naive baseline method. The predictions were based on a large set of ELIs. Using each technique, 1 group model involving all players and 1 individual model for each player were constructed. These models' performance on predicting the reported RPE values for future training sessions was compared with the naive baseline's performance. Both the artificial neural network and LASSO models outperformed the baseline. In addition, the LASSO model made more accurate predictions for the RPE than did the artificial neural network model. Furthermore, decelerations were identified as important ELIs. Regardless of the applied machine learning technique, the group models resulted in equivalent or better predictions for the reported RPE values than the individual models. Machine learning techniques may have added value in predicting RPE for future sessions to optimize training design and evaluation. These techniques may also be used in conjunction with expert knowledge to select key ELIs for load monitoring.
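As a rough illustration of the modelling comparison described above, the sketch below fits a LASSO model and a small neural network to predict session RPE from a set of external-load indicators and compares both against a naive mean-predicting baseline. The data, feature count, and network size are assumptions for illustration, not the study's.

```python
# Hedged sketch of the group-level comparison: LASSO and an ANN versus a naive
# baseline that always predicts the training-set mean RPE.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 600                                   # pretend training sessions
X = rng.normal(size=(n, 15))              # 15 ELIs (distance, sprints, decelerations, ...)
rpe = 5 + X[:, 0] - 0.8 * X[:, 3] + rng.normal(scale=0.7, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, rpe, random_state=0)

baseline = np.full_like(y_te, y_tr.mean())                 # naive method
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                 random_state=0)).fit(X_tr, y_tr)

print("baseline MAE:", round(mean_absolute_error(y_te, baseline), 2))
print("LASSO MAE   :", round(mean_absolute_error(y_te, lasso.predict(X_te)), 2))
print("ANN MAE     :", round(mean_absolute_error(y_te, ann.predict(X_te)), 2))
```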
NASA Astrophysics Data System (ADS)
Zink, Frank Edward
The detection and classification of pulmonary nodules is of great interest in chest radiography. Nodules are often indicative of primary cancer, and their detection is particularly important in asymptomatic patients. The ability to classify nodules as calcified or non-calcified is important because calcification is a positive indicator that the nodule is benign. Dual-energy methods offer the potential to improve both the detection and classification of nodules by allowing the formation of material-selective images. Tissue-selective images can improve detection by virtue of the elimination of obscuring rib structure. Bone-selective images are essentially calcium images, allowing classification of the nodule. A dual-energy technique is introduced which uses a computed radiography system to acquire dual-energy chest radiographs in a single exposure. All aspects of the dual-energy technique are described, with particular emphasis on scatter-correction, beam-hardening correction, and noise-reduction algorithms. The adaptive noise-reduction algorithm employed improves material-selective signal-to-noise ratio by up to a factor of seven with minimal sacrifice in selectivity. A clinical comparison study is described, undertaken to compare the dual-energy technique to conventional chest radiography for the tasks of nodule detection and classification. Observer performance data were collected using the Free Response Observer Characteristic (FROC) method and the bi-normal Alternative FROC (AFROC) performance model. Results of the comparison study, analyzed using two common multiple observer statistical models, showed that the dual-energy technique was superior to conventional chest radiography for detection of nodules at a statistically significant level (p < .05). Discussion of the comparison study emphasizes the unique combination of data collection and analysis techniques employed, as well as the limitations of comparison techniques in the larger context of technology assessment.
Selective laser sintering in biomedical engineering.
Mazzoli, Alida
2013-03-01
Selective laser sintering (SLS) is a solid freeform fabrication technique, developed by Carl Deckard for his master's thesis at the University of Texas and patented in 1989. SLS manufacturing is a technique that produces physical models through selective solidification of a variety of fine powders. SLS technology is receiving a great amount of attention in the clinical field. In this paper the characteristic features of SLS and the materials that have been developed for it are reviewed, together with a discussion of the principles of the above-mentioned manufacturing technique. The applications of SLS in tissue engineering, and at large in the biomedical field, are reviewed and discussed.
Curve fitting and modeling with splines using statistical variable selection techniques
NASA Technical Reports Server (NTRS)
Smith, P. L.
1982-01-01
The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
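A minimal sketch of backward elimination for knot selection is given below. It assumes a truncated-power spline basis and ordinary least squares p-values via statsmodels rather than the report's B-spline basis and FORTRAN programs, so it illustrates the general procedure only.

```python
# Hedged sketch of knot selection by backward elimination: drop the least
# significant knot term until all remaining knot coefficients are significant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)

knots = list(np.linspace(1, 9, 9))        # generous set of candidate knots

def design(x, knots, degree=3):
    cols = [x ** d for d in range(degree + 1)]                  # polynomial part
    cols += [np.clip(x - k, 0, None) ** degree for k in knots]  # knot terms
    return np.column_stack(cols)

while knots:
    fit = sm.OLS(y, design(x, knots)).fit()
    knot_p = fit.pvalues[4:]              # p-values of the knot terms only
    worst = int(np.argmax(knot_p))
    if knot_p[worst] < 0.05:
        break                             # every remaining knot is significant
    del knots[worst]

print("retained knots:", [round(k, 2) for k in knots])
```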
Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...
Model for spectral and chromatographic data
Jarman, Kristin [Richland, WA]; Willse, Alan [Richland, WA]; Wahl, Karen [Richland, WA]; Wahl, Jon [Richland, WA]
2002-11-26
A method and apparatus using a spectral analysis technique are disclosed. In one form of the invention, probabilities are selected to characterize the presence (and in another form, also a quantification of a characteristic) of peaks in an indexed data set for samples that match a reference species, and other probabilities are selected for samples that do not match the reference species. An indexed data set is acquired for a sample, and a determination is made according to techniques exemplified herein as to whether the sample matches or does not match the reference species. When quantification of peak characteristics is undertaken, the model is appropriately expanded, and the analysis accounts for the characteristic model and data. Further techniques are provided to apply the methods and apparatuses to process control, cluster analysis, hypothesis testing, analysis of variance, and other procedures involving multiple comparisons of indexed data.
Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework
NASA Astrophysics Data System (ADS)
Achieng, K. O.; Zhu, J.
2017-12-01
There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits data. However, model selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - a recursive two-parameter digital filter, also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ten-model RCM ensemble jointly simulate surface runoff when averaging over all the models using BMA, given a priori surface runoff? What are the effects of model uncertainty on surface runoff simulation?
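The baseflow separation step lends itself to a short worked example. The sketch below implements the standard form of the Eckhardt recursive digital filter with illustrative parameter values (alpha, BFI_max) and a toy hydrograph; the values calibrated for the actual study basins are not given in the abstract.

```python
# Hedged sketch of the two-parameter recursive digital filter (Eckhardt filter)
# used to split total streamflow into baseflow and surface runoff.
import numpy as np

def eckhardt_baseflow(q, alpha=0.98, bfi_max=0.80):
    """Return baseflow series b with b[t] <= q[t] for total streamflow q."""
    b = np.empty_like(q, dtype=float)
    b[0] = q[0] * bfi_max
    for t in range(1, len(q)):
        b[t] = ((1.0 - bfi_max) * alpha * b[t - 1]
                + (1.0 - alpha) * bfi_max * q[t]) / (1.0 - alpha * bfi_max)
        b[t] = min(b[t], q[t])            # baseflow cannot exceed streamflow
    return b

q = np.array([5.0, 5.2, 9.0, 14.0, 11.0, 8.0, 6.5, 5.8, 5.4, 5.2])  # toy hydrograph
baseflow = eckhardt_baseflow(q)
surface_runoff = q - baseflow             # the a priori runoff for the BMA step
print(np.round(surface_runoff, 2))
```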
Low Speed Rotor/Fuselage Interactional Aerodynamics
NASA Technical Reports Server (NTRS)
Barnwell, Richard W.; Prichard, Devon S.
2003-01-01
This report presents work performed under a Cooperative Research Agreement between Virginia Tech and the NASA Langley Research Center. The work involved development of computational techniques for modeling helicopter rotor/airframe aerodynamic interaction. A brief overview of the problem is presented, the modeling techniques are described, and selected example calculations are briefly discussed.
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick
2007-04-01
High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite TM Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
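To make the PDE branch of the void-filling idea concrete, the sketch below relaxes the missing cells of a toy DSM toward the discrete Laplace (harmonic) solution defined by the void boundary, which is the heat-equation style of inpainting mentioned above. It is an illustrative interpretation of the approach, not the Harris implementation, and the exemplar-based branch is omitted.

```python
# Hedged sketch: fill DSM voids by iterating a 4-neighbour average (discrete
# Laplace relaxation) over the masked cells only.
import numpy as np

def fill_voids_laplace(dsm, void_mask, iters=2000):
    """Iteratively replace masked cells with the mean of their 4 neighbours."""
    z = dsm.copy()
    z[void_mask] = dsm[~void_mask].mean()           # initial guess
    for _ in range(iters):
        neigh = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                        np.roll(z, 1, 1) + np.roll(z, -1, 1))
        z[void_mask] = neigh[void_mask]             # update void cells only
    return z

# Toy DSM: a smooth ramp with a square void punched into it.
y, x = np.mgrid[0:50, 0:50]
dsm = 100 + 0.5 * x + 0.2 * y
mask = np.zeros_like(dsm, dtype=bool)
mask[20:30, 20:30] = True
filled = fill_voids_laplace(dsm, mask)
print("max fill error:", float(np.abs(filled - dsm)[mask].max()))
```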
Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer
NASA Astrophysics Data System (ADS)
Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad
2017-04-01
Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges including feature redundancy, unbalanced data, and small sample sizes have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were optimum predictive modeling and feature selection methods, respectively, for achieving high prognosis performance. To address unbalanced data, Synthetic Minority Over-sampling technique was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors in affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
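A minimal sketch of the configuration the study identifies as optimal (SMOTE for class imbalance, PCA for feature redundancy, and a Random Forest classifier) is shown below using scikit-learn and imbalanced-learn on synthetic stand-in features; the actual radiomic features and endpoints are not reproduced.

```python
# Hedged sketch of a radiomics-style pipeline: SMOTE + PCA + Random Forest,
# evaluated with stratified cross-validation. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline   # keeps SMOTE inside the CV folds

X, y = make_classification(n_samples=112, n_features=100, n_informative=10,
                           weights=[0.75, 0.25], random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),
    ("pca", PCA(n_components=10)),
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print("AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```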
Ahmadi, Mehdi; Shahlaei, Mohsen
2015-01-01
P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure–activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7−7−1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model suggested is robust and satisfactory. PMID:26600858
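The PCA-to-ANN chain described above can be sketched with scikit-learn as below. The genetic-algorithm selection of the seven principal components is simplified to taking the first seven components, and the descriptor matrix is synthetic, so this is only an outline of the workflow under those assumptions.

```python
# Hedged sketch of the PCA -> ANN modelling chain with a 7-7-1 feed-forward
# network; synthetic descriptors stand in for the 49 purine derivatives.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(49, 120))                 # 49 molecules x 120 descriptors
activity = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=49)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=7),                       # data reduction to 7 PCs
    MLPRegressor(hidden_layer_sizes=(7,), max_iter=20000, random_state=0),
)
q2 = cross_val_score(model, X, activity, cv=5, scoring="r2")
print("cross-validated R2:", np.round(q2.mean(), 2))
```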
ERIC Educational Resources Information Center
Wholeben, Brent Edward
This report, describing the use of operations research techniques to determine which courseware packages or microcomputer systems best address varied instructional objectives, focuses on the MICROPIK model, a highly structured evaluation technique for making such complex instructional decisions. MICROPIK is a multiple alternatives model (MAA)…
Application of an Optimal Tuner Selection Approach for On-Board Self-Tuning Engine Models
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Armstrong, Jeffrey B.; Garg, Sanjay
2012-01-01
An enhanced design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented in this paper. It specifically addresses the under-determined estimation problem, in which there are more unknown parameters than available sensor measurements. This work builds upon an existing technique for systematically selecting a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. While the existing technique was optimized for open-loop engine operation at a fixed design point, in this paper an alternative formulation is presented that enables the technique to be optimized for an engine operating under closed-loop control throughout the flight envelope. The theoretical Kalman filter mean squared estimation error at a steady-state closed-loop operating point is derived, and the tuner selection approach applied to minimize this error is discussed. A technique for constructing a globally optimal tuning parameter vector, which enables full-envelope application of the technology, is also presented, along with design steps for adjusting the dynamic response of the Kalman filter state estimates. Results from the application of the technique to linear and nonlinear aircraft engine simulations are presented and compared to the conventional approach of tuner selection. The new methodology is shown to yield a significant improvement in on-line Kalman filter estimation accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, David, E-mail: dhthomas@mednet.ucla.edu; Lamb, James; White, Benjamin
2014-05-01
Purpose: To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Methods and Materials: Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Results: Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. Conclusions: The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques.
Thomas, David; Lamb, James; White, Benjamin; Jani, Shyam; Gaudio, Sergio; Lee, Percy; Ruan, Dan; McNitt-Gray, Michael; Low, Daniel
2014-05-01
To develop a novel 4-dimensional computed tomography (4D-CT) technique that exploits standard fast helical acquisition, a simultaneous breathing surrogate measurement, deformable image registration, and a breathing motion model to remove sorting artifacts. Ten patients were imaged under free-breathing conditions 25 successive times in alternating directions with a 64-slice CT scanner using a low-dose fast helical protocol. An abdominal bellows was used as a breathing surrogate. Deformable registration was used to register the first image (defined as the reference image) to the subsequent 24 segmented images. Voxel-specific motion model parameters were determined using a breathing motion model. The tissue locations predicted by the motion model in the 25 images were compared against the deformably registered tissue locations, allowing a model prediction error to be evaluated. A low-noise image was created by averaging the 25 images deformed to the first image geometry, reducing statistical image noise by a factor of 5. The motion model was used to deform the low-noise reference image to any user-selected breathing phase. A voxel-specific correction was applied to correct the Hounsfield units for lung parenchyma density as a function of lung air filling. Images produced using the model at user-selected breathing phases did not suffer from sorting artifacts common to conventional 4D-CT protocols. The mean prediction error across all patients between the breathing motion model predictions and the measured lung tissue positions was determined to be 1.19 ± 0.37 mm. The proposed technique can be used as a clinical 4D-CT technique. It is robust in the presence of irregular breathing and allows the entire imaging dose to contribute to the resulting image quality, providing sorting artifact-free images at a patient dose similar to or less than current 4D-CT techniques. Copyright © 2014 Elsevier Inc. All rights reserved.
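The voxel-specific motion model step can be illustrated with a small numerical sketch. The version below assumes a linear model in the bellows amplitude and its rate of change (a common form for such surrogate-driven models, not necessarily the exact model of the study) and fits the two parameters of every voxel by least squares on simulated registration output.

```python
# Hedged sketch: per-voxel displacement regressed on surrogate amplitude and
# rate; the fitted parameters predict tissue position at any breathing phase.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 25, 1000
amplitude = rng.uniform(0, 1, n_scans)            # bellows surrogate per scan
rate = np.gradient(amplitude)                     # crude surrogate rate

# Simulated registered displacements: alpha * amplitude + beta * rate + noise.
alpha_true = rng.normal(5, 1, n_voxels)           # mm per unit amplitude
beta_true = rng.normal(1, 0.3, n_voxels)
disp = (amplitude[:, None] * alpha_true + rate[:, None] * beta_true
        + rng.normal(scale=0.5, size=(n_scans, n_voxels)))

# Least-squares fit of the two motion parameters for every voxel at once.
A = np.column_stack([amplitude, rate])            # (25, 2) design matrix
params, *_ = np.linalg.lstsq(A, disp, rcond=None) # (2, n_voxels)

pred = A @ params
print("mean |prediction error| (mm):", round(float(np.abs(pred - disp).mean()), 2))
```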
Four-dimensional modeling of recent vertical movements in the area of the southern California uplift
Vanicek, Petr; Elliot, Michael R.; Castle, Robert O.
1979-01-01
This paper describes an analytical technique that utilizes scattered geodetic relevelings and tide-gauge records to portray Recent vertical crustal movements that may have been characterized by spasmodic changes in velocity. The technique is based on the fitting of a time-varying algebraic surface of prescribed degree to the geodetic data treated as tilt elements and to tide-gauge readings treated as point movements. Desired variations in time can be selected as any combination of powers of vertical movement velocity and episodic events. The state of the modeled vertical displacement can be shown for any number of dates for visual display. Statistical confidence limits of the modeled displacements, derived from the density of measurements in both space and time, line length, and accuracy of input data, are also provided. The capabilities of the technique are demonstrated on selected data from the region of the southern California uplift.
Remote sensing techniques aid in preattack planning for fire management
Lucy Anne Salazar
1982-01-01
Remote sensing techniques were investigated as an alternative for documenting selected preattack fire planning information. Locations of fuel models, road systems, and water sources were recorded by Landsat satellite imagery and aerial photography for a portion of the Six Rivers National Forest in northwestern California. The two fuel model groups used were from the...
The transferability of safety-driven access management models for application to other sites.
DOT National Transportation Integrated Search
2001-01-01
Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...
USDA-ARS?s Scientific Manuscript database
Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...
Liu, Guangkun; Kaushal, Nitin; Liu, Shaozhi; ...
2016-06-24
A recently introduced one-dimensional three-orbital Hubbard model displays orbital-selective Mott phases with exotic spin arrangements such as spin block states [J. Rincón et al., Phys. Rev. Lett. 112, 106405 (2014)]. In this paper we show that the constrained-path quantum Monte Carlo (CPQMC) technique can accurately reproduce the phase diagram of this multiorbital one-dimensional model, paving the way to future CPQMC studies in systems with more challenging geometries, such as ladders and planes. The success of this approach relies on using the Hartree-Fock technique to prepare the trial states needed in CPQMC. In addition, we study a simplified version of the model where the pair-hopping term is neglected and the Hund coupling is restricted to its Ising component. The corresponding phase diagrams are shown to be only mildly affected by the absence of these technically difficult-to-implement terms. This is confirmed by additional density matrix renormalization group and determinant quantum Monte Carlo calculations carried out for the same simplified model, with the latter displaying only mild fermion sign problems. Lastly, we conclude that these methods are able to capture quantitatively the rich physics of the several orbital-selective Mott phases (OSMP) displayed by this model, thus enabling computational studies of the OSMP regime in higher dimensions, beyond static or dynamic mean-field approximations.
Comparison of Three Optical Methods for Measuring Model Deformation
NASA Technical Reports Server (NTRS)
Burner, A. W.; Fleming, G. A.; Hoppe, J. C.
2000-01-01
The objective of this paper is to compare the current state-of-the-art of the following three optical techniques under study by NASA for measuring model deformation in wind tunnels: (1) video photogrammetry, (2) projection moire interferometry, and (3) the commercially available Optotrak system. An objective comparison of these three techniques should enable the selection of the best technique for a particular test undertaken at various NASA facilities. As might be expected, no one technique is best for all applications. The techniques are also not necessarily mutually exclusive and in some cases can be complementary to one another.
Impact of multicollinearity on small sample hydrologic regression models
NASA Astrophysics Data System (ADS)
Kroll, Charles N.; Song, Peter
2013-06-01
Often hydrologic regression models are developed with ordinary least squares (OLS) procedures. The use of OLS with highly correlated explanatory variables produces multicollinearity, which creates highly sensitive parameter estimators with inflated variances and improper model selection. It is not clear how to best address multicollinearity in hydrologic regression models. Here a Monte Carlo simulation is developed to compare four techniques to address multicollinearity: OLS, OLS with variance inflation factor screening (VIF), principal component regression (PCR), and partial least squares regression (PLS). The performance of these four techniques was observed for varying sample sizes, correlation coefficients between the explanatory variables, and model error variances consistent with hydrologic regional regression models. The negative effects of multicollinearity are magnified at smaller sample sizes, higher correlations between the variables, and larger model error variances (smaller R2). The Monte Carlo simulation indicates that if the true model is known, multicollinearity is present, and the estimation and statistical testing of regression parameters are of interest, then PCR or PLS should be employed. If the model is unknown, or if the interest is solely on model predictions, it is recommended that OLS be employed since using more complicated techniques did not produce any improvement in model performance. A leave-one-out cross-validation case study was also performed using low-streamflow data sets from the eastern United States. Results indicate that OLS with stepwise selection generally produces models across study regions with varying levels of multicollinearity that are as good as biased regression techniques such as PCR and PLS.
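A stripped-down version of such a Monte Carlo comparison is sketched below with scikit-learn: correlated predictors are generated repeatedly and the variability of a point prediction is compared for OLS, PCR, and PLS. Sample size, correlation, and the number of retained components are illustrative choices, not those of the study.

```python
# Hedged sketch of a Monte Carlo multicollinearity experiment comparing the
# prediction variance of OLS, PCR and PLS on highly correlated predictors.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
rho, n, reps = 0.95, 30, 500
cov = np.array([[1, rho, rho], [rho, 1, rho], [rho, rho, 1]])
beta_true = np.array([1.0, 0.5, 0.0])

preds = {"OLS": [], "PCR": [], "PLS": []}
x_new = np.array([[1.0, 1.0, 1.0]])
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(3), cov, size=n)
    y = X @ beta_true + rng.normal(scale=1.0, size=n)
    preds["OLS"].append(float(LinearRegression().fit(X, y).predict(x_new).ravel()[0]))
    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
    preds["PCR"].append(float(pcr.predict(x_new).ravel()[0]))
    pls = PLSRegression(n_components=2).fit(X, y)
    preds["PLS"].append(float(pls.predict(x_new).ravel()[0]))

for k, v in preds.items():
    print(k, "prediction variance:", round(float(np.var(v)), 3))
```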
Hypergraph Based Feature Selection Technique for Medical Diagnosis.
Somu, Nivethitha; Raman, M R Gauthama; Kirthivasan, Kannan; Sriram, V S Shankar
2016-11-01
The impact of internet and information systems across various domains has resulted in the substantial generation of multidimensional datasets. The use of data mining and knowledge discovery techniques to extract the information contained in multidimensional datasets plays a significant role in exploiting the full benefit they provide. The presence of a large number of features in high dimensional datasets incurs high computational cost in terms of computing power and time. Hence, feature selection techniques are commonly used to build robust machine learning models by selecting a subset of relevant features that retains the maximal information content of the original dataset. In this paper, a novel Rough Set based K - Helly feature selection technique (RSKHT), which hybridizes Rough Set Theory (RST) and the K - Helly property of hypergraph representation, is designed to identify the optimal feature subset or reduct for medical diagnostic applications. Experiments carried out using medical datasets from the UCI repository prove the dominance of RSKHT over other feature selection techniques with respect to reduct size, classification accuracy and time complexity. The performance of RSKHT was validated using the WEKA tool, which shows that RSKHT is computationally attractive and flexible over massive datasets.
Detection of Erroneous Payments Utilizing Supervised And Unsupervised Data Mining Techniques
2004-09-01
will look at which statistical analysis technique will work best in developing and enhancing existing erroneous payment models. Chapter I and II... payment models that are used for selection of records to be audited. The models are set up such that if two or more records have the same payment...Identification Number, Invoice Number and Delivery Order Number are not compared. The DM0102 Duplicate Payment Model will be analyzed in this thesis
Systematic study of source mask optimization and verification flows
NASA Astrophysics Data System (ADS)
Ben, Yu; Latypov, Azat; Chua, Gek Soon; Zou, Yi
2012-06-01
Source mask optimization (SMO) has emerged as a powerful resolution enhancement technique (RET) for advanced technology nodes. However, there is a plethora of flows and verification metrics in the field, confounding the end user of the technique. A systematic study of the different flows and their possible unification is missing. This contribution is intended to reveal the pros and cons of different SMO approaches and verification metrics, understand their commonality and differences, and provide a generic guideline for RET selection via SMO. The paper discusses three different types of variation that commonly arise in SMO, namely pattern preparation and selection, availability of a relevant OPC recipe for a freeform source, and finally the metrics used in source verification. Several pattern selection algorithms are compared and the advantages of systematic pattern selection algorithms are discussed. In the absence of a full resist model for SMO, an alternative SMO flow without a full resist model is reviewed. A preferred verification flow with the quality metrics of DOF and MEEF is examined.
Prediction of drug synergy in cancer using ensemble-based machine learning techniques
NASA Astrophysics Data System (ADS)
Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder
2018-04-01
Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic successes. Examination of different drug-drug interactions can be done using the drug synergy score, which requires efficient regression-based machine learning approaches to minimize prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet this requirement. However, these techniques individually do not provide significant accuracy for the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy are selected to develop the ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating a biased weighted aggregation (i.e. assigning larger weights to models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
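The biased-weighted aggregation step can be sketched generically as below: each member model is weighted in proportion to its validation accuracy (here the inverse RMSE) and the weighted average forms the ensemble prediction. Generic scikit-learn regressors and synthetic data stand in for the fuzzy and neuro-fuzzy members and the drug synergy scores.

```python
# Hedged sketch of accuracy-weighted ensembling of regression models.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=800, n_features=12, noise=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

members = [RandomForestRegressor(random_state=0),
           GradientBoostingRegressor(random_state=0),
           MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)]
weights, val_preds = [], []
for m in members:
    m.fit(X_tr, y_tr)
    p = m.predict(X_val)
    val_preds.append(p)
    weights.append(1.0 / np.sqrt(mean_squared_error(y_val, p)))  # better model, bigger weight
weights = np.array(weights) / np.sum(weights)

ensemble_pred = np.average(np.vstack(val_preds), axis=0, weights=weights)
print("ensemble RMSE:", round(float(np.sqrt(mean_squared_error(y_val, ensemble_pred))), 2))
```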
Dense mesh sampling for video-based facial animation
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
The paper describes an approach for selecting feature points on a three-dimensional triangle mesh obtained from several video footages using various techniques. This approach has a dual purpose. First, it allows the data stored for the purpose of facial animation to be minimized, so that instead of storing the position of each vertex in each frame, one could store only a small subset of vertices for each frame and calculate the positions of the others based on the subset. The second purpose is to select feature points that could be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that which can be achieved using marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footages using stereophotogrammetry.
An integrated fuzzy approach for strategic alliance partner selection in third-party logistics.
Erkayman, Burak; Gundogar, Emin; Yilmaz, Aysegul
2012-01-01
Outsourcing some logistic activities has become a useful strategy for companies in recent years. It allows firms to concentrate on their main issues and processes and provides the opportunity to improve logistics performance, reduce costs, and improve quality. Therefore, provider selection and evaluation in third-party logistics have become important activities for companies. Making a strategic decision like this is significantly hard and crucial. In this study we proposed a fuzzy multicriteria decision making (MCDM) approach to effectively select the most appropriate provider. First we identified the provider selection criteria and built the hierarchical structure of the decision model. After building the hierarchical structure, we determined the selection criteria weights by using the fuzzy analytical hierarchy process (AHP) technique. Then we applied the fuzzy technique for order preference by similarity to ideal solution (TOPSIS) to obtain the final rankings for the providers. Finally, an illustrative example is given to demonstrate the effectiveness of the proposed model.
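The ranking step can be illustrated with a crisp (non-fuzzy) TOPSIS sketch: given criteria weights, which here stand in for the output of the fuzzy-AHP stage, candidate providers are ranked by their closeness to the ideal solution. The decision matrix, weights, and criteria are illustrative; the fuzzy extensions used in the study are omitted for brevity.

```python
# Hedged sketch of TOPSIS ranking of third-party logistics providers.
import numpy as np

# Rows: providers A-D; columns: cost, quality, delivery performance, flexibility.
D = np.array([[250, 7, 0.92, 6],
              [300, 9, 0.95, 8],
              [270, 8, 0.90, 7],
              [320, 9, 0.97, 9]], dtype=float)
weights = np.array([0.35, 0.25, 0.25, 0.15])      # e.g. obtained from (fuzzy) AHP
benefit = np.array([False, True, True, True])     # cost is the only cost-type criterion

R = D / np.sqrt((D ** 2).sum(axis=0))             # vector normalisation
V = R * weights                                   # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)          # higher = closer to ideal

for name, c in sorted(zip("ABCD", closeness), key=lambda t: -t[1]):
    print(name, round(float(c), 3))
```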
NASA Astrophysics Data System (ADS)
Iqbal, M.; Islam, A.; Hossain, A.; Mustaque, S.
2016-12-01
Multi-Criteria Decision Making (MCDM) is an advanced analytical method for evaluating an appropriate result or decision in a multiple-criteria environment, and it has become a progressive analytical process for reaching a logical decision among conflicting alternatives. Geospatial approaches (e.g., remote sensing and GIS) are likewise advanced technical means of collecting, processing and analyzing diverse spatial data. GIS and remote sensing together with MCDM techniques provide a strong platform for solving complex decision-making problems, and this combination has been used very effectively for site selection in urban solid waste management. The most popular MCDM technique is the Weighted Linear Combination (WLC) method, while the Analytical Hierarchy Process (AHP) is another popular and consistent technique used worldwide for dependable decision making. Consequently, the main objective of this study is to improve an AHP model, as an MCDM technique, coupled with a Geographic Information System (GIS) to select a suitable landfill site for urban solid waste management. Here the AHP technique is used as an MCDM tool to select the most suitable landfill location. To protect the urban environment in a sustainable way, municipal waste requires an appropriate landfill site that accounts for the environmental, geological, social and technical aspects of the region. An MCDM model was generated from five criterion classes related to environmental, geological, social and technical factors using the AHP method, and the results were input into GIS to produce the final suitability map for urban solid waste management. The final suitable locations cover 12.2% of the study area, corresponding to 22.89 km2. The study area is the Keraniganj sub-district of Dhaka district in Bangladesh, a densely populated area that currently suffers from an unmanaged waste management system, particularly the lack of suitable landfill sites for waste dumping.
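The AHP weighting step that precedes the GIS overlay can be sketched numerically: a pairwise comparison matrix over the criterion classes is reduced to priority weights via its principal eigenvector, with a consistency check. The comparison values below are illustrative, not those elicited in the study, and the fifth class is left unspecified because the abstract names only four.

```python
# Hedged sketch of AHP priority weights and consistency ratio for 5 criteria.
import numpy as np

criteria = ["environmental", "geological", "social", "technical", "class 5 (unspecified)"]
A = np.array([[1,   3,   5,   4,   3],
              [1/3, 1,   3,   2,   2],
              [1/5, 1/3, 1,   1,   1/2],
              [1/4, 1/2, 1,   1,   1],
              [1/3, 1/2, 2,   1,   1]], dtype=float)   # reciprocal pairwise matrix

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                                   # priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)              # consistency index
cr = ci / 1.12                                    # Saaty's random index for n = 5
for c, wi in zip(criteria, w):
    print(f"{c:22s} {wi:.3f}")
print("consistency ratio:", round(float(cr), 3))
```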
Knowledge discovery in cardiology: A systematic literature review.
Kadi, I; Idri, A; Fernandez-Aleman, J L
2017-01-01
Data mining (DM) provides the methodology and technology needed to transform huge amounts of data into useful information for decision making. It is a powerful process employed to extract knowledge and discover new patterns embedded in large data sets. Data mining has been increasingly used in medicine, particularly in cardiology. In fact, DM applications can greatly benefit all those involved in cardiology, such as patients, cardiologists and nurses. The purpose of this paper is to review papers concerning the application of DM techniques in cardiology so as to summarize and analyze evidence regarding: (1) the DM techniques most frequently used in cardiology; (2) the performance of DM models in cardiology; (3) comparisons of the performance of different DM models in cardiology. We performed a systematic literature review of empirical studies on the application of DM techniques in cardiology published in the period between 1 January 2000 and 31 December 2015. A total of 149 articles published between 2000 and 2015 were selected, studied and analyzed according to the following criteria: DM techniques and performance of the approaches developed. The results obtained showed that a significant number of the studies selected used classification and prediction techniques when developing DM models. Neural networks, decision trees and support vector machines were identified as being the techniques most frequently employed when developing DM models in cardiology. Moreover, neural networks and support vector machines achieved the highest accuracy rates and were proved to be more efficient than other techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
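A generic sketch of output-error estimation is given below for a first-order top-oil temperature rise model: candidate parameters are used to simulate the full output trajectory, and the simulated-versus-measured output residual is minimized with scipy. The model form, parameter values, and load profile are assumptions for illustration, not MIT's actual model or the transformer data.

```python
# Hedged sketch of output-error parameter estimation for a first-order
# thermal model: tau * dT/dt = -T + K * load^2.
import numpy as np
from scipy.optimize import least_squares

dt = 60.0                                        # s, sample interval
t = np.arange(0, 8 * 3600, dt)
load = 0.6 + 0.4 * (t > 3 * 3600)                # per-unit load step

def simulate(theta, load):
    """Forward-Euler simulation of the top-oil temperature rise."""
    tau, K = theta
    T = np.zeros_like(load)
    for i in range(1, len(load)):
        T[i] = T[i - 1] + dt / tau * (-T[i - 1] + K * load[i] ** 2)
    return T

true = (5400.0, 40.0)                            # tau [s], K [deg C] (illustrative)
measured = simulate(true, load) + np.random.default_rng(0).normal(0, 0.3, t.size)

# Output error: minimize the difference between simulated and measured output.
res = least_squares(lambda th: simulate(th, load) - measured,
                    x0=[3600.0, 30.0], bounds=([600, 5], [36000, 100]))
print("estimated tau, K:", np.round(res.x, 1))
```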
ERIC Educational Resources Information Center
Davis, Laurie Laughlin; Pastor, Dena A.; Dodd, Barbara G.; Chiang, Claire; Fitzpatrick, Steven J.
2003-01-01
Examined the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system based on the partial credit model. Simulation results show the Sympson-Hetter technique can be used with minimal impact on measurement precision,…
Organic Binder Developments for Solid Freeform Fabrication
NASA Technical Reports Server (NTRS)
Cooper, Ken; Mobasher, Amir A.
2003-01-01
A number of rapid prototyping techniques are under development at Marshall Space Flight Center's (MSFC) National Center for Advanced Manufacturing Rapid Prototyping Laboratory. Commercial binder developments in creating solid models for rapid prototyping include: 1) Fused Deposition Modeling; 2) Three Dimensional Printing; 3) Selective Laser Sintering (SLS). This document describes these techniques developed by the private sector, as well as SLS undertaken by MSFC.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
A Parameter Subset Selection Algorithm for Mixed-Effects Models
Schmidt, Kathleen L.; Smith, Ralph C.
2016-01-01
Mixed-effects models are commonly used to statistically model phenomena that include attributes associated with a population or general underlying mechanism as well as effects specific to individuals or components of the general mechanism. This can include individual effects associated with data from multiple experiments. However, the parameterizations used to incorporate the population and individual effects are often unidentifiable in the sense that parameters are not uniquely specified by the data. As a result, the current literature focuses on model selection, by which insensitive parameters are fixed or removed from the model. Model selection methods that employ information criteria are applicable to both linear and nonlinear mixed-effects models, but such techniques are limited in that they are computationally prohibitive for large problems due to the number of possible models that must be tested. To limit the scope of possible models for model selection via information criteria, we introduce a parameter subset selection (PSS) algorithm for mixed-effects models, which orders the parameters by their significance. In conclusion, we provide examples to verify the effectiveness of the PSS algorithm and to test the performance of mixed-effects model selection that makes use of parameter subset selection.
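Parameter subset selection is often implemented by ordering parameters from a local sensitivity analysis, for example with QR factorization and column pivoting. The sketch below illustrates that generic idea on a toy model with a nearly non-influential parameter; it is not necessarily the specific algorithm introduced in the paper.

```python
# Hedged sketch of a generic parameter subset selection step: order parameters
# by QR with column pivoting applied to the finite-difference sensitivity matrix.
import numpy as np
from scipy.linalg import qr

def sensitivity_matrix(model, theta, t, h=1e-6):
    """Finite-difference sensitivities d(model)/d(theta_j), stacked as columns."""
    base = model(theta, t)
    cols = []
    for j in range(len(theta)):
        pert = np.array(theta, dtype=float)
        pert[j] += h * max(1.0, abs(pert[j]))
        cols.append((model(pert, t) - base) / (pert[j] - theta[j]))
    return np.column_stack(cols)

# Toy model with a nearly non-influential third parameter c.
def model(theta, t):
    a, b, c = theta
    return a * np.exp(-b * t) + 1e-4 * c * t

t = np.linspace(0, 5, 50)
S = sensitivity_matrix(model, np.array([2.0, 0.8, 1.0]), t)
_, _, piv = qr(S, pivoting=True)
print("parameters ordered most -> least identifiable:", piv)
```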
Data Mining of Macromolecular Structures.
van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P
2016-01-01
The use of macromolecular structures is widespread for a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques on these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process of selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we will first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we will provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
An improved switching converter model. Ph.D. Thesis. Final Report
NASA Technical Reports Server (NTRS)
Shortt, D. J.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters in the continuous mode and discontinuous mode was done by averaging and discrete sampling techniques. A model was developed by combining these two techniques. This model, the discrete average model, accurately predicts the envelope of the output voltage and is easy to implement in circuit and state variable forms. The proposed model is shown to be dependent on the type of duty cycle control. The proper selection of the power stage model, between average and discrete average, is largely a function of the error processor in the feedback loop. The accuracy of the measurement data taken by a conventional technique is affected by the conditions at which the data is collected.
ERIC Educational Resources Information Center
Leow, Christine; Wen, Xiaoli; Korfmacher, Jon
2015-01-01
This article compares regression modeling and propensity score analysis as different types of statistical techniques used in addressing selection bias when estimating the impact of two-year versus one-year Head Start on children's school readiness. The analyses were based on the national Head Start secondary dataset. After controlling for…
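To make the contrast concrete, the sketch below estimates a treatment effect on synthetic data both by covariate-adjusted regression and by inverse-probability weighting with an estimated propensity score (one common way of using propensity scores; matching and stratification are alternatives). The variables are illustrative and do not represent the Head Start dataset.

```python
# Hedged sketch: regression adjustment versus propensity-score weighting.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 3))                          # family/child covariates
p_treat = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
treat = rng.binomial(1, p_treat)                     # e.g. two-year vs one-year
outcome = 0.3 * treat + x @ np.array([0.5, 0.4, 0.2]) + rng.normal(size=n)

# 1) Regression adjustment: include covariates directly in the outcome model.
X_reg = sm.add_constant(np.column_stack([treat, x]))
print("regression estimate:", round(sm.OLS(outcome, X_reg).fit().params[1], 3))

# 2) Propensity-score weighting: model treatment, then weight by inverse probability.
ps = sm.Logit(treat, sm.add_constant(x)).fit(disp=0).predict()
w = treat / ps + (1 - treat) / (1 - ps)              # IPW weights
wls = sm.WLS(outcome, sm.add_constant(treat), weights=w).fit()
print("IPW estimate       :", round(wls.params[1], 3))
```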
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes, to train and validate empirical models, seem not to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, by creating latent variables or by shrinking parameter coefficients. This paper is proposing a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra, to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
Maloney, Kelly O.; Schmid, Matthias; Weller, Donald E.
2012-01-01
Issues with ecological data (e.g. non-normality of errors, nonlinear relationships and autocorrelation of variables) and modelling (e.g. overfitting, variable selection and prediction) complicate regression analyses in ecology. Flexible models, such as generalized additive models (GAMs), can address data issues, and machine learning techniques (e.g. gradient boosting) can help resolve modelling issues. Gradient boosted GAMs do both. Here, we illustrate the advantages of this technique using data on benthic macroinvertebrates and fish from 1573 small streams in Maryland, USA.
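Gradient boosting of a GAM can be illustrated with a small componentwise boosting loop: at each step a univariate smoother is fitted to the current residuals for every covariate and only the best-fitting one is added, which performs variable selection and yields additive smooth effects. The sketch below uses scipy splines and toy data as a stand-in for the mboost-style machinery and the stream data used in the study.

```python
# Hedged sketch of componentwise gradient boosting with smooth base learners.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
n, p = 400, 6
X = rng.uniform(-2, 2, size=(n, p))
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)

nu, n_iter = 0.1, 150                      # learning rate and boosting steps
fit = np.full(n, y.mean())
selected = []
for _ in range(n_iter):
    resid = y - fit
    best_j, best_pred, best_sse = None, None, np.inf
    for j in range(p):                     # try a smoother on each covariate
        order = np.argsort(X[:, j])
        spl = UnivariateSpline(X[order, j], resid[order], k=3, s=n)
        pred = spl(X[:, j])
        sse = np.sum((resid - pred) ** 2)
        if sse < best_sse:
            best_j, best_pred, best_sse = j, pred, sse
    fit += nu * best_pred                  # update with the best base learner only
    selected.append(best_j)

print("covariates selected by boosting:", sorted(set(selected)))
```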
Technique Selectively Represses Immune System
... from attacking myelin in a mouse model of multiple sclerosis. ... devised a way to successfully treat symptoms resembling multiple sclerosis in a mouse model. With further development, the ...
Muddukrishna, B S; Pai, Vasudev; Lobo, Richard; Pai, Aravinda
2017-11-22
In the present study, five important binary fingerprinting techniques were used to model novel flavones for the selective inhibition of Tankyrase I. Of the fingerprints used, the fingerprint atom pairs resulted in a statistically significant 2D QSAR model using a kernel-based partial least squares regression method. This model indicates that the presence of electron-donating groups contributes positively to activity, whereas the presence of electron-withdrawing groups contributes negatively to activity. This model could be used to develop more potent as well as selective analogues for the inhibition of Tankyrase I. Schematic representation of the 2D QSAR workflow.
Size reduction techniques for vital compliant VHDL simulation models
Rich, Marvin J.; Misra, Ashutosh
2006-08-01
A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that correspond to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1993-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
Techniques for optimal crop selection in a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Mccormack, Ann; Finn, Cory; Dunsky, Betsy
1992-01-01
A Controlled Ecological Life Support System (CELSS) utilizes a plant's natural ability to regenerate air and water while being grown as a food source in a closed life support system. Current plant research is directed toward obtaining quantitative empirical data on the regenerative ability of each species of plant and the system volume and power requirements. Two techniques were adapted to optimize crop species selection while at the same time minimizing the system volume and power requirements. Each allows the level of life support supplied by the plants to be selected, as well as other system parameters. The first technique uses decision analysis in the form of a spreadsheet. The second method, which is used as a comparison with and validation of the first, utilizes standard design optimization techniques. Simple models of plant processes are used in the development of these methods.
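The crop-selection problem described above can be framed, in simplified form, as a small linear program: choose growing areas that meet life-support targets while minimizing a weighted volume-plus-power cost. The sketch below uses scipy's linprog with invented coefficients and requirements purely for illustration; it stands in for, and is not equivalent to, the spreadsheet decision-analysis and design-optimization tools of the study.

```python
# Hedged sketch: crop area selection as a linear program (all values hypothetical).
import numpy as np
from scipy.optimize import linprog

crops = ["wheat", "potato", "soybean", "lettuce"]
o2 = np.array([0.040, 0.035, 0.030, 0.020])       # kg O2 / day per m2
water = np.array([2.0, 1.8, 1.5, 1.2])            # L condensate / day per m2
kcal = np.array([250, 300, 220, 40])              # kcal / day per m2
volume = np.array([0.8, 0.6, 0.9, 0.3])           # m3 of chamber per m2 of crop
power = np.array([0.35, 0.30, 0.40, 0.25])        # kW per m2

cost = volume + 2.0 * power                        # weighted system cost to minimize
# Crew requirements (>=), expressed as <= constraints for linprog.
A_ub = -np.vstack([o2, water, kcal])
b_ub = -np.array([3.3, 100.0, 11000.0])           # hypothetical 4-person crew needs

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
for name, area in zip(crops, res.x):
    print(f"{name:8s} {area:7.1f} m2")
```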
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-01-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible. PMID:25745272
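As a point of reference for the brute-force numerical evaluation of BME mentioned above, the following minimal Python sketch estimates the evidence of a toy one-parameter linear model by averaging the likelihood over prior samples and compares it with a BIC-type approximation. The toy data, prior and noise level are assumptions for illustration, not values from the study.

    # Minimal sketch: brute-force Monte Carlo estimate of Bayesian model evidence (BME)
    # for a toy linear model y = a*x + noise, with a Gaussian prior on the slope.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y_obs = 2.0 * x + rng.normal(0.0, 0.1, x.size)    # synthetic "observed" data
    sigma = 0.1                                        # known measurement error

    # BME = integral of the likelihood over the prior; estimate it by averaging the
    # likelihood over prior samples (simple, but expensive for realistic models).
    prior_samples = rng.normal(0.0, 5.0, 100_000)      # prior: a ~ N(0, 5^2)
    log_L = (-0.5 * np.sum(((y_obs - prior_samples[:, None] * x) / sigma) ** 2, axis=1)
             - x.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
    log_bme = np.logaddexp.reduce(log_L) - np.log(log_L.size)   # stable log-mean-exp

    # BIC-type approximation for comparison, using the best prior sample as a crude
    # stand-in for the maximum-likelihood estimate (k = 1 parameter, n = 20 points).
    bic = -2.0 * log_L.max() + 1.0 * np.log(x.size)
    print(f"log BME (Monte Carlo): {log_bme:.2f}, -BIC/2 approximation: {-bic / 2:.2f}")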
SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH
While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...
Design and additive manufacture for flow chemistry.
Capel, Andrew J; Edmondson, Steve; Christie, Steven D R; Goodridge, Ruth D; Bibb, Richard J; Thurstans, Matthew
2013-12-07
We review the use of additive manufacturing (AM) as a novel manufacturing technique for the production of milli-scale reactor systems. Five well-developed additive manufacturing techniques, namely stereolithography (SL), multi-jet modelling (MJM), selective laser melting (SLM), laser sintering (LS) and fused deposition modelling (FDM), were used to manufacture a number of miniaturised reactors, which were tested using a range of organic and inorganic reactions.
NASA Astrophysics Data System (ADS)
Munahefi, D. N.; Waluya, S. B.; Rochmad
2018-03-01
The purpose of this research was to identify the effectiveness of Problem Based Learning (PBL) models based on Self Regulated Learning (SRL) on mathematical creative thinking ability and to analyze the mathematical creative thinking of high school students when solving mathematical problems. The population of the study was grade X students of SMA N 3 Klaten. The research method was sequential explanatory. In the quantitative stage, simple random sampling was used to select two classes: an experimental class taught with the PBL model based on SRL and a control class taught with an expository model. In the qualitative stage, non-probability sampling was used to select three students each from the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. Students at the low academic level achieved the fluency and flexibility aspects. Students at the medium academic level achieved fluency and flexibility well, but their originality was not yet well structured. Students at the high academic level also reached the originality aspect.
Posada, David; Buckley, Thomas R
2004-10-01
Model selection is a topic of special relevance in molecular phylogenetics that affects many, if not all, stages of phylogenetic inference. Here we discuss some fundamental concepts and techniques of model selection in the context of phylogenetics. We start by reviewing different aspects of the selection of substitution models in phylogenetics from a theoretical, philosophical and practical point of view, and summarize this comparison in table format. We argue that the most commonly implemented model selection approach, the hierarchical likelihood ratio test, is not the optimal strategy for model selection in phylogenetics, and that approaches like the Akaike Information Criterion (AIC) and Bayesian methods offer important advantages. In particular, the latter two methods are able to simultaneously compare multiple nested or nonnested models, assess model selection uncertainty, and allow for the estimation of phylogenies and model parameters using all available models (model-averaged inference or multimodel inference). We also describe how the relative importance of the different parameters included in substitution models can be depicted. To illustrate some of these points, we have applied AIC-based model averaging to 37 mitochondrial DNA sequences from the subgenus Ohomopterus (genus Carabus) ground beetles described by Sota and Vogler (2001).
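To make the AIC-based model averaging step concrete, the short Python sketch below converts hypothetical AIC scores for a few substitution models into Akaike weights; the model names and scores are placeholders, not values from the beetle data set.

    # Minimal sketch of AIC-based model averaging: convert AIC scores for competing
    # substitution models into Akaike weights (relative model probabilities).
    import numpy as np

    models = ["JC69", "HKY85", "GTR", "GTR+G"]
    aic = np.array([10250.3, 10120.7, 10098.4, 10051.9])   # hypothetical AIC scores

    delta = aic - aic.min()                  # AIC differences relative to the best model
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()                 # Akaike weights sum to 1

    for name, w in zip(models, weights):
        print(f"{name:7s} Akaike weight = {w:.3f}")

    # A model-averaged estimate of a shared parameter is the weight-sum of the per-model
    # estimates, and the 95% confidence set of models is the smallest set whose
    # cumulative weight reaches 0.95.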
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine's performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filter-based estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
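The sketch below illustrates the underlying trade-off in Python under simplifying assumptions: it scores candidate tuner subsets by the trace of the steady-state Kalman error covariance obtained from the discrete algebraic Riccati equation and keeps the best-scoring subset. The system matrices are random placeholders rather than an engine model, and the technique described above constructs an optimal linear combination of health parameters rather than a plain subset, so this is only a conceptual illustration.

    # Hedged sketch: with fewer sensors than health parameters, evaluate candidate
    # tuner subsets by the steady-state Kalman error covariance and keep the subset
    # with the smallest theoretical mean-squared estimation error.
    import itertools
    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(1)
    n_health, n_sensors, n_tuners = 6, 3, 3
    A_full = 0.9 * np.eye(n_health)                     # slow health-parameter dynamics
    C_full = rng.normal(size=(n_sensors, n_health))     # sensor influence coefficients
    Q_full = 0.01 * np.eye(n_health)
    R = 0.1 * np.eye(n_sensors)

    best = None
    for subset in itertools.combinations(range(n_health), n_tuners):
        idx = list(subset)
        A, C, Q = A_full[np.ix_(idx, idx)], C_full[:, idx], Q_full[np.ix_(idx, idx)]
        # Steady-state a priori error covariance from the discrete algebraic Riccati equation.
        P = solve_discrete_are(A.T, C.T, Q, R)
        mse = np.trace(P)                               # proxy for the estimation error
        if best is None or mse < best[1]:
            best = (subset, mse)

    print(f"best tuner subset: {best[0]}, trace of error covariance: {best[1]:.4f}")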
Naccarato, Attilio; Furia, Emilia; Sindona, Giovanni; Tagarelli, Antonio
2016-09-01
Four class-modeling techniques (soft independent modeling of class analogy (SIMCA), unequal dispersed classes (UNEQ), potential functions (PF), and multivariate range modeling (MRM)) were applied to multielement distribution to build chemometric models able to authenticate chili pepper samples grown in Calabria with respect to those grown outside of Calabria. The multivariate techniques were applied by considering both all the variables (32 elements, Al, As, Ba, Ca, Cd, Ce, Co, Cr, Cs, Cu, Dy, Fe, Ga, La, Li, Mg, Mn, Na, Nd, Ni, Pb, Pr, Rb, Sc, Se, Sr, Tl, Tm, V, Y, Yb, Zn) and variables selected by means of stepwise linear discriminant analysis (S-LDA). In the first case, satisfactory and comparable results in terms of CV efficiency are obtained with the use of SIMCA and MRM (82.3 and 83.2% respectively), whereas MRM performs better than SIMCA in terms of forced model efficiency (96.5%). The selection of variables by S-LDA made it possible to build models characterized, in general, by a higher efficiency. MRM provided again the best results for CV efficiency (87.7% with an effective balance of sensitivity and specificity) as well as forced model efficiency (96.5%). Copyright © 2016 Elsevier Ltd. All rights reserved.
Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection
Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole
2016-01-01
Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input, in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
Managing distribution changes in time series prediction
NASA Astrophysics Data System (ADS)
Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.
2006-07-01
When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the hypernormal distribution recently proposed. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.
Uniting statistical and individual-based approaches for animal movement modelling.
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to the practical impossibility of accessing internal states through field work and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method, which combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster for parameterization than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM model was validated using both k-fold cross-validation and emergent patterns validation and was tested for two different scenarios, with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems.
Uniting Statistical and Individual-Based Approaches for Animal Movement Modelling
Latombe, Guillaume; Parrott, Lael; Basille, Mathieu; Fortin, Daniel
2014-01-01
The dynamic nature of their internal states and the environment directly shape animals' spatial behaviours and give rise to emergent properties at broader scales in natural systems. However, integrating these dynamic features into habitat selection studies remains challenging, due to the practical impossibility of accessing internal states through field work and the inability of current statistical models to produce dynamic outputs. To address these issues, we developed a robust method, which combines statistical and individual-based modelling. Using a statistical technique for forward modelling of the IBM has the advantage of being faster for parameterization than a pure inverse modelling technique and allows for robust selection of parameters. Using GPS locations from caribou monitored in Québec, caribou movements were modelled based on generative mechanisms accounting for dynamic variables at a low level of emergence. These variables were accessed by replicating real individuals' movements in parallel sub-models, and movement parameters were then empirically parameterized using Step Selection Functions. The final IBM model was validated using both k-fold cross-validation and emergent patterns validation and was tested for two different scenarios, with varying hardwood encroachment. Our results highlighted a functional response in habitat selection, which suggests that our method was able to capture the complexity of the natural system, and adequately provided projections on future possible states of the system in response to different management plans. This is especially relevant for testing the long-term impact of scenarios corresponding to environmental configurations that have yet to be observed in real systems. PMID:24979047
Interactive shape metamorphosis
NASA Technical Reports Server (NTRS)
Chen, David T.; State, Andrei; Banks, David
1994-01-01
A technique for controlled metamorphosis between surfaces in 3-space is described. Well-understood techniques to produce shape metamorphosis between models in a 2D parametric space are applied. The user selects morphable features interactively, and the morphing process executes in real time on a high-performance graphics multicomputer.
Finite volume model for two-dimensional shallow environmental flow
Simoes, F.J.M.
2011-01-01
This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
Covariate selection with group lasso and doubly robust estimation of causal effects
Koch, Brandon; Vock, David M.; Wolfson, Julian
2017-01-01
Summary The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this paper, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. PMID:28636276
Covariate selection with group lasso and doubly robust estimation of causal effects.
Koch, Brandon; Vock, David M; Wolfson, Julian
2018-03-01
The efficiency of doubly robust estimators of the average causal effect (ACE) of a treatment can be improved by including in the treatment and outcome models only those covariates which are related to both treatment and outcome (i.e., confounders) or related only to the outcome. However, it is often challenging to identify such covariates among the large number that may be measured in a given study. In this article, we propose GLiDeR (Group Lasso and Doubly Robust Estimation), a novel variable selection technique for identifying confounders and predictors of outcome using an adaptive group lasso approach that simultaneously performs coefficient selection, regularization, and estimation across the treatment and outcome models. The selected variables and corresponding coefficient estimates are used in a standard doubly robust ACE estimator. We provide asymptotic results showing that, for a broad class of data generating mechanisms, GLiDeR yields a consistent estimator of the ACE when either the outcome or treatment model is correctly specified. A comprehensive simulation study shows that GLiDeR is more efficient than doubly robust methods using standard variable selection techniques and has substantial computational advantages over a recently proposed doubly robust Bayesian model averaging method. We illustrate our method by estimating the causal treatment effect of bilateral versus single-lung transplant on forced expiratory volume in one year after transplant using an observational registry. © 2017, The International Biometric Society.
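The following Python sketch shows the downstream doubly robust (augmented inverse-probability-weighted) ACE estimator on simulated data, with ordinary L1-penalized models standing in for GLiDeR's adaptive group lasso; the variables, data and penalties are illustrative assumptions, not the authors' implementation or their lung-transplant application.

    # Hedged sketch: a standard doubly robust estimate of the average causal effect,
    # with plain L1-penalized models standing in for the adaptive group lasso.
    import numpy as np
    from sklearn.linear_model import LassoCV, LogisticRegressionCV

    rng = np.random.default_rng(0)
    n, p = 2000, 20
    X = rng.normal(size=(n, p))
    true_ps = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))     # confounders: X0, X1
    A = rng.binomial(1, true_ps)                                      # treatment indicator
    Y = 2.0 * A + 1.5 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(size=n)  # true ACE = 2

    # Propensity model and outcome models (fit separately in the treated / control arms).
    ps = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5).fit(X, A).predict_proba(X)[:, 1]
    mu1 = LassoCV(cv=5).fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LassoCV(cv=5).fit(X[A == 0], Y[A == 0]).predict(X)

    # AIPW estimator: consistent if either the propensity or the outcome model is correct.
    ace = np.mean(A * (Y - mu1) / ps + mu1) - np.mean((1 - A) * (Y - mu0) / (1 - ps) + mu0)
    print(f"doubly robust ACE estimate: {ace:.2f} (true value 2.0)")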
Professional Orchestral Conductors' Use of Selected Teaching Behaviors in Rehearsal
ERIC Educational Resources Information Center
Whitaker, Jennifer A.
2017-01-01
This descriptive study examined professional conductors' use of rehearsal time in sequential pattern components, discussing task presentation targets, and using verbal imagery and modeling techniques. Commercially available videos of 15 professional conductors rehearsing prominent orchestras were scripted, coded, and timed for selected teaching…
NASA Technical Reports Server (NTRS)
McIlraith, Sheila; Biswas, Gautam; Clancy, Dan; Gupta, Vineet
2005-01-01
This paper reports on an ongoing project to investigate techniques to diagnose complex dynamical systems that are modeled as hybrid systems. In particular, we examine continuous systems with embedded supervisory controllers that experience abrupt, partial or full failure of component devices. We cast the diagnosis problem as a model selection problem. To reduce the space of potential models under consideration, we exploit techniques from qualitative reasoning to conjecture an initial set of qualitative candidate diagnoses, which induce a smaller set of models. We refine these diagnoses using parameter estimation and model fitting techniques. As a motivating case study, we have examined the problem of diagnosing NASA's Sprint AERCam, a small spherical robotic camera unit with 12 thrusters that enable both linear and rotational motion.
Delgado-García, José M; Gruart, Agnès
2008-12-01
The availability of transgenic mice mimicking selective human neurodegenerative and psychiatric disorders calls for new electrophysiological and microstimulation techniques capable of being applied in vivo in this species. In this article, we will concentrate on experiments and techniques developed in our laboratory during the past few years. Thus we have developed different techniques for the study of learning and memory capabilities of wild-type and transgenic mice with deficits in cognitive functions, using classical conditioning procedures. These techniques include different trace (tone/SHOCK and shock/SHOCK) conditioning procedures, that is, a classical conditioning task involving the cerebral cortex, including the hippocampus. We have also developed implantation and recording techniques for evoking long-term potentiation (LTP) in behaving mice and for recording the evolution of field excitatory postsynaptic potentials (fEPSP) evoked in the hippocampal CA1 area by the electrical stimulation of the commissural/Schaffer collateral pathway across conditioning sessions. Computer programs have also been developed to quantify the appearance and evolution of eyelid conditioned responses and the slope of evoked fEPSPs. According to the present results, the in vivo recording of the electrical activity of selected hippocampal sites during classical conditioning of eyelid responses appears to be a suitable experimental procedure for studying learning capabilities in genetically modified mice, and an excellent model for the study of selected neuropsychiatric disorders compromising cerebral cortex functioning.
Promises of Machine Learning Approaches in Prediction of Absorption of Compounds.
Kumar, Rajnish; Sharma, Anju; Siddiqui, Mohammed Haris; Tiwari, Rajesh Kumar
2018-01-01
Machine Learning (ML) is one of the fastest developing techniques in the prediction and evaluation of important pharmacokinetic properties such as absorption, distribution, metabolism and excretion. The availability of a large number of robust validation techniques for prediction models devoted to pharmacokinetics has significantly enhanced the trust and authenticity of ML approaches. A series of prediction models has been generated and used for rapid screening of compounds on the basis of absorption in the last decade. Prediction of absorption of compounds using ML models has great potential across the pharmaceutical industry as a non-animal alternative for predicting absorption. However, these prediction models still have far to go before they attain confidence comparable to conventional experimental methods for the estimation of drug absorption. Some general concerns are the selection of appropriate ML methods and validation techniques, in addition to selecting relevant descriptors and authentic data sets for the generation of prediction models. The current review explores published ML models for the prediction of absorption using physicochemical properties as descriptors and their important conclusions. In addition, some critical challenges in the acceptance of ML models for absorption are also discussed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Genome-wide heterogeneity of nucleotide substitution model fit.
Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David
2011-01-01
At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of different substitution models and that model choice has implications on different parameter estimates including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary process. Altogether, we conclude that the use of model selection techniques is important to obtain consistent phylogenetic estimates from real data at a genomic scale.
Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian
2014-09-01
Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
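A minimal Python sketch of this kind of GLM-based candidate scoring is given below, using statsmodels and invented member-level variables; it is only meant to show the general shape of such a model, not the authors' actual specification or data.

    # Minimal sketch: a logistic GLM scores each insured member's probability of
    # benefiting from a disease management programme; the top-ranked members are
    # enrolled first. Variables and data are invented for illustration.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 5000
    age = rng.integers(30, 85, n)
    prior_cost = rng.gamma(2.0, 1500.0, n)               # prior-year claims
    chronic = rng.binomial(1, 0.3, n)                     # chronic-condition flag
    logit = -6 + 0.04 * age + 0.0003 * prior_cost + 1.2 * chronic
    benefit = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # "responded to DMP" outcome

    X = sm.add_constant(np.column_stack([age, prior_cost, chronic]))
    glm = sm.GLM(benefit, X, family=sm.families.Binomial()).fit()
    scores = glm.predict(X)                               # predicted benefit probability

    top = np.argsort(scores)[::-1][:500]                  # enrol the 500 highest-scoring members
    print("GLM coefficients:", np.round(glm.params, 4))
    print(f"mean predicted benefit in selected group: {scores[top].mean():.2f}")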
How to Choose--and Use--Motion Picture Projectors
ERIC Educational Resources Information Center
Training, 1976
1976-01-01
Suggests techniques for selecting super 8 and 16mm movie projectors for various training and communication needs. Charts list various characteristics for 17 models of 8mm projectors with built-in screen, 7 models without screen, and 33 models of 16mm projectors. (WL)
UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E
A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...
A Rapid Approach to Modeling Species-Habitat Relationships
NASA Technical Reports Server (NTRS)
Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.
2005-01-01
A growing number of species require conservation or management efforts. Success of these activities requires knowledge of the species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but such data sources are often developed for purposes other than predicting species occurrence and are of inappropriate scale, and the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.
Lee, Tsair-Fwu; Liou, Ming-Hsiang; Huang, Yu-Jie; Chao, Pei-Ju; Ting, Hui-Min; Lee, Hsiao-Yi
2014-01-01
To predict the incidence of moderate-to-severe patient-reported xerostomia among head and neck squamous cell carcinoma (HNSCC) and nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT). Multivariable normal tissue complication probability (NTCP) models were developed by using quality of life questionnaire datasets from 152 patients with HNSCC and 84 patients with NPC. The primary endpoint was defined as moderate-to-severe xerostomia after IMRT. The numbers of predictive factors for a multivariable logistic regression model were determined using the least absolute shrinkage and selection operator (LASSO) with bootstrapping technique. Four predictive models were achieved by LASSO with the smallest number of factors while preserving predictive value with higher AUC performance. For all models, the dosimetric factors for the mean dose given to the contralateral and ipsilateral parotid gland were selected as the most significant predictors. Followed by the different clinical and socio-economic factors being selected, namely age, financial status, T stage, and education for different models were chosen. The predicted incidence of xerostomia for HNSCC and NPC patients can be improved by using multivariable logistic regression models with LASSO technique. The predictive model developed in HNSCC cannot be generalized to NPC cohort treated with IMRT without validation and vice versa. PMID:25163814
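The Python sketch below illustrates the general LASSO-with-bootstrap idea on simulated data: an L1-penalized logistic model is refit on bootstrap resamples and the selection frequency of each candidate predictor is recorded. The variables, penalty and data are assumptions for illustration and do not reproduce the published NTCP models.

    # Hedged sketch: bootstrap stability of L1-penalized logistic regression for
    # selecting predictors of a binary complication endpoint.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 200
    X = np.column_stack([
        rng.normal(26, 6, n),      # mean dose, contralateral parotid (Gy) - invented
        rng.normal(30, 7, n),      # mean dose, ipsilateral parotid (Gy) - invented
        rng.normal(58, 10, n),     # age
        rng.binomial(1, 0.5, n),   # financial status (binary, illustrative)
    ])
    logit = -8 + 0.20 * X[:, 0] + 0.05 * X[:, 1] + 0.02 * X[:, 2]
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))      # moderate-to-severe xerostomia

    n_boot, counts = 200, np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                    # bootstrap resample
        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2)
        lasso.fit(X[idx], y[idx])
        counts += (np.abs(lasso.coef_[0]) > 1e-8)      # which predictors survived

    print("bootstrap selection frequency per predictor:", counts / n_boot)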
NASA Astrophysics Data System (ADS)
Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua
2017-10-01
Wet gluten is a useful quality indicator for wheat, and short wave near infrared spectroscopy (NIRS) is a high performance technique with the advantages of being economical, rapid and nondestructive. To study the feasibility of short wave NIRS analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization was the optimal pretreatment method, 17 wet-gluten-sensitive variables were selected by GA, and the GA model performed better than the all-variable model, with R2V = 0.88 and RMSEV = 1.47. For qualitative analysis, automatic weighted least squares baseline correction was the optimal pretreatment method, and all-variable models performed better than GA models. The correct classification rates for the three classes of <24%, 24-30%, and >30% wet gluten content were 95.45%, 84.52%, and 90.00%, respectively. The short wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten for wheat seed.
2016-02-10
using bolt hole eddy current (BHEC) techniques. Data was acquired for a wide range of crack sizes and shapes, including mid-bore, corner and through... to select the most appropriate VIC-3D surrogate model for subsequent crack sizing inversion step. Inversion results for select mid-bore, through and... the flaw. Subject terms: bolt hole eddy current (BHEC); mid-bore, corner and through-thickness crack types; VIC-3D generated surrogate models.
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies from the modelled to observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
High resolution x-ray fluorescence spectroscopy - a new technique for site- and spin-selectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xin
1996-12-01
X-ray spectroscopy has long been used to elucidate electronic and structural information of molecules. One of the weaknesses of x-ray absorption is its sensitivity to all of the atoms of a particular element in a sample. Throughout this thesis, a new technique for enhancing the site- and spin-selectivity of the x-ray absorption has been developed. By high resolution fluorescence detection, the chemical sensitivity of K emission spectra can be used to identify oxidation and spin states; it can also be used to facilitate site-selective X-ray Absorption Near Edge Structure (XANES) and site-selective Extended X-ray Absorption Fine Structure (EXAFS). The spin polarization in K fluorescence could be used to generate spin selective XANES or spin-polarized EXAFS, which provides a new measure of the spin density, or the nature of magnetic neighboring atoms. Finally, dramatic line-sharpening effects by the combination of absorption and emission processes allow observation of structure that is normally unobservable. All these unique characteristics can enormously simplify a complex x-ray spectrum. Applications of this novel technique have generated information from various transition-metal model compounds to metalloproteins. The absorption and emission spectra by high resolution fluorescence detection are interdependent. The ligand field multiplet model has been used for the analysis of K{alpha} and K{beta} emission spectra. A first demonstration on different chemical states of Fe compounds has shown the applicability of site selectivity and spin polarization. Different interatomic distances of the same element in different chemical forms have been detected using site-selective EXAFS.
ERIC Educational Resources Information Center
Powell, Shawn; Dalley, Mahlono
1995-01-01
An identification and treatment model differentiating transient mutism from persistent selective mutism is proposed. The case study of a six-year-old girl is presented, who was treated with a multimodal approach combining behavioral techniques with play therapy and family involvement. At posttreatment and follow-up, she was talking in a manner…
Application of nomographs for analysis and prediction of receiver spurious response EMI
NASA Astrophysics Data System (ADS)
Heather, F. W.
1985-07-01
Spurious response EMI for the front end of a superheterodyne receiver follows a simple mathematical formula; however, the application of the formula to predict test frequencies produces more data than can be evaluated. An analysis technique has been developed to graphically depict all receiver spurious responses using a nomograph and to permit selection of optimum test frequencies. The discussion includes the math model used to simulate a superheterodyne receiver, the implementation of the model in the computer program, the approach to test frequency selection, interpretation of the nomographs, analysis and prediction of receiver spurious response EMI from the nomographs, and application of the nomographs. In addition, figures are provided of sample applications. This EMI analysis and prediction technique greatly improves the Electromagnetic Compatibility (EMC) test engineer's ability to visualize the scope of receiver spurious response EMI testing and optimize test frequency selection.
Steps Toward Unveiling the True Population of AGN: Photometric Selection of Broad-Line AGN
NASA Astrophysics Data System (ADS)
Schneider, Evan; Impey, C.
2012-01-01
We present an AGN selection technique that enables identification of broad-line AGN using only photometric data. An extension of infrared selection techniques, our method involves fitting a given spectral energy distribution with a model consisting of three physically motivated components: infrared power law emission, optical accretion disk emission, and host galaxy emission. Each component can be varied in intensity, and a reduced chi-square minimization routine is used to determine the optimum parameters for each object. Using this model, both broad- and narrow-line AGN are seen to fall within discrete ranges of parameter space that have plausible bounds, allowing physical trends with luminosity and redshift to be determined. Based on a fiducial sample of AGN from the catalog of Trump et al. (2009), we find the region occupied by broad-line AGN to be distinct from that of quiescent or star-bursting galaxies. Because this technique relies only on photometry, it will allow us to find AGN at fainter magnitudes than are accessible in spectroscopic surveys, and thus probe a population of less luminous and/or higher redshift objects. With the vast availability of photometric data in large surveys, this technique should have broad applicability and result in large samples that will complement X-ray AGN catalogs.
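As an illustration of the fitting step, the Python sketch below performs a chi-square minimization of a three-component model (IR power-law, accretion-disc and host-galaxy terms) against synthetic photometry; the template shapes, wavelengths and fluxes are invented placeholders rather than the survey data.

    # Hedged sketch: fit photometry with three non-negative component amplitudes by
    # chi-square minimization; objects with strong IR power-law plus disc components
    # would fall in the broad-line AGN region of this toy parameter space.
    import numpy as np
    from scipy.optimize import minimize

    wave = np.array([0.36, 0.47, 0.62, 0.75, 1.2, 2.2, 3.6, 4.5, 5.8, 8.0])  # microns
    powerlaw = wave ** 1.2                                  # IR power law (assumed slope)
    disc = wave ** -1.0                                     # blue accretion-disc component
    galaxy = np.exp(-0.5 * ((wave - 1.6) / 0.8) ** 2)       # crude host bump near 1.6 um

    rng = np.random.default_rng(3)
    true = np.array([0.6, 0.3, 1.0])
    flux = true @ np.vstack([powerlaw, disc, galaxy])
    flux_err = 0.05 * flux
    flux_obs = flux + rng.normal(0.0, flux_err)

    def chi2(theta):
        model = theta @ np.vstack([powerlaw, disc, galaxy])
        return np.sum(((flux_obs - model) / flux_err) ** 2)

    fit = minimize(chi2, x0=np.ones(3), bounds=[(0, None)] * 3)
    a_ir, a_disc, a_gal = fit.x
    print(f"component amplitudes: IR={a_ir:.2f}, disc={a_disc:.2f}, host={a_gal:.2f}")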
A novel CT acquisition and analysis technique for breathing motion modeling
NASA Astrophysics Data System (ADS)
Low, Daniel A.; White, Benjamin M.; Lee, Percy P.; Thomas, David H.; Gaudio, Sergio; Jani, Shyam S.; Wu, Xiao; Lamb, James M.
2013-06-01
To report on a novel technique for providing artifact-free quantitative four-dimensional computed tomography (4DCT) image datasets for breathing motion modeling. Commercial clinical 4DCT methods have difficulty managing irregular breathing. The resulting images contain motion-induced artifacts that can distort structures and inaccurately characterize breathing motion. We have developed a novel scanning and analysis method for motion-correlated CT that utilizes standard repeated fast helical acquisitions, a simultaneous breathing surrogate measurement, deformable image registration, and a published breathing motion model. The motion model differs from the CT-measured motion by an average of 0.65 mm, indicating the precision of the motion model. The integral of the divergence of one of the motion model parameters is predicted to be a constant 1.11 and is found in this case to be 1.09, indicating the accuracy of the motion model. The proposed technique shows promise for providing motion-artifact free images at user-selected breathing phases, accurate Hounsfield units, and noise characteristics similar to non-4D CT techniques, at a patient dose similar to or less than current 4DCT techniques.
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis
2014-01-01
When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows the extending of different crisp feature selection algorithms to vague data. The partial knowledge about the ordinal of each feature is modelled by means of a possibility distribution, and a ranking is hereby applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network
NASA Astrophysics Data System (ADS)
Li, Yiming; Bhanu, Bir
Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
2014-01-01
computational and empirical dosimetric tools [31]. For the computational dosimetry, we employed finite-difference time-domain (FDTD) modeling techniques to... temperature-time data collected for a well exposed to THz radiation using finite-difference time-domain (FDTD) modeling techniques and thermocouples... Alteration in the expression of such genes underscores the signif...
ADOPTION OF NUTRIENT MANAGEMENT TECHNIQUES TO REDUCE HYPOXIA IN THE GULF OF MEXICO. (R825761)
Data were collected from 1011 land owner-operators within three watersheds located in the North Central Region of the USA to examine use of selected water protection practices. A theoretical model developed from selected components of the traditional diffusion paradigm and the...
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
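A generic, hedged Python sketch of this kind of input sensitivity analysis is shown below: a small neural network is trained on stand-in data and permutation importance is used to rank the inputs. The paper proposes its own sensitivity measure, so this is only an illustration of the general approach.

    # Minimal sketch: permutation-style sensitivity analysis for a neural-network
    # survival model; shuffle each input in turn and measure the drop in performance.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.inspection import permutation_importance
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=1500, n_features=8, n_informative=4,
                               random_state=0)        # stand-in for trauma audit data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)

    imp = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
    for i in np.argsort(imp.importances_mean)[::-1]:
        print(f"input {i}: mean importance {imp.importances_mean[i]:.3f}")
    # Inputs with near-zero importance are candidates for removal from the scoring model.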
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, each patient generated 3 sets of 3D-printed bone models: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV-therapeutic study.
Discriminative least squares regression for multiclass classification and feature selection.
Xiang, Shiming; Nie, Feiping; Meng, Gaofeng; Pan, Chunhong; Zhang, Changshui
2012-11-01
This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes moving along opposite directions such that the distances between classes can be enlarged. Then, the ε-draggings are integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form, where there is no need to train two-class machines that are independent of each other. With its compact form, this model can be naturally extended for feature selection. This goal is achieved in terms of L2,1 norm of matrix, generating a sparse learning model for feature selection. The model for multiclass classification and its extension for feature selection are finally solved elegantly and efficiently. Experimental evaluation over a range of benchmark datasets indicates the validity of our method.
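The compact Python sketch below illustrates the epsilon-dragging idea on the Iris data: it alternates a ridge-regularized least squares step with a nonnegative update of the dragging matrix. It follows the general recipe described in the abstract rather than the authors' exact algorithm, and the regularization value is an arbitrary assumption.

    # Hedged sketch of epsilon-dragging in discriminative least squares regression:
    # (1) solve a ridge-regularized LSR to the relaxed targets, (2) update the
    # nonnegative dragging matrix so each class's target is pushed outward.
    import numpy as np
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    n, d = X.shape
    c = y.max() + 1
    Y = np.eye(c)[y]                     # one-hot labels
    B = np.where(Y == 1, 1.0, -1.0)      # dragging directions: +1 own class, -1 otherwise

    lam, M = 1.0, np.zeros((n, c))
    Xc = np.hstack([X, np.ones((n, 1))]) # absorb the bias into an augmented column
    for _ in range(30):
        T = Y + B * M                                                    # dragged targets
        W = np.linalg.solve(Xc.T @ Xc + lam * np.eye(d + 1), Xc.T @ T)   # ridge LSR step
        E = Xc @ W - Y
        M = np.maximum(B * E, 0.0)                                       # nonnegative update

    pred = np.argmax(Xc @ W, axis=1)
    print(f"training accuracy of the sketch: {(pred == y).mean():.3f}")
    # Replacing the Frobenius ridge term with an L2,1 penalty on W turns the same
    # framework into the sparse feature-selection variant described in the abstract.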
Extending the durability of cultivar resistance by limiting epidemic growth rates.
Carolan, Kevin; Helps, Joe; van den Berg, Femke; Bain, Ruairidh; Paveley, Neil; van den Bosch, Frank
2017-09-27
Cultivar resistance is an essential part of disease control programmes in many agricultural systems. The use of resistant cultivars applies a selection pressure on pathogen populations for the evolution of virulence, resulting in loss of disease control. Various techniques for the deployment of host resistance genes have been proposed to reduce the selection for virulence, but these are often difficult to apply in practice. We present a general technique to maintain the effectiveness of cultivar resistance. Derived from classical population genetics theory, it rests on the principle that any factor reducing the population growth rates of both the virulent and avirulent strains will reduce selection. We model the specific example of fungicide application to reduce the growth rates of virulent and avirulent strains of a pathogen, demonstrating that appropriate use of fungicides reduces selection for virulence, prolonging cultivar resistance. This specific example of chemical control illustrates a general principle for the development of techniques to manage the evolution of virulence by slowing epidemic growth rates. © 2017 The Author(s).
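A small numerical illustration of the stated principle is sketched below in Python: when a control measure scales down the growth rates of both the virulent and avirulent strains, the rise in frequency of the virulent strain slows. The growth rates and time scale are arbitrary assumptions, not parameters from the study.

    # Toy illustration: exponential growth of two strains; the virulent strain's
    # frequency rises at a rate set by the difference in growth rates, so scaling
    # both rates down (e.g., by a fungicide) slows selection for virulence.
    import numpy as np

    def virulent_frequency(r_vir, r_avir, t, f0=0.01):
        return f0 * np.exp(r_vir * t) / (f0 * np.exp(r_vir * t) + (1 - f0) * np.exp(r_avir * t))

    t = np.arange(0, 61, 10)                                  # days
    untreated = virulent_frequency(0.20, 0.15, t)             # no control
    treated = virulent_frequency(0.20 * 0.5, 0.15 * 0.5, t)   # both growth rates halved

    for day, fu, ft in zip(t, untreated, treated):
        print(f"day {day:2d}: virulent fraction {fu:.2f} (untreated) vs {ft:.2f} (treated)")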
Application of Multi-Criteria Decision Making (MCDM) Technique for Gradation of Jute Fibres
NASA Astrophysics Data System (ADS)
Choudhuri, P. K.
2014-12-01
Multi-Criteria Decision Making (MCDM) is a branch of Operations Research (OR) with a comparatively short history of about 40 years. It is widely used in engineering, banking, policy making and similar fields. It can also be applied to everyday decisions such as selecting a car to purchase or choosing a bride or groom. Various MCDM methods, namely the Weighted Sum Model (WSM), Weighted Product Model (WPM), Analytic Hierarchy Process (AHP), Technique for Order Preference by Similarity to Ideal Solutions (TOPSIS) and Elimination and Choice Translating Reality (ELECTRE), are available for solving decision making problems, each with its own limitations. However, it is very difficult to decide which MCDM method is the best. MCDM methods are prospective quantitative approaches for solving decision problems involving a finite number of alternatives and criteria. Very little textile research has used this technique, particularly where choosing among several alternatives on the basis of conflicting criteria is the main problem. Gradation of jute fibres on the basis of criteria such as strength, root content, defects, colour, density and fineness is an important task. The MCDM technique is well suited to grading jute fibres or ranking several varieties with a particular objective in view, on the basis of selected criteria and their relative weights. The present paper explores the application of the multiplicative AHP method to determine the quality values of selected jute fibres on the basis of the important criteria stated above and to rank them accordingly. Good agreement in ranking is observed between the existing Bureau of Indian Standards (BIS) grading and the proposed method.
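A minimal Python sketch of a weighted-product (multiplicative AHP style) ranking is given below; the jute lots, criterion values and weights are invented for illustration and are not the paper's data.

    # Minimal sketch: each alternative's score is the product of its criterion values
    # raised to the criterion weights, with cost-type criteria (root content, defects)
    # inverted so that lower values score higher.
    import numpy as np

    criteria = ["strength", "fineness", "colour", "root_content", "defects"]
    weights = np.array([0.35, 0.20, 0.15, 0.15, 0.15])        # must sum to 1
    benefit = np.array([True, True, True, False, False])      # False = lower is better

    lots = {"Lot A": [28.0, 2.1, 7.5, 4.0, 3.0],
            "Lot B": [31.0, 1.8, 6.8, 6.5, 2.0],
            "Lot C": [26.5, 2.4, 8.0, 3.0, 5.0]}

    scores = {}
    for name, vals in lots.items():
        v = np.array(vals, dtype=float)
        v = np.where(benefit, v, 1.0 / v)                     # invert cost criteria
        scores[name] = float(np.prod(v ** weights))

    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name}: weighted-product score {s:.3f}")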
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-01-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods including competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS) were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the number of selected wavelength variables by CARS and SCARS were 58 and 47, respectively, from the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method enables more accurate identification of solid state fermentation degree. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-10-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods including competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS) were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the number of selected wavelength variables by CARS and SCARS were 58 and 47, respectively, from the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method enables more accurate identification of solid state fermentation degree.
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
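The Python sketch below shows one simple form of such a loop: a Gaussian process is fit to the runtimes measured so far and the next experiment is chosen where the predictive uncertainty is largest. The synthetic runtime function, parameter grid and acquisition rule are assumptions for illustration, not the framework described in the abstract.

    # Hedged sketch: Gaussian-process active learning over a 1D "machine configuration"
    # parameter (core count), selecting the next run at the point of maximum uncertainty.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def runtime(cores):                       # pretend performance of a parallel solver
        return 100.0 / cores + 0.05 * cores + np.random.default_rng(int(cores)).normal(0, 0.2)

    candidates = np.arange(1, 129, dtype=float).reshape(-1, 1)    # machine configurations
    X = np.array([[1.0], [128.0]])                                # two seed experiments
    y = np.array([runtime(1), runtime(128)])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(10):
        gp.fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        nxt = candidates[np.argmax(std)]                  # highest-uncertainty configuration
        X = np.vstack([X, [nxt]])
        y = np.append(y, runtime(nxt[0]))

    gp.fit(X, y)
    best = candidates[np.argmin(gp.predict(candidates))]
    print(f"predicted best core count after 12 runs: {int(best[0])}")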
Casale, M; Oliveri, P; Casolino, C; Sinelli, N; Zunin, P; Armanino, C; Forina, M; Lanteri, S
2012-01-27
An authentication study of the Italian PDO (protected designation of origin) extra virgin olive oil Chianti Classico was performed; UV-visible (UV-vis), Near-Infrared (NIR) and Mid-Infrared (MIR) spectroscopies were applied to a set of samples representative of the whole Chianti Classico production area. The non-selective signals (fingerprints) provided by the three spectroscopic techniques were utilised both individually and jointly, after fusion of the respective profile vectors, in order to build a model for Chianti Classico PDO olive oil. Moreover, these results were compared with those obtained by gas chromatographic determination of the fatty acid composition. In order to characterise the olive oils produced in the Chianti Classico PDO area, UNEQ (unequal class models) and SIMCA (soft independent modelling of class analogy) were employed on the MIR, NIR and UV-vis spectra, both individually and jointly, and on the fatty acid composition. Finally, PLS (partial least squares) regression was applied to the UV-vis, NIR and MIR spectra in order to predict the content of oleic and linoleic acids in the extra virgin olive oils. UNEQ, SIMCA and PLS were performed after selection of the relevant predictors, in order to increase the efficiency of both the classification and regression models. The non-selective information obtained from UV-vis, NIR and MIR spectroscopy made it possible to build reliable models for checking the authenticity of the Italian PDO extra virgin olive oil Chianti Classico. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Haddad, Khaled; Rahman, Ataur; A Zaman, Mohammad; Shrestha, Surendra
2013-03-01
In regional hydrologic regression analysis, model selection and validation are regarded as important steps. Here, model selection is usually based on some measure of goodness-of-fit between the model prediction and observed data. In Regional Flood Frequency Analysis (RFFA), leave-one-out (LOO) validation or a fixed-percentage leave-out validation (e.g., 10%) is commonly adopted to assess the predictive ability of regression-based prediction equations. This paper develops a Monte Carlo Cross Validation (MCCV) technique (which has been widely adopted in chemometrics and econometrics) for RFFA using Generalised Least Squares Regression (GLSR) and compares it with the most commonly adopted LOO validation approach. The study uses simulated and regional flood data from the state of New South Wales in Australia. It is found that when developing hydrologic regression models, application of the MCCV is likely to result in a more parsimonious model than the LOO. It has also been found that the MCCV can provide a more realistic estimate of a model's predictive ability when compared with the LOO.
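A minimal sketch of the MCCV idea, with ordinary least squares standing in for the generalised least squares regression used in the paper (an assumption for brevity): a random fraction of sites is repeatedly held out and the prediction error is averaged over the splits.

```python
# Sketch of Monte Carlo Cross Validation (MCCV) for a regression model:
# repeatedly hold out a random fraction of sites and average the prediction error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3))          # e.g. log catchment area, rainfall, slope
y = X @ np.array([0.8, 0.3, 0.0]) + 0.2 * rng.standard_normal(60)

mccv = ShuffleSplit(n_splits=200, test_size=0.3, random_state=1)
errors = []
for train_idx, test_idx in mccv.split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))

print("MCCV mean squared error:", np.mean(errors))
```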
Development of an automated energy audit protocol for office buildings
NASA Astrophysics Data System (ADS)
Deb, Chirag
This study aims to enhance the building energy audit process and reduce the time and cost required to conduct a full physical audit. For this, a total of 5 Energy Service Companies in Singapore collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques are adopted to analyse these reports. These techniques comprise cluster analysis and the development of models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in energy use intensity (EUI), a robust iterative process for selecting the appropriate variables is developed. The results show that the 4 variables of gross floor area (GFA), non-air-conditioning energy consumption, average chiller plant efficiency and installed capacity of chillers should be used for clustering. This analysis is extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm is developed to select the input variables for the two energy saving prediction models. The results show that the ANN prediction model can predict the energy saving potential of a given building with an accuracy of +/-14.8%.
Shape and Reinforcement Optimization of Underground Tunnels
NASA Astrophysics Data System (ADS)
Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang
Design of the support system and selection of an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, selecting the shape and designing the support are based mainly on the designer's judgment and experience. Both of these problems can be viewed as material distribution problems in which one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved to be useful in solving these kinds of problems in structural design. Recently the application of topology optimization techniques to reinforcement design around underground excavations has been studied by some researchers. In this paper a three-phase material model is introduced that switches between normal rock, reinforced rock, and void. Using such a material model, the shape and reinforcement design problems can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique is extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. The validity and capability of the proposed approach have been investigated through several examples.
Rapid prototyping--when virtual meets reality.
Beguma, Zubeda; Chhedat, Pratik
2014-01-01
Rapid prototyping (RP) describes the customized production of solid models using 3D computer data. Over the past decade, advances in RP have continued to evolve, resulting in the development of new techniques that have been applied to the fabrication of various prostheses. RP fabrication technologies include stereolithography (SLA), fused deposition modeling (FDM), computer numerical controlled (CNC) milling, and, more recently, selective laser sintering (SLS). The applications of RP techniques for dentistry include wax pattern fabrication for dental prostheses, dental (facial) prostheses mold (shell) fabrication, and removable dental prostheses framework fabrication. In the past, a physical plastic shape of the removable partial denture (RPD) framework was produced using an RP machine, and then used as a sacrificial pattern. Yet with the advent of the selective laser melting (SLM) technique, RPD metal frameworks can be directly fabricated, thereby omitting the casting stage. This new approach can also generate the wax pattern for facial prostheses directly, thereby reducing labor-intensive laboratory procedures. Many people stand to benefit from these new RP techniques for producing various forms of dental prostheses, which in the near future could transform traditional prosthodontic practices.
Rahman, Anisur; Faqeerzada, Mohammad A; Cho, Byoung-Kwan
2018-03-14
Allicin and soluble solid content (SSC) in garlic are responsible for its pungent flavor and odor. However, current conventional methods, such as high-pressure liquid chromatography and refractometry, have critical drawbacks in that they are time-consuming, labor-intensive and destructive. The present study aimed to predict allicin and SSC in garlic using hyperspectral imaging in combination with variable selection algorithms and calibration models. Hyperspectral images of 100 garlic cloves were acquired covering two spectral ranges, from which the mean spectra of each clove were extracted. The calibration models included partial least squares (PLS) and least squares-support vector machine (LS-SVM) regression, as well as different spectral pre-processing techniques, from which the highest performing spectral preprocessing technique and spectral range were selected. Then, variable selection methods, such as regression coefficients, variable importance in projection (VIP) and the successive projections algorithm (SPA), were evaluated for the selection of effective wavelengths (EWs). Furthermore, PLS and LS-SVM regression methods were applied to quantitatively predict the quality attributes of garlic using the selected EWs. Of the established models, the SPA-LS-SVM model obtained an Rpred2 of 0.90 and standard error of prediction (SEP) of 1.01% for SSC prediction, whereas the VIP-LS-SVM model produced the best result, with an Rpred2 of 0.83 and SEP of 0.19 mg g-1, for allicin prediction in the range 1000-1700 nm. Furthermore, chemical images of garlic were developed using the best predictive model to facilitate visualization of the spatial distributions of allicin and SSC. The present study clearly demonstrates that hyperspectral imaging combined with an appropriate chemometrics method can potentially be employed as a fast, non-invasive method to predict allicin and SSC in garlic. © 2018 Society of Chemical Industry.
Program Evaluation: An Overview.
ERIC Educational Resources Information Center
McCluskey, Lawrence
1973-01-01
Various models of educational evaluation are presented. These include: (1) the classical type model, which contains the following guidelines: formulate objectives, classify objectives, define objectives in behavioral terms, suggest situations in which achievement of objectives will be shown, develop or select appraisal techniques, and gather and…
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
Implementation of Structured Inquiry Based Model Learning toward Students' Understanding of Geometry
ERIC Educational Resources Information Center
Salim, Kalbin; Tiawa, Dayang Hjh
2015-01-01
The purpose of this study is the implementation of a structured inquiry learning model in the instruction of geometry. The study used a quasi-experimental design with two sample classes selected from a population of ten classes using a cluster random sampling technique. The data collection tool consists of a test item…
Selected Tether Applications Cost Model
NASA Technical Reports Server (NTRS)
Keeley, Michael G.
1988-01-01
Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.
Analysis of Weibull Grading Test for Solid Tantalum Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2010-01-01
Weibull grading testing is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
Reliability of High-Voltage Tantalum Capacitors (Parts 3 and 4)
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2010-01-01
Weibull grading testing is a powerful technique that allows selection and reliability rating of solid tantalum capacitors for military and space applications. However, inaccuracies in the existing method and inadequate acceleration factors can result in significant errors, up to three orders of magnitude, in the calculated failure rate of capacitors. This paper analyzes deficiencies of the existing technique and recommends a more accurate method of calculation. A physical model presenting failures of tantalum capacitors as time-dependent dielectric breakdown is used to determine voltage and temperature acceleration factors and to select adequate Weibull grading test conditions. This model is verified by highly accelerated life testing (HALT) at different temperature and voltage conditions for three types of solid chip tantalum capacitors. It is shown that parameters of the model and acceleration factors can be calculated using a general log-linear relationship for the characteristic life with two stress levels.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) ranking the models by their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the Fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in the model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected as the best model the one with average complexity (10 parameters) and the best parameter estimates (model 3).
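For the criterion-based step (approach 3), a minimal sketch of how AIC, AICc and BIC could be computed from calibration residuals under a Gaussian error assumption is shown below; KIC is omitted, and the residual vectors and parameter counts are hypothetical, not values from the study.

```python
# Sketch of ranking calibrated models with information criteria (AIC, AICc, BIC),
# assuming independent Gaussian residuals; KIC is omitted for brevity.
import numpy as np

def information_criteria(residuals, k):
    """k = number of calibrated parameters; residuals = observed - simulated heads."""
    n = len(residuals)
    sigma2 = np.mean(residuals**2)                 # ML estimate of the error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    aic = -2 * loglik + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = -2 * loglik + k * np.log(n)
    return aic, aicc, bic

# Hypothetical residual vectors for two of the calibrated flow models.
res_model_a = np.random.default_rng(2).normal(0, 0.5, 40)
res_model_b = np.random.default_rng(3).normal(0, 0.4, 40)
for name, res, k in [("model A", res_model_a, 6), ("model B", res_model_b, 13)]:
    print(name, information_criteria(res, k))
```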
Selection of Representative Models for Decision Analysis Under Uncertainty
NASA Astrophysics Data System (ADS)
Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.
2016-03-01
The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology for identifying representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
RUC at TREC 2014: Select Resources Using Topic Models
2014-11-01
…federated search techniques in a realistic Web setting with a large number of online Web search services. This year the track contains three tasks…
Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin
2018-04-14
In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic model combining the Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate its exploration and exploitation, further improving performance. The model is combined with a Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarrays. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.
Mathematical Modelling for Patient Selection in Proton Therapy.
Mee, T; Kirkby, N F; Kirkby, K J
2018-05-01
Proton beam therapy (PBT) is still relatively new in cancer treatment and the clinical evidence base is relatively sparse. Mathematical modelling offers assistance when selecting patients for PBT and predicting the demand for service. Discrete event simulation, normal tissue complication probability, quality-adjusted life-years and Markov Chain models are all mathematical and statistical modelling techniques currently used but none is dominant. As new evidence and outcome data become available from PBT, comprehensive models will emerge that are less dependent on the specific technologies of radiotherapy planning and delivery. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Variable Selection in the Presence of Missing Data: Imputation-based Methods.
Zhao, Yize; Long, Qi
2017-01-01
Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
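As an illustration of the first strategy only, the sketch below runs a lasso on each (hypothetical) imputed dataset and keeps the predictors selected in a majority of imputations; the lasso and the majority threshold are illustrative stand-ins, not the specific methods discussed in the paper.

```python
# Sketch of the first strategy: run a selection method (here, the lasso) on each
# imputed dataset separately and keep predictors chosen in a majority of them.
import numpy as np
from sklearn.linear_model import LassoCV

def select_across_imputations(imputed_datasets, y, threshold=0.5):
    votes = None
    for X in imputed_datasets:                    # one completed dataset per imputation
        coef = LassoCV(cv=5).fit(X, y).coef_
        chosen = (np.abs(coef) > 1e-8).astype(int)
        votes = chosen if votes is None else votes + chosen
    return np.where(votes / len(imputed_datasets) >= threshold)[0]

# Hypothetical example: 3 imputations of a 5-predictor design matrix where only
# the first two columns truly drive the outcome.
rng = np.random.default_rng(4)
base = rng.standard_normal((100, 5))
y = 2.0 * base[:, 0] - 1.5 * base[:, 1] + 0.3 * rng.standard_normal(100)
imputations = [base + 0.05 * rng.standard_normal(base.shape) for _ in range(3)]
print("selected columns:", select_across_imputations(imputations, y))
```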
The creation of Physiologically Based Pharmacokinetic (PBPK) models for a new chemical requires the selection of an appropriate model structure and the collection of a large amount of data for parameterization. Commonly, a large proportion of the needed information is collected ...
Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)
ERIC Educational Resources Information Center
Yavuz, Guler; Hambleton, Ronald K.
2017-01-01
Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…
Gaussian process regression for tool wear prediction
NASA Astrophysics Data System (ADS)
Kong, Dongdong; Chen, Yongjie; Li, Ning
2018-05-01
To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurately monitoring the in-process tool wear parameter (flank wear width) in real time. KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, the presence of noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps to remove the noise and weaken its negative effects, so that the confidence interval is greatly compressed and smoothed, which is conducive to monitoring the tool wear accurately. Moreover, the selection of the kernel parameter in KPCA_IRBF can be carried out easily over a much larger selectable region than with the conventional KPCA_RBF technique, which helps to improve the efficiency of model construction. Ten sets of cutting tests are conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
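A minimal sketch of the uncertainty-quantifying regression step, with the KPCA_IRBF feature-fusion stage omitted and synthetic wear data standing in for real measurements: a Gaussian process with an explicit noise kernel returns a predictive mean and standard deviation, from which an approximate confidence band for flank wear width can be formed.

```python
# Sketch: GP regression with a noise term gives a predictive mean and std, from
# which an approximate 95% confidence band for flank wear width is derived.
# (Inputs and wear values below are synthetic and purely illustrative.)
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 30)[:, None]               # e.g. cutting time (min)
wear = 0.02 * t.ravel() ** 1.5 + 0.01 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(1e-4), normalize_y=True)
gp.fit(t, wear)

t_new = np.linspace(0, 12, 50)[:, None]
mean, std = gp.predict(t_new, return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # approximate 95% interval
print("predicted wear at t=12 min: %.3f (%.3f, %.3f)" % (mean[-1], lower[-1], upper[-1]))
```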
A Developed Meta-model for Selection of Cotton Fabrics Using Design of Experiments and TOPSIS Method
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Chatterjee, Prasenjit
2017-12-01
Selection of cotton fabrics that provide optimal clothing comfort is often considered a multi-criteria decision making problem consisting of an array of candidate alternatives to be evaluated based on several conflicting properties. In this paper, design of experiments and the technique for order preference by similarity to ideal solution (TOPSIS) are integrated so as to develop regression meta-models for identifying the most suitable cotton fabrics with respect to the computed TOPSIS scores. The applicability of the adopted method is demonstrated using two real-time examples. The developed models can also identify the statistically significant fabric properties and their interactions affecting the measured TOPSIS scores and the final selection decisions. There is a good degree of congruence between the ranking patterns derived using these meta-models and those from the existing methods for cotton fabric ranking and subsequent selection.
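A compact sketch of the TOPSIS scoring step itself (the design-of-experiments and regression meta-model stages are not reproduced); the fabric properties, weights and values below are purely illustrative assumptions.

```python
# Minimal TOPSIS sketch: score fabric alternatives on several comfort-related
# properties (columns), all treated here as benefit criteria for simplicity.
import numpy as np

def topsis(matrix, weights, benefit):
    M = matrix / np.linalg.norm(matrix, axis=0)        # vector-normalize each column
    V = M * weights                                    # weighted normalized matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - anti, axis=1)
    return d_worst / (d_best + d_worst)                # closeness coefficient in [0, 1]

# Hypothetical decision matrix: 4 fabrics x 3 properties (e.g. air permeability,
# moisture transmission, thermal resistance), equal weights.
fabrics = np.array([[52., 0.80, 0.021],
                    [47., 0.72, 0.025],
                    [60., 0.65, 0.019],
                    [55., 0.78, 0.023]])
scores = topsis(fabrics, weights=np.ones(3) / 3, benefit=np.array([True, True, True]))
print("TOPSIS scores:", np.round(scores, 3))
```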
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
Speaker-independent phoneme recognition with a binaural auditory image model
NASA Astrophysics Data System (ADS)
Francis, Keith Ivan
1997-09-01
This dissertation presents phoneme recognition techniques based on a binaural fusion of outputs of the auditory image model and subsequent azimuth-selective phoneme recognition in a noisy environment. Background information concerning speech variations, phoneme recognition, current binaural fusion techniques and auditory modeling issues is explained. The research is constrained to sources in the frontal azimuthal plane of a simulated listener. A new method based on coincidence detection of neural activity patterns from the auditory image model of Patterson is used for azimuth-selective phoneme recognition. The method is tested in various levels of noise and the results are reported in contrast to binaural fusion methods based on various forms of correlation to demonstrate the potential of coincidence- based binaural phoneme recognition. This method overcomes smearing of fine speech detail typical of correlation based methods. Nevertheless, coincidence is able to measure similarity of left and right inputs and fuse them into useful feature vectors for phoneme recognition in noise.
Model-based Clustering of High-Dimensional Data in Astrophysics
NASA Astrophysics Data System (ADS)
Bouveyron, C.
2016-05-01
The nature of data in Astrophysics has changed, as in other scientific fields, in the past decades due to the increase of measurement capabilities. As a consequence, data are nowadays frequently of high dimensionality and available in mass or as streams. Model-based techniques for clustering are popular tools which are renowned for their probabilistic foundations and their flexibility. However, classical model-based techniques show disappointing behavior in high-dimensional spaces, mainly due to their dramatic over-parameterization. Recent developments in model-based classification overcome these drawbacks and allow high-dimensional data to be classified efficiently, even in the "small n / large p" situation. This work presents a comprehensive review of these recent approaches, including regularization-based techniques, parsimonious modeling, subspace classification methods and classification methods based on variable selection. The use of these model-based methods is also illustrated on real-world classification problems in Astrophysics using R packages.
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
Do bioclimate variables improve performance of climate envelope models?
Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2012-01-01
Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
The role of selection on evolutionary rescue
NASA Astrophysics Data System (ADS)
Amirjanov, Adil
The paper investigates the role of selection in the evolutionary rescue of a population. A statistical mechanics technique is used to model the dynamics of a population experiencing natural selection and an abrupt change in the environment. The paper assesses the selective pressure produced by two different mechanisms: strength of resistance and strength of selection (intraspecific competition). It is shown that both mechanisms are capable of providing an evolutionary rescue of the population under particular conditions. However, for a small extinction rate, the population cannot be rescued without intraspecific competition.
Employment of adaptive learning techniques for the discrimination of acoustic emissions
NASA Astrophysics Data System (ADS)
Erkes, J. W.; McDonald, J. F.; Scarton, H. A.; Tam, K. C.; Kraft, R. P.
1983-11-01
The following aspects of this study on the discrimination of acoustic emissions (AE) were examined: (1) The analytical development and assessment of digital signal processing techniques for AE signal dereverberation, noise reduction, and source characterization; (2) The modeling and verification of some aspects of key selected techniques through a computer-based simulation; and (3) The study of signal propagation physics and their effect on received signal characteristics for relevant physical situations.
Fabrication of Extremely Short Length Fiber Bragg Gratings for Sensor Applications
NASA Technical Reports Server (NTRS)
Wu, Meng-Chou; Rogowski, Robert S.; Tedjojuwono, Ken K.
2002-01-01
A new technique and a physical model for writing extremely short length Bragg gratings in optical fibers have been developed. The model describes the effects of diffraction on the spatial spectra and therefore, the wavelength spectra of the Bragg gratings. Using an interferometric technique and a variable aperture, short gratings of various lengths and center wavelengths were written in optical fibers. By selecting the related parameters, the Bragg gratings with typical length of several hundred microns and bandwidth of several nanometers can be obtained. These short gratings can be apodized with selected diffraction patterns and hence their broadband spectra have a well-defined bell shape. They are suitable for use as miniaturized distributed strain sensors, which have broad applications to aerospace research and industry as well.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, with the redundant parameters set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
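A rough numerical sketch of the adaptive weighting idea in the simpler least-squares setting (not the GMM extension developed in the paper): an initial ridge fit supplies coefficient-specific weights, which are applied here by rescaling the columns of the design matrix before an ordinary elastic net is fitted. All data and tuning values are illustrative.

```python
# Rough sketch of adaptive weighting in a least-squares elastic net: an initial
# ridge fit gives data-dependent weights, applied by rescaling the columns of X.
import numpy as np
from sklearn.linear_model import Ridge, ElasticNetCV

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + 0.5 * rng.standard_normal(n)

gamma = 1.0
init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(init) ** gamma + 1e-6)       # small initial coefficient => heavier shrinkage

X_scaled = X / w                                # column rescaling implements the weights
enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_scaled, y)
beta_hat = enet.coef_ / w                       # map back to the original scale
print("estimated coefficients:", np.round(beta_hat, 2))
```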
Cheng, Shu-Xi; Xie, Chuan-Qi; Wang, Qiao-Nan; He, Yong; Shao, Yong-Ni
2014-05-01
Identification of early blight on tomato leaves using the hyperspectral imaging technique based on different effective wavelength selection methods (successive projections algorithm, SPA; x-loading weights, x-LW; Gram-Schmidt orthogonalization, GSO) was studied in the present paper. Hyperspectral images of seventy healthy and seventy infected tomato leaves were obtained by a hyperspectral imaging system across the wavelength range of 380-1023 nm. Reflectance of all pixels in the region of interest (ROI) was extracted by ENVI 4.7 software. A least squares-support vector machine (LS-SVM) model was established based on the full spectral wavelengths. It obtained an excellent result with the highest identification accuracy (100%) in both the calibration and prediction sets. Then, EW-LS-SVM and EW-LDA models were established based on the selected wavelengths suggested by SPA, x-LW and GSO, respectively. The results showed that all of the EW-LS-SVM and EW-LDA models performed well, with identification accuracies of 100% for the EW-LS-SVM models and 100%, 100% and 97.83% for the EW-LDA models, respectively. Moreover, the numbers of input wavelengths of the SPA-LS-SVM, x-LW-LS-SVM and GSO-LS-SVM models were four (492, 550, 633 and 680 nm), three (631, 719 and 747 nm) and two (533 and 657 nm), respectively. Fewer input variables are beneficial for the development of an identification instrument. It was demonstrated that it is feasible to identify early blight on tomato leaves using hyperspectral imaging, and that SPA, x-LW and GSO are effective wavelength selection methods.
Bill, J S; Reuther, J F
2004-05-01
The aim was to define the indications for the use of rapid prototyping models based on data from patients treated with this technique. Since 1987 our department has been developing methods of rapid prototyping in surgery planning. During the study, the statistical and reproducible anatomical precision of rapid prototyping models was first determined from pig skull measurements as a function of CT parameters and rapid prototyping method. Measurements on stereolithography models and on selective laser sintered models confirmed an accuracy of +/-0.88 mm or 2.7% (maximum deviation: -3.0 mm to +3.2 mm), independent of CT parameters and rapid prototyping method, respectively. Given the same model precision, multilayer helical CT with a higher acquisition rate is the preferable method of data acquisition compared to conventional helical CT. From 1990 to 2002, 127 rapid prototyping models were manufactured for a total of 122 patients: in 112 patients stereolithography models, in 2 patients an additional stereolithography model, in 2 patients an additional selective laser sinter model, in 1 patient an additional milled model, and in 10 patients just a selective laser sinter model. Reconstructive surgery, distraction osteogenesis including midface distraction, and dental implantology have proven to be the major indications for rapid prototyping, as confirmed in a review of the literature. Surgery planning on rapid prototyping models should only be used in individual cases due to radiation dose and high costs. Routine use of this technique only seems to be indicated in skull reconstruction and distraction osteogenesis.
Selective field evaporation in field-ion microscopy for ordered alloys
NASA Astrophysics Data System (ADS)
Ge, Xi-jin; Chen, Nan-xian; Zhang, Wen-qing; Zhu, Feng-wu
1999-04-01
Semiempirical pair potentials, obtained by applying the Chen-inversion technique to a cohesion equation of Rose et al. [Phys. Rev. B 29, 2963 (1984)], are employed to assess the bonding energies of surface atoms of intermetallic compounds. This provides a new calculational model of selective field evaporation in field-ion microscopy (FIM). Based on this model, a successful interpretation of FIM image contrasts for Fe3Al, PtCo, Pt3Co, Ni4Mo, Ni3Al, and Ni3Fe is given.
Theory and design of variable conductance heat pipes
NASA Technical Reports Server (NTRS)
Marcus, B. D.
1972-01-01
A comprehensive review and analysis of all aspects of heat pipe technology pertinent to the design of self-controlled, variable conductance devices for spacecraft thermal control is presented. Subjects considered include hydrostatics, hydrodynamics, heat transfer into and out of the pipe, fluid selection, materials compatibility and variable conductance control techniques. The report includes a selected bibliography of pertinent literature, analytical formulations of various models and theories describing variable conductance heat pipe behavior, and the results of numerous experiments on the steady state and transient performance of gas controlled variable conductance heat pipes. Also included is a discussion of VCHP design techniques.
A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns
NASA Astrophysics Data System (ADS)
Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng
2009-11-01
Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model, chance constrained programming model and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large problems.
Fisher-Wright model with deterministic seed bank and selection.
Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel
2017-04-01
Seed banks are common characteristics of many plant species, allowing storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection coupled with a deterministic seed bank, assuming the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
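For orientation, the sketch below simulates the above-ground component only: a standard Fisher-Wright population with selection at a single biallelic locus, without the deterministic seed bank or the diffusion approximation derived in the paper. The population size and selection coefficient are illustrative assumptions.

```python
# Sketch of a Fisher-Wright population with selection at one biallelic locus:
# a deterministic selection step followed by a binomial drift step each generation.
import numpy as np

def fisher_wright_fixation_time(N=1000, p0=0.05, s=0.02, rng=None):
    """Generations until the selected allele fixes or is lost."""
    rng = rng or np.random.default_rng()
    p, t = p0, 0
    while 0.0 < p < 1.0:
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection step
        p = rng.binomial(2 * N, p_sel) / (2 * N)        # drift step
        t += 1
    return t, p == 1.0

runs = [fisher_wright_fixation_time(rng=np.random.default_rng(i)) for i in range(200)]
fixed = [t for t, won in runs if won]
print("fixation probability ~", len(fixed) / len(runs),
      "; mean time to fixation ~", np.mean(fixed))
```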
A linear model fails to predict orientation selectivity of cells in the cat visual cortex.
Volgushev, M; Vidyasagar, T R; Pei, X
1996-01-01
1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
Pham-The, H; Casañola-Martin, G; Diéguez-Santana, K; Nguyen-Hai, N; Ngoc, N T; Vu-Duc, L; Le-Thi-Thu, H
2017-03-01
Histone deacetylases (HDAC) are emerging as promising targets in cancer, neuronal diseases and immune disorders. Computational modelling approaches have been widely applied for the virtual screening and rational design of novel HDAC inhibitors. In this study, different machine learning (ML) techniques were applied for the development of models that accurately discriminate HDAC2 inhibitors from non-inhibitors. The obtained models showed encouraging results, with the global accuracy in the external set ranging from 0.83 to 0.90. Various aspects related to the comparison of modelling techniques, applicability domain and descriptor interpretation are discussed. Finally, consensus predictions of these models were used for screening HDAC2 inhibitors from four chemical libraries whose bioactivities against HDAC1, HDAC3, HDAC6 and HDAC8 are known. According to the results of the virtual screening assays, structures of some hits with pair-isoform-selective activity (between HDAC2 and other HDACs) were revealed. This study illustrates the power of ML-based QSAR approaches for the screening and discovery of potent, isoform-selective HDACIs.
Microstructure and Magnetic Properties of Magnetic Material Fabricated by Selective Laser Melting
NASA Astrophysics Data System (ADS)
Jhong, Kai Jyun; Huang, Wei-Chin; Lee, Wen Hsi
Selective Laser Melting (SLM) is a powder-based additive manufacturing process capable of producing parts layer by layer from a 3D CAD model. The aim of this study is to adopt the selective laser melting technique for magnetic material fabrication [1]. For the SLM process to be practical in industrial use, highly specific mechanical properties of the final product must be achieved. The integrity of the manufactured components depends strongly on each single laser-melted track and every single layer, as well as on the strength of the connections between them. In this study, the effects of the processing parameters, such as the space distance, on surface morphology are analyzed. Our hypothesis is that when a magnetic product is made by the selective laser melting technique instead of traditional techniques, the finished component will have more precise and effective properties. This study analyzed the magnitudes of the magnetic properties in comparison with different parameters in the SLM process and fabricated a completed product to investigate the efficiency in contrast with products made by existing manufacturing processes.
Afshin Pourmokhtarian; Charles T. Driscoll; John L. Campbell; Katharine Hayhoe; Anne M. K. Stoner
2016-01-01
Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training...
Studies in the demand for short haul air transportation
NASA Technical Reports Server (NTRS)
Kanafani, A.; Gosling, G.; Taghavi, S.
1975-01-01
Demand is analyzed in a short haul air transportation corridor. Emphasis is placed on traveler selection from available routes. Model formulations, estimation techniques, and traffic data handling are included.
Cellular image segmentation using n-agent cooperative game theory
NASA Astrophysics Data System (ADS)
Dimock, Ian B.; Wan, Justin W. L.
2016-03-01
Image segmentation is an important problem in computer vision and has significant applications in the segmentation of cellular images. Many different imaging techniques exist and produce a variety of image properties which pose difficulties to image segmentation routines. Bright-field images are particularly challenging because of the non-uniform shape of the cells, the low contrast between cells and background, and imaging artifacts such as halos and broken edges. Classical segmentation techniques often produce poor results on these challenging images. Previous attempts at bright-field imaging are often limited in scope to the images that they segment. In this paper, we introduce a new algorithm for automatically segmenting cellular images. The algorithm incorporates two game theoretic models which allow each pixel to act as an independent agent with the goal of selecting their best labelling strategy. In the non-cooperative model, the pixels choose strategies greedily based only on local information. In the cooperative model, the pixels can form coalitions, which select labelling strategies that benefit the entire group. Combining these two models produces a method which allows the pixels to balance both local and global information when selecting their label. With the addition of k-means and active contour techniques for initialization and post-processing purposes, we achieve a robust segmentation routine. The algorithm is applied to several cell image datasets including bright-field images, fluorescent images and simulated images. Experiments show that the algorithm produces good segmentation results across the variety of datasets which differ in cell density, cell shape, contrast, and noise levels.
NASA Astrophysics Data System (ADS)
Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad
2014-10-01
Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, an application of the locally linear model tree algorithm (LOLIMOT) was evaluated for its ability to predict customers' credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.
User Selection Criteria of Airspace Designs in Flexible Airspace Management
NASA Technical Reports Server (NTRS)
Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung
2011-01-01
A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
NLOS Correction/Exclusion for GNSS Measurement Using RAIM and City Building Models.
Hsu, Li-Ta; Gu, Yanlei; Kamijo, Shunsuke
2015-07-17
Currently, global navigation satellite system (GNSS) receivers can provide accurate and reliable positioning service in open-field areas. However, their performance in the downtown areas of cities is still affected by multipath and non-line-of-sight (NLOS) receptions. This paper proposes a new positioning method using 3D building models and a receiver autonomous integrity monitoring (RAIM) satellite selection method to achieve satisfactory positioning performance in urban areas. The 3D building model uses a ray-tracing technique to simulate the line-of-sight (LOS) and NLOS signal travel distances, known as pseudoranges, between the satellite and the receiver. The proposed RAIM fault detection and exclusion (FDE) compares the similarity between the raw pseudorange measurement and the simulated pseudorange. The measurement of a satellite is excluded if the simulated and raw pseudoranges are inconsistent. Because the ray-tracing technique assumes a single reflection, an inconsistent case indicates a doubly or multiply reflected NLOS signal. According to the experimental results, the RAIM satellite selection technique can reduce the number of positioning solutions with large errors (solutions estimated on the wrong side of the road) by about 8.4% and 36.2% for the 3D building model method in middle and deep urban canyon environments, respectively.
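A highly simplified sketch of the exclusion step: each raw pseudorange is compared with the pseudorange simulated by ray tracing against the building model, and satellites whose mismatch exceeds a threshold are dropped from the position fix. The threshold and the numbers are illustrative assumptions, not values from the paper.

```python
# Simplified consistency check: drop satellites whose raw pseudorange disagrees
# with the ray-traced (building-model) pseudorange by more than a threshold.
import numpy as np

def exclude_inconsistent(measured, simulated, threshold_m=15.0):
    """Return indices of satellites kept for the position fix."""
    residual = np.abs(measured - simulated)
    return np.where(residual <= threshold_m)[0]

# Hypothetical pseudoranges in metres; the third satellite carries an extra
# 120 m delay consistent with a multiply reflected NLOS signal.
measured = np.array([21.20e6, 22.85e6, 20.95e6 + 120.0, 23.40e6])
simulated = np.array([21.20e6, 22.85e6, 20.95e6, 23.40e6])
print("satellites kept:", exclude_inconsistent(measured, simulated))
```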
Predictive models reduce talent development costs in female gymnastics.
Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle
2017-04-01
This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% cost decrease because the pool of selected athletes can be reduced from 138 (as selected by the coaches) to 92 gymnasts.
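As a sketch of the linear branch of such an analysis (with synthetic data in place of the gymnasts' test battery), a discriminant analysis can be cross-validated to estimate how well test scores separate athletes who persist from those who drop out; the feature names and data below are illustrative assumptions.

```python
# Sketch: cross-validated linear discriminant analysis on synthetic test-battery
# scores, classifying athletes as likely to persist (1) or drop out (0).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 243
scores = rng.standard_normal((n, 4))            # e.g. speed, strength, flexibility, balance
stayed = (scores @ np.array([0.9, 0.6, 0.3, 0.4]) + rng.standard_normal(n) > 0).astype(int)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, scores, stayed, cv=5, scoring="accuracy").mean()
print("cross-validated classification accuracy: %.2f" % acc)
```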
Model selection and assessment for multi-species occupancy models
Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.
2016-01-01
While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.
The B-dot Earth Average Magnetic Field
NASA Technical Reports Server (NTRS)
Capo-Lugo, Pedro A.; Rakoczy, John; Sanders, Devon
2013-01-01
The average Earth magnetic field is obtained from complex mathematical models based on a mean square integral. Depending on the selection of the Earth magnetic model, the average Earth magnetic field can have different solutions. This paper presents a simple technique that takes advantage of the damping effects of the b-dot controller and does not depend on the Earth magnetic model; it does, however, depend on the magnetic torquers of the satellite, which are not taken into consideration in the known mathematical models. In addition, the solution of this new technique can be implemented so easily that the flight software can be updated during flight, and the control system can have current gains for the magnetic torquers. Finally, this technique is verified and validated using flight data from a satellite that has been in orbit for three years.
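For reference, a minimal sketch of the standard b-dot law that this technique relies on: the commanded magnetic dipole is proportional to the negative rate of change of the measured body-frame field. The gain and field values are illustrative, not flight parameters, and the paper's averaging procedure is not reproduced here.

```python
# Sketch of the standard b-dot law: command a magnetic dipole opposite to the
# measured rate of change of the body-frame field (gain and values illustrative).
import numpy as np

def bdot_dipole(b_now, b_prev, dt, k=5e4):
    """Return the commanded magnetic dipole moment (A*m^2)."""
    b_dot = (b_now - b_prev) / dt        # finite-difference estimate of dB/dt (T/s)
    return -k * b_dot

b_prev = np.array([18e-6, -4e-6, 31e-6])   # magnetometer reading at t - dt (tesla)
b_now = np.array([17e-6, -3e-6, 32e-6])    # reading at t
print("commanded dipole:", bdot_dipole(b_now, b_prev, dt=1.0))
```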
Applications Of Measurement Techniques To Develop Small-Diameter, Undersea Fiber Optic Cables
NASA Astrophysics Data System (ADS)
Kamikawa, Neil T.; Nakagawa, Arthur T.
1984-12-01
Attenuation, strain, and optical time domain reflectometer (OTDR) measurement techniques were applied successfully in the development of a minimum-diameter, electro-optic sea floor cable. Temperature and pressure models for excess attenuation in polymer coated, graded-index fibers were investigated analytically and experimentally using these techniques in the laboratory. The results were used to select a suitable fiber for the cable. Measurements also were performed on these cables during predeployment and sea-trial testing to verify laboratory results. Application of the measurement techniques and results are summarized in this paper.
Cancer drug discovery: recent innovative approaches to tumor modeling.
Lovitt, Carrie J; Shelper, Todd B; Avery, Vicky M
2016-09-01
Cell culture models have been at the heart of anti-cancer drug discovery programs for over half a century. Advancements in cell culture techniques have seen the rapid evolution of more complex in vitro cell culture models investigated for use in drug discovery. Three-dimensional (3D) cell culture research has become a strong focal point, as this technique permits the recapitulation of the tumor microenvironment. Biologically relevant 3D cellular models have demonstrated significant promise in advancing cancer drug discovery, and will continue to play an increasing role in the future. In this review, recent advances in 3D cell culture techniques and their application in tumor modeling and anti-cancer drug discovery programs are discussed. The topics include selection of cancer cells, 3D cell culture assays (associated endpoint measurements and analysis), 3D microfluidic systems and 3D bio-printing. Although advanced cancer cell culture models and techniques are becoming commonplace in many research groups, the use of these approaches has yet to be fully embraced in anti-cancer drug applications. Furthermore, limitations associated with analyzing information-rich biological data remain unaddressed.
Methods in Molecular Biology Mouse Genetics: Methods and Protocols | Center for Cancer Research
Mouse Genetics: Methods and Protocols provides selected mouse genetic techniques and their application in modeling varieties of human diseases. The chapters are mainly focused on the generation of different transgenic mice to accomplish the manipulation of genes of interest, tracing cell lineages, and modeling human diseases.
Stable cycling in discrete-time genetic models.
Hastings, A
1981-11-01
Examples of stable cycling are discussed for two-locus, two-allele, deterministic, discrete-time models with constant fitnesses. The cases that cycle were found by using numerical techniques to search for stable Hopf bifurcations. One consequence of the results is that apparent cases of directional selection may be due to stable cycling.
Spectral nudging – a scale-selective interior constraint technique – is commonly used in regional climate models to maintain consistency with large-scale forcing while permitting mesoscale features to develop in the downscaled simulations. Several studies have demonst...
Numerical modeling of eastern connecticut's visual resources
Daniel L. Civco
1979-01-01
A numerical model capable of accurately predicting the preference for landscape photographs of selected points in eastern Connecticut is presented. A function of the social attitudes expressed toward thirty-two salient visual landscape features serves as the independent variable in predicting preferences. A technique for objectively assigning adjectives to landscape...
A Statistical Decision Model for Periodical Selection for a Specialized Information Center
ERIC Educational Resources Information Center
Dym, Eleanor D.; Shirey, Donald L.
1973-01-01
An experiment is described which attempts to define a quantitative methodology for the identification and evaluation of all possibly relevant periodical titles containing toxicological-biological information. A statistical decision model was designed and employed, along with yes/no criteria questions, a training technique and a quality control…
The manufacturing of TiAl6V4 implants using selective laser melting technology
NASA Astrophysics Data System (ADS)
Lykov, P. A.; Baitimerov, R. M.; Panfilov, A. V.; Guz, A. O.
2017-10-01
In this article we study the technique for creating medical implants using additive technologies. A plastic skull model was made. The affected part of the skull was identified and removed. An implant was made of titanium alloy. The implant was installed in the model skull.
Computational Modeling and Neuroimaging Techniques for Targeting during Deep Brain Stimulation
Sweet, Jennifer A.; Pace, Jonathan; Girgis, Fady; Miller, Jonathan P.
2016-01-01
Accurate surgical localization of the varied targets for deep brain stimulation (DBS) is a process undergoing constant evolution, with increasingly sophisticated techniques to allow for highly precise targeting. However, despite the fastidious placement of electrodes into specific structures within the brain, there is increasing evidence to suggest that the clinical effects of DBS are likely due to the activation of widespread neuronal networks directly and indirectly influenced by the stimulation of a given target. Selective activation of these complex and inter-connected pathways may further improve the outcomes of currently treated diseases by targeting specific fiber tracts responsible for a particular symptom in a patient-specific manner. Moreover, the delivery of such focused stimulation may aid in the discovery of new targets for electrical stimulation to treat additional neurological, psychiatric, and even cognitive disorders. As such, advancements in surgical targeting, computational modeling, engineering designs, and neuroimaging techniques play a critical role in this process. This article reviews the progress of these applications, discussing the importance of target localization for DBS, and the role of computational modeling and novel neuroimaging in improving our understanding of the pathophysiology of diseases, and thus paving the way for improved selective target localization using DBS. PMID:27445709
Forina, M; Oliveri, P; Bagnasco, L; Simonetti, R; Casolino, M C; Nizzi Grifi, F; Casale, M
2015-11-01
An authentication study of the Italian PDO (Protected Designation of Origin) olive oil Chianti Classico, based on artificial nose, near-infrared and UV-visible spectroscopy, with a set of samples representative of the whole Chianti Classico production area and a considerable number of samples from other Italian PDO regions was performed. The signals provided by the three analytical techniques were used both individually and jointly, after fusion of the respective variables, in order to build a model for the Chianti Classico PDO olive oil. Different signal pre-treatments were performed in order to investigate their importance and their effects in enhancing and extracting information from experimental data, correcting backgrounds or removing baseline variations. Stepwise-Linear Discriminant Analysis (STEP-LDA) was used as a feature selection technique and, afterward, Linear Discriminant Analysis (LDA) and the class-modelling technique Quadratic Discriminant Analysis-UNEQual dispersed classes (QDA-UNEQ) were applied to sub-sets of selected variables, in order to obtain efficient models capable of characterising the extra virgin olive oils produced in the Chianti Classico PDO area. Copyright © 2015 Elsevier B.V. All rights reserved.
Xie, Chuanqi; Xu, Ning; Shao, Yongni; He, Yong
2015-01-01
This research investigated the feasibility of using the Fourier transform near-infrared (FT-NIR) spectral technique for determining arginine content in fermented Cordyceps sinensis (C. sinensis) mycelium. Three different models were carried out to predict the arginine content. Wavenumber selection methods such as competitive adaptive reweighted sampling (CARS) and successive projections algorithm (SPA) were used to identify the most important wavenumbers and reduce the high dimensionality of the raw spectral data. Only a few wavenumbers were selected by CARS and CARS-SPA as the optimal wavenumbers, respectively. Among the prediction models, the CARS-least squares-support vector machine (CARS-LS-SVM) model performed best, with the highest values of the coefficient of determination of prediction (Rp² = 0.8370) and residual predictive deviation (RPD = 2.4741), and the lowest value of root mean square error of prediction (RMSEP = 0.0841). Moreover, the number of input variables was forty-five, which accounts for only 2.04% of the full set of wavenumbers. The results showed that the FT-NIR spectral technique has the potential to be an objective and non-destructive method to detect arginine content in fermented C. sinensis mycelium. Copyright © 2015 Elsevier B.V. All rights reserved.
Longobardi, F; Ventrella, A; Bianco, A; Catucci, L; Cafagna, I; Gallo, V; Mastrorilli, P; Agostiano, A
2013-12-01
In this study, non-targeted ¹H NMR fingerprinting was used in combination with multivariate statistical techniques for the classification of Italian sweet cherries based on their different geographical origins (Emilia Romagna and Puglia). As classification techniques, Soft Independent Modelling of Class Analogy (SIMCA), Partial Least Squares Discriminant Analysis (PLS-DA), and Linear Discriminant Analysis (LDA) were carried out and the results were compared. For LDA, before performing a refined selection of the number/combination of variables, two different strategies for a preliminary reduction of the variable number were tested. The best average recognition and CV prediction abilities (both 100.0%) were obtained for all the LDA models, although PLS-DA also showed remarkable performances (94.6%). All the statistical models were validated by observing the prediction abilities with respect to an external set of cherry samples. The best result (94.9%) was obtained with LDA by performing a best subset selection procedure on a set of 30 principal components previously selected by a stepwise decorrelation. The metabolites that mostly contributed to the classification performances of this LDA model were found to be malate, glucose, fructose, glutamine and succinate. Copyright © 2013 Elsevier Ltd. All rights reserved.
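A minimal sketch of the variable-reduction-plus-LDA idea described above, using PCA to 30 components followed by linear discriminant analysis with cross-validation; the spectra and labels are synthetic stand-ins, and the study's stepwise decorrelation and best-subset steps are not reproduced.

```python
# Minimal sketch: PCA-based variable reduction followed by LDA, in the spirit
# of the cherry-origin classification workflow (synthetic stand-in spectra).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 300))       # 120 samples x 300 NMR buckets (synthetic)
y = rng.integers(0, 2, size=120)      # 0 = Emilia Romagna, 1 = Puglia (synthetic labels)

model = make_pipeline(PCA(n_components=30), LinearDiscriminantAnalysis())
print("CV prediction ability:", cross_val_score(model, X, y, cv=5).mean())
```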
Liquid membrane purification of biogas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.; Guha, A.K.; Lee, Y.T.
1991-03-01
Conventional gas purification technologies are highly energy intensive. They are not suitable for economic removal of CO₂ from methane obtained in biogas due to the small scale of gas production. Membrane separation techniques, on the other hand, are ideally suited for low gas production rate applications due to their modular nature. Although liquid membranes possess a high species permeability and selectivity, they have not been used for industrial applications due to problems of membrane stability, membrane flooding and poor operational flexibility. A new hollow-fiber-contained liquid membrane (HFCLM) technique has been developed recently. This technique overcomes the shortcomings of the traditional immobilized liquid membrane technology. The new technique uses two sets of hydrophobic, microporous hollow fine fibers, packed tightly in a permeator shell. The inter-fiber space is filled with an aqueous liquid acting as the membrane. The feed gas mixture is separated by selective permeation of a species through the liquid from one fiber set to the other. The second fiber set carries a sweep stream, gas or liquid, or simply the permeated gas stream. The objectives (which were met) of the present investigation were as follows: to study the selective removal of CO₂ from a model biogas mixture containing 40% CO₂ (the rest being N₂ or CH₄) using a HFCLM permeator under various operating modes that include sweep gas, sweep liquid, vacuum and conventional permeation; to develop a mathematical model for each mode of operation; and to build a large-scale purification loop and large-scale permeators for model biogas separation and to show stable performance over a period of one month.
NASA Astrophysics Data System (ADS)
Sutrisno, Agung; Gunawan, Indra; Vanany, Iwan
2017-11-01
Despite being an integral part of risk-based quality improvement efforts, studies improving the quality of corrective action prioritisation using the FMEA technique are still limited in the literature, and none of the existing ones considers robustness and risk in selecting among competing improvement initiatives. This study proposes a theoretical model to select among competing risk-based corrective actions by considering their robustness and risk. We incorporate the principle of robust design in computing the preference score among corrective action candidates. Along with the cost and benefit of competing corrective actions, we also incorporate their risk and robustness. An example is provided to demonstrate the applicability of the proposed model.
Methods in Molecular Biology: Germline Stem Cells | Center for Cancer Research
The protocols in Germline Stem Cells are intended to present selected genetic, molecular, and cellular techniques used in germline stem cell research. The book is divided into two parts. Part I covers germline stem cell identification and regulation in model organisms. Part II covers current techniques used in in vitro culture and applications of germline stem cells.
Multi-dimensional tunnelling and complex momentum
NASA Technical Reports Server (NTRS)
Bowcock, Peter; Gregory, Ruth
1991-01-01
The problem of modeling tunneling phenomena in more than one dimension is examined. It is found that existing techniques are inadequate in a wide class of situations, due to their inability to deal with concurrent classical motion. The generalization of these methods to allow for complex momenta is shown, and improved techniques are demonstrated with a selection of illustrative examples. Possible applications are presented.
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most of the cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique, called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, even if the best performances are achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.
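A minimal sketch of the super-ensemble step described above: observed SST is regressed on the ensemble members over a short training window and the fitted weights are then applied to a new set of member forecasts. Array sizes and values are illustrative assumptions.

```python
# Minimal sketch of a super-ensemble: a multi-linear regression of observed SST
# on the ensemble members over a training window, then applied to new forecasts.
# All arrays are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_members = 15, 6                  # e.g. 15-day training period, 6 ensemble members
members_train = rng.normal(20, 1, size=(n_train, n_members))
obs_train = members_train.mean(axis=1) + rng.normal(0, 0.1, n_train)

# Least-squares weights (with an intercept column) for the linear combination.
A = np.column_stack([np.ones(n_train), members_train])
coef, *_ = np.linalg.lstsq(A, obs_train, rcond=None)

members_new = rng.normal(20, 1, size=n_members)
sse_estimate = coef[0] + members_new @ coef[1:]
print("Super-ensemble SST estimate:", round(float(sse_estimate), 3))
```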
NASA Astrophysics Data System (ADS)
D, Meena; Francis, Fredy; T, Sarath K.; E, Dipin; Srinivas, T.; K, Jayasree V.
2014-10-01
Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of laser source, multiplexer/demultiplexer, amplifier and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and different curve fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system. Thus it provides a black box solution for a link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer to select the suitable WDM link model during a complex link design.
NASA Astrophysics Data System (ADS)
Anna, I. D.; Cahyadi, I.; Yakin, A.
2018-01-01
Selection of marketing strategy is a prominent competitive advantage for small and medium enterprises business development. The selection process is a multiple criteria decision-making problem, which includes evaluation of various attributes or criteria in a process of strategy formulation. The objective of this paper is to develop a model for the selection of a marketing strategy in the Batik Madura industry. The current study proposes an integrated approach based on the analytic network process (ANP) and the technique for order preference by similarity to ideal solution (TOPSIS) to determine the best strategy for Batik Madura marketing problems. Based on the results of a group decision-making technique, this study selected fourteen criteria, including consistency, cost, trend following, customer loyalty, business volume, uniqueness, manpower, customer numbers, promotion, branding, business network, outlet location, credibility and innovation, as Batik Madura marketing strategy evaluation criteria. A survey questionnaire developed from a literature review was distributed to a sample frame of Batik Madura SMEs in Pamekasan. In the decision procedure step, expert evaluators were asked to establish the decision matrix by comparing the marketing strategy alternatives under each of the individual criteria. Then, considerations obtained from the ANP and TOPSIS methods were applied to build the specific criteria constraints and range of the launch strategy in the model. The model in this study demonstrates that, under the current business situation, the straight-focus marketing strategy is the best marketing strategy for Batik Madura SMEs in Pamekasan.
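The sketch below illustrates the TOPSIS ranking step under the usual vector-normalisation formulation; the decision matrix, weights, and the assumption that all criteria are benefit-type are illustrative, not the study's survey data.

```python
# Minimal TOPSIS sketch: rank marketing-strategy alternatives against weighted
# criteria. The decision matrix and weights below are illustrative only.
import numpy as np

decision = np.array([[7., 5., 8.],     # alternative 1 scored on 3 criteria
                     [6., 8., 6.],     # alternative 2
                     [9., 4., 7.]])    # alternative 3
weights = np.array([0.5, 0.3, 0.2])    # criterion weights (e.g. from ANP)

norm = decision / np.linalg.norm(decision, axis=0)   # vector normalisation
v = norm * weights
ideal, anti = v.max(axis=0), v.min(axis=0)           # all criteria treated as benefits here
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("Closeness to ideal:", closeness, "-> best alternative:", int(closeness.argmax()) + 1)
```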
A real time Pegasus propulsion system model for VSTOL piloted simulation evaluation
NASA Technical Reports Server (NTRS)
Mihaloew, J. R.; Roth, S. P.; Creekmore, R.
1981-01-01
A real time propulsion system modeling technique suitable for use in man-in-the-loop simulator studies was developed. This technique provides the system accuracy, stability, and transient response required for integrated aircraft and propulsion control system studies. A Pegasus-Harrier propulsion system was selected as a baseline for developing mathematical modeling and simulation techniques for VSTOL. Initially, static and dynamic propulsion system characteristics were modeled in detail to form a nonlinear aerothermodynamic digital computer simulation of a Pegasus engine. From this high fidelity simulation, a real time propulsion model was formulated by applying a piece-wise linear state variable methodology. A hydromechanical and water injection control system was also simulated. The real time dynamic model includes the detail and flexibility required for the evaluation of critical control parameters and propulsion component limits over a limited flight envelope. The model was programmed for interfacing with a Harrier aircraft simulation. Typical propulsion system simulation results are presented.
Navas, Juan Moreno; Telfer, Trevor C; Ross, Lindsay G
2011-08-01
Combining GIS with neuro-fuzzy modeling has the advantage that expert scientific knowledge in coastal aquaculture activities can be incorporated into a geospatial model to classify areas particularly vulnerable to pollutants. Data on the physical environment and its suitability for aquaculture in an Irish fjard, which is host to a number of different aquaculture activities, were derived from a three-dimensional hydrodynamic and GIS models. Subsequent incorporation into environmental vulnerability models, based on neuro-fuzzy techniques, highlighted localities particularly vulnerable to aquaculture development. The models produced an overall classification accuracy of 85.71%, with a Kappa coefficient of agreement of 81%, and were sensitive to different input parameters. A statistical comparison between vulnerability scores and nitrogen concentrations in sediment associated with salmon cages showed good correlation. Neuro-fuzzy techniques within GIS modeling classify vulnerability of coastal regions appropriately and have a role in policy decisions for aquaculture site selection. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Collins, Curtis Andrew
Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic foot volume (outside bark) and green weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest type model strata. These 36 models were then fit and reported across 10 variable sets and variable set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data variable types. As part of the modeling process, lone variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility is flexible as all independent variable data are not needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by coefficient of determination (R2), surpassed 0.5 in numerous models with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events reoccur.
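A minimal sketch of the weighted least-squares step described above, with weights of one over the square root of pre-storm volume; the predictors and response are synthetic stand-ins for the plot, imagery, and wind variables, and the generic statsmodels call stands in for the workflow used in the dissertation.

```python
# Minimal sketch of the weighted least-squares step: weights taken as
# 1 / sqrt(pre-storm volume). Data are synthetic stand-ins.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
pre_volume = rng.uniform(100, 2000, size=200)          # pre-storm volume per plot (synthetic)
wind = rng.uniform(20, 60, size=200)                   # HWIND-style wind metric (synthetic)
damage = 0.02 * pre_volume + 0.5 * wind + rng.normal(0, np.sqrt(pre_volume))

X = sm.add_constant(np.column_stack([pre_volume, wind]))
wls = sm.WLS(damage, X, weights=1.0 / np.sqrt(pre_volume)).fit()
print("R-squared:", round(wls.rsquared, 3))
```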
Model selection as a science driver for dark energy surveys
NASA Astrophysics Data System (ADS)
Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin
2006-07-01
A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
Torres Astorga, Romina; de Los Santos Villalobos, Sergio; Velasco, Hugo; Domínguez-Quintero, Olgioly; Pereira Cardoso, Renan; Meigikos Dos Anjos, Roberto; Diawara, Yacouba; Dercon, Gerd; Mabit, Lionel
2018-05-15
Identification of hot spots of land degradation is strongly related to the selection of soil tracers for sediment pathways. This research proposes the complementary and integrated application of two analytical techniques to select the most suitable fingerprint tracers for identifying the main sources of sediments in an agricultural catchment located in Central Argentina with erosive loess soils. Diffuse reflectance Fourier transform mid-infrared (DRIFT-MIR) spectroscopy and energy-dispersive X-ray fluorescence (EDXRF) were used for a suitable fingerprint selection. For using DRIFT-MIR spectroscopy as a fingerprinting technique, calibration through quantitative parameters is needed to link and correlate DRIFT-MIR spectra with soil tracers. EDXRF was used in this context for determining the concentrations of geochemical elements in soil samples. The selected tracers were confirmed using two artificial mixtures composed of known proportions of soil collected in different sites with distinctive soil uses. These fingerprint elements were used as parameters to build a predictive model with the whole set of DRIFT-MIR spectra. Fingerprint elements such as phosphorus, iron, calcium, barium, and titanium were identified for obtaining a suitable reconstruction of the source proportions in the artificial mixtures. Mid-infrared spectra produced successful prediction models (R² = 0.91) for Fe content and moderately useful predictions (R² = 0.72) for Ti content. For Ca, P, and Ba, the R² values were 0.44, 0.58, and 0.59, respectively.
Visible near-diffraction-limited lucky imaging with full-sky laser-assisted adaptive optics
NASA Astrophysics Data System (ADS)
Basden, A. G.
2014-08-01
Both lucky imaging techniques and adaptive optics require natural guide stars, limiting sky-coverage, even when laser guide stars are used. Lucky imaging techniques become less successful on larger telescopes unless adaptive optics is used, as the fraction of images obtained with well-behaved turbulence across the whole telescope pupil becomes vanishingly small. Here, we introduce a technique combining lucky imaging techniques with tomographic laser guide star adaptive optics systems on large telescopes. This technique does not require any natural guide star for the adaptive optics, and hence offers full sky-coverage adaptive optics correction. In addition, we introduce a new method for lucky image selection based on residual wavefront phase measurements from the adaptive optics wavefront sensors. We perform Monte Carlo modelling of this technique, and demonstrate I-band Strehl ratios of up to 35 per cent in 0.7 arcsec mean seeing conditions with 0.5 m deformable mirror pitch and full adaptive optics sky-coverage. We show that this technique is suitable for use with lucky imaging reference stars as faint as magnitude 18, and fainter if more advanced image selection and centring techniques are used.
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are a potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of this feature set significantly speeds up the prediction task. Feature selection and feature transformation methods are well known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy with an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
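A hedged sketch of the KPCA-plus-SVM idea: kernel PCA reduces a microarray-like feature space and the RBF bandwidth (gamma) is chosen by cross-validated grid search, standing in for the paper's data-driven bandwidth criterion and LS-SVM classifier, neither of which is part of scikit-learn; the data are synthetic.

```python
# Hedged sketch: KPCA for dimensionality reduction followed by an SVM classifier,
# with the kernel bandwidth (gamma) chosen by cross-validated grid search.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 500))        # 100 samples x 500 genes (synthetic microarray-like)
y = rng.integers(0, 2, size=100)

pipe = Pipeline([("kpca", KernelPCA(n_components=10, kernel="rbf")),
                 ("svm", SVC(kernel="linear"))])
grid = GridSearchCV(pipe, {"kpca__gamma": [1e-4, 1e-3, 1e-2]}, cv=5, scoring="roc_auc")
grid.fit(X, y)
print("selected gamma:", grid.best_params_, "CV AUC:", round(grid.best_score_, 3))
```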
M1 transitions in the (sdg) boson model
NASA Astrophysics Data System (ADS)
Kuyucak, S.; Morrison, I.
1988-03-01
Using the 1/N expansion technique we derive expressions for β→g, γ→g and γ→γ M1 transitions in a general boson model. The M1 matrix elements in the sdg-boson model are similar in form to those in the neutron-proton IBM. Comparisons are made to some selected M1 data exhibiting collective character.
1993-02-10
Topics covered include: A. Integrated Optical Devices and Technology (e.g., waveguide modulators); B. Integrated Optical Device and Circuit Modeling; C. Cryogenic Etching.
Multiresolution texture models for brain tumor segmentation in MRI.
Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir
2011-01-01
In this study we discuss different types of texture features such as Fractal Dimension (FD) and Multifractional Brownian Motion (mBm) for estimating random structures and the varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques, including Kullback-Leibler Divergence (KLD), for ranking different texture and intensity features. We then exploit graph cut, self-organizing maps (SOM) and expectation maximization (EM) techniques to fuse selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate the quality and robustness of these selected features for tumor segmentation in MRI for real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme by using improved AdaBoost classification based on these image features.
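A minimal sketch of ranking a single texture feature by Kullback-Leibler divergence between its distributions in tumor and normal tissue; the feature values are synthetic stand-ins and the histogramming choices are assumptions.

```python
# Minimal sketch: rank a feature by the Kullback-Leibler divergence between its
# histograms inside and outside the tumour mask (synthetic feature values).
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(8)
feat_tumor = rng.normal(1.0, 0.3, 5000)      # e.g. fractal-dimension values in tumour voxels
feat_normal = rng.normal(0.6, 0.3, 5000)     # same feature in normal tissue

bins = np.histogram_bin_edges(np.concatenate([feat_tumor, feat_normal]), bins=50)
p, _ = np.histogram(feat_tumor, bins=bins, density=True)
q, _ = np.histogram(feat_normal, bins=bins, density=True)
eps = 1e-12
kld = entropy(p + eps, q + eps)              # higher KLD -> more discriminative feature
print(round(float(kld), 3))
```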
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1991-01-01
A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
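A minimal sketch of the sensitivity-based update at the core of such methods: parameter changes are recovered from a sensitivity matrix by an SVD-based least-squares solve. The unconstrained pseudoinverse here stands in for the constrained optimization described above, and the numbers are illustrative.

```python
# Minimal sketch of the sensitivity-based update: solve S * dp ~= dM for the
# physical-parameter changes dp using an SVD-based pseudoinverse.
# The sensitivity matrix and residual vector here are illustrative.
import numpy as np

S = np.array([[2.0, 0.5, 0.0],
              [0.1, 1.5, 0.3],
              [0.0, 0.4, 1.2],
              [0.6, 0.0, 0.9]])        # d(system matrix entries)/d(parameters)
dM = np.array([0.8, 0.3, -0.2, 0.5])   # measured-minus-model discrepancies

dp = np.linalg.pinv(S) @ dM            # SVD-based least-squares parameter update
print("parameter updates:", dp)
```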
Wang, Yen-Ling
2014-01-01
Checkpoint kinase 2 (Chk2) has a great effect on the DNA-damage response and plays an important role in response to DNA double-strand breaks and related lesions. In this study, we concentrate on Chk2, and the purpose is to find potential inhibitors using pharmacophore hypotheses (PhModels), combinatorial fusion, and virtual screening techniques. Applying combinatorial fusion to PhModels and virtual screening techniques is a novel design strategy for drug design. We used combinatorial fusion to analyze the prediction results and then obtained the best correlation coefficient for the testing set (r_test = 0.816) by combining the BesttrainBesttest and FasttrainFasttest prediction results. The potential inhibitors were selected from the NCI database by screening according to the BesttrainBesttest + FasttrainFasttest prediction results and molecular docking with the CDOCKER docking program. Finally, the selected compounds have high interaction energy between ligand and receptor. Through these approaches, 23 potential inhibitors for Chk2 are retrieved for further study. PMID:24864236
Approximation, abstraction and decomposition in search and optimization
NASA Technical Reports Server (NTRS)
Ellman, Thomas
1992-01-01
In this paper, I discuss four different areas of my research. One portion of my research has focused on automatic synthesis of search control heuristics for constraint satisfaction problems (CSPs). I have developed techniques for automatically synthesizing two types of heuristics for CSPs: Filtering functions are used to remove portions of a search space from consideration. Another portion of my research is focused on automatic synthesis of hierarchic algorithms for solving constraint satisfaction problems (CSPs). I have developed a technique for constructing hierarchic problem solvers based on numeric interval algebra. Another portion of my research is focused on automatic decomposition of design optimization problems. We are using the design of racing yacht hulls as a testbed domain for this research. Decomposition is especially important in the design of complex physical shapes such as yacht hulls. Another portion of my research is focused on intelligent model selection in design optimization. The model selection problem results from the difficulty of using exact models to analyze the performance of candidate designs.
NASA Technical Reports Server (NTRS)
Horton, F. E.
1970-01-01
The utility of remote sensing techniques to urban data acquisition problems in several distinct areas was identified. This endeavor included a comparison of remote sensing systems for urban data collection, the extraction of housing quality data from aerial photography, utilization of photographic sensors in urban transportation studies, urban change detection, space photography utilization, and an application of remote sensing techniques to the acquisition of data concerning intra-urban commercial centers. The systematic evaluation of variable extraction for urban modeling and planning at several different scales, and the model derivation for identifying and predicting economic growth and change within a regional system of cities are also studied.
Diagnostic reasoning techniques for selective monitoring
NASA Technical Reports Server (NTRS)
Homem-De-mello, L. S.; Doyle, R. J.
1991-01-01
An architecture for using diagnostic reasoning techniques in selective monitoring is presented. Given the sensor readings and a model of the physical system, a number of assertions are generated and expressed as Boolean equations. The resulting system of Boolean equations is solved symbolically. Using a priori probabilities of component failure and Bayes' rule, revised probabilities of failure can be computed. These will indicate what components have failed or are the most likely to have failed. This approach is suitable for systems that are well understood and for which the correctness of the assertions can be guaranteed. Also, the system must be such that changes are slow enough to allow the computation.
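A minimal sketch of the Bayes'-rule step described above, revising a component's failure probability after an assertion derived from the sensor readings is found inconsistent; the prior and likelihood values are illustrative assumptions, not from the paper.

```python
# Minimal sketch of the Bayesian update step: revise a component's failure
# probability from the outcome of a symbolic consistency check.
def revised_failure_probability(prior_fail, p_inconsistent_given_fail,
                                p_inconsistent_given_ok):
    """P(component failed | assertions inconsistent) via Bayes' rule."""
    num = p_inconsistent_given_fail * prior_fail
    den = num + p_inconsistent_given_ok * (1.0 - prior_fail)
    return num / den

# Example: a component with a 1% prior failure rate implicated by a violated assertion.
print(revised_failure_probability(0.01, 0.95, 0.05))   # ~0.161
```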
Risk Assessment Methodology for Hazardous Waste Management (1998)
A methodology is described for systematically assessing and comparing the risks to human health and the environment of hazardous waste management alternatives. The methodology selects and links appropriate models and techniques for performing the process.
Conjoint Analysis: A Study of the Effects of Using Person Variables.
ERIC Educational Resources Information Center
Fraas, John W.; Newman, Isadore
Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves in conjunction with a criterion value range of interest as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model on the other hand is an analytical parametric model for the multivariate joint probability. It comprises the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection, because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach.
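A minimal sketch of the empirical POS idea: sample the criteria under uncertainty and count the fraction of samples in which every goal is met jointly. The two criteria, their target ranges, and the sampling distributions are illustrative assumptions, not the thesis's supersonic-transport case.

```python
# Minimal sketch of the Empirical Distribution Function idea behind POS:
# count the fraction of Monte Carlo samples in which every criterion falls
# inside its target range. Criteria, ranges and the sampler are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
# Two hypothetical criteria per sampled design (e.g. range and cost), with noise
# standing in for uncontrollable variables.
range_nm = rng.normal(5500, 300, size=n)
cost_M = rng.normal(120, 15, size=n)

success = (range_nm >= 5000) & (cost_M <= 130)     # joint satisfaction of all goals
pos = success.mean()                               # Probability of Success (POS)
print(f"POS = {pos:.3f}")
```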
NASA Technical Reports Server (NTRS)
Ryan, Margaret A.; Shevade, A. V.; Taylor, C. J.; Homer, M. L.; Jewell, A. D.; Kisor, A.; Manatt, K. S.; Yen, S. P. S.; Blanco, M.; Goddard, W. A., III
2006-01-01
An array-based sensing system based on polymer/carbon composite conductometric sensors is under development at JPL for use as an environmental monitor in the International Space Station. Sulfur dioxide has been added to the analyte set for this phase of development. Using molecular modeling techniques, the interaction energy between SO2 and polymer functional groups has been calculated, and polymers selected as potential SO2 sensors. Experiment has validated the model and two selected polymers have been shown to be promising materials for SO2 detection.
NASA Astrophysics Data System (ADS)
Prasad, M. N.; Brown, M. S.; Ahmad, S.; Abtin, F.; Allen, J.; da Costa, I.; Kim, H. J.; McNitt-Gray, M. F.; Goldin, J. G.
2008-03-01
Segmentation of lungs in the setting of scleroderma is a major challenge in medical image analysis. Threshold-based techniques tend to leave out lung regions that have increased attenuation, for example in the presence of interstitial lung disease or in noisy low-dose CT scans. The purpose of this work is to perform segmentation of the lungs using a technique that selects an optimal threshold for a given scleroderma patient by comparing the curvature of the lung boundary to that of the ribs. Our approach is based on adaptive thresholding and exploits the fact that the curvature of the ribs and the curvature of the lung boundary are closely matched. First, the ribs are segmented and a polynomial is used to represent the ribs' curvature. A threshold value to segment the lungs is selected iteratively such that the deviation of the lung boundary from the polynomial is minimized. A Naive Bayes classifier is used to build the model for selection of the best fitting lung boundary. The performance of the new technique was compared against a standard approach using a simple fixed threshold of -400 HU followed by region growing. The two techniques were evaluated against manual reference segmentations using a volumetric overlap fraction (VOF), and the adaptive threshold technique was found to be significantly better than the fixed threshold technique.
Weak Value Amplification of a Post-Selected Single Photon
NASA Astrophysics Data System (ADS)
Hallaji, Matin
Weak value amplification (WVA) is a measurement technique in which the effect of a pre- and post-selected system on a weakly interacting probe is magnified. In this thesis, I present the first experimental observation of WVA of a single photon. We observed that a signal photon --- sent through a polarization interferometer and post-selected by photodetection in the almost-dark port --- can act like eight photons. The effect of this single photon is measured as a nonlinear phase shift on a separate laser beam. The interaction between the two is mediated by a sample of laser-cooled 85Rb atoms. Electromagnetically induced transparency (EIT) is used to enhance the nonlinearity and overcome resonant absorption. I believe this work to be the first demonstration of WVA where a deterministic interaction is used to entangle two distinct optical systems. In WVA, the amplification is contingent on discarding a large portion of the original data set. While amplification increases measurement sensitivity, discarding data worsens it. Questioning whether these competing effects conspire to improve or diminish measurement accuracy has recently resulted in controversy. I address this question by calculating the maximum amount of information achievable with the WVA technique. By comparing this information to that achievable by the standard technique, where no post-selection is employed, I show that the WVA technique can be advantageous under a certain class of noise models. Finally, I propose a way to optimally apply the WVA technique.
NASA Astrophysics Data System (ADS)
Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang
2017-01-01
Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model. Thus, these features might not be the best for a non-linear classifier. This is especially crucial for tasks in which the performance is heavily dependent on the feature selection techniques, like the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use data acquired from multi-modal neuroimaging to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database, and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
Mushkudiani, Nino A; Hukkelhoven, Chantal W P M; Hernández, Adrián V; Murray, Gordon D; Choi, Sung C; Maas, Andrew I R; Steyerberg, Ewout W
2008-04-01
To describe the modeling techniques used for early prediction of outcome in traumatic brain injury (TBI) and to identify aspects for potential improvements. We reviewed key methodological aspects of studies published between 1970 and 2005 that proposed a prognostic model for the Glasgow Outcome Scale of TBI based on admission data. We included 31 papers. Twenty-four were single-center studies, and 22 reported on fewer than 500 patients. The median of the number of initially considered predictors was eight, and on average five of these were selected for the prognostic model, generally including age, Glasgow Coma Score (or only motor score), and pupillary reactivity. The most common statistical technique was logistic regression with stepwise selection of predictors. Model performance was often quantified by accuracy rate rather than by more appropriate measures such as the area under the receiver-operating characteristic curve. Model validity was addressed in 15 studies, but mostly used a simple split-sample approach, and external validation was performed in only four studies. Although most models agree on the three most important predictors, many were developed on small sample sizes within single centers and hence lack generalizability. Modeling strategies have to be improved, and include external validation.
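A minimal sketch contrasting accuracy with the area under the ROC curve for a logistic-regression prognostic model built on the three core predictors named above (age, motor score, pupillary reactivity); the synthetic data generator and coefficients are assumptions.

```python
# Minimal sketch: logistic-regression prognosis evaluated by accuracy and by the
# area under the ROC curve (the latter is the more appropriate measure).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 800
age = rng.uniform(15, 80, n)
motor = rng.integers(1, 7, n)                  # Glasgow motor score 1-6 (synthetic)
pupils = rng.integers(0, 3, n)                 # 0/1/2 reactive pupils (synthetic)
logit = -0.04 * age + 0.6 * motor + 0.8 * pupils - 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, motor, pupils])
model = LogisticRegression(max_iter=1000)
print("accuracy:", cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
print("AUC     :", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```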
Gebauer, Petr; Malá, Zdena; Boček, Petr
2014-03-01
This contribution is the third part of the project on strategies used in the selection and tuning of electrolyte systems for anionic ITP with ESI-MS detection. The strategy presented here is based on the creation of self-maintained ITP subsystems in moving-boundary systems and describes two new principal approaches offering physical separation of analyte zones from their common ITP stack and/or simultaneous selective stacking of two different analyte groups. Both strategic directions are based on extending the number of components forming the electrolyte system by adding a third suitable anion. The first method is the application of the spacer technique to moving-boundary anionic ITP systems, the second method is a technique utilizing a moving-boundary ITP system in which two ITP subsystems exist and move with mutually different velocities. It is essential for ESI detection that both methods can be based on electrolyte systems containing only several simple chemicals, such as simple volatile organic acids (formic and acetic) and their ammonium salts. The properties of both techniques are defined theoretically and discussed from the viewpoint of their applicability to trace analysis by ITP-ESI-MS. Examples of system design for selected model separations of preservatives and pharmaceuticals illustrate the validity of the theoretical model and application potential of the proposed techniques by both computer simulations and experiments. Both new methods enhance the application range of ITP-MS and may be beneficial particularly for complex multicomponent samples or for analytes with identical molecular mass. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix becomes prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called the adaptive margin slack minimization to iteratively improve the classification accuracy by an adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: the memory efficient sequential processing for sequential data processing and the parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compared with the state-of-the-art techniques to verify the validity of the proposed techniques.
NASA Astrophysics Data System (ADS)
Krawczyk, Piotr; Badyda, Krzysztof
2011-12-01
The paper presents key assumptions of the mathematical model which describes heat and mass transfer phenomena in a solar sewage drying process, as well as techniques used for solving this model with the Fluent computational fluid dynamics (CFD) software. Special attention was paid to the implementation of boundary conditions on the sludge surface, which is the physical boundary between the gaseous phase (air) and the solid phase (dried matter). Those conditions allow modelling of heat and mass transfer between the media during the first and second drying stages. Selection of the computational geometry is also discussed; it is a fragment of the entire drying facility. Selected modelling results are presented in the final part of the paper.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that can directly influence the process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses the traditional technique to obtain those values. Lack of knowledge of optimization techniques is the main reason this issue persists. Therefore, the simple yet easy-to-implement Optimal Cutting Parameters Selection System is introduced to help the manufacturer easily understand and determine the best optimal parameters for their turning operation. This new system consists of two stages: modelling and optimization. In modelling of input-output and in-process parameters, a hybrid of the Extreme Learning Machine and Particle Swarm Optimization is applied. This modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can reduce the gap between the academic world and industry by introducing a simple yet easy-to-implement optimization technique. This novel optimization technique gives accurate results while also being fast.
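A minimal sketch of the optimisation stage: a particle swarm searches two cutting parameters over a simple quadratic surrogate that stands in for the trained ELM model; the bounds, coefficients, and PSO constants are illustrative assumptions.

```python
# Minimal particle swarm optimisation sketch over two cutting parameters
# (cutting speed, feed rate). The quadratic surrogate stands in for the ELM model.
import numpy as np

def surrogate_roughness(x):                     # stand-in for the ELM prediction
    v, f = x[..., 0], x[..., 1]
    return 0.002 * (v - 180) ** 2 + 40 * (f - 0.12) ** 2 + 1.5

rng = np.random.default_rng(7)
lo, hi = np.array([100.0, 0.05]), np.array([300.0, 0.30])
pos = rng.uniform(lo, hi, size=(30, 2))         # 30 particles
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate_roughness(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.uniform(size=(2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = surrogate_roughness(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()]

print("optimal (speed, feed):", gbest, "predicted roughness:", surrogate_roughness(gbest))
```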
ERIC Educational Resources Information Center
Dirks, Melanie A.; De Los Reyes, Andres; Briggs-Gowan, Margaret; Cella, David; Wakschlag, Lauren S.
2012-01-01
This paper examines the selection and use of multiple methods and informants for the assessment of disruptive behavior syndromes and attention deficit/hyperactivity disorder, providing a critical discussion of (a) the bidirectional linkages between theoretical models of childhood psychopathology and current assessment techniques; and (b) current…
The 2.4 μm Galaxy Luminosity Function As Measured Using WISE. I. Measurement Techniques
NASA Astrophysics Data System (ADS)
Lake, S. E.; Wright, E. L.; Tsai, C.-W.; Lam, A.
2017-04-01
The astronomy community has at its disposal a large back catalog of public spectroscopic galaxy redshift surveys that can be used for the measurement of luminosity functions (LFs). Utilizing the back catalog with new photometric surveys to maximum efficiency requires modeling the color selection bias imposed on the selection of target galaxies by flux limits at multiple wavelengths. The likelihood derived herein can address, in principle, all possible color selection biases through the use of a generalization of the LF, Φ(L), over the space of all spectra: the spectro-luminosity functional, Ψ[L_ν]. It is, therefore, the first estimator capable of simultaneously analyzing multiple redshift surveys in a consistent way. We also propose a new way of parametrizing the evolution of the classic Schechter function parameters, L⋆ and φ⋆, that improves both the physical realism and statistical performance of the model. The techniques derived in this paper are used in a companion paper by Lake et al. to measure the LF of galaxies at the rest-frame wavelength of 2.4 μm using the Wide-field Infrared Survey Explorer (WISE).
Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi
2007-01-01
The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to estimate the differences in variance and information content of the various attributes and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used to create various semi-empirical models. Finally, a robust classification model was obtained that assigns selected attributes of solid compounds, used as input, to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
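For readers unfamiliar with information-criterion-based model averaging, the following minimal sketch (with made-up criterion values) shows the usual conversion of AIC/BIC-type scores into averaging weights, and why even modest differences in criterion values already push one weight toward 100%.

    import numpy as np

    def ic_weights(ic_values):
        # Convert information-criterion values (smaller is better) into
        # model averaging weights via w_k proportional to exp(-delta_IC_k / 2).
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # hypothetical criterion values for three alternative conceptual models
    print(ic_weights([230.4, 238.9, 251.2]))  # the first model dominates the weights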
A Unified Data Assimilation Strategy for Regional Coupled Atmosphere-Ocean Prediction Systems
NASA Astrophysics Data System (ADS)
Xie, Lian; Liu, Bin; Zhang, Fuqing; Weng, Yonghui
2014-05-01
Improving tropical cyclone (TC) forecasts is a top priority in weather forecasting. Assimilating various observational data to produce better initial conditions for numerical models using advanced data assimilation techniques has been shown to benefit TC intensity forecasts, whereas assimilating large-scale environmental circulation into regional models by spectral nudging or Scale-Selective Data Assimilation (SSDA) has been demonstrated to improve TC track forecasts. Meanwhile, taking into account various air-sea interaction processes through high-resolution coupled air-sea modelling systems has also been shown to improve TC intensity forecasts. Despite the advances in data assimilation and air-sea coupled models, large errors in TC intensity and track forecasting remain. For example, Hurricane Nate (2011) brought a considerable challenge for the TC operational forecasting community, with very large intensity forecast errors (27, 25, and 40 kts at 48, 72, and 96 h, respectively) for the official forecasts. Considering the slow-moving nature of Hurricane Nate, it is reasonable to hypothesize that air-sea interaction processes played a critical role in the intensity change of the storm, and accurate representation of the upper-ocean dynamics and thermodynamics is necessary to quantitatively describe these air-sea interaction processes. Currently, data assimilation techniques are generally applied to hurricane forecasting only in stand-alone atmospheric or oceanic models; in fact, most regional hurricane forecasting models include data assimilation only for improving the initial condition of the atmospheric model. In such a situation, the benefit of adjustments in one model (atmospheric or oceanic) obtained by assimilating observational data can be compromised by errors from the other model. Thus, unified data assimilation techniques for coupled air-sea modelling systems, which not only simultaneously assimilate atmospheric and oceanic observations into the coupled system but also nudge the large-scale environmental flow in the regional model towards global model forecasts, are of increasing necessity. In this presentation, we outline a strategy for an integrated approach to air-sea coupled data assimilation and discuss its benefits and feasibility based on incremental results for selected historical hurricane cases.
NLOS Correction/Exclusion for GNSS Measurement Using RAIM and City Building Models
Hsu, Li-Ta; Gu, Yanlei; Kamijo, Shunsuke
2015-01-01
Currently, global navigation satellite system (GNSS) receivers can provide accurate and reliable positioning service in open-field areas. However, their performance in the downtown areas of cities is still affected by multipath and non-line-of-sight (NLOS) receptions. This paper proposes a new positioning method using 3D building models and the receiver autonomous integrity monitoring (RAIM) satellite selection method to achieve satisfactory positioning performance in urban areas. The 3D building model uses a ray-tracing technique to simulate the line-of-sight (LOS) and NLOS signal travel distance, well known as the pseudorange, between the satellite and receiver. The proposed RAIM fault detection and exclusion (FDE) compares the similarity between the raw pseudorange measurement and the simulated pseudorange. The measurement of a satellite is excluded if the simulated and raw pseudoranges are inconsistent. Because of the assumption of a single reflection in the ray-tracing technique, an inconsistent case indicates a double or multiply reflected NLOS signal. According to the experimental results, the RAIM satellite selection technique reduces the number of positioning solutions with large errors (solutions estimated on the wrong side of the road) by about 8.4% and 36.2% for the 3D building model method in the middle and deep urban canyon environments, respectively. PMID:26193278
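A simplified sketch of the consistency test described above, assuming arrays of raw and ray-tracing-simulated pseudoranges for a candidate position are already available; the 30 m threshold and the numerical values are illustrative, not the paper's.

    import numpy as np

    def exclude_inconsistent(raw_pr, simulated_pr, threshold_m=30.0):
        # Keep satellites whose raw pseudorange agrees with the single-reflection
        # ray-tracing prediction; flag the rest as likely multiply reflected NLOS
        # signals to be excluded from the position solution.
        raw_pr = np.asarray(raw_pr)
        simulated_pr = np.asarray(simulated_pr)
        residual = np.abs(raw_pr - simulated_pr)
        return residual < threshold_m

    raw = [21_500_432.1, 22_104_880.7, 20_988_101.3]   # metres, made-up values
    sim = [21_500_440.5, 22_105_020.2, 20_988_095.8]
    print(exclude_inconsistent(raw, sim))              # -> [ True False  True]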
Label fusion based brain MR image segmentation via a latent selective model
NASA Astrophysics Data System (ADS)
Liu, Gang; Guo, Xiantang; Zhu, Kai; Liao, Hengxu
2018-04-01
Multi-atlas segmentation is an effective and increasingly popular approach for automatically labeling objects of interest in medical images. Recently, segmentation methods based on generative models and patch-based techniques have become the two principal branches of label fusion. However, these generative models and patch-based techniques are only loosely related, and the requirements for higher accuracy, faster segmentation, and robustness remain a great challenge. In this paper, we propose a novel algorithm that combines the two branches using a global weighted fusion strategy based on a patch latent selective model to perform segmentation of specific anatomical structures in human brain magnetic resonance (MR) images. In establishing this probabilistic model of label fusion between the target patch and the patch dictionary, we explored the Kronecker delta function in the label prior, which is more suitable than other models, and designed a latent selective model as a membership prior to determine from which training patch the intensity and label of the target patch are generated at each spatial location. Because the image background is an equally important factor for segmentation, it is analyzed in the label fusion procedure and treated as an isolated label so that the background and the regions of interest have the same status. During label fusion with the global weighted fusion scheme, we use Bayesian inference and the expectation-maximization algorithm to estimate the labels of the target scan and produce the segmentation map. Experimental results indicate that the proposed algorithm is more accurate and robust than the other segmentation methods.
Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De
2016-01-01
The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural networks (NN), classification and regression trees (CART), and support vector machines (SVM). The samples of this study include 48 GCD listed companies and 124 non-GCD (NGCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation to assess the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96% (Type I error rate 12.22%; Type II error rate 7.50%), the prediction accuracy of the LASSO-CART model is 88.75% (Type I error rate 13.61%; Type II error rate 14.17%), and the prediction accuracy of the LASSO-SVM model is 89.79% (Type I error rate 10.00%; Type II error rate 15.83%).
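A hedged sketch of this two-step idea on synthetic data rather than the TEJ financial ratios: an L1-penalised logistic regression stands in for the LASSO selection step (the outcome is binary), followed by an SVM evaluated with fivefold cross validation.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in for the GCD/NGCD financial-ratio data.
    X, y = make_classification(n_samples=172, n_features=40, n_informative=8,
                               weights=[0.72, 0.28], random_state=0)

    # Step 1: L1-penalised variable selection; step 2: SVM classifier.
    model = make_pipeline(
        StandardScaler(),
        SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
        SVC(kernel="rbf", C=1.0),
    )
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(acc.mean())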
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modelling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion which may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear aeroelastic systems. The LASSO minimises the residual sum of squares by the addition of an ℓ1 penalty term on the parameter vector of the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 Active Aeroelastic Wing using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
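The following toy sketch illustrates the same ℓ1-based structure detection idea on a synthetic polynomial regressor dictionary; it is not the F/A-18 aeroelastic data or the paper's exact formulation. Terms whose coefficients are shrunk exactly to zero are dropped from the model structure.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    u = rng.uniform(-1, 1, size=(400, 2))                  # toy input signals
    y = 1.5 * u[:, 0] - 0.8 * u[:, 0] * u[:, 1] + 0.05 * rng.standard_normal(400)

    # Full candidate dictionary of polynomial terms up to degree 3.
    dictionary = PolynomialFeatures(degree=3, include_bias=False)
    X = dictionary.fit_transform(u)

    # The l1 penalty drives most coefficients exactly to zero, yielding a
    # parsimonious structure; alpha is an illustrative choice.
    lasso = Lasso(alpha=0.01).fit(X, y)
    selected = [name for name, c in zip(dictionary.get_feature_names_out(), lasso.coef_)
                if abs(c) > 1e-6]
    print(selected)    # expected to recover terms close to x0 and x0*x1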
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, P.; Schultz, C.; Larsen, S.
1997-07-15
Monitoring of a CTBT will require transportable seismic identification techniques, especially in regions where there is limited data. Unfortunately, most existing techniques are empirical and can not be used reliably in new regions. Our goal is to help develop transportable regional identification techniques by improving our ability to predict the behavior of regional phases and discriminants in diverse geologic regions and in regions with little or no data. Our approach is to use numerical modeling to understand the physical basis for regional wave propagation phenomena and to use this understanding to help explain observed behavior of regional phases and discriminants. In this paper, we focus on results from simulations of data in selected regions and investigate the sensitivity of these regional simulations to various features of the crustal structure. Our initial models use teleseismically estimated source locations, mechanisms, and durations and seismological structures that have been determined by others. We model the Mb 5.9, October 1992, Cairo Egypt earthquake at a station at Ankara Turkey (ANTO) using a two-dimensional crustal model consisting of a water layer over a deep sedimentary basin with a thinning crust beneath the basin. Despite the complex tectonics of the Eastern Mediterranean region, we find surprisingly good agreement between the observed data and synthetics based on this relatively smooth two-dimensional model.
Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao
2015-01-01
Objectives: In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods: Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods, including conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, the methods were compared in an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results: The proposed procedures produced comparable or less biased selection results than conventional variable selection models. Overall, the two newly proposed procedures were stable across the various simulation scenarios, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures, which yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions: The newly proposed procedures improve the identification of significant variables and provide new insight into epidemiological association analysis. PMID:26214802
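A minimal sketch of the bootstrap-ranking idea on synthetic data: an L1-penalised model is refit on bootstrap resamples and variables are ranked by how often they are selected. The penalty strength, resample count, and any frequency cut-off are illustrative assumptions rather than the paper's settings.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    def bootstrap_ranking(X, y, n_boot=100, C=0.3, seed=0):
        # Rank predictors by how often an L1-penalised model selects them
        # across bootstrap resamples (a stability/ranking heuristic).
        rng = np.random.default_rng(seed)
        n, p = X.shape
        counts = np.zeros(p)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)                    # bootstrap resample
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=C)
            clf.fit(X[idx], y[idx])
            counts += np.abs(clf.coef_.ravel()) > 1e-8     # which variables survived
        return counts / n_boot                             # selection frequency per variable

    # Synthetic stand-in for the survey data: 25 candidate exposures, 5 informative.
    X, y = make_classification(n_samples=300, n_features=25, n_informative=5, random_state=0)
    freq = bootstrap_ranking(X, y)
    print(np.argsort(freq)[::-1][:5])    # indices of the most stably selected predictors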
Use of system identification techniques for improving airframe finite element models using test data
NASA Technical Reports Server (NTRS)
Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.
1993-01-01
A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
Discrimination of dynamical system models for biological and chemical processes.
Lorenz, Sönke; Diederichs, Elmar; Telgmann, Regina; Schütte, Christof
2007-06-01
In technical chemistry, systems biology and biotechnology, the construction of predictive models has become an essential step in process design and product optimization. Accurate modelling of the reactions requires detailed knowledge about the processes involved. However, when concerned with the development of new products and production techniques, for example, this knowledge often is not available due to the lack of experimental data. Thus, when one has to work with a selection of proposed models, one of the main tasks of early development is to discriminate between these models. In this article, a new statistical approach to model discrimination is described that ranks models with respect to the probability with which they reproduce the given data. The article introduces the new approach, discusses its statistical background, presents numerical techniques for its implementation and illustrates the application to examples from biokinetics.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, several techniques are used to construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on the selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models, from which the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
Murrell, Daniel S; Cortes-Ciriano, Isidro; van Westen, Gerard J P; Stott, Ian P; Bender, Andreas; Malliavin, Thérèse E; Glen, Robert C
2015-01-01
In silico predictive models have proved to be valuable for the optimisation of compound potency, selectivity and safety profiles in the drug discovery process. camb is an R package that provides an environment for the rapid generation of quantitative Structure-Property and Structure-Activity models for small molecules (including QSAR, QSPR, QSAM, PCM) and is aimed at both advanced and beginner R users. camb's capabilities include the standardisation of chemical structure representation, computation of 905 one-dimensional and 14 fingerprint-type descriptors for small molecules, 8 types of amino acid descriptors, 13 whole-protein sequence descriptors, filtering methods for feature selection, generation of predictive models (using an interface to the R package caret), and the creation of model ensembles (using the R package caretEnsemble). Results can be visualised through high-quality, customisable plots (R package ggplot2). Overall, camb constitutes an open-source framework to perform the following steps: (1) compound standardisation, (2) molecular and protein descriptor calculation, (3) descriptor pre-processing and model training, visualisation and validation, and (4) bioactivity/property prediction for new molecules. camb aims to speed up model generation and to provide reproducibility and tests of robustness. QSPR and proteochemometric case studies are included which demonstrate camb's application. Graphical abstract: From compounds and data to models, a complete model building workflow in one package.
Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.
Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A
2017-04-15
Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system applied in the monitoring of the fermentation process of the cider produced in the Basque Country (Spain). The main parameters monitored included alcoholic proof, l-lactic acid content, glucose+fructose and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens Uncertainty test, interval Partial Least Squares regression (iPLS) and a Genetic Algorithm (GA). This procedure arises from the need to improve the prediction ability of the calibration models for cider monitoring.
Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing
NASA Astrophysics Data System (ADS)
Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel
Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show that, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
Researching Mental Health Disorders in the Era of Social Media: Systematic Review
Vadillo, Miguel A; Curcin, Vasa
2017-01-01
Background: Mental illness is quickly becoming one of the most prevalent public health problems worldwide. Social network platforms, where users can express their emotions, feelings, and thoughts, are a valuable source of data for researching mental health, and techniques based on machine learning are increasingly used for this purpose. Objective: The objective of this review was to explore the scope and limits of cutting-edge techniques that researchers are using for predictive analytics in mental health and to review associated issues, such as ethical concerns, in this area of research. Methods: We performed a systematic literature review in March 2017, using keywords to search articles on data mining of social network data in the context of common mental health disorders, published between 2010 and March 8, 2017 in medical and computer science journals. Results: The initial search returned a total of 5386 articles. Following a careful analysis of the titles, abstracts, and main texts, we selected 48 articles for review. We coded the articles according to key characteristics, techniques used for data collection, data preprocessing, feature extraction, feature selection, model construction, and model verification. The most common analytical method was text analysis, with several studies using different flavors of image analysis and social interaction graph analysis. Conclusions: Despite an increasing number of studies investigating mental health issues using social network data, some common problems persist. Assembling large, high-quality datasets of social media users with mental disorders is problematic, not only due to biases associated with the collection methods, but also with regard to managing consent and selecting appropriate analytics techniques. PMID:28663166
R symmetries and a heterotic MSSM
NASA Astrophysics Data System (ADS)
Kappl, Rolf; Nilles, Hans Peter; Schmitz, Matthias
2015-02-01
We employ powerful techniques based on Hilbert and Gröbner bases to analyze particle physics models derived from string theory. Individual models are shown to have a huge landscape of vacua that differ in their phenomenological properties. We explore the (discrete) symmetries of these vacua, the new R symmetry selection rules and their consequences for moduli stabilization.
Modeling the Stress Complexities of Teaching and Learning of School Physics in Nigeria
ERIC Educational Resources Information Center
Emetere, Moses E.
2014-01-01
This study was designed to investigate the validity of the stress complexity model (SCM) to teaching and learning of school physics in Abuja municipal area council of Abuja, North. About two hundred students were randomly selected by a simple random sampling technique from some schools within the Abuja municipal area council. A survey research…
Spatio-temporal behaviour of medium-range ensemble forecasts
NASA Astrophysics Data System (ADS)
Kipling, Zak; Primo, Cristina; Charlton-Perez, Andrew
2010-05-01
Using the recently-developed mean-variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, we present an analysis of the spatio-temporal dynamics of their perturbations, and show how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. We also consider the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. We conclude by looking at how the MVL technique might assist in selecting models for inclusion in a multi-model ensemble, and suggest an experiment to test its potential in this context.
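For orientation, a point on an MVL diagram can be computed roughly as in the sketch below: take the logarithm of the absolute perturbation amplitude over the grid and record its spatial mean and variance at each lead time. The random fields here are stand-ins for real ensemble perturbations, and the small offset eps is an illustrative numerical safeguard.

    import numpy as np

    def mvl_point(perturbation, eps=1e-12):
        # Mean and variance of the logarithm of |perturbation| over the grid,
        # giving one point on the MVL diagram for a single lead time.
        logp = np.log(np.abs(perturbation) + eps)
        return logp.mean(), logp.var()

    rng = np.random.default_rng(0)
    for lead in range(1, 4):                                   # toy forecast lead times
        pert = rng.normal(scale=0.1 * lead, size=(90, 180))    # stand-in perturbation field
        m, v = mvl_point(pert)
        print(lead, round(m, 3), round(v, 3))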
Spitzer Imaging of Planck-Herschel Dusty Proto-Clusters at z=2-3
NASA Astrophysics Data System (ADS)
Cooray, Asantha; Ma, Jingzhe; Greenslade, Joshua; Kubo, Mariko; Nayyeri, Hooshang; Clements, David; Cheng, Tai-An
2018-05-01
We have recently introduced a new proto-cluster selection technique by combining Herschel/SPIRE imaging data and the Planck/HFI all-sky survey point source catalog. These sources are identified as Planck point sources coinciding with clumps of Herschel source over-densities with far-IR colors comparable to z=0 ULIRGs redshifted to z=2 to 3. The selection is sensitive to dusty starbursts and obscured QSOs, and we have recovered a couple of the known proto-clusters and close to 30 new proto-clusters. The candidate proto-clusters selected with this technique have far-IR flux densities several times higher than those that are optically selected, such as with LBG selection, implying that the member galaxies are in a special phase of heightened dusty starburst and dusty QSO activity. This far-IR luminous phase may be short but is likely a necessary piece for understanding the whole stellar mass assembly history of clusters. Moreover, our proto-clusters are missed in optical selections, suggesting that optically selected proto-clusters alone do not provide adequate statistics, and a comparison of the far-IR and optically selected clusters may reveal the importance of dusty stellar mass assembly. Here, we propose IRAC observations of six of the highest priority new proto-clusters, to establish the validity of the technique and to determine the total stellar mass through SED models. For a modest observing time, the science program will have a substantial impact on an upcoming science topic in cosmology, with implications for observations with JWST and WFIRST to understand mass assembly in the universe.
Estimation of end point foot clearance points from inertial sensor data.
Santhiranayagam, Braveena K; Lai, Daniel T H; Begg, Rezaul K; Palaniswami, Marimuthu
2011-01-01
Foot clearance parameters provide useful insight into tripping risks during walking. This paper proposes a technique for estimating key foot clearance parameters using inertial sensor (accelerometer and gyroscope) data. Fifteen features were extracted from the raw inertial sensor measurements, and a regression model was used to estimate two key foot clearance parameters: the first maximum vertical clearance (mx1) after toe-off and the minimum toe clearance (MTC) of the swing foot. Comparisons are made against measurements obtained using an optoelectronic motion capture system (Optotrak) at 4 different walking speeds. General Regression Neural Networks (GRNN) were used to estimate the desired parameters from the sensor features. Eight subjects' foot clearance data were examined, and a leave-one-subject-out (LOSO) method was used to select the best model. The best average root mean square errors (RMSE) across all subjects, obtained using all sensor features at the maximum speed, were 5.32 mm for mx1 and 4.04 mm for MTC. Further application of a hill-climbing feature selection technique resulted in a 0.54-21.93% improvement in RMSE while requiring fewer input features. The results demonstrate that raw inertial sensor data used with regression models and feature selection can accurately estimate key foot clearance parameters.
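A compact sketch of a GRNN in its Nadaraya-Watson form, mapping a feature vector to a clearance estimate; the data below are random stand-ins for the fifteen inertial-sensor features and the smoothing parameter sigma is an illustrative choice, not the value tuned in the study.

    import numpy as np

    def grnn_predict(X_train, y_train, X_test, sigma=0.5):
        # General Regression Neural Network: Gaussian-kernel weighted average
        # of the training targets (Nadaraya-Watson form).
        d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        return (w @ y_train) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 15))     # 15 hypothetical inertial features per stride
    y_train = 20 + 5 * X_train[:, 0] + rng.normal(scale=1.0, size=200)   # toy MTC (mm)
    X_test = rng.normal(size=(20, 15))
    print(grnn_predict(X_train, y_train, X_test)[:5])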
Linear genetic programming application for successive-station monthly streamflow prediction
NASA Astrophysics Data System (ADS)
Danandeh Mehr, Ali; Kahya, Ercan; Yerdelen, Cahit
2014-09-01
In recent decades, artificial intelligence (AI) techniques have emerged as a branch of computer science for modelling a wide range of hydrological phenomena. A number of studies have compared these techniques in order to find more effective approaches in terms of accuracy and applicability. In this study, we examined the ability of the linear genetic programming (LGP) technique to model the successive-station monthly streamflow process, as an applied alternative for streamflow prediction. A comparative efficiency study between LGP and three different artificial neural network algorithms, namely feed forward back propagation (FFBP), generalized regression neural networks (GRNN), and radial basis function (RBF), is also presented. To this end, we first put forward six different successive-station monthly streamflow prediction scenarios trained by LGP and FFBP using the field data recorded at two gauging stations on the Çoruh River, Turkey. Based on Nash-Sutcliffe and root mean squared error measures, we then compared the efficiency of these techniques and selected the best prediction scenario. Finally, the GRNN and RBF algorithms were utilized to restructure the selected scenario and were compared with the corresponding FFBP and LGP models. Our results indicate the promising role of LGP for successive-station monthly streamflow prediction, providing more accurate results than all of the ANN algorithms. We found an explicit LGP-based expression, evolved using only the basic arithmetic functions, to be the best prediction model for the river; it uses the records of both the target and upstream stations.
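For reference, the two efficiency measures used in the comparison can be computed as below; the monthly flow values are hypothetical, not records from the Çoruh River stations.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        # Nash-Sutcliffe efficiency: 1 means perfect fit, 0 means no better
        # than predicting the observed mean.
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((obs - sim) ** 2))

    # hypothetical observed and predicted monthly flows (m^3/s)
    obs = [55.2, 80.1, 120.4, 95.0, 60.3, 40.8]
    sim = [50.9, 85.6, 110.2, 99.7, 58.1, 45.0]
    print(nash_sutcliffe(obs, sim), rmse(obs, sim))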
NASA Technical Reports Server (NTRS)
Jennings, W. P.; Olsen, N. L.; Walter, M. J.
1976-01-01
The development of testing techniques useful in airplane ground resonance testing, wind tunnel aeroelastic model testing, and airplane flight flutter testing is presented, including consideration of impulsive excitation, steady-state sinusoidal excitation, and random and pseudorandom excitation. Reasons for the selection of fast sine sweeps for transient excitation are given. The use of the fast Fourier transform dynamic analyzer (HP-5451B) is presented, together with a curve-fitting data process in the Laplace domain to experimentally evaluate values of generalized mass, modal frequencies, dampings, and mode shapes. The effects of poor signal-to-noise ratios due to turbulence creating data variance are discussed. Data manipulation techniques used to overcome variance problems are also included. The experience gained by using these techniques since the early stages of the SST program is described. Data measured during 747 flight flutter tests, and SST, YC-14, and 727 empennage flutter model tests are included.
A data-driven multi-model methodology with deep feature selection for short-term wind forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias
With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. Then, a blending algorithm is applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and to generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks. The effectiveness of the proposed methodology is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to the single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves the forecasting accuracy by up to 30%.
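A hedged sketch of the two-layer idea using scikit-learn's stacking utility as a stand-in for the paper's blending algorithm, trained on synthetic features rather than the Surface Radiation network data; the choice of first-layer learners and the ridge blender are illustrative assumptions.

    from sklearn.datasets import make_regression
    from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                                  StackingRegressor)
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsRegressor

    # Synthetic stand-in for lagged wind-speed features and the 1-hour-ahead target.
    X, y = make_regression(n_samples=2000, n_features=12, noise=5.0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Layer 1: several statistically different learners; layer 2: a blending model.
    ensemble = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("gbm", GradientBoostingRegressor(random_state=0)),
                    ("knn", KNeighborsRegressor(n_neighbors=10))],
        final_estimator=RidgeCV(),
    )
    ensemble.fit(X_tr, y_tr)
    print(ensemble.score(X_te, y_te))    # R^2 of the blended forecast on held-out data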
NASA Technical Reports Server (NTRS)
Porro, A. R.; Hingst, W. R.; Davis, D. O.; Blair, A. B., Jr.
1991-01-01
The feasibility of using a contoured honeycomb model to generate a thick boundary layer in high-speed, compressible flow was investigated. The contour of the honeycomb was tailored to selectively remove momentum in a minimum of streamwise distance to create an artificially thickened turbulent boundary layer. Three wind tunnel experiments were conducted to verify the concept. Results indicate that this technique is a viable concept, especially for high-speed inlet testing applications. In addition, the compactness of the honeycomb boundary layer simulator allows relatively easy integration into existing wind tunnel model hardware.
NASA Astrophysics Data System (ADS)
Joglekar, Prasad; Lim, Lawrence; Kalaskar, Sushant; Shastry, Karthik; Satyal, Suman; Weiss, Alexander
2010-10-01
Time of Flight Positron Annihilation Induced Auger Electron Spectroscopy (TOF-PAES) is a surface analytical technique with high surface selectivity. Almost 95% of the PAES signal originates from the sample's topmost layer due to the trapping of positrons just above the surface in an image-potential well before annihilation. This talk presents a description of the TOF technique as well as the results of the modeling of charged particle transport used in the design of the 2 meter TOF-PAES system currently under construction at UTA.
ERIC Educational Resources Information Center
Elliott, Shannon Snyder
2007-01-01
The purpose of this study is to first develop an 8-week college teaching module based on root competition literature. The split-root technique is adapted for the teaching laboratory, and the Sugar Ann English pea (Pisum sativum var. Sugar Ann English) is selected as the species of interest prior to designing experiments, either original or…
2006-11-26
with controlled micro and nanostructure for highly selective, high sensitivity assays. The process was modeled and a procedure for fabricating SERS...small volumes with controlled micro and nanostructure for highly selective, high sensitivity assays. We proved the feasibility of the technique and...films templated by colloidal crystals. The control over the film structure allowed optimizing their performance for potential sensor applications. The
NASA Technical Reports Server (NTRS)
Reid, G. F.
1976-01-01
A technique is presented for determining state variable feedback gains that will place both the poles and zeros of a selected transfer function of a dual-input control system at pre-determined locations in the s-plane. Leverrier's algorithm is used to determine the numerator and denominator coefficients of the closed-loop transfer function as functions of the feedback gains. The values of gain that match these coefficients to those of a pre-selected model are found by solving two systems of linear simultaneous equations. The algorithm has been used in a computer simulation of the CH-47 helicopter to control longitudinal dynamics.
Study of curved and planar frequency-selective surfaces with nonplanar illumination
NASA Technical Reports Server (NTRS)
Caroglanian, Armen; Webb, Kevin J.
1991-01-01
A locally planar technique (LPT) is investigated for determining the forward-scattered field from a generally shaped inductive frequency-selective surface (FSS) with nonplanar illumination. The results of an experimental study are presented to assess the LPT accuracy. The effects of a nonplanar incident field are determined by comparing the LPT numerical results with a series of experiments with the feed source placed at varying distances from the planar FSS. The limitations of the LPT model due to surface curvature are investigated in an experimental study of the scattered fields from a set of hyperbolic cylinders of different curvatures. From these comparisons, guidelines for applying the locally planar technique are developed.
Ötvös, Sándor B; Mándity, István M; Fülöp, Ferenc
2011-08-01
A simple and efficient flow-based technique is reported for the catalytic deuteration of several model nitrogen-containing heterocyclic compounds which are important building blocks of pharmacologically active materials. A continuous flow reactor was used in combination with on-demand pressure-controlled electrolytic D2 production. The D2 source was D2O, the consumption of which was very low. The experimental set-up allows the fine-tuning of pressure, temperature, and flow rate so as to determine the optimal conditions for the deuteration reactions. The described procedure lacks most of the drawbacks of the conventional batch deuteration techniques, and additionally is highly selective and reproducible.
NASA Astrophysics Data System (ADS)
Goudarzi, Nasser
2016-04-01
In this work, two new and powerful chemometric methods are applied for the modeling and prediction of the 19F chemical shift values of some fluorinated organic compounds. The radial basis function-partial least squares (RBF-PLS) and random forest (RF) methods are employed to construct models to predict the 19F chemical shifts. No separate variable selection method was used in this study, since the RF method can serve as both a variable selection and a modeling technique. The effects of the important parameters affecting the RF prediction power, such as the number of trees (nt) and the number of randomly selected variables used to split each node (m), were investigated. The root-mean-square errors of prediction (RMSEP) for the training set and the prediction set for the RBF-PLS and RF models were 44.70, 23.86, 29.77, and 23.69, respectively. Also, the correlation coefficients of the prediction set for the RBF-PLS and RF models were 0.8684 and 0.9313, respectively. The results reveal that the RF model can be used as a powerful chemometric tool for quantitative structure-property relationship (QSPR) studies.
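A brief sketch of the RF part using scikit-learn, where nt maps to n_estimators and m to max_features; the descriptors and shift values are random stand-ins, and the parameter values are illustrative rather than those tuned in the study.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 40))                    # stand-in molecular descriptors
    y = X[:, :3] @ [30.0, -15.0, 8.0] + rng.normal(scale=5.0, size=300)   # toy 19F shifts

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # nt -> n_estimators, m -> max_features (variables tried at each split)
    rf = RandomForestRegressor(n_estimators=500, max_features=13, random_state=0)
    rf.fit(X_tr, y_tr)
    pred = rf.predict(X_te)
    print(np.sqrt(mean_squared_error(y_te, pred)))          # RMSEP on the prediction set
    print(np.argsort(rf.feature_importances_)[::-1][:5])    # RF also ranks the variables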
Jang, Dae-Heung; Anderson-Cook, Christine Michaela
2016-11-22
With many predictors in regression, fitting the full model can induce multicollinearity problems. The Least Absolute Shrinkage and Selection Operator (LASSO) is useful when the effects of many explanatory variables are sparse in a high-dimensional dataset. Influential points can have a disproportionate impact on the estimated values of model parameters. This paper describes a new influence plot that can be used to increase understanding of the contributions of individual observations and the robustness of results, and that can serve as a complement to other regression diagnostics techniques in the LASSO regression setting. Using this influence plot, we can find influential points and their impact on the shrinkage of model parameters and on model selection. We provide two examples to illustrate the methods.
NASA Astrophysics Data System (ADS)
Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.
2017-03-01
In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as the main equipment. The experimental results show significant differences in the behaviour of the plant components, mainly in terms of energy use, for each implemented technique. The effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria for selecting an appropriate stochastic predictive controller.
Targeted versus statistical approaches to selecting parameters for modelling sediment provenance
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick
2017-04-01
One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps in this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting are reviewed, and the opportunities and limitations of these approaches and some future research directions are presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties: the properties of sediment sources should remain constant or, at the very least, vary in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination statistics (e.g. the Kruskal-Wallis H-test or Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward, and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers tested their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results, and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
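As a small illustration of the first (discrimination) stage of such statistical selection, the sketch below applies a Kruskal-Wallis H-test to a single hypothetical tracer property measured in three candidate sources; the source names, concentrations, and significance level are invented for the example.

    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(0)
    # Hypothetical concentrations of one tracer property in three potential sources.
    sources = {
        "cropland": rng.normal(12.0, 2.0, 30),
        "pasture":  rng.normal(15.0, 2.5, 30),
        "channel":  rng.normal(12.3, 2.2, 30),
    }

    # Keep the tracer as a candidate only if the H-test rejects equal medians
    # across the sources (i.e. it discriminates between them).
    h, p = kruskal(*sources.values())
    print(f"H = {h:.2f}, p = {p:.4f}",
          "-> candidate tracer" if p < 0.05 else "-> rejected")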
Allen, R J; Rieger, T R; Musante, C J
2016-03-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed "virtual patients." In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations.
An integrated model for adolescent inpatient group therapy.
Garrick, D; Ewashen, C
2001-04-01
This paper proposes an integrated group therapy model to be utilized by psychiatric and mental health nurses; one innovatively designed to meet the therapeutic needs of adolescents admitted to inpatient psychiatric programs. The writers suggest a model of group therapy primarily comprised of interpersonal approaches within a feminist perspective. The proposed group focus is on active therapeutic engagement with adolescents to further interpersonal learning and to critically examine their contextualized lived experiences. Specific client and setting factors relevant to the selection of therapeutic techniques are reviewed. Selected theoretical models of group therapy are critiqued in relation to group therapy with adolescents. This integrated model of group therapy provides a safe and therapeutic forum that enriches clients' personal and interpersonal experiences as well as promotes healthy exploration, change, and empowerment.
ANALYSIS OF SAMPLING TECHNIQUES FOR IMBALANCED DATA: AN N=648 ADNI STUDY
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M.; Ye, Jieping
2013-01-01
Many neuroimaging applications deal with imbalanced imaging data. For example, in Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer’s disease (AD) patients for structural magnetic resonance imaging (MRI) modality and six times the control cases for proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over and under sampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1). a balanced training set obtained with K-Medoids technique based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2). sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results. PMID:24176869
Analysis of sampling techniques for imbalanced data: An n = 648 ADNI study.
Dubey, Rashmi; Zhou, Jiayu; Wang, Yalin; Thompson, Paul M; Ye, Jieping
2014-02-15
Many neuroimaging applications deal with imbalanced imaging data. For example, in Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the mild cognitive impairment (MCI) cases eligible for the study are nearly two times the Alzheimer's disease (AD) patients for structural magnetic resonance imaging (MRI) modality and six times the control cases for proteomics modality. Constructing an accurate classifier from imbalanced data is a challenging task. Traditional classifiers that aim to maximize the overall prediction accuracy tend to classify all data into the majority class. In this paper, we study an ensemble system of feature selection and data sampling for the class imbalance problem. We systematically analyze various sampling techniques by examining the efficacy of different rates and types of undersampling, oversampling, and a combination of over and undersampling approaches. We thoroughly examine six widely used feature selection algorithms to identify significant biomarkers and thereby reduce the complexity of the data. The efficacy of the ensemble techniques is evaluated using two different classifiers including Random Forest and Support Vector Machines based on classification accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity measures. Our extensive experimental results show that for various problem settings in ADNI, (1) a balanced training set obtained with K-Medoids technique based undersampling gives the best overall performance among different data sampling techniques and no sampling approach; and (2) sparse logistic regression with stability selection achieves competitive performance among various feature selection algorithms. Comprehensive experiments with various settings show that our proposed ensemble model of multiple undersampled datasets yields stable and promising results.
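A simplified sketch of the undersampling-plus-classification workflow on synthetic imbalanced data; plain random undersampling stands in here for the K-Medoids-based undersampling reported to work best, and the class ratio is only loosely modeled on the ADNI setting.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Imbalanced synthetic data (minority to majority roughly 1:6).
    X, y = make_classification(n_samples=1400, n_features=30, weights=[0.86, 0.14],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Balance the training set by undersampling the majority class
    # (random undersampling here; the study selects representatives via K-Medoids).
    rng = np.random.default_rng(0)
    minority = np.where(y_tr == 1)[0]
    majority = np.where(y_tr == 0)[0]
    keep = np.concatenate([minority,
                           rng.choice(majority, size=minority.size, replace=False)])

    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[keep], y_tr[keep])
    print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))   # AUC on the imbalanced test set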
A randomised approach for NARX model identification based on a multivariate Bernoulli distribution
NASA Astrophysics Data System (ADS)
Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.
2017-04-01
The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated to the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned by the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.
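The following minimal sketch illustrates the basic randomised model-selection loop that RaMSS builds on: candidate regressor subsets are sampled from per-term Bernoulli inclusion probabilities, scored by a least-squares fit, and the probabilities are nudged toward well-fitting models. It uses independent (univariate) Bernoulli distributions and synthetic linear-in-the-parameters data, so it deliberately does not implement the paper's multivariate extension that conditions each term's sampling on the others.

```python
# Simplified RaMSS-style loop with independent Bernoulli inclusion probabilities.
import numpy as np

rng = np.random.default_rng(1)
n, n_terms = 200, 8
X = rng.normal(size=(n, n_terms))                  # candidate regressors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=n)

p = np.full(n_terms, 0.5)                          # per-term inclusion probabilities
for _ in range(100):
    masks, fits = [], []
    for _ in range(30):                            # sample candidate models
        mask = rng.random(n_terms) < p
        if not mask.any():
            continue
        beta = np.linalg.lstsq(X[:, mask], y, rcond=None)[0]
        mse = np.mean((y - X[:, mask] @ beta) ** 2)
        masks.append(mask)
        fits.append(np.exp(-mse))                  # fitness: higher for lower error
    masks, fits = np.array(masks), np.array(fits)
    # move each probability toward the fitness-weighted inclusion frequency
    p = 0.9 * p + 0.1 * (fits[:, None] * masks).sum(axis=0) / fits.sum()

print("final inclusion probabilities:", np.round(p, 2))  # terms 0 and 3 should dominate
```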
Wolcott, Stephen W.; Snow, Robert F.
1995-01-01
An empirical technique was used to calculate the recharge to bedrock aquifers in northern Westchester County. This method requires delineation of ground-water divides within the aquifer area and values for (1) the extent of till and exposed bedrock within the aquifer area, and (2) mean annual runoff. This report contains maps and data needed for calculation of recharge in any given area within the 165-square-mile study area. Recharge was computed by this technique for a 93-square-mile part of the study area, and a ground-water-flow model was used to evaluate the reliability of the method. A two-layer, steady-state model of the selected area was calibrated. The area consists predominantly of bedrock overlain by small localized deposits of till and stratified drift. Ground-water-level and streamflow data collected in mid-November 1987 were used for model calibration. The data set approximates average annual conditions. The model was calibrated from (1) estimates of recharge as computed through the empirical technique, and (2) a range of values for hydrologic properties derived from aquifer tests and published literature. Recharge values used for model simulation appear to be reasonable for average steady-state conditions. Water-quality data were collected from 53 selected bedrock wells throughout northern Westchester County to define the background ground-water quality. The constituents and properties for which samples were analyzed included major cations and anions, temperature, pH, specific conductance, and hardness. Results indicate little difference in water quality among the bedrock aquifers within the study area. Ground water is mainly the calcium-bicarbonate type and is moderately hard. Average concentrations of sodium, sulfate, chloride, nitrate, iron, and manganese were within acceptable limits established by the U.S. Environmental Protection Agency for domestic water supply.
NASA Astrophysics Data System (ADS)
Szafranek, K.; Schillak, S.; Araszkiewicz, A.; Figurski, M.; Lehmann, M.; Lejba, P.
2012-04-01
Current investigations in space geodesy are mostly aimed at the joint processing of data from various techniques. The poster presents solutions (North, East, Up components) for selected stations (McDonald, Yarragadee, Greenbelt, Monument Peak, Zimmerwald, Borowiec, Mt. Stromlo-Orroral, Potsdam, Graz, Herstmonceux and Wettzell) that operate both Satellite Laser Ranging (SLR) and Global Navigation Satellite System (GNSS) techniques and that gathered data over the same period (1994 to 2010). Processing of both types of data was performed according to Global Geodetic Observing System (GGOS) recommendations; the same models and parameters from the IERS Conventions 2010 were used in both processing strategies wherever possible. The main goal was to obtain coordinates and their changes in time (velocities) from both techniques and to compare the results. The station coordinates were determined for the common reference epoch of both techniques - the first day of each month. Monthly orbital arcs for laser observations were created based on solutions from several SLR sites (observations of the LAGEOS-1 and LAGEOS-2 satellites) with the best solution quality and the highest number of observations. For GNSS coordinate determination, about 130 sites belonging to the International GNSS Service (IGS) were selected: 30 with local ties to SLR sites and the others based on their geographic location (baseline lengths) and time-series analysis of their solutions. Mainly core IGS stations were used. Solutions of both techniques were analyzed in order to verify the agreement between the two techniques and for independent control of local ties.
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm, by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least-squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
Strong Selection at MHC in Mexicans since Admixture
Zhou, Quan; Zhao, Liang; Guan, Yongtao
2016-01-01
Mexicans are a recent admixture of Amerindians, Europeans, and Africans. We performed local ancestry analysis of Mexican samples from two genome-wide association studies obtained from dbGaP, and discovered that at the MHC region Mexicans have an excess of African ancestral alleles compared to the rest of the genome, which is the hallmark of recent selection in admixed samples. The estimated selection coefficients are 0.05 and 0.07 for the two datasets, which places our finding among the strongest known selections observed in humans, alongside lactase selection in northern Europeans and the sickle-cell trait in Africans. Using inaccurate Amerindian training samples was a major concern for the credibility of previously reported selection signals in Latinos. Taking advantage of the flexibility of our statistical model, we devised a model fitting technique that can learn Amerindian ancestral haplotypes from the admixed samples, which allows us to infer local ancestries for Mexicans using only European and African training samples. The strong selection signal at the MHC remains without Amerindian training samples. Finally, we note that medical history studies suggest such a strong selection at MHC is plausible in Mexicans. PMID:26863142
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
Remote sensing impact on corridor selection and placement
NASA Technical Reports Server (NTRS)
Thomson, F. J.; Sellman, A. N.
1975-01-01
Computer-aided corridor selection techniques, utilizing digitized data bases of socio-economic, census, and cadastral data, and developed for highway corridor routing are considered. Land resource data generated from various remote sensing data sources were successfully merged with the ancillary data files of a corridor selection model and prototype highway corridors were designed using the combined data set. Remote sensing derived information considered useful for highway corridor location, special considerations in geometric correction of remote sensing data to facilitate merging it with ancillary data files, and special interface requirements are briefly discussed.
Binder, Harald; Porzelius, Christine; Schumacher, Martin
2011-03-01
Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems
NASA Astrophysics Data System (ADS)
Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.
2016-04-01
Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities in a dynamical system are present and this technique is employed several times during the simulation, it can be very inefficient. This work proposes a new adaptive strategy, which ensures low computational cost and small error in dealing with this problem. This work also presents a new snapshot selection method named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of Proper Orthogonal Decomposition (POD).
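For context, the short sketch below shows the classical snapshot-based POD/Galerkin reduction that the Proper Snapshot Selection strategy aims to improve upon: a reduced basis is extracted from a snapshot matrix by SVD and used to project a full-order operator. Snapshot data, the energy threshold, and the operator are illustrative placeholders.

```python
# Classical POD/Galerkin reduction from snapshots (illustrative data).
import numpy as np

rng = np.random.default_rng(2)
n_dof, n_snap = 500, 60
snapshots = rng.normal(size=(n_dof, n_snap))      # columns = state snapshots

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1       # modes keeping 99.9% of the energy
basis = U[:, :r]                                  # reduced basis, shape (n_dof, r)

A = rng.normal(size=(n_dof, n_dof))               # stand-in full-order operator
A_reduced = basis.T @ A @ basis                   # Galerkin-projected (r x r) operator
```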
Optimization Of Engine Heat Transfer Mechanisms For Ground Combat Vehicle Signature Models
NASA Astrophysics Data System (ADS)
Gonda, T.; Rogers, P.; Gerhart, G.; Reynolds, W. R.
1988-08-01
A thermodynamic model for predicting the behavior of selected internal thermal sources of an M2 Bradley Infantry Fighting Vehicle is described. The modeling methodology is expressed in terms of first principle heat transfer equations along with a brief history of TACOM's experience with thermal signature modeling techniques. The dynamic operation of the internal thermal sources is presented along with limited test data and an examination of their effect on the vehicle signature.
Derivative Trade Optimizing Model Utilizing GP Based on Behavioral Finance Theory
NASA Astrophysics Data System (ADS)
Matsumura, Koki; Kawamoto, Masaru
This paper proposes a new technique which builds strategy trees for derivative (option) trading investment decisions based on behavioral finance theory and optimizes them using evolutionary computation, in order to achieve high profitability. The strategy tree uses technical analysis based on a statistical, experience-based technique for the investment decision. The trading model is represented by various technical indexes, and the strategy tree is optimized by genetic programming (GP), one of the evolutionary computation methods. Moreover, this paper proposes a method using prospect theory from behavioral finance to set psychological biases for profit and loss, and attempts to select the appropriate option strike price for higher investment efficiency. As a result, this technique produced good results and demonstrated the effectiveness of the trading model with the optimized dealing strategy.
Large-scale model quality assessment for improving protein tertiary structure prediction.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-06-15
Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
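A minimal sketch of the consensus-ranking idea, assuming purely hypothetical QA scores: each method ranks the candidate models and the per-model ranks are averaged to pick a consensus-best model. The actual MULTICOM pipeline combines 14 QA methods with clustering and model averaging/refinement, which this sketch does not reproduce.

```python
# Consensus ranking of structural models from several QA methods (toy scores).
import numpy as np

scores = np.array([        # rows = candidate models, columns = QA methods
    [0.71, 0.65, 0.70],
    [0.80, 0.77, 0.74],
    [0.55, 0.60, 0.58],
    [0.78, 0.70, 0.72],
])
ranks = scores.argsort(axis=0).argsort(axis=0)     # per-method ranks (0 = worst)
consensus = ranks.mean(axis=1)                     # average rank per model
best_model = int(consensus.argmax())
print("consensus-best model index:", best_model)
```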
Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco
2012-10-01
Cooperative coevolution is a successful trend of evolutionary computation which allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefits of the employment of coevolution to apply the techniques considered simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.
Microscale patterning of thermoplastic polymer surfaces by selective solvent swelling.
Rahmanian, Omid; Chen, Chien-Fu; DeVoe, Don L
2012-09-04
A new method for the fabrication of microscale features in thermoplastic substrates is presented. Unlike traditional thermoplastic microfabrication techniques, in which bulk polymer is displaced from the substrate by machining or embossing, a unique process termed orogenic microfabrication has been developed in which selected regions of a thermoplastic surface are raised from the substrate by an irreversible solvent swelling mechanism. The orogenic technique allows thermoplastic surfaces to be patterned using a variety of masking methods, resulting in three-dimensional features that would be difficult to achieve through traditional microfabrication methods. Using cyclic olefin copolymer as a model thermoplastic material, several variations of this process are described to realize growth heights ranging from several nanometers to tens of micrometers, with patterning techniques including direct photoresist masking, patterned UV/ozone surface passivation, elastomeric stamping, and noncontact spotting. Orogenic microfabrication is also demonstrated by direct inkjet printing as a facile photolithography-free masking method for rapid desktop thermoplastic microfabrication.
Chung, Eun-Sung; Lee, Kil Seong
2009-03-01
The objective of this study is to develop an alternative evaluation index (AEI) in order to determine the priorities of a range of alternatives using both the hydrological simulation program in FORTRAN (HSPF) and multicriteria decision making (MCDM) techniques. In order to formulate the HSPF model, sensitivity analyses of water quantity (peak discharge and total volume) and quality (BOD peak concentrations and total loads) were conducted and a number of critical parameters were selected. To achieve a more precise simulation, the study watershed is divided into four regions for calibration and verification according to land use, location, slope, and climate data. All evaluation criteria were selected using the Driver-Pressure-State-Impact-Response (DPSIR) model, a sustainability evaluation concept. The Analytic Hierarchy Process was used to estimate the weights of the criteria, and the effects on water quantity and quality were quantified by HSPF simulation. In addition, AEIs that reflect residents' preferences for management objectives are proposed in order to encourage stakeholders to participate in the decision-making process.
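As a small illustration of only the AHP weighting step mentioned above, the sketch below derives criterion weights from the principal eigenvector of a pairwise comparison matrix and checks the consistency ratio. The 3x3 judgment matrix is a made-up example, not the study's actual DPSIR criteria.

```python
# AHP criterion weights from a pairwise comparison matrix (illustrative judgments).
import numpy as np

A = np.array([[1.0,  3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()              # normalised priority weights

ci = (eigvals.real.max() - len(A)) / (len(A) - 1)  # consistency index
cr = ci / 0.58                                     # random index 0.58 for a 3x3 matrix
print("weights:", np.round(weights, 3), " consistency ratio:", round(cr, 3))
```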
Lin, Chun-Yuan; Wang, Yen-Ling
2014-01-01
Checkpoint kinase 2 (Chk2) has a great effect on the DNA-damage response and plays an important role in response to DNA double-strand breaks and related lesions. In this study, we concentrate on Chk2, and the purpose is to find potential inhibitors using pharmacophore hypotheses (PhModels), combinatorial fusion, and virtual screening techniques. Applying combinatorial fusion to PhModels and virtual screening techniques is a novel strategy for drug design. We used combinatorial fusion to analyze the prediction results and obtained the best testing-set correlation coefficient (r test) of 0.816 by combining the Best(train)Best(test) and Fast(train)Fast(test) prediction results. Potential inhibitors were selected from the NCI database by screening according to the Best(train)Best(test) + Fast(train)Fast(test) prediction results and by molecular docking with the CDOCKER docking program. Finally, the selected compounds have high ligand-receptor interaction energies. Through these approaches, 23 potential inhibitors for Chk2 were retrieved for further study.
Wavelet regression model in forecasting crude oil price
NASA Astrophysics Data System (ADS)
Hamid, Mohd Helmie; Shabri, Ani
2017-05-01
This study presents the performance of the wavelet multiple linear regression (WMLR) technique in daily crude oil forecasting. The WMLR model was developed by integrating the discrete wavelet transform (DWT) and the multiple linear regression (MLR) model. The original time series was decomposed into sub-time series with different scales by wavelet theory. Correlation analysis was conducted to assist in the selection of optimal decomposed components as inputs for the WMLR model. The daily WTI crude oil price series has been used in this study to test the prediction capability of the proposed model. The forecasting performance of the WMLR model was also compared with regular multiple linear regression (MLR), autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedasticity (GARCH) models using root mean square error (RMSE) and mean absolute error (MAE). Based on the experimental results, it appears that the WMLR model performs better than the other forecasting techniques tested in this study.
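A rough sketch of the WMLR idea under stated assumptions (a 'db4' wavelet, three decomposition levels, lag-1 predictors, and a synthetic random-walk series standing in for WTI prices): the series is decomposed with a DWT, per-level sub-series are reconstructed, and a linear regression maps lagged sub-series values to the next value.

```python
# Wavelet + multiple linear regression forecaster (illustrative settings).
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
price = np.cumsum(rng.normal(size=512)) + 50.0           # stand-in for a crude oil series

coeffs = pywt.wavedec(price, 'db4', level=3)             # [cA3, cD3, cD2, cD1]
subseries = []
for i in range(len(coeffs)):
    keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(keep, 'db4')[:len(price)])
subseries = np.column_stack(subseries)                   # one column per wavelet level

X, y = subseries[:-1], price[1:]                         # lag-1 predictors -> next value
model = LinearRegression().fit(X[:400], y[:400])
rmse = np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```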
Deeb, Omar; Shaik, Basheerulla; Agrawal, Vijay K
2014-10-01
Quantitative Structure-Activity Relationship (QSAR) models for binding affinity constants (log Ki) of 78 flavonoid ligands towards the benzodiazepine site of GABA (A) receptor complex were calculated using the machine learning methods: artificial neural network (ANN) and support vector machine (SVM) techniques. The models obtained were compared with those obtained using multiple linear regression (MLR) analysis. The descriptor selection and model building were performed with 10-fold cross-validation using the training data set. The SVM and MLR coefficient of determination values are 0.944 and 0.879, respectively, for the training set and are higher than those of ANN models. Though the SVM model shows improvement of training set fitting, the ANN model was superior to SVM and MLR in predicting the test set. Randomization test is employed to check the suitability of the models.
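A minimal sketch of the SVM-regression-with-10-fold-cross-validation workflow described above, using random placeholder descriptors rather than the flavonoid data set; the RBF kernel and hyperparameters are illustrative assumptions.

```python
# SVR with 10-fold cross-validation on a toy descriptor matrix.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(10)
X = rng.normal(size=(78, 12))                         # 78 ligands x 12 descriptors
log_ki = X[:, 0] - 0.5 * X[:, 3] + 0.2 * rng.normal(size=78)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, log_ki, cv=cv, scoring="r2")
print("10-fold CV R^2: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```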
An evaluation of hardwood fuel models for planning prescribed fires in oak shelterwood stands
Patrick H. Brose
2017-01-01
The shelterwood burn technique is becoming more accepted and used as a means of regenerating eastern mixed-oak (Quercus spp.) forests on productive upland sites. Preparation is important to successfully implement this method; part of that preparation is selecting the proper fuel model (FM) for the prescribed fire. Because of the mix of leaf litter...
ERIC Educational Resources Information Center
Martin, Nancy
Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…
Al-Balas, Qosay; Hassan, Mohammad; Al-Oudat, Buthina; Alzoubi, Hassan; Mhaidat, Nizar; Almaaytah, Ammar
2012-11-22
Within this study, a unique 3D structure-based pharmacophore model of the enzyme glyoxalase-1 (Glo-1) has been revealed. Glo-1 is considered a zinc metalloenzyme in which the inhibitor binding with zinc atom at the active site is crucial. To our knowledge, this is the first pharmacophore model that has a selective feature for a "zinc binding group" which has been customized within the structure-based pharmacophore model of Glo-1 to extract ligands that possess functional groups able to bind zinc atom solely from database screening. In addition, an extensive 2D similarity search using three diverse similarity techniques (Tanimoto, Dice, Cosine) has been performed over the commercially available "Zinc Clean Drug-Like Database" that contains around 10 million compounds to help find suitable inhibitors for this enzyme based on known inhibitors from the literature. The resultant hits were mapped over the structure based pharmacophore and the successful hits were further docked using three docking programs with different pose fitting and scoring techniques (GOLD, LibDock, CDOCKER). Nine candidates were suggested to be novel Glo-1 inhibitors containing the "zinc binding group" with the highest consensus scoring from docking.
Kim, Hyong Nyun; Liu, Xiao Ning; Noh, Kyu Cheol
2015-06-10
Open reduction and plate fixation is the standard operative treatment for displaced midshaft clavicle fracture. However, sometimes it is difficult to achieve anatomic reduction by the open reduction technique in cases with comminution. We describe a novel technique using a real-size three-dimensionally (3D) printed clavicle model as a preoperative and intraoperative tool for minimally invasive plating of displaced comminuted midshaft clavicle fractures. A computed tomography (CT) scan is taken of both clavicles in patients with a unilateral displaced comminuted midshaft clavicle fracture. Both clavicles are 3D printed as real-size clavicle models. Using a mirror-imaging technique, the uninjured-side clavicle is 3D printed as a model of the opposite side, producing a suitable replica of the fractured clavicle as it was before injury. The 3D-printed fractured clavicle model allows the surgeon to observe and manipulate accurate anatomical replicas of the fractured bone to assist in fracture reduction prior to surgery. The 3D-printed uninjured clavicle model can be utilized as a template to select the anatomically precontoured locking plate which best fits the model. The plate can be inserted through a small incision and fixed with locking screws without exposing the fracture site. Seven comminuted clavicle fractures treated with this technique achieved good bone union. This technique can be used for a unilateral displaced comminuted midshaft clavicle fracture when it is difficult to achieve anatomic reduction by the open reduction technique. Level of evidence V.
Jin, Wen-Yan; Ma, Ying; Li, Wei-Ya; Li, Hong-Lian; Wang, Run-Ling
2018-04-01
SHP2 is a potential target for the development of novel therapies for SHP2-dependent cancers. In our research, with the aid of the 'Receptor-Ligand Pharmacophore' technique, a 3D-QSAR method was carried out to explore structure activity relationship of SHP2 allosteric inhibitors. Structure-based drug design was employed to optimize SHP099, an efficacious, potent, and selective SHP2 allosteric inhibitor. A novel class of selective SHP2 allosteric inhibitors was discovered by using the powerful 'SBP', 'ADMET' and 'CDOCKER' techniques. By means of molecular dynamics simulations, it was observed that these novel inhibitors not only had the same function as SHP099 did in inhibiting SHP2, but also had more favorable conformation for binding to the receptor. Thus, this report may provide a new method in discovering novel and selective SHP2 allosteric inhibitors. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.
2017-08-01
The development of technology for directional soil compaction of tank foundations for oil and oil-product storage is a relevant problem whose solution will simultaneously provide the required operational characteristics of a soil foundation and reduce the time and material costs of preparing the foundation. The impact dynamics of the rammers' operating elements on the soil foundation will be specified in the course of laboratory studies. A specialized technique is developed to justify the parameters and select the equipment for laboratory research. The use of this technique enabled us to calculate the dimensions of the models and of a test bench, and to determine the specifications of the recording equipment and the lighting system. The necessary equipment for the laboratory studies was selected. Preliminary laboratory tests were carried out. An estimate of the accuracy of the planned laboratory studies is given.
2003-03-01
organizations. Reducing attrition rates through optimal selection decisions can "reduce training cost, improve job performance, and enhance...capturing the weights for use in the SNR method is not straightforward. A special VBA application had to be written to capture and organize the network...before the VBA application can be used. Appendix D provides the VBA code used to import and organize the network weights and input standardization
Feature Selection Methods for Zero-Shot Learning of Neural Activity
Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
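The sketch below illustrates the correlation-based feature-stability baseline referred to above: each feature is scored by how well its responses correlate across two repetitions of the same stimuli, and the most stable features are retained. The synthetic arrays stand in for voxel or electrode responses; the top-k cut-off is an arbitrary choice.

```python
# Correlation-based feature stability selection (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(8)
n_stim, n_feat = 60, 500
signal = rng.normal(size=(n_stim, n_feat))               # stimulus-driven responses
noise_scale = np.linspace(0.5, 3.0, n_feat)              # later features are noisier
rep1 = signal + rng.normal(size=(n_stim, n_feat)) * noise_scale
rep2 = signal + rng.normal(size=(n_stim, n_feat)) * noise_scale

def per_feature_correlation(a, b):
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)

stability = per_feature_correlation(rep1, rep2)
selected = np.argsort(stability)[::-1][:50]              # keep the 50 most stable features
print("mean index of selected features:", selected.mean())  # skews toward low-noise features
```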
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Su, Chao-Ton; Wang, Pa-Chun; Chen, Yan-Cheng; Chen, Li-Fei
2012-08-01
Pressure ulcer is a serious problem during patient care processes. The high risk factors in the development of pressure ulcer remain unclear during long surgery. Moreover, past preventive policies are hard to implement in a busy operation room. The objective of this study is to use data mining techniques to construct the prediction model for pressure ulcers. Four data mining techniques, namely, Mahalanobis Taguchi System (MTS), Support Vector Machines (SVMs), decision tree (DT), and logistic regression (LR), are used to select the important attributes from the data to predict the incidence of pressure ulcers. Measurements of sensitivity, specificity, F1, and g-means were used to compare the performance of the four classifiers on the pressure ulcer data set. The results show that data mining techniques obtain good results in predicting the incidence of pressure ulcer. We can conclude that data mining techniques can help identify the important factors and provide a feasible model to predict pressure ulcer development.
DOT National Transportation Integrated Search
2009-04-01
"The primary umbrella method used by the Oregon Department of Transportation (ODOT) to ensure on-time performance in standard construction contracting is liquidated damages. The assessment value is usually a matter of some judgment. In practice...
Exploring possibilities of band gap measurement with off-axis EELS in TEM.
Korneychuk, Svetlana; Partoens, Bart; Guzzinati, Giulio; Ramaneti, Rajesh; Derluyn, Joff; Haenen, Ken; Verbeeck, Jo
2018-06-01
A technique to measure the band gap of dielectric materials with high refractive index by means of electron energy loss spectroscopy (EELS) is presented. The technique relies on the use of a circular (Bessel) aperture and suppresses Cherenkov losses and surface-guided light modes by enforcing a momentum transfer selection. The technique also strongly suppresses the elastic zero loss peak, making the acquisition, interpretation and signal to noise ratio of low loss spectra considerably better, especially for excitations in the first few eV of the EELS spectrum. Simulations of the low loss inelastic electron scattering probabilities demonstrate the beneficial influence of the Bessel aperture in this setup even for high accelerating voltages. The importance of selecting the optimal experimental convergence and collection angles is highlighted. The effect of the created off-axis acquisition conditions on the selection of the transitions from valence to conduction bands is discussed in detail on a simplified isotropic two-band model. This opens the opportunity for deliberately selecting certain transitions by carefully tuning the microscope parameters. The suggested approach is experimentally demonstrated and provides good signal to noise ratio and interpretable band gap signals on reference samples of diamond, GaN and AlN while offering spatial resolution in the nm range. Copyright © 2018 Elsevier B.V. All rights reserved.
Designing and benchmarking the MULTICOM protein structure prediction system
2013-01-01
Background: Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results: Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions: Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819
NASA Technical Reports Server (NTRS)
Tedesco, Marco; Kim, Edward J.
2005-01-01
In this paper, GA-based techniques are used to invert the equations of an electromagnetic model based on Dense Medium Radiative Transfer Theory (DMRT) under the Quasi Crystalline Approximation with Coherent Potential to retrieve snow depth, mean grain size and fractional volume from microwave brightness temperatures. The technique is initially tested on both noisy and noise-free simulated data. During this phase, different configurations of genetic algorithm parameters are considered to quantify how their change can affect the algorithm performance. A configuration of GA parameters is then selected and the algorithm is applied to experimental data acquired during the NASA Cold Land Process Experiment. Snow parameters retrieved with the GA-DMRT technique are then compared with snow parameters measured in the field.
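A toy sketch of the GA-based inversion loop, with forward_tb as a hypothetical stand-in for the DMRT forward model (the real electromagnetic code and channel set are not reproduced): a population of (snow depth, grain size, fractional volume) candidates is evolved to minimise the mismatch with observed brightness temperatures.

```python
# GA inversion of a toy forward model for snow parameters.
import numpy as np

rng = np.random.default_rng(7)

def forward_tb(params):
    """Hypothetical toy mapping from snow parameters to two brightness temperatures."""
    depth, grain, frac = params.T
    x = frac * depth / (grain + 0.1)
    return np.column_stack([240.0 - 30.0 * x, 255.0 - 20.0 * x])

observed = np.array([215.0, 238.0])                      # toy "measured" Tb (K)
lo = np.array([0.1, 0.1, 0.1])                           # depth, grain size, fraction
hi = np.array([2.0, 2.0, 0.5])
pop = lo + rng.random((60, 3)) * (hi - lo)

for _ in range(100):
    cost = np.sum((forward_tb(pop) - observed) ** 2, axis=1)
    parents = pop[np.argsort(cost)[:20]]                 # selection (keep the best 20)
    pairs = rng.integers(0, 20, size=(60, 2))
    w = rng.random((60, 1))
    pop = w * parents[pairs[:, 0]] + (1 - w) * parents[pairs[:, 1]]   # blend crossover
    pop += rng.normal(0.0, 0.02, pop.shape)              # mutation
    pop = np.clip(pop, lo, hi)

best = pop[np.argmin(np.sum((forward_tb(pop) - observed) ** 2, axis=1))]
print("retrieved depth, grain size, fractional volume:", np.round(best, 2))
```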
Aeroelastic Model Structure Computation for Envelope Expansion
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2007-01-01
Structure detection is a procedure for selecting a subset of candidate terms, from a full model description, that best describes the observed output. This is a necessary procedure to compute an efficient system description which may afford greater insight into the functionality of the system or a simpler controller design. Structure computation as a tool for black-box modeling may be of critical importance in the development of robust, parsimonious models for the flight-test community. Moreover, this approach may lead to efficient strategies for rapid envelope expansion that may save significant development time and costs. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of non-linear aeroelastic systems. The LASSO minimises the residual sum of squares with the addition of an l1 penalty term on the parameter vector of the traditional l2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudo-linear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. Applicability of this technique for model structure computation for the F/A-18 (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) Active Aeroelastic Wing project using flight test data is shown for several flight conditions (Mach numbers) by identifying a parsimonious system description with a high percent fit for cross-validated data.
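A minimal sketch of LASSO-based structure detection on a pseudo-linear regression, assuming a polynomial candidate library and synthetic input-output data: terms whose coefficients survive the l1 penalty define the selected model structure.

```python
# LASSO structure detection over a polynomial candidate term library (toy data).
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
u = rng.uniform(-1, 1, size=(300, 2))                 # two measured inputs
y = 1.2 * u[:, 0] - 0.8 * u[:, 0] * u[:, 1] + 0.05 * rng.normal(size=300)

library = PolynomialFeatures(degree=3, include_bias=False)
Phi = library.fit_transform(u)                        # full candidate regressor matrix
fit = Lasso(alpha=0.01, max_iter=10000).fit(Phi, y)

selected = [name for name, c in zip(library.get_feature_names_out(), fit.coef_)
            if abs(c) > 1e-3]
print("selected terms:", selected)                    # expect roughly x0 and x0*x1
```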
Liu, Y F; Yu, H; Wang, W N; Gao, B
2017-06-09
Objective: To evaluate the processing accuracy, internal quality and suitability of titanium alloy frameworks of removable partial dentures (RPD) fabricated by the selective laser melting (SLM) technique, and to provide a reference for clinical application. Methods: The plaster model of one clinical patient was used as the working model, and was scanned and reconstructed into a digital working model. An RPD framework was designed on it. Then, eight corresponding RPD frameworks were fabricated using the SLM technique. A three-dimensional (3D) optical scanner was used to scan and obtain the 3D data of the frameworks, and the data were compared with the original computer aided design (CAD) model to evaluate their processing precision. Traditional cast pure-titanium frameworks were used as the control group, and the internal quality was analyzed by X-ray examination. Finally, the fit of the frameworks was examined on the plaster model. Results: The overall average deviation of the titanium alloy RPD frameworks fabricated by SLM technology was (0.089±0.076) mm, and the root mean square error was 0.103 mm. No visible pores, cracks or other internal defects were detected in the frameworks. The frameworks fitted the plaster model completely, and their tissue surfaces fitted the plaster model well. There was no obvious movement. Conclusions: The titanium alloy RPD framework fabricated by SLM technology is of good quality.
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2014-05-01
Satellite instruments are nowadays successfully utilised for measuring atmospheric aerosol in many applications as well as in research. Therefore, there is a growing need for rigorous error characterisation of the measurements. Here, we introduce a methodology for quantifying the uncertainty in the retrieval of aerosol optical thickness (AOT). In particular, we concentrate on two aspects: uncertainty due to aerosol microphysical model selection and uncertainty due to imperfect forward modelling. We apply the introduced methodology for aerosol optical thickness retrieval of the Ozone Monitoring Instrument (OMI) on board NASA's Earth Observing System (EOS) Aura satellite, launched in 2004. We apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness retrieval by propagating aerosol microphysical model selection and forward model error more realistically. For the microphysical model selection problem, we utilise Bayesian model selection and model averaging methods. Gaussian processes are utilised to characterise the smooth systematic discrepancies between the measured and modelled reflectances (i.e. residuals). The spectral correlation is composed empirically by exploring a set of residuals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques introduced here. The method and improved uncertainty characterisation is demonstrated by several examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The statistical methodology presented is general; it is not restricted to this particular satellite retrieval application.
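As a small illustration of the model-discrepancy component only, the sketch below fits a Gaussian process to synthetic smooth residuals between measured and modelled reflectances as a function of wavelength; the kernel and data are assumptions, not the OMAERO configuration.

```python
# Gaussian-process model of a smooth forward-model discrepancy (synthetic residuals).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
wavelength = np.linspace(340.0, 500.0, 30)[:, None]       # nm
residual = 0.01 * np.sin(wavelength[:, 0] / 30.0) + 0.002 * rng.normal(size=30)

kernel = 1.0 * RBF(length_scale=50.0) + WhiteKernel(noise_level=1e-5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(wavelength, residual)
mean, std = gp.predict(wavelength, return_std=True)       # smooth discrepancy + uncertainty
```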
Optimizing Requirements Decisions with KEYS
NASA Technical Reports Server (NTRS)
Jalali, Omid; Menzies, Tim; Feather, Martin
2008-01-01
Recent work with NASA's Jet Propulsion Laboratory has allowed for external access to five of JPL's real-world requirements models, anonymized to conceal proprietary information, but retaining their computational nature. Experimentation with these models, reported herein, demonstrates a dramatic speedup in the computations performed on them. These models have a well-defined goal: select mitigations that retire risks which, in turn, increases the number of attainable requirements. Such a non-linear optimization is a well-studied problem. However, identification of not only (a) the optimal solution(s) but also (b) the key factors leading to them is less well studied. Our technique, called KEYS, shows a rapid way of simultaneously identifying the solutions and their key factors. KEYS improves on prior work by several orders of magnitude. Prior experiments with simulated annealing or treatment learning took tens of minutes to hours to terminate. KEYS runs much faster than that; e.g., for one model, KEYS ran 13,000 times faster than treatment learning (40 minutes versus 0.18 seconds). Processing these JPL models is a non-linear optimization problem: the fewest mitigations must be selected while achieving the most requirements. Non-linear optimization is a well-studied problem. With this paper, we challenge other members of the PROMISE community to improve on our results with other techniques.
Researching Mental Health Disorders in the Era of Social Media: Systematic Review.
Wongkoblap, Akkapon; Vadillo, Miguel A; Curcin, Vasa
2017-06-29
Mental illness is quickly becoming one of the most prevalent public health problems worldwide. Social network platforms, where users can express their emotions, feelings, and thoughts, are a valuable source of data for researching mental health, and techniques based on machine learning are increasingly used for this purpose. The objective of this review was to explore the scope and limits of cutting-edge techniques that researchers are using for predictive analytics in mental health and to review associated issues, such as ethical concerns, in this area of research. We performed a systematic literature review in March 2017, using keywords to search articles on data mining of social network data in the context of common mental health disorders, published between 2010 and March 8, 2017 in medical and computer science journals. The initial search returned a total of 5386 articles. Following a careful analysis of the titles, abstracts, and main texts, we selected 48 articles for review. We coded the articles according to key characteristics, techniques used for data collection, data preprocessing, feature extraction, feature selection, model construction, and model verification. The most common analytical method was text analysis, with several studies using different flavors of image analysis and social interaction graph analysis. Despite an increasing number of studies investigating mental health issues using social network data, some common problems persist. Assembling large, high-quality datasets of social media users with mental disorder is problematic, not only due to biases associated with the collection methods, but also with regard to managing consent and selecting appropriate analytics techniques. ©Akkapon Wongkoblap, Miguel A Vadillo, Vasa Curcin. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.06.2017.
NASA Astrophysics Data System (ADS)
Escalante, George
2017-05-01
Weak Value Measurements (WVMs) with pre- and post-selected quantum mechanical ensembles were proposed by Aharonov, Albert, and Vaidman in 1988 and have found numerous applications in both theoretical and applied physics. In the field of precision metrology, WVM techniques have been demonstrated and proven valuable as a means to shift, amplify, and detect signals and to make precise measurements of small effects in both quantum and classical systems, including: particle spin, the Spin-Hall effect of light, optical beam deflections, frequency shifts, field gradients, and many others. In principle, WVM amplification techniques are also possible in radar and could be a valuable tool for precision measurements. However, relatively limited research has been done in this area. This article presents a quantum-inspired model of radar range and range-rate measurements of arbitrary strength, including standard and pre- and post-selected measurements. The model is used to extend WVM amplification theory to radar, with the receive filter performing the post-selection role. It is shown that the description of range and range-rate measurements based on the quantum-mechanical measurement model and formalism produces the same results as the conventional approach used in radar based on signal processing and filtering of the reflected signal at the radar receiver. Numerical simulation results using simple point scatterer configurations are presented, applying the quantum-inspired model of radar range and range-rate measurements that occur in the weak measurement regime. Potential applications and benefits of the quantum-inspired approach to radar measurements are presented, including improved range and Doppler measurement resolution.
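For reference, the pre-/post-selection amplification discussed above rests on the standard Aharonov-Albert-Vaidman weak value (the textbook expression, not a formula quoted from this paper):

```latex
A_w = \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
```

where |psi> is the pre-selected state, <phi| the post-selected state (the receive filter in the radar analogy), and the measured pointer shift scales with the real part of A_w, which can greatly exceed the eigenvalue range of the observable when <phi|psi> is small; this is the amplification effect exploited in weak-value metrology.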
Nowlin, Jon O.; Brown, W.M.; Smith, L.H.; Hoffman, R.J.
1980-01-01
The objectives of the Geological Survey's river-quality assessment in the Truckee and Carson River basins in California and Nevada are to identify the significant resource management problems; to develop techniques to assess the problems; and to effectively communicate results to responsible managers. Six major elements of the assessment to be completed by October 1981 are (1) a detailing of the legal, institutional, and structural development of water resources in the basins and the current problems and conflicts; (2) a compilation and synthesis of the physical hydrology of the basins; (3) development of a special workshop approach to involve local management in the direction and results of the study; (4) development of a comprehensive streamflow model encompassing both basins to provide a quantitative hydrologic framework for water-quality analysis; (5) development of a water-quality transport model for selected constituents and characteristics on selected reaches of the Truckee River; and (6) a detailed examination of selected fish habitats for specified reaches of the Truckee River. Progress will be periodically reported in reports, maps, computer data files, mathematical models, a bibliography, and public presentations. In building a basic framework to develop techniques, the basins were viewed as a single hydrologic unit because of interconnecting diversion structures. The framework comprises 13 hydrographic subunits to facilitate modeling and sampling. Several significant issues beyond the scope of the assessment were considered as supplementary proposals: water-quality loadings in the Truckee and Carson Rivers, urban runoff in Reno and management alternatives, and a model of limnological processes in Lahontan Reservoir. (USGS)
NASA Astrophysics Data System (ADS)
Gómez, Breogán; Miguez-Macho, Gonzalo
2017-04-01
Nudging techniques are commonly used to constrain the evolution of numerical models to a reference dataset that is typically of a lower resolution. The nudged model retains some of the features of the reference field while incorporating its own dynamics into the solution. These characteristics have made nudging very popular in dynamic downscaling applications that range from short, single-case studies to multi-decadal regional climate simulations. Recently, a variation of this approach, called Spectral Nudging, has gained popularity for its ability to maintain the higher temporal and spatial variability of the model results, while forcing the large scales in the solution with a coarser resolution field. In this work, we focus on a little-explored aspect of this technique: the impact of selecting different cut-off wave numbers and spin-up times. We perform four-day-long simulations with the WRF model, daily for three different one-month periods that include a free run and several Spectral Nudging experiments with cut-off wave numbers ranging from the smallest to the largest possible (full Grid Nudging). Results show that Spectral Nudging is very effective at imposing the selected scales onto the solution, while allowing the limited area model to incorporate finer scale features. The model error diminishes rapidly as the nudging expands over broader parts of the spectrum, but this decreasing trend ceases sharply at cut-off wave numbers equivalent to a length scale of about 1000 km, and the error magnitude changes minimally thereafter. This scale corresponds to the Rossby Radius of deformation, separating synoptic from convective scales in the flow. When nudging above this value is applied, a shifting of the synoptic patterns can occur in the solution, yielding large model errors. However, when selecting smaller scales, the fine scale contribution of the model is damped, thus making 1000 km the appropriate scale threshold to nudge in order to balance both effects. Finally, we note that longer spin-up times are needed for model errors to stabilize when using Spectral Nudging than with Grid Nudging. Our results suggest that this time is between 36 and 48 hours.
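A one-dimensional sketch of the spectral-nudging operation, assuming a simple FFT low-pass split and an illustrative relaxation coefficient: only wavenumbers at or below the cut-off are relaxed toward the driving field, leaving the model's finer scales untouched.

```python
# 1-D spectral nudging: relax only the large scales toward the driving field.
import numpy as np

def spectral_nudge(model_field, driving_field, k_cut, alpha=0.1):
    """Nudge wavenumbers k <= k_cut of model_field toward driving_field."""
    fm = np.fft.rfft(model_field)
    fd = np.fft.rfft(driving_field)
    k = np.arange(fm.size)
    mask = k <= k_cut                                    # large scales only
    fm[mask] += alpha * (fd[mask] - fm[mask])
    return np.fft.irfft(fm, n=model_field.size)

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
driving = np.sin(2 * x)                                  # coarse, large-scale reference
model = np.sin(2 * x + 0.3) + 0.2 * np.sin(25 * x)       # model adds fine-scale detail
nudged = spectral_nudge(model, driving, k_cut=4)         # fine scales are preserved
```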
Embree, William N.; Wiltshire, Denise A.
1978-01-01
Abstracts of 177 selected publications on water movement in estuaries, particularly the Hudson River estuary, are compiled for reference in Hudson River studies. Subjects represented are the hydraulic, chemical, and physical characteristics of estuarine waters, estuarine modeling techniques, and methods of water-data collection and analysis. Summaries are presented in five categories: Hudson River estuary studies; hydrodynamic-model studies; water-quality-model studies; reports on data-collection equipment and methods; and bibliographies, literature reviews, conference proceedings, and textbooks. An author index is included. Omitted are most works published before 1965, environmental-impact statements, theses and dissertations, policy or planning reports, regional or economic reports, ocean studies, studies based on physical models, and foreign studies. (Woodard-USGS)
Prediction of solvation enthalpy of gaseous organic compounds in propanol
NASA Astrophysics Data System (ADS)
Golmohammadi, Hassan; Dashtbozorgi, Zahra
2016-09-01
The purpose of this paper is to present a novel way for developing quantitative structure-property relationship (QSPR) models to predict the gas-to-propanol solvation enthalpy (ΔHsolv) of 95 organic compounds. Different kinds of descriptors were calculated for each compound using the Dragon software package. The replacement method (RM) variable selection technique was employed to select the optimal subset of solute descriptors. Our investigation reveals that the dependence of the physicochemical properties of the solution on the solvation enthalpy is nonlinear and that the RM method is unable to model the solvation enthalpy accurately. The results established that the ΔHsolv values calculated by support vector machine (SVM) were in good agreement with the experimental ones, and the performances of the SVM models were superior to those obtained by the RM model.
Suchting, Robert; Gowin, Joshua L; Green, Charles E; Walss-Bass, Consuelo; Lane, Scott D
2018-01-01
Rationale: Given datasets with a large or diverse set of predictors of aggression, machine learning (ML) provides efficient tools for identifying the most salient variables and building a parsimonious statistical model. ML techniques permit efficient exploration of data, have not been widely used in aggression research, and may have utility for those seeking prediction of aggressive behavior. Objectives: The present study examined predictors of aggression and constructed an optimized model using ML techniques. Predictors were derived from a dataset that included demographic, psychometric and genetic predictors, specifically FK506 binding protein 5 (FKBP5) polymorphisms, which have been shown to alter response to threatening stimuli, but have not been tested as predictors of aggressive behavior in adults. Methods: The data analysis approach utilized component-wise gradient boosting and model reduction via backward elimination to: (a) select variables from an initial set of 20 to build a model of trait aggression; and then (b) reduce that model to maximize parsimony and generalizability. Results: From a dataset of N = 47 participants, component-wise gradient boosting selected 8 of 20 possible predictors to model Buss-Perry Aggression Questionnaire (BPAQ) total score, with R2 = 0.66. This model was simplified using backward elimination, retaining six predictors: smoking status, psychopathy (interpersonal manipulation and callous affect), childhood trauma (physical abuse and neglect), and the FKBP5_13 gene (rs1360780). The six-factor model approximated the initial eight-factor model at 99.4% of R2. Conclusions: Using an inductive data science approach, the gradient boosting model identified predictors consistent with previous experimental work in aggression; specifically psychopathy and trauma exposure. Additionally, allelic variants in FKBP5 were identified for the first time, but the relatively small sample size limits generality of results and calls for replication. This approach provides utility for the prediction of aggression behavior, particularly in the context of large multivariate datasets.
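A compact sketch of component-wise (L2) gradient boosting for variable selection on synthetic data: at each iteration the single predictor that best fits the current residuals is updated, so predictors that are never picked drop out of the model; a backward-elimination pass, as used in the study, could then prune the retained set further.

```python
# Component-wise L2 boosting: greedy single-predictor updates of the residuals.
import numpy as np

rng = np.random.default_rng(5)
n, p = 47, 20
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 4] + 0.5 * X[:, 9] + rng.normal(size=n)

X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardise predictors
resid = y - y.mean()                            # centred response as initial residuals
beta, nu = np.zeros(p), 0.1                     # coefficients and learning rate

for _ in range(300):                            # boosting iterations
    score = (X.T @ resid) ** 2 / (X ** 2).sum(axis=0)   # fit quality of each component
    j = int(score.argmax())                     # best single predictor this round
    b_j = (X[:, j] @ resid) / (X[:, j] @ X[:, j])
    beta[j] += nu * b_j
    resid -= nu * b_j * X[:, j]

retained = np.flatnonzero(np.abs(beta) > 1e-8)
print("predictors retained by boosting:", retained)
```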
Application of remote sensing to hydrology [for the formulation of watershed behavior models]
NASA Technical Reports Server (NTRS)
Ambaruch, R.; Simmons, J. W.
1973-01-01
Streamflow forecasting and hydrologic modelling are considered in a feasibility assessment of using the data produced by remote observation from space and/or aircraft to reduce the time and expense normally involved in achieving the ability to predict the hydrological behavior of an ungaged watershed. Existing watershed models are described, and both stochastic and parametric techniques are discussed towards the selection of a suitable simulation model. Technical progress and applications are reported and recommendations are made for additional research.
Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters
NASA Technical Reports Server (NTRS)
Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.
1989-01-01
The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.
Rieger, TR; Musante, CJ
2016-01-01
Quantitative systems pharmacology models mechanistically describe a biological system and the effect of drug treatment on system behavior. Because these models rarely are identifiable from the available data, the uncertainty in physiological parameters may be sampled to create alternative parameterizations of the model, sometimes termed “virtual patients.” In order to reproduce the statistics of a clinical population, virtual patients are often weighted to form a virtual population that reflects the baseline characteristics of the clinical cohort. Here we introduce a novel technique to efficiently generate virtual patients and, from this ensemble, demonstrate how to select a virtual population that matches the observed data without the need for weighting. This approach improves confidence in model predictions by mitigating the risk that spurious virtual patients become overrepresented in virtual populations. PMID:27069777
Materials and techniques for model construction
NASA Technical Reports Server (NTRS)
Wigley, D. A.
1985-01-01
The problems confronting the designer of models for cryogenic wind tunnels are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Dae-Heung; Anderson-Cook, Christine Michaela
2017-04-12
When there are constraints on resources, an unreplicated factorial or fractional factorial design can allow efficient exploration of numerous factor and interaction effects. A half-normal plot is a common graphical tool used to compare the relative magnitude of effects and to identify important effects from these experiments when no estimate of error from the experiment is available. An alternative is to use a least absolute shrinkage and selection operator plot to examine the pattern of model selection terms from an experiment. We examine how both the half-normal and least absolute shrinkage and selection operator plots are impacted by the absence of individual observations or an outlier, and the robustness of conclusions obtained from these 2 techniques for identifying important effects from factorial experiments. The methods are illustrated with 2 examples from the literature.
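A minimal sketch of the two graphical tools discussed above for an unreplicated two-level factorial: effect estimates plotted against half-normal quantiles, and a LASSO coefficient path over the same model terms. The design and the response vector are synthetic placeholders, not data from the cited examples.

```python
# Hedged sketch: half-normal plot of effects and a LASSO path for an
# unreplicated 2^4 factorial. The response vector `y` is a placeholder.
import numpy as np
import matplotlib.pyplot as plt
from itertools import combinations
from scipy.stats import halfnorm
from sklearn.linear_model import lasso_path

# +/-1 coded design for 4 factors, plus all two-factor interactions.
levels = np.array([[1 if (run >> f) & 1 else -1 for f in range(4)]
                   for run in range(16)], dtype=float)
X = np.hstack([levels] + [levels[:, [i]] * levels[:, [j]]
                          for i, j in combinations(range(4), 2)])
y = np.random.default_rng(1).normal(size=16)        # placeholder response

# Effect estimates for an orthogonal +/-1 design: 2 * X'y / n.
effects = 2 * X.T @ y / len(y)
abs_eff = np.sort(np.abs(effects))
q = halfnorm.ppf((np.arange(1, len(abs_eff) + 1) - 0.5) / len(abs_eff))
plt.scatter(q, abs_eff)                              # half-normal plot
plt.xlabel("half-normal quantile"); plt.ylabel("|effect|")

# LASSO path over the same model terms.
alphas, coefs, _ = lasso_path(X, y)
plt.figure(); plt.plot(alphas, coefs.T); plt.xscale("log")
plt.xlabel("penalty"); plt.ylabel("coefficient")
plt.show()
```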
Image statistics underlying natural texture selectivity of neurons in macaque V4
Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko
2015-01-01
Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors for each neuron’s preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing the natural texture perception. PMID:25535362
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavare (1996), computational estimates of expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
Thermal stress effects in intermetallic matrix composites
NASA Technical Reports Server (NTRS)
Wright, P. K.; Sensmeier, M. D.; Kupperman, D. S.; Wadley, H. N. G.
1993-01-01
Intermetallic matrix composites develop residual stresses from the large thermal expansion mismatch (delta-alpha) between the fibers and matrix. This work was undertaken to: establish improved techniques to measure these thermal stresses in IMCs; determine residual stresses in a variety of IMC systems by experiments and modeling; and determine the effect of residual stresses on selected mechanical properties of an IMC. X-ray diffraction (XRD), neutron diffraction (ND), synchrotron XRD (SXRD), and ultrasonic (US) techniques for measuring thermal stresses in IMCs were examined, and ND was selected as the most promising technique. ND was demonstrated on a variety of IMC systems encompassing Ti- and Ni-base matrices, SiC, W, and Al2O3 fibers, and different fiber fractions (Vf). Experimental results on these systems agreed with predictions of a concentric cylinder model. In SiC/Ti-base systems, little yielding was found and stresses were controlled primarily by delta-alpha and Vf. In Ni-base matrix systems, yield strength of the matrix and Vf controlled stress levels. The longitudinal residual stresses in SCS-6/Ti-24Al-11Nb composite were modified by thermomechanical processing. Increasing residual stress decreased ultimate tensile strength in agreement with model predictions. Fiber pushout strength showed an unexpected inverse correlation with residual stress. In-plane shear yield strength showed no dependence on residual stress. Higher levels of residual tension led to higher fatigue crack growth rates, as suggested by matrix mean stress effects.
Alós, Josep; Palmer, Miquel; Arlinghaus, Robert
2012-01-01
Together with life-history and underlying physiology, the behavioural variability among fish is one of the three main trait axes that determines the vulnerability to fishing. However, there are only a few studies that have systematically investigated the strength and direction of selection acting on behavioural traits. Using in situ fish behaviour revealed by telemetry techniques as input, we developed an individual-based model (IBM) that simulated the Lagrangian trajectory of prey (fish) moving within a confined home range (HR). Fishers exhibiting various prototypical fishing styles targeted these fish in the model. We initially hypothesised that more active and more explorative individuals would be systematically removed under all fished conditions, in turn creating negative selection differentials on low activity phenotypes and maybe on small HR. Our results partly supported these general predictions. Standardised selection differentials were, on average, more negative on HR than on activity. However, in many simulation runs, positive selection pressures on HR were also identified, which resulted from the stochastic properties of the fishes’ movement and its interaction with the human predator. In contrast, there was a consistent negative selection on activity under all types of fishing styles. Therefore, in situations where catchability depends on spatial encounters between human predators and fish, we would predict a consistent selection towards low activity phenotypes and have less faith in the direction of the selection on HR size. Our study is the first theoretical investigation on the direction of fishery-induced selection of behaviour using passive fishing gears. The few empirical studies where catchability of fish was measured in relation to passive fishing techniques, such as gill-nets, traps or recreational fishing, support our predictions that fish in highly exploited situations are, on average, characterised by low swimming activity, stemming, in part, from negative selection on swimming activity. PMID:23110164
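The standardized selection differential reported in this abstract can be illustrated with a few lines of code; the sketch below uses a toy harvest rule and synthetic activity values rather than the authors' telemetry-driven IBM.

```python
# Hedged sketch: standardized selection differential on a behavioural trait,
# computed as (mean after fishing - mean before) / SD before. The trait values
# and harvest rule are synthetic stand-ins for the IBM output.
import numpy as np

rng = np.random.default_rng(0)
activity = rng.lognormal(mean=0.0, sigma=0.4, size=1000)   # trait before fishing
p_capture = 0.5 * activity / activity.max()                # more active -> more catchable
survived = rng.random(activity.size) > p_capture

S = (activity[survived].mean() - activity.mean()) / activity.std()
print(f"standardized selection differential on activity: {S:.3f}")  # expected negative
```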
NASA Astrophysics Data System (ADS)
Wang, Mian
This thesis research consists of four chapters, including a biomimetic three-dimensional tissue engineered nanostructured bone model for breast cancer bone metastasis study (Chapter one), cold atmospheric plasma for selectively ablating metastatic breast cancer (Chapter two), design of biomimetic and bioactive cold plasma modified nanostructured scaffolds for enhanced osteogenic differentiation of bone marrow derived mesenchymal stem cells (Chapter three), and enhanced osteoblast and mesenchymal stem cell functions on titanium with hydrothermally treated nanocrystalline hydroxyapatite/magnetically treated carbon nanotubes for orthopedic applications (Chapter four). All of the thesis research is focused on nanomaterials and the use of the cold plasma technique for various biomedical applications.
NASA Technical Reports Server (NTRS)
Wigley, D. A.
1982-01-01
A proposed configuration for a stepped specimen to be used in the system evaluation of mechanisms that can introduce warpage or dimensional changes in metallic alloys used for cryogenic wind tunnel models is described. Considerations for selecting a standard specimen are presented along with results obtained from an investigation carried out for VASCOMAX 200 maraging steel. Details of the machining and measurement techniques utilized in the investigation are presented. Initial results from the sample of VASCOMAX 200 show that the configuration and measuring techniques are capable of giving quantitative results.
Quality Attribute Techniques Framework
NASA Astrophysics Data System (ADS)
Chiam, Yin Kia; Zhu, Liming; Staples, Mark
The quality of software is achieved during its development. Development teams use various techniques to investigate, evaluate and control potential quality problems in their systems. These “Quality Attribute Techniques” target specific product qualities such as safety or security. This paper proposes a framework to capture important characteristics of these techniques. The framework is intended to support process tailoring, by facilitating the selection of techniques for inclusion into process models that target specific product qualities. We use risk management as a theory to accommodate techniques for many product qualities and lifecycle phases. Safety techniques have motivated the framework, and safety and performance techniques have been used to evaluate the framework. The evaluation demonstrates the ability of quality risk management to cover the development lifecycle and to accommodate two different product qualities. We identify advantages and limitations of the framework, and discuss future research on the framework.
NASA Astrophysics Data System (ADS)
Pôças, Isabel; Gonçalves, João; Costa, Patrícia Malva; Gonçalves, Igor; Pereira, Luís S.; Cunha, Mario
2017-06-01
In this study, hyperspectral reflectance (HySR) data derived from a handheld spectroradiometer were used to assess the water status of three grapevine cultivars in two sub-regions of the Douro wine region during two consecutive years. A large set of potential predictors derived from the HySR data were considered for modelling/predicting the predawn leaf water potential (Ψpd) through different statistical and machine learning techniques. Three HySR vegetation indices were selected as final predictors for the computation of the models, and the in-season time trend was removed from the data by using a time predictor. The vegetation indices selected were the Normalized Reflectance Index for the wavelengths 554 nm and 561 nm (NRI554,561), the water index (WI) for the wavelengths 900 nm and 970 nm, and the D1 index, which is associated with the rate of reflectance increase at the wavelengths of 706 nm and 730 nm. These vegetation indices covered the green, red edge and near infrared domains of the electromagnetic spectrum. A large set of state-of-the-art statistical and machine-learning modelling techniques was tested. Predictive modelling techniques based on the generalized boosted model (GBM), bagged multivariate adaptive regression splines (B-MARS), the generalized additive model (GAM), and Bayesian regularized neural networks (BRNN) showed the best performance for predicting Ψpd, with an average determination coefficient (R²) ranging between 0.78 and 0.80 and RMSE varying between 0.11 and 0.12 MPa. When cultivar Touriga Nacional was used for training the models and the cultivars Touriga Franca and Tinta Barroca for testing (independent validation), the models' performance was good, particularly for GBM (R² = 0.85; RMSE = 0.09 MPa). Additionally, the comparison of observed and predicted Ψpd showed an equitable dispersion of data from the various cultivars. The results achieved show a good potential of these predictive models based on vegetation indices to support irrigation scheduling in vineyards.
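A hedged sketch of the modelling idea: compute the three named indices from a reflectance spectrum and fit a boosted regression model for Ψpd. The exact index definitions are assumptions (NRI as a normalized difference, WI = R900/R970, D1 as the reflectance slope between 706 and 730 nm), and the spectra and Ψpd values are synthetic placeholders.

```python
# Hedged sketch: derive the three indices from reflectance spectra and fit a
# boosted regression model for predawn leaf water potential. Index formulas
# are assumptions; the spectra and Ψpd values are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def indices(wavelengths, reflectance):
    r = lambda wl: reflectance[np.argmin(np.abs(wavelengths - wl))]
    nri = (r(554) - r(561)) / (r(554) + r(561))   # assumed normalized difference
    wi = r(900) / r(970)                          # water index
    d1 = (r(730) - r(706)) / (730 - 706)          # assumed red-edge slope
    return nri, wi, d1

wavelengths = np.arange(400, 1001, 1.0)
spectra = np.random.default_rng(2).uniform(0.05, 0.6, (120, wavelengths.size))
psi_pd = np.random.default_rng(3).uniform(-1.2, -0.1, 120)   # MPa, placeholder

X = np.array([indices(wavelengths, s) for s in spectra])
gbm = GradientBoostingRegressor(random_state=0)
print(cross_val_score(gbm, X, psi_pd, scoring="neg_root_mean_squared_error", cv=5))
```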
Synthesis of Speaker Facial Movement to Match Selected Speech Sequences
NASA Technical Reports Server (NTRS)
Scott, K. C.; Kagels, D. S.; Watson, S. H.; Rom, H.; Wright, J. R.; Lee, M.; Hussey, K. J.
1994-01-01
A system is described which allows for the synthesis of a video sequence of a realistic-appearing talking human head. A phonic based approach is used to describe facial motion; image processing rather than physical modeling techniques are used to create video frames.
Experiential Education for Urban African Americans.
ERIC Educational Resources Information Center
Smith, Jennifer G.; McGinnis, J. Randy
1995-01-01
Stresses the importance of experiential educators being prepared to teach environmental education to students in specific contexts. A model for urban African American students includes the introduction and selection of a relevant local environmental issue; teaching strategies to investigate the issue; and techniques for initiating environmental…
Self Improving Methods for Materials and Process Design
1998-08-31
Experimental design characteristics, techniques, and analysis methodologies distinguish each phase of the work. The first phase focuses on developing an artificial neural network learning algorithm for function approximation, the second phase on an artificial neural network learning algorithm for time-series prediction, and the third phase on model selection.
Pyrogallol-imprinted polymers with methyl methacrylate via precipitation polymerization
NASA Astrophysics Data System (ADS)
Mehamod, Faizatul Shimal; Othman, Nor Amira; Bulat, Ku Halim Ku; Suah, Faiz Bukhari Mohd
2018-06-01
Molecular simulation techniques are important for understanding the chemical and physical properties of any material, and computational modeling reduces the time needed to find the best recipes for Molecularly-Imprinted Polymers (MIPs). In this study, pyrogallol-imprinted polymers (PIP) and non-imprinted polymers (NIPs) were synthesized via precipitation polymerization using pyrogallol (Py), methyl methacrylate (MMA) and divinylbenzene (DVB) as the template, functional monomer and cross-linker, respectively. The recipe was chosen according to the results of the computational techniques. The synthesized PIP and NIPs were characterized by Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis and UV-visible spectroscopy (UV-vis). Adsorption isotherm studies showed that the PIP and NIPs follow Scatchard isotherm models. A sorption kinetic study found that the PIP and NIPs follow pseudo-second-order kinetics, which indicates that the rate-limiting step is surface adsorption. The imprinting factor of the PIP, determined in a selectivity study, had a value of k > 1, which proved that the PIP was selective toward pyrogallol compared to the NIP.
de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino
2018-05-01
This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Pastor, Dena A.; Dodd, Barbara G.; Chang, Hua-Hua
2002-01-01
Studied the impact of using five different exposure control algorithms in two sizes of item pool calibrated using the generalized partial credit model. Simulation results show that the a-stratified design, in comparison to a no-exposure control condition, could be used to reduce item exposure and overlap and increase pool use, while degrading…
Felo, Michael; Christensen, Brandon; Higgins, John
2013-01-01
The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.
Chebouba, Lokmane; Boughaci, Dalila; Guziolowski, Carito
2018-06-04
The use of data derived from high-throughput technologies in drug target problems has become widespread during the last decades. This study proposes a meta-heuristic framework using stochastic local search (SLS) combined with random forest (RF), where the aim is to specify the most important genes and proteins leading to the best classification of Acute Myeloid Leukemia (AML) patients. First, we use a stochastic local search meta-heuristic as a feature selection technique to select the most significant proteins to be used in the classification step. Then we apply RF to classify new patients into their corresponding classes. The evaluation technique is to run the RF classifier on the training data to get a model, and then apply this model to the test data to find the appropriate class. We use the balanced accuracy (BAC) and the area under the receiver operating characteristic curve (AUROC) as metrics to measure the performance of our model. The proposed method is evaluated on the dataset issued from the DREAM 9 challenge. The comparison is done with a pure random forest (without feature selection) and with the two best-ranked results of the DREAM 9 challenge. We used three types of data: only clinical data, only proteomics data, and finally clinical and proteomics data combined. The numerical results show that the highest scores are obtained when using clinical data alone, and the lowest when using proteomics data alone. Further, our method succeeds in finding promising results compared to the methods presented in the DREAM challenge.
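A minimal sketch of the SLS-plus-RF idea, assuming a simple flip-one-feature local search scored by cross-validated balanced accuracy; the feature matrix and labels are synthetic placeholders, not the DREAM 9 data.

```python
# Hedged sketch: a stochastic local search (SLS) wrapper that flips one
# feature in or out per step and keeps the move if cross-validated balanced
# accuracy does not decrease. X and y are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(190, 40))                    # placeholder feature matrix
y = rng.integers(0, 2, size=190)                  # placeholder class labels

def score(mask):
    if mask.sum() == 0:
        return -np.inf
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=5,
                           scoring="balanced_accuracy").mean()

mask = rng.random(X.shape[1]) < 0.3               # random initial subset
best = score(mask)
for _ in range(50):                               # SLS iterations
    j = rng.integers(X.shape[1])
    trial = mask.copy(); trial[j] = ~trial[j]     # flip one feature
    s = score(trial)
    if s >= best:                                 # accept non-worsening moves
        mask, best = trial, s
print(best, np.flatnonzero(mask))
```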
'Enzyme Test Bench': A biochemical application of the multi-rate modeling
NASA Astrophysics Data System (ADS)
Rachinskiy, K.; Schultze, H.; Boy, M.; Büchs, J.
2008-11-01
In the expanding field of 'white biotechnology', enzymes are frequently applied to catalyze the biochemical reaction from a resource material to a valuable product. Evolutionarily designed to catalyze the metabolism in any life form, they selectively accelerate complex reactions under physiological conditions. Modern techniques, such as directed evolution, have been developed to satisfy the increasing demand for enzymes. Applying these techniques together with rational protein design, we aim at improving enzymes' activity, selectivity and stability. To tap the full potential of these techniques, it is essential to combine them with adequate screening methods. Nowadays a great number of high-throughput colorimetric and fluorescent enzyme assays are applied to measure initial enzyme activity with high throughput. However, predicting enzyme long-term stability from short experiments is still a challenge. A new high-throughput technique for enzyme characterization with specific attention to long-term stability, called the 'Enzyme Test Bench', is presented. The concept of the Enzyme Test Bench consists of short-term enzyme tests conducted under partly extreme conditions to predict the enzyme long-term stability under moderate conditions. The technique is based on mathematical modeling of temperature-dependent enzyme activation and deactivation. By adapting the temperature profiles in sequential experiments through optimum nonlinear experimental design, the long-term deactivation effects can be purposefully accelerated and detected within hours. During the experiment the enzyme activity is measured online to estimate the model parameters from the obtained data. Thus, the enzyme activity and long-term stability can be calculated as a function of temperature. The results of the characterization, based on microliter-format experiments lasting hours, are in good agreement with the results of long-term experiments in 1 L format. Thus, the new technique allows for both enzyme screening with regard to long-term stability and the choice of the optimal process temperature. The presented article gives a successful example of the application of multi-rate modeling, experimental design and parameter estimation within biochemical engineering. At the same time, it shows the limitations of the methods at the state of the art and addresses the current problems to the applied mathematics community.
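A simplified stand-in for the temperature-dependent deactivation model: first-order decay with an Arrhenius-type rate constant, integrated over a short accelerated temperature ramp and extrapolated to a long hold at moderate temperature. The parameter values are illustrative assumptions, not those of the Enzyme Test Bench.

```python
# Hedged sketch: first-order thermal deactivation with an Arrhenius rate,
# a simplified stand-in for the multi-rate activation/deactivation model.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                                   # J/(mol K)
k_ref, E_d, T_ref = 1e-6, 150e3, 313.15     # assumed deactivation parameters

def k_d(T):
    """Deactivation rate constant at temperature T (Arrhenius form)."""
    return k_ref * np.exp(-E_d / R * (1.0 / T - 1.0 / T_ref))

def deactivation(t, a, T_profile):
    return [-k_d(T_profile(t)) * a[0]]

# Short, partly extreme temperature ramp used to accelerate deactivation.
T_ramp = lambda t: 313.15 + 0.01 * t        # K, rising with time (s)
sol = solve_ivp(deactivation, (0, 7200), [1.0], args=(T_ramp,), max_step=60)
print("residual activity after 2 h ramp:", sol.y[0, -1])

# Extrapolated long-term stability at a moderate, constant temperature.
t_long = 30 * 24 * 3600                     # 30 days in seconds
print("predicted residual activity at 35 C:", np.exp(-k_d(308.15) * t_long))
```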
Vasconcelos, A G; Almeida, R M; Nobre, F F
2001-08-01
This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria that tried to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained over 90% of the variation in the endogenous variables and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.
3D-Pharmacophore Identification for κ-Opioid Agonists Using Ligand-Based Drug-Design Techniques
NASA Astrophysics Data System (ADS)
Yamaotsu, Noriyuki; Hirono, Shuichi
A selective κ-opioid receptor (KOR) agonist might act as a powerful analgesic without the side effects of μ-opioid receptor-selective drugs such as morphine. The eight classes of known KOR agonists have different chemical structures, making it difficult to construct a pharmacophore model that takes them all into account. Here, we summarize previous efforts to identify the pharmacophore for κ-opioid agonists and propose a new three-dimensional pharmacophore model that encompasses the κ-activities of all classes. This utilizes conformational sampling of agonists by high-temperature molecular dynamics and pharmacophore extraction through a series of molecular superpositions.
CHENG, JIANLIN; EICKHOLT, JESSE; WANG, ZHENG; DENG, XIN
2013-01-01
After decades of research, protein structure prediction remains a very challenging problem. In order to address the different levels of complexity of structural modeling, two types of modeling techniques — template-based modeling and template-free modeling — have been developed. Template-based modeling can often generate a moderate- to high-resolution model when a similar, homologous template structure is found for a query protein but fails if no template or only incorrect templates are found. Template-free modeling, such as fragment-based assembly, may generate models of moderate resolution for small proteins of low topological complexity. Seldom have the two techniques been integrated to improve protein modeling. Here we develop a recursive protein modeling approach to selectively and collaboratively apply template-based and template-free modeling methods to model template-covered (i.e. certain) and template-free (i.e. uncertain) regions of a protein. A preliminary implementation of the approach was tested on a number of hard modeling cases during the 9th Critical Assessment of Techniques for Protein Structure Prediction (CASP9) and successfully improved the quality of modeling in most of these cases. Recursive modeling can significantly reduce the complexity of protein structure modeling and integrate template-based and template-free modeling to improve the quality and efficiency of protein structure prediction. PMID:22809379
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with the quantified uncertainty necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
Spindle speed variation technique in turning operations: Modeling and real implementation
NASA Astrophysics Data System (ADS)
Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.
2016-11-01
Chatter is still one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among them, techniques based on varying the rotational speed of the spindle (SSV, Spindle Speed Variation) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On one hand, they can generate harmful overheating of the spindle, especially at high speeds. On the other hand, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper conducts a study of the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Second, the procedure to realistically implement the technique in a conventional turning center was developed and tested. The balance between the improved stability margins and acceptable behavior of the spindle is ensured by energy consumption measurements. The mathematical model shows good agreement with experimental cutting tests.
Response Surface Modeling Using Multivariate Orthogonal Functions
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; DeLoach, Richard
2001-01-01
A nonlinear modeling technique was used to characterize response surfaces for non-dimensional longitudinal aerodynamic force and moment coefficients, based on wind tunnel data from a commercial jet transport model. Data were collected using two experimental procedures - one based on modern design of experiments (MDOE), and one using a classical one-factor-at-a-time (OFAT) approach. The nonlinear modeling technique used multivariate orthogonal functions generated from the independent variable data as modeling functions in a least squares context to characterize the response surfaces. Model terms were selected automatically using a prediction error metric. Prediction error bounds computed from the modeling data alone were found to be a good measure of actual prediction error for prediction points within the inference space. Root-mean-square model fit error and prediction error were less than 4 percent of the mean response value in all cases. Efficacy and prediction performance of the response surface models identified from both MDOE and OFAT experiments were investigated.
NASA Astrophysics Data System (ADS)
Escalona, Luis; Díaz-Montiel, Paulina; Venkataraman, Satchi
2016-04-01
Laminated carbon fiber reinforced polymer (CFRP) composite materials are increasingly used in aerospace structures due to their superior mechanical properties and reduced weight. Assessing the health and integrity of these structures requires non-destructive evaluation (NDE) techniques to detect and measure interlaminar delamination and intralaminar matrix cracking damage. The electrical resistance change (ERC) based NDE technique uses the inherent changes in the conductive properties of the composite to characterize internal damage. Several works that have explored the ERC technique have been limited to thin cross-ply laminates with simple linear or circular electrode arrangements. This paper investigates a method for optimum selection of electrode configurations for delamination detection in thick cross-ply laminates using ERC. Inverse identification of damage requires numerical optimization of the measured response against a model-predicted response. Here, the electrical voltage field in the CFRP composite laminate is calculated using finite element analysis (FEA) models for different specified delamination sizes and locations, and locations of ground and current electrodes. Reducing the number of sensor locations and measurements is needed to reduce hardware requirements and the computational effort needed for inverse identification. This paper explores the use of the effective independence (EI) measure originally proposed for sensor location optimization in experimental vibration modal analysis. The EI measure is used for selecting the minimum set of resistance measurements among all possible combinations of electrode pairs among the n electrodes. To enable the use of EI for ERC, this research proposes a singular value decomposition (SVD) to obtain a spectral representation of the resistance measurements in the laminate. The effectiveness of the EI measure in eliminating redundant electrode pairs is demonstrated by performing inverse identification of damage using both the full set of resistance measurements and the reduced set of measurements. The investigation shows that the EI measure is effective for optimally selecting the electrode pairs needed for resistance measurements in ERC-based damage detection.
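A hedged sketch of the effective independence (EI) ranking described above: each candidate electrode-pair measurement is scored by its contribution to the identifiability of the damage parameters (the diagonal of the projection matrix, obtained here via SVD), and the least independent measurements are dropped iteratively. The sensitivity matrix is a random placeholder for the FEA-derived sensitivities.

```python
# Hedged sketch: effective independence (EI) ranking of candidate resistance
# measurements. Rows of S are candidate electrode-pair measurements, columns
# are damage parameters; S is a random placeholder for FEA sensitivities.
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(45, 6))          # 45 candidate pairs, 6 damage parameters
keep = list(range(S.shape[0]))

while len(keep) > 12:                 # retain a reduced measurement set
    A = S[keep]
    # Spectral (SVD) representation; EI = diagonal of the projection matrix
    # A (A'A)^-1 A', i.e. each row's contribution to identifiability.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    ei = np.sum(U**2, axis=1)         # equals diag(A @ pinv(A'A) @ A')
    keep.pop(int(np.argmin(ei)))      # drop the least independent measurement

print("selected measurement indices:", keep)
```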
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process, and typically a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R²) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. Several real data sets representing a broad range of release profiles are used to illustrate the process and to demonstrate the advantages of this automated approach over the traditional one. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the model selection criterion AIC generally selected the best model. We believe that the approach we proposed may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
Three-dimensional spiral CT during arterial portography: comparison of three rendering techniques.
Heath, D G; Soyer, P A; Kuszyk, B S; Bliss, D F; Calhoun, P S; Bluemke, D A; Choti, M A; Fishman, E K
1995-07-01
The three most common techniques for three-dimensional reconstruction are surface rendering, maximum-intensity projection (MIP), and volume rendering. Surface-rendering algorithms model objects as collections of geometric primitives that are displayed with surface shading. The MIP algorithm renders an image by selecting the voxel with the maximum intensity signal along a line extended from the viewer's eye through the data volume. Volume-rendering algorithms sum the weighted contributions of all voxels along the line. Each technique has advantages and shortcomings that must be considered during selection of one for a specific clinical problem and during interpretation of the resulting images. With surface rendering, sharp-edged, clear three-dimensional reconstruction can be completed on modest computer systems; however, overlapping structures cannot be visualized and artifacts are a problem. MIP is computationally a fast technique, but it does not allow depiction of overlapping structures, and its images are three-dimensionally ambiguous unless depth cues are provided. Both surface rendering and MIP use less than 10% of the image data. In contrast, volume rendering uses nearly all of the data, allows demonstration of overlapping structures, and engenders few artifacts, but it requires substantially more computer power than the other techniques.
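The difference between MIP and summed (volume-rendering-like) compositing can be sketched in a few lines; the CT volume below is a random placeholder and the compositing weights are illustrative, not a faithful transfer function.

```python
# Hedged sketch: maximum-intensity projection versus a simple weighted-sum
# (volume-rendering-like) projection along one axis of a CT volume.
# `volume` is a random placeholder for spiral-CT data.
import numpy as np

volume = np.random.default_rng(0).integers(0, 2000, size=(64, 256, 256))

mip = volume.max(axis=0)                        # brightest voxel along each ray
weights = np.linspace(1.0, 0.2, volume.shape[0])[:, None, None]
composite = (volume * weights).sum(axis=0)      # crude weighted compositing

print(mip.shape, composite.shape)               # both (256, 256) projection images
```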
NASA Astrophysics Data System (ADS)
Jothiprakash, V.; Magar, R. B.
2012-07-01
In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data. Further, the model performance was evaluated using various performance criteria. From the results, it is found that the performance of the LGP models is superior to that of the ANN and ANFIS models, especially in predicting the peak inflows for both daily and hourly time-steps. A detailed comparison of the overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better owing, apart from reduced noise in the data, to the techniques used and their training approach, the appropriate selection of network architecture, the required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data models is due to larger variations and a smaller number of observed values.
A Gaussian Mixture Model-based continuous Boundary Detection for 3D sensor networks.
Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji
2010-01-01
This paper proposes a high-precision Gaussian Mixture Model-based novel Boundary Detection 3D (BD3D) scheme with reasonable implementation cost for 3D cases by selecting a minimum number of Boundary sensor Nodes (BNs) in continuous moving objects. It shows apparent advantages in that the two classes of boundary and non-boundary sensor nodes can be efficiently classified using model selection techniques for finite mixture models; furthermore, the set of sensor readings from each sensor node's spatial neighbors is formulated using a Gaussian Mixture Model. Differently from DECOMO [1] and COBOM [2], we also format a BN array with an additional own sensor reading to aid in selecting Event BNs (EBNs) and non-EBNs from the observations of the BNs. In particular, we propose a Thick Section Model (TSM) to solve the problem of transition between 2D and 3D. It is verified by simulations that the BD3D 2D model outperforms DECOMO and COBOM in terms of average residual energy and the number of BNs selected, while the BD3D 3D model demonstrates sound performance even for sensor networks with low densities, especially when the value of the sensor transmission range (r) is larger than the value of the Section Thickness (d) in TSM. We have also rigorously proved its correctness for continuous geometric domains and full robustness for sensor networks over 3D terrains.
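A minimal sketch of the mixture-model idea behind boundary detection: fit one- and two-component Gaussian mixtures to a node's neighborhood readings and use BIC model selection to decide whether the readings split into two levels (a boundary). The readings are synthetic placeholders, and the scheme's BN array formatting and TSM details are not reproduced.

```python
# Hedged sketch: classify a node as boundary or non-boundary by comparing the
# BIC of one- and two-component Gaussian mixtures fitted to its neighborhood
# readings. The readings are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# A boundary node sees a mixture of two reading levels among its neighbors.
boundary_readings = np.concatenate([rng.normal(20, 1, 15),
                                    rng.normal(35, 1, 15)]).reshape(-1, 1)
interior_readings = rng.normal(20, 1, 30).reshape(-1, 1)

def is_boundary(readings):
    bic = [GaussianMixture(n_components=k, random_state=0)
           .fit(readings).bic(readings) for k in (1, 2)]
    return bic[1] < bic[0]            # two components fit better -> boundary

print(is_boundary(boundary_readings), is_boundary(interior_readings))
```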
2016-06-01
Experimental design characteristics, techniques, and analysis methodologies distinguish each phase of the MBSE MEASA. Experimental design selection, simulation analysis, and trade space analysis support the final two stages of the methodology. Rounding has the potential to increase the correlation between columns of the experimental design matrix.
Lorenzo-Seva, Urbano; Ferrando, Pere J
2011-03-01
We provide an SPSS program that implements currently recommended techniques and recent developments for selecting variables in multiple linear regression analysis via the relative importance of predictors. The approach consists of: (1) optimally splitting the data for cross-validation, (2) selecting the final set of predictors to be retained in the regression equation, and (3) assessing the behavior of the chosen model using standard indices and procedures. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from brm.psychonomic-journals.org/content/supplemental.
Car, B; Veissier, L; Louchet-Chauvet, A; Le Gouët, J-L; Chanelière, T
2018-05-11
In Er^{3+}:Y_{2}SiO_{5}, we demonstrate the selective optical addressing of the ^{89}Y^{3+} nuclear spins through their superhyperfine coupling with the Er^{3+} electronic spins possessing large Landé g factors. We experimentally probe the electron-nuclear spin mixing with photon echo techniques and validate our model. The site-selective optical addressing of the Y^{3+} nuclear spins is designed by adjusting the magnetic field strength and orientation. This constitutes an important step towards the realization of long-lived solid-state qubits optically addressed by telecom photons.
Initial proposition of kinematics model for selected karate actions analysis
NASA Astrophysics Data System (ADS)
Hachaj, Tomasz; Koptyra, Katarzyna; Ogiela, Marek R.
2017-03-01
The motivation for this paper is to propose and initially evaluate two new kinematics models developed to describe motion capture (MoCap) data of karate techniques. We developed this novel proposition to create a model capable of handling action descriptions from both multimedia and professional MoCap hardware. For the evaluation we used 25-joint recordings of karate techniques acquired with Kinect version 2, consisting of MoCap recordings of two professional sport (black belt) instructors and masters of Oyama Karate. We selected the following actions for initial analysis: left-handed furi-uchi punch, right-leg hiza-geri kick, right-leg yoko-geri kick and left-handed jodan-uke block. Based on our evaluation, we conclude that both proposed kinematics models seem to be convenient methods for describing karate actions. Of the two proposed variable models, the global one appears more useful for further work, because in the case of the considered punches its variables seem to be less correlated and may also be easier to interpret owing to the single reference coordinate system. Principal components analysis also proved to be a reliable way to examine the quality of the kinematics models, and plotting the variables in principal-component space nicely presents the dependences between variables.
Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes
NASA Astrophysics Data System (ADS)
Pourtaherian, Arash; Zinger, Svitlana; de With, Peter H. N.; Korsten, Hendrikus H. M.; Mihajlovic, Nenad
2015-03-01
Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, i.e. for biopsy guidance, regional anesthesia or brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing the Gabor transformation. Both algorithms utilize supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle to the selected voxels. The major differences between the two approaches are in extracting the feature vectors for classification and selecting the criterion for fitting. We evaluate the performance of the two techniques using manually annotated ground truth in several ex-vivo situations of different complexities, containing three different needle types with various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, leading to the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better capable of distinguishing the needle voxels in all datasets. Moreover, it is shown that the complete processing chain of the Gabor-based method outperforms the line filtering in accuracy and stability of the detection results.
Efficient live face detection to counter spoof attack in face recognition systems
NASA Astrophysics Data System (ADS)
Biswas, Bikram Kumar; Alam, Mohammad S.
2015-03-01
Face recognition is a critical tool used in almost all major biometrics based security systems. But recognition, authentication and liveness detection of the face of an actual user is a major challenge because an imposter or a non-live face of the actual user can be used to spoof the security system. In this research, a robust technique is proposed which detects liveness of faces in order to counter spoof attacks. The proposed technique uses a three-dimensional (3D) fast Fourier transform to compare spectral energies of a live face and a fake face in a mathematically selective manner. The mathematical model involves evaluation of energies of selective high frequency bands of average power spectra of both live and non-live faces. It also carries out proper recognition and authentication of the face of the actual user using the fringe-adjusted joint transform correlation technique, which has been found to yield the highest correlation output for a match. Experimental tests show that the proposed technique yields excellent results for identifying live faces.
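A hedged sketch of the spectral-energy comparison: take the 3D FFT of a short video cube and compare the fraction of power in a high-frequency band for a live clip versus a static (replayed photo) clip. The band limits and the data are illustrative placeholders, not the paper's selective bands or the fringe-adjusted JTC stage.

```python
# Hedged sketch: compare the energy in a high-frequency band of the 3D power
# spectrum of a short video cube (frames x height x width). The cutoff is
# illustrative; `live_cube` and `fake_cube` are synthetic placeholders.
import numpy as np

def high_freq_energy(cube, cutoff=0.25):
    spec = np.abs(np.fft.fftshift(np.fft.fftn(cube)))**2        # power spectrum
    grids = np.meshgrid(*[np.fft.fftshift(np.fft.fftfreq(n)) for n in cube.shape],
                        indexing="ij")
    radius = np.sqrt(sum(g**2 for g in grids))                  # normalized frequency
    return spec[radius > cutoff].sum() / spec.sum()             # band energy fraction

rng = np.random.default_rng(0)
live_cube = rng.normal(size=(16, 64, 64))                       # placeholder live clip
fake_cube = np.repeat(rng.normal(size=(1, 64, 64)), 16, axis=0) # static photo replay

print(high_freq_energy(live_cube), high_freq_energy(fake_cube))
```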
Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection
Moomen, Abdelniser; Ali, Abdulbaset; Ramahi, Omar M.
2016-01-01
Nondestructive Testing (NDT) assessment of materials' health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that can work on metallics and dielectrics, such as microwave testing. Reducing the complexity and expenses associated with current diagnostic practices of microwave NDT of structural health requires an effective and intelligent approach based on the feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency ranges of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on the reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is to reduce the sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the number of top important features can be identified, and then only the most influential features (frequencies) are considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models; namely, Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing the feature selection algorithms. PMID:27104533
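A minimal sketch of treating sweeping frequencies as features: rank frequencies by mutual information with the crack label (a stand-in for the information-gain family named above) and train a classifier on the top-ranked frequencies only. The S-parameter magnitudes and labels are synthetic placeholders.

```python
# Hedged sketch: rank sweeping frequencies (features) by mutual information
# with the crack/no-crack label and classify using only the top frequencies.
# The |S21| sweeps and labels are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
freqs = np.linspace(8.2e9, 12.4e9, 201)            # X-band sweep (Hz), assumed
X = rng.normal(size=(120, freqs.size))             # placeholder |S21| per sweep
y = rng.integers(0, 2, size=120)                   # placeholder crack labels

mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:10]                    # ten most informative frequencies
print("selected frequencies (GHz):", np.round(freqs[top] / 1e9, 2))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X[:, top], y, cv=5).mean())
```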
Entropy-Based Search Algorithm for Experimental Design
NASA Astrophysics Data System (ADS)
Malakar, N. K.; Knuth, K. H.
2011-03-01
The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
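The core entropy criterion can be sketched compactly: for each candidate experiment, compute the Shannon entropy of the outcomes predicted by an ensemble of probable models and choose the maximizer. The model ensemble and experiment grid below are synthetic placeholders, and the nested-sampling-style threshold search itself is not reproduced.

```python
# Hedged sketch: score candidate experiments by the Shannon entropy of the
# outcomes predicted by an ensemble of probable models, then pick the most
# informative setting. Models (slope, intercept samples) are placeholders.
import numpy as np

rng = np.random.default_rng(0)
models = rng.normal(loc=[1.0, 0.5], scale=0.2, size=(200, 2))  # sampled (a, b)
candidate_x = np.linspace(0.0, 10.0, 50)                       # experiment settings

def outcome_entropy(x, models, bins=20):
    predictions = models[:, 0] * x + models[:, 1]   # each model's predicted outcome
    p, _ = np.histogram(predictions, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()                   # Shannon entropy (nats)

scores = [outcome_entropy(x, models) for x in candidate_x]
print("most informative setting:", candidate_x[int(np.argmax(scores))])
```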
Balabin, Roman M; Smirnov, Sergey V
2011-04-29
During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from petroleum to biomedical sectors. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of applying other spectroscopic techniques, such as Raman, ultraviolet-visible (UV-vis), or nuclear magnetic resonance (NMR) spectroscopy, can also be greatly improved by an appropriate choice of feature selection. Copyright © 2011 Elsevier B.V. All rights reserved.
Analytical network process based optimum cluster head selection in wireless sensor network.
Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad
2017-01-01
Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring and a plethora of other areas. A WSN is equipped with hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more highlighted. In such a case, network lifetime mainly depends on efficient use of available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge and split technique was proposed to construct the network topology. With the topology constructed through our proposed technique, we use the analytical network process (ANP) model for cluster head (CH) selection in the WSN. Five distinct parameters: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN) are considered for CH selection. The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, sensitivity analysis is carried out to check the stability of alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy efficient clustering protocols in terms of optimum CH selection and minimizing the CH reselection process, which results in extending overall network lifetime. The paper also shows that the ANP method can be used for CH selection with a better understanding of the dependencies among the different components involved in the evaluation process.
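A full ANP implementation requires pairwise-comparison matrices and a limit supermatrix, which are beyond a short sketch; the simplified weighted-sum ranking below only illustrates how candidate nodes could be scored over the same five criteria once criterion weights have been obtained. The weights and node values are invented for illustration, not taken from the paper.

    import numpy as np

    # Criteria: DistNode, REL, DistCent, TCH, MN  (rows = candidate nodes).
    # "Benefit" criteria are better when larger, "cost" criteria when smaller.
    benefit = np.array([False, True, False, False, True])
    weights = np.array([0.25, 0.35, 0.20, 0.10, 0.10])   # assumed; would come from ANP

    nodes = np.array([
        # DistNode  REL   DistCent  TCH   MN
        [   12.0,  0.80,    5.0,    1.0,  1.0],
        [    8.0,  0.60,    9.0,    0.0,  0.0],
        [   15.0,  0.95,    4.0,    2.0,  1.0],
    ])

    # Min-max normalise each criterion, flipping cost criteria so higher is better.
    lo, hi = nodes.min(axis=0), nodes.max(axis=0)
    norm = (nodes - lo) / np.where(hi > lo, hi - lo, 1.0)
    norm = np.where(benefit, norm, 1.0 - norm)

    scores = norm @ weights
    print("cluster-head ranking (best first):", np.argsort(scores)[::-1])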
DOT National Transportation Integrated Search
2009-04-01
The primary umbrella method used by the Oregon Department of Transportation (ODOT) to ensure on-time performance in standard construction contracting is liquidated damages. The assessment value is usually a matter of some judgment. In practice,...
Executivism and Deanship in Selected South African Universities
ERIC Educational Resources Information Center
Seale, Oliver; Cross, Michael
2018-01-01
It has been argued that traditional governance practices and decision making associated with the collegial model are no longer effective in universities, and business-like management techniques should be adopted. Dwindling resources, external demands for accountability, and increased competition for market share, have resulted in efficiency…
Process-driven selection of information systems for healthcare
NASA Astrophysics Data System (ADS)
Mills, Stephen F.; Yeh, Raymond T.; Giroir, Brett P.; Tanik, Murat M.
1995-05-01
Integration of networking and data management technologies such as PACS, RIS and HIS into a healthcare enterprise in a clinically acceptable manner is a difficult problem. Data within such a facility are generally managed via a combination of manual hardcopy systems and proprietary, special-purpose data processing systems. Process modeling techniques have been successfully applied to engineering and manufacturing enterprises, but have not generally been applied to service-based enterprises such as healthcare facilities. The use of process modeling techniques can provide guidance for the placement, configuration and usage of PACS and other informatics technologies within the healthcare enterprise, and thus improve the quality of healthcare. Initial process modeling activities conducted within the Pediatric ICU at Children's Medical Center in Dallas, Texas are described. The ongoing development of a full enterprise-level model for the Pediatric ICU is also described.
Transshipment site selection using the AHP and TOPSIS approaches under fuzzy environment.
Onüt, Semih; Soner, Selin
2008-01-01
Site selection is an important issue in waste management. Selection of the appropriate solid waste site requires consideration of multiple alternative solutions and evaluation criteria because of system complexity. Evaluation procedures involve several objectives, and it is often necessary to compromise among possibly conflicting tangible and intangible factors. For these reasons, multiple criteria decision-making (MCDM) has been found to be a useful approach to solve this kind of problem. Different MCDM models have been applied to solve this problem. But most of them are basically mathematical and ignore qualitative and often subjective considerations. It is easier for a decision-maker to describe a value for an alternative by using linguistic terms. In the fuzzy-based method, the rating of each alternative is described using linguistic terms, which can also be expressed as triangular fuzzy numbers. Furthermore, there have not been any studies focused on the site selection in waste management using both fuzzy TOPSIS (technique for order preference by similarity to ideal solution) and AHP (analytical hierarchy process) techniques. In this paper, a fuzzy TOPSIS based methodology is applied to solve the solid waste transshipment site selection problem in Istanbul, Turkey. The criteria weights are calculated by using the AHP.
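For readers unfamiliar with the mechanics, the sketch below implements a crisp (non-fuzzy) TOPSIS ranking with externally supplied criterion weights, which is the role the AHP plays above; the decision matrix and weights are invented, and the fuzzy triangular-number extension used for the Istanbul case study is omitted.

    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (rows) by closeness to the ideal solution."""
        m = np.asarray(matrix, dtype=float)
        norm = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalisation
        v = norm * weights                                # weighted normalised matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
        d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
        return d_neg / (d_pos + d_neg)                    # closeness coefficient

    # Three candidate sites, four criteria (e.g. cost, distance, capacity, impact).
    sites = [[200, 12, 50, 3],
             [150, 20, 40, 2],
             [300,  8, 70, 4]]
    weights = np.array([0.4, 0.2, 0.3, 0.1])              # e.g. derived from AHP
    benefit = np.array([False, False, True, False])       # only capacity is a benefit

    cc = topsis(sites, weights, benefit)
    print("closeness coefficients:", np.round(cc, 3), "-> best site:", int(np.argmax(cc)))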
Development of a QFD-based expert system for CNC turning centre selection
NASA Astrophysics Data System (ADS)
Prasad, Kanika; Chakraborty, Shankar
2015-12-01
Computer numerical control (CNC) machine tools are automated devices capable of generating complicated and intricate product shapes in a shorter time. Selection of the best CNC machine tool is a critical, complex and time-consuming task due to the availability of a wide range of alternatives and the conflicting nature of several evaluation criteria. Although past researchers have attempted to select appropriate machining centres using different knowledge-based systems, mathematical models and multi-criteria decision-making methods, none of those approaches has given due importance to the voice of customers. The aforesaid limitation can be overcome using the quality function deployment (QFD) technique, which is a systematic approach for integrating customers' needs and designing the product to meet those needs first time and every time. In this paper, the adopted QFD-based methodology helps in selecting CNC turning centres for a manufacturing organization, providing due importance to the voice of customers to meet their requirements. An expert system based on the QFD technique is developed in Visual BASIC 6.0 to automate the CNC turning centre selection procedure for different production plans. Three illustrative examples are demonstrated to explain the real-time applicability of the developed expert system.
Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...
2016-12-01
Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
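The planarity-constrained greedy selection can be sketched without the exact Ising inference machinery: the code below adds candidate edges in decreasing order of absolute pairwise correlation and accepts an edge only if the graph remains planar (using networkx's planarity test). Scoring edges by |correlation| is an illustrative stand-in for the paper's likelihood-based criterion, and the data are synthetic.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    n_vars, n_samples = 8, 500
    X = rng.integers(0, 2, size=(n_samples, n_vars)) * 2 - 1   # binary +/-1 samples
    corr = np.corrcoef(X, rowvar=False)

    # Candidate edges sorted by |pairwise correlation| (proxy for likelihood gain).
    edges = sorted(((abs(corr[i, j]), i, j)
                    for i in range(n_vars) for j in range(i + 1, n_vars)),
                   reverse=True)

    G = nx.Graph()
    G.add_nodes_from(range(n_vars))
    for score, i, j in edges:
        G.add_edge(i, j)
        is_planar, _ = nx.check_planarity(G)
        if not is_planar:
            G.remove_edge(i, j)               # reject edges that break planarity

    print(f"selected planar graph with {G.number_of_edges()} edges")
    print(sorted(G.edges()))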
Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.
2015-08-19
Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
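Steps three and four of this process (estimating models over combinations of inputs and comparing them with flexible metrics) can be sketched as follows; the reliability response, the candidate age/usage/exposure inputs, and the use of AIC as the comparison metric are illustrative assumptions rather than the paper's actual analysis.

    import itertools
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 150
    data = {
        "age": rng.uniform(0, 20, n),
        "usage": rng.uniform(0, 100, n),
        "humidity": rng.uniform(10, 90, n),
        "temp_cycles": rng.poisson(30, n).astype(float),
    }
    # Reliability summary (e.g. a degradation score); depends on age and humidity only.
    y = 0.05 * data["age"] + 0.01 * data["humidity"] + rng.normal(0, 0.2, n)

    inputs = list(data)
    results = []
    for k in range(1, len(inputs) + 1):
        for combo in itertools.combinations(inputs, k):
            X = sm.add_constant(np.column_stack([data[c] for c in combo]))
            fit = sm.OLS(y, X).fit()
            results.append((fit.aic, combo))

    for aic, combo in sorted(results)[:5]:    # prioritized list for experts to review
        print(f"AIC = {aic:7.2f}   inputs = {combo}")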
Prediction of Baseflow Index of Catchments using Machine Learning Algorithms
NASA Astrophysics Data System (ADS)
Yadav, B.; Hatfield, K.
2017-12-01
We present the results of eight machine learning techniques for predicting the baseflow index (BFI) of ungauged basins using a surrogate of catchment scale climate and physiographic data. The tested algorithms include ordinary least squares, ridge regression, least absolute shrinkage and selection operator (lasso), elasticnet, support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Our work seeks to identify the dominant controls of BFI that can be readily obtained from ancillary geospatial databases and remote sensing measurements, such that the developed techniques can be extended to ungauged catchments. More than 800 gauged catchments spanning the continental United States were selected to develop the general methodology. The BFI calculation was based on the baseflow separated from the daily streamflow hydrograph using the HYSEP filter. The surrogate catchment attributes were compiled from multiple sources including digital elevation models, soil, land use, climate data, and other publicly available ancillary and geospatial data. 80% of the catchments were used to train the ML algorithms, and the remaining 20% of the catchments were used as an independent test set to measure the generalization performance of the fitted models. A k-fold cross-validation using exhaustive grid search was used to fit the hyperparameters of each model. Initial model development was based on 19 independent variables, but after variable selection and feature ranking, we generated revised sparse models of BFI prediction that are based on only six catchment attributes. These key predictive variables selected after the careful evaluation of the bias-variance tradeoff include average catchment elevation, slope, fraction of sand, permeability, temperature, and precipitation. The most promising algorithms exceeding an accuracy score (r-square) of 0.7 on test data include support vector machine, gradient boosted regression trees, random forests, and extremely randomized trees. Considering both the accuracy and the computational complexity of these algorithms, we identify the extremely randomized trees as the best performing algorithm for BFI prediction in ungauged basins.
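The model-comparison step can be sketched as below, where several of the listed learners are scored by five-fold cross-validated R^2; the synthetic catchment attributes and the mostly default hyperparameters are simplifications of the exhaustive grid search described above.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression, Ridge, Lasso
    from sklearn.ensemble import (RandomForestRegressor, ExtraTreesRegressor,
                                  GradientBoostingRegressor)
    from sklearn.svm import SVR

    rng = np.random.default_rng(4)
    n_catchments, n_attrs = 400, 6      # e.g. elevation, slope, sand, permeability, temp, precip
    X = rng.normal(size=(n_catchments, n_attrs))
    bfi = (0.5 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + 0.05 * np.tanh(X[:, 3])
           + 0.05 * rng.normal(size=n_catchments))

    models = {
        "OLS": LinearRegression(),
        "ridge": Ridge(alpha=1.0),
        "lasso": Lasso(alpha=0.01),
        "SVR": SVR(C=1.0),
        "GBM": GradientBoostingRegressor(random_state=0),
        "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "extra trees": ExtraTreesRegressor(n_estimators=200, random_state=0),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, bfi, cv=5, scoring="r2").mean()
        print(f"{name:15s} mean CV R^2 = {r2:.3f}")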
A novel feature ranking method for prediction of cancer stages using proteomics data
Saghapour, Ehsan; Sehhati, Mohammadreza
2017-01-01
Proteomic analysis of cancers' stages has provided new opportunities for the development of novel, highly sensitive diagnostic tools which help early detection of cancer. This paper introduces a new feature ranking approach called FRMT. FRMT is based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method, which selects the most discriminative proteins from proteomics data for cancer staging. In this approach, the outcomes of 10 feature selection techniques were combined by the TOPSIS method to select the final discriminative proteins from seven different proteomic databases of protein expression profiles. In the proposed workflow, feature selection methods and protein expressions have been considered as criteria and alternatives in TOPSIS, respectively. The proposed method is tested on seven various classifier models in a 10-fold cross validation procedure that was repeated 30 times on the seven cancer datasets. The obtained results demonstrated the higher stability and superior classification performance of the method in comparison with other methods, as well as its lower sensitivity to the applied classifier. Moreover, the final introduced proteins are informative and have the potential for application in real medical practice. PMID:28934234
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
Survey of air cargo forecasting techniques
NASA Technical Reports Server (NTRS)
Kuhlthan, A. R.; Vermuri, R. S.
1978-01-01
Forecasting techniques currently in use in estimating or predicting the demand for air cargo in various markets are discussed with emphasis on the fundamentals of the different forecasting approaches. References to specific studies are cited when appropriate. The effectiveness of current methods is evaluated and several prospects for future activities or approaches are suggested. Appendices contain summary-type analyses of about 50 specific publications on forecasting, and selected bibliographies on air cargo forecasting, air passenger demand forecasting, and general demand and modal-split modeling.
Collagenase Santyl ointment: a selective agent for wound debridement.
Shi, Lei; Carson, Dennis
2009-01-01
Enzymatic debridement is a frequently used technique for removal of necrotic tissue from wounds. Proteases with specificity to break down the collagenous materials in necrotic tissues can achieve selective debridement, digesting denatured collagen in eschar while sparing nonnecrotic tissues. This article provides information about the selectivity of a collagenase-based debriding agent, including evidence of its safe and efficacious uses. Recent research has been conducted, investigating the chemical and biological properties of collagenase ointment, including healing in animal models, digestion power on different collagen types, cell migration activity from collagen degradation products, and compatibility with various wound dressings and metal ions. Evidence presented demonstrates that collagenase ointment is an effective, selective, and safe wound debriding agent.
Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah
2018-02-01
Quantitative structure-activity relationship (QSAR) is an effective computational technique for drug design that relates the chemical structures of compounds to their biological activities. Feature selection is an important step in QSAR based drug design to select the most relevant descriptors. One of the most popular feature selection methods for classification problems is the Fisher score, whose aim is to minimize the within-class distance and maximize the between-class distance. In this study, the properties of the Fisher criterion were extended for QSAR models to define new distance metrics based on the continuous activity values of compounds with known activities. Then, a semi-supervised feature selection method was proposed based on the combination of Fisher and Laplacian criteria which exploits both compounds with known and unknown activities to select the relevant descriptors. To demonstrate the efficiency of the proposed semi-supervised feature selection method in selecting the relevant descriptors, we applied the method and other feature selection methods to three QSAR data sets: serine/threonine-protein kinase PLK3 inhibitors, ROCK inhibitors and phenol compounds. The results demonstrated that the QSAR models built on the descriptors selected by the proposed semi-supervised method have better performance than other models. This indicates the efficiency of the proposed method in selecting the relevant descriptors using the compounds with known and unknown activities. The results of this study showed that the compounds with known and unknown activities can be helpful to improve the performance of the combined Fisher and Laplacian based feature selection methods.
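The supervised half of the criterion, the classical Fisher score, fits in a few lines; the sketch below ranks descriptors of labelled compounds by between-class versus within-class variance. The Laplacian term and the continuous-activity extension proposed in the paper are not reproduced, and the descriptor matrix is synthetic.

    import numpy as np

    def fisher_score(X, y):
        """Classical Fisher score per feature for discrete class labels y."""
        X, y = np.asarray(X, float), np.asarray(y)
        overall_mean = X.mean(axis=0)
        num = np.zeros(X.shape[1])
        den = np.zeros(X.shape[1])
        for c in np.unique(y):
            Xc = X[y == c]
            num += Xc.shape[0] * (Xc.mean(axis=0) - overall_mean) ** 2   # between-class
            den += Xc.shape[0] * Xc.var(axis=0)                          # within-class
        return num / np.maximum(den, 1e-12)

    rng = np.random.default_rng(5)
    y = rng.integers(0, 2, size=100)               # active / inactive compounds
    X = rng.normal(size=(100, 8))                  # 8 molecular descriptors
    X[:, 2] += 2.0 * y                             # descriptor 2 is discriminative

    scores = fisher_score(X, y)
    print("descriptor ranking (best first):", np.argsort(scores)[::-1])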
Power-constrained supercomputing
NASA Astrophysics Data System (ADS)
Bailey, Peter E.
As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
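The LP view of power-constrained scheduling can be illustrated on a toy instance in which each code section chooses a fractional mix of configurations to minimise total time; an energy budget is used below as a linear stand-in for the power bound, and the configuration table is invented, so this is only a schematic of the dissertation's formulation.

    import numpy as np
    from scipy.optimize import linprog

    # Three code sections, four candidate configurations (DVFS state x thread count).
    # t[s, c] = time of section s in config c (s); p[c] = average power draw (W).
    t = np.array([[4.0, 3.0, 2.5, 2.0],
                  [6.0, 4.5, 3.8, 3.2],
                  [3.0, 2.4, 2.0, 1.7]])
    p = np.array([60.0, 80.0, 100.0, 130.0])
    energy_budget = 850.0                      # J; linear stand-in for the power bound

    n_sec, n_cfg = t.shape
    # Decision variable x[s, c]: fraction of section s executed in configuration c.
    c_obj = t.ravel()                          # minimise total execution time
    A_ub = (t * p).reshape(1, -1)              # total energy <= budget
    b_ub = [energy_budget]
    A_eq = np.zeros((n_sec, n_sec * n_cfg))    # each section fully assigned
    for s in range(n_sec):
        A_eq[s, s * n_cfg:(s + 1) * n_cfg] = 1.0
    b_eq = np.ones(n_sec)

    res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    print("optimal total time: %.2f s" % res.fun)
    print("per-section configuration mix:\n", res.x.reshape(n_sec, n_cfg).round(2))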
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
Factor weighting in DRASTIC modeling.
Pacheco, F A L; Pires, L M G R; Santos, R M B; Sanches Fernandes, L F
2015-02-01
Evaluation of aquifer vulnerability comprehends the integration of very diverse data, including soil characteristics (texture), hydrologic settings (recharge), aquifer properties (hydraulic conductivity), environmental parameters (relief), and ground water quality (nitrate contamination). It is therefore a multi-geosphere problem to be handled by a multidisciplinary team. The DRASTIC model remains the most popular technique in use for aquifer vulnerability assessments. The algorithm calculates an intrinsic vulnerability index based on a weighted addition of seven factors. In many studies, the method is subject to adjustments, especially in the factor weights, to meet the particularities of the studied regions. However, adjustments made by different techniques may lead to markedly different vulnerabilities and hence to insecurity in the selection of an appropriate technique. This paper reports the comparison of 5 weighting techniques, an enterprise not attempted before. The studied area comprises 26 aquifer systems located in Portugal. The tested approaches include: the Delphi consensus (original DRASTIC, used as reference), Sensitivity Analysis, Spearman correlations, Logistic Regression and Correspondence Analysis (used as adjustment techniques). In all cases but Sensitivity Analysis, adjustment techniques have privileged the factors representing soil characteristics, hydrologic settings, aquifer properties and environmental parameters, by leveling their weights to ≈4.4, and have subordinated the factors describing the aquifer media by downgrading their weights to ≈1.5. Logistic Regression predicts the highest and Sensitivity Analysis the lowest vulnerabilities. Overall, the vulnerability indices may be separated by a maximum value of 51 points. This represents an uncertainty of 2.5 vulnerability classes, because they are 20 points wide. Given this ambiguity, the selection of a weighting technique to integrate a vulnerability index may require additional expertise to be set up satisfactorily. Following a general criterion that weights must be proportional to the range of the ratings, Correspondence Analysis may be recommended as the best adjustment technique.
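For reference, the DRASTIC index itself is a weighted sum of seven rated factors; the sketch below evaluates it with the original Delphi weights and with an alternative weight vector to show how an adjustment changes the result. The factor ratings and the alternative weights are invented for illustration.

    import numpy as np

    factors = ["Depth", "Recharge", "Aquifer media", "Soil", "Topography",
               "Impact of vadose zone", "Conductivity"]
    original_weights = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)   # Delphi (original DRASTIC)
    adjusted_weights = np.array([4.4, 4.4, 1.5, 4.4, 4.4, 1.5, 4.4])  # illustrative adjusted set

    # Ratings (1-10) for two hypothetical aquifer systems.
    ratings = np.array([[7, 6, 5, 8, 9, 4, 3],
                        [3, 4, 8, 2, 5, 7, 6]], dtype=float)

    for name, w in [("original", original_weights), ("adjusted", adjusted_weights)]:
        index = ratings @ w                         # weighted addition of the seven factors
        print(f"{name:9s} DRASTIC indices:", index)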
Planning for robust reserve networks using uncertainty analysis
Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.
2006-01-01
Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.
Log-linear model based behavior selection method for artificial fish swarm algorithm.
Huang, Zhehuang; Chen, Yidong
2015-01-01
Artificial fish swarm algorithm (AFSA) is a population based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed. How to construct and select the behaviors of the fishes is therefore an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. This work makes three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization showed that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
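The log-linear behaviour selection can be read as a softmax over behaviour scores that are linear in state features: each behaviour is chosen with probability proportional to the exponential of its score. The features and weight matrix below are invented placeholders, not the parameters used in the paper.

    import numpy as np

    rng = np.random.default_rng(6)
    behaviours = ["prey", "swarm", "follow", "random move"]

    # Feature vector describing the current state of a fish (e.g. local fitness
    # gain, crowding, diversity of the swarm) -- illustrative values.
    state = np.array([0.8, 0.3, 0.5])

    # One weight vector per behaviour (log-linear model parameters, assumed).
    W = np.array([[ 1.2, -0.4,  0.1],
                  [ 0.3,  0.9, -0.2],
                  [ 0.5,  0.2,  0.4],
                  [-0.1,  0.1,  0.8]])

    scores = W @ state
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                  # softmax = log-linear selection probabilities

    chosen = rng.choice(len(behaviours), p=probs)
    print(dict(zip(behaviours, probs.round(3))), "->", behaviours[chosen])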
A grey NGM(1,1, k) self-memory coupling prediction model for energy consumption prediction.
Guo, Xiaojun; Liu, Sifeng; Wu, Lifeng; Tang, Lingling
2014-01-01
Energy consumption prediction is an important issue for governments, energy sector investors, and other related corporations. Although there are several prediction techniques, selection of the most appropriate technique is of vital importance. For the approximately nonhomogeneous exponential data sequences that often arise in energy systems, a novel grey NGM(1,1,k) self-memory coupling prediction model is put forward in order to improve predictive performance. It integrates the self-memory principle of dynamic systems with the grey NGM(1,1,k) model. The traditional grey model's sensitivity to the initial value can be overcome by the self-memory principle. In this study, the total energy, coal, and electricity consumption of China are used to demonstrate the proposed coupling prediction technique. The results show the superiority of the NGM(1,1,k) self-memory coupling prediction model when compared with results from the literature. Its excellent prediction performance lies in the fact that the proposed coupling model can take full advantage of the systematic multi-time historical data and capture the stochastic fluctuation tendency. This work also makes a significant contribution to the enrichment of grey prediction theory and the extension of its application span.
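To make the grey-model machinery concrete, the sketch below implements the classical GM(1,1) model (accumulated generating operation, least-squares estimation, restored forecast); the NGM(1,1,k) extension and the self-memory coupling are not reproduced, and the consumption series is fabricated.

    import numpy as np

    def gm11_forecast(x0, horizon=3):
        """Classical GM(1,1) grey forecast of a short positive series x0."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                                   # accumulated series
        z1 = 0.5 * (x1[1:] + x1[:-1])                        # background values
        B = np.column_stack([-z1, np.ones_like(z1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development coeff., grey input
        k = np.arange(len(x0) + horizon)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # restore to original scale
        return x0_hat

    # Fabricated annual energy-consumption figures (arbitrary units).
    consumption = [21.4, 22.9, 24.6, 26.5, 28.7, 31.1]
    forecast = gm11_forecast(consumption, horizon=3)
    print("fitted + 3-step forecast:", np.round(forecast, 2))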
Developing a dengue forecast model using machine learning: A case study in China.
Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun
2017-10-01
In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
On-line Model Structure Selection for Estimation of Plasma Boundary in a Tokamak
NASA Astrophysics Data System (ADS)
Škvára, Vít; Šmídl, Václav; Urban, Jakub
2015-11-01
Control of the plasma field in the tokamak requires reliable estimation of the plasma boundary. The plasma boundary is given by a complex mathematical model and the only available measurements are responses of induction coils around the plasma. For the purpose of boundary estimation the model can be reduced to simple linear regression with potentially infinitely many elements. The number of elements must be selected manually and this choice significantly influences the resulting shape. In this paper, we investigate the use of formal model structure estimation techniques for the problem. Specifically, we formulate a sparse least squares estimator using the automatic relevance principle. The resulting algorithm is a repetitive evaluation of the least squares problem which could be computed in real time. Performance of the resulting algorithm is illustrated on simulated data and evaluated with respect to a more detailed and computationally costly model FREEBIE.
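A sparse least-squares estimator with automatic relevance determination is available off the shelf; the sketch below uses scikit-learn's ARDRegression to prune irrelevant elements from an over-parameterised linear expansion, standing in for the coil-response regression described above (the regressors and signals are synthetic).

    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(7)
    n_meas, n_basis = 60, 40                 # e.g. induction-coil responses vs. expansion terms
    X = rng.normal(size=(n_meas, n_basis))   # regressor matrix (basis evaluated at coils)
    true_w = np.zeros(n_basis)
    true_w[[0, 3, 7]] = [1.5, -2.0, 0.8]     # only a few expansion elements matter
    y = X @ true_w + 0.05 * rng.normal(size=n_meas)

    ard = ARDRegression().fit(X, y)
    active = np.flatnonzero(np.abs(ard.coef_) > 1e-3)
    print("relevant basis elements:", active)
    print("estimated coefficients :", np.round(ard.coef_[active], 3))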
Preconditioned conjugate gradient technique for the analysis of symmetric anisotropic structures
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1987-01-01
An efficient preconditioned conjugate gradient (PCG) technique and a computational procedure are presented for the analysis of symmetric anisotropic structures. The technique is based on selecting the preconditioning matrix as the orthotropic part of the global stiffness matrix of the structure, with all the nonorthotropic terms set equal to zero. This particular choice of the preconditioning matrix results in reducing the size of the analysis model of the anisotropic structure to that of the corresponding orthotropic structure. The similarities between the proposed PCG technique and a reduction technique previously presented by the authors are identified and exploited to generate from the PCG technique direct measures for the sensitivity of the different response quantities to the nonorthotropic (anisotropic) material coefficients of the structure. The effectiveness of the PCG technique is demonstrated by means of a numerical example of an anisotropic cylindrical panel.
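The preconditioning idea can be mimicked with SciPy's conjugate-gradient solver: a symmetric positive-definite matrix is split into a dominant part M, used as the preconditioner, plus a small symmetric perturbation. The random matrices below are stand-ins for structural stiffness matrices, so this is only a schematic of the PCG setup, not the authors' implementation.

    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(8)
    n = 200
    # "Orthotropic" part M: well-conditioned SPD matrix (here simply diagonal).
    M = np.diag(rng.uniform(1.0, 2.0, n))
    # Small symmetric "anisotropic" coupling added to form the full stiffness matrix K.
    A = 0.02 * rng.normal(size=(n, n))
    K = M + 0.5 * (A + A.T)
    f = rng.normal(size=n)                          # load vector

    chol = cho_factor(M)                            # factor the preconditioner once
    precond = LinearOperator((n, n), matvec=lambda r: cho_solve(chol, r))

    u, info = cg(K, f, M=precond)                   # preconditioned conjugate gradients
    print("CG info flag:", info, " residual norm:", np.linalg.norm(K @ u - f))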
Dalley, B K; Seliger, W G
1980-05-01
A simple and rapid technique is described for the screening of Epon embedded organ slices for the location, isolation, and removal of small specific sites for ultrastructural study with the transmission electron microscope. This procedure consists of perfusion fixation followed by making 1 to 2.5 mm thick slices of relatively large pieces of the organs, control of the degree and evenness of the osmium staining by addition of 3% sodium iodate, and infiltration with a fluorescent dye prior to embedment in Epon. Tissue slices are embedded in wafer-shaped blocks, generally with several slices in one "wafer", and are examined in a controlled manner using a rapid form of serial surface polishing. Each level of the polished wafer is examined using an epi-illuminated fluorescence microscope, and selected sites are chosen at each level for ultrastructural study. Methods are also described for marking each selected site using a conventional slide marker, and for the removal of the selected site in the form of a small disc of Epon, after which the Epon wafer can be further serially polished and the examination continued. Areas to be thin-sectioned are removed using a core drill mounted on a model-maker's drill press. The technique is simple, does not require the destruction of remaining tissues to evaluate more critically a single small site, allows for the easy maintenance of tissue orientation, and the most time-consuming portions of the technique can be quickly taught to a person with no previous histological training.
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, the runtime of the models is another factor relevant to the model weights. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that more expensive models can be sampled much less under time constraints than faster models (in direct proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
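The computationally cheap bootstrapping error estimate can be sketched for plain Monte Carlo integration of the evidence: BME is approximated by the mean likelihood over prior samples, and resampling those likelihood values yields an error bar that shrinks with the number of affordable model runs. The Gaussian toy model and run budgets are invented.

    import numpy as np

    rng = np.random.default_rng(9)
    data = rng.normal(1.0, 0.5, size=20)             # observed data (toy)

    def likelihood(theta, sigma=0.5):
        """Gaussian likelihood of the data for a mean parameter theta."""
        return np.prod(np.exp(-0.5 * ((data - theta) / sigma) ** 2)
                       / (sigma * np.sqrt(2 * np.pi)))

    def bme_with_error(n_runs, n_boot=2000):
        """Monte Carlo BME estimate and its bootstrap standard error for a run budget."""
        theta = rng.normal(0.0, 2.0, size=n_runs)            # prior samples
        L = np.array([likelihood(t) for t in theta])         # one model run per sample
        boot = rng.choice(L, size=(n_boot, n_runs)).mean(axis=1)
        return L.mean(), boot.std()

    for budget in (50, 500, 5000):                   # cheaper models afford more runs
        bme, err = bme_with_error(budget)
        print(f"runs = {budget:5d}   BME ~ {bme:.3e} +/- {err:.1e}")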
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
NASA Astrophysics Data System (ADS)
Kukunda, Collins B.; Duque-Lazo, Joaquín; González-Ferreiro, Eduardo; Thaden, Hauke; Kleinn, Christoph
2018-03-01
Distinguishing tree species is relevant in many contexts of remote sensing assisted forest inventory. Accurate tree species maps support management and conservation planning, pest and disease control and biomass estimation. This study evaluated the performance of applying ensemble techniques with the goal of automatically distinguishing Pinus sylvestris L. and Pinus uncinata Mill. Ex Mirb within a 1.3 km2 mountainous area in Barcelonnette (France). Three modelling schemes were examined, based on: (1) high-density LiDAR data (160 returns m-2), (2) Worldview-2 multispectral imagery, and (3) Worldview-2 and LiDAR in combination. Variables related to the crown structure and height of individual trees were extracted from the normalized LiDAR point cloud at individual-tree level, after performing individual tree crown (ITC) delineation. Vegetation indices and the Haralick texture indices were derived from Worldview-2 images and served as independent spectral variables. Selection of the best predictor subset was done after a comparison of three variable selection procedures: (1) Random Forests with cross validation (AUCRFcv), (2) Akaike Information Criterion (AIC) and (3) Bayesian Information Criterion (BIC). To classify the species, 9 regression techniques were combined using ensemble models. Predictions were evaluated using cross validation and an independent dataset. Integration of datasets and models improved individual tree species classification (True Skills Statistic, TSS; from 0.67 to 0.81) over individual techniques and maintained strong predictive power (Relative Operating Characteristic, ROC = 0.91). Assemblage of regression models and integration of the datasets provided more reliable species distribution maps and associated tree-scale mapping uncertainties. Our study highlights the potential of model and data assemblage at improving species classifications needed in present-day forest planning and management.
Air Force Systems Engineering Assessment Model (AF SEAM) Management Guide, Version 2
2010-09-21
gleaned from experienced professionals who assisted with the model’s development. Examples of the references used include the following: • ISO/IEC... Defense Acquisition Guidebook, Chapter 4 • AFI 63-1201, Life Cycle Systems Engineering • IEEE/EIA 12207, Software Life Cycle Processes • Air... Selection criteria Reference Material: IEEE/EIA 12207, MIL-HDBK-514 Other Considerations: Modeling, simulation and analysis techniques can be
ACOSS Eight (Active Control of Space Structures), Phase 2
1981-09-01
[Figure list residue: A-2 Nominal Model - Equipment Section and Solar Panels; A-3 Nominal Model - Upper Support Truss; A-4 ...] A sensitivity analysis technique for selecting critical system parameters is applied to the Draper tetrahedral truss structure (see Section 4-2) ... and solar panels are omitted. The precision section is mounted on isolators to an inertially fixed rigid support. The mode frequencies of this
Solomon Technique Versus Selective Coagulation for Twin-Twin Transfusion Syndrome.
Slaghekke, Femke; Oepkes, Dick
2016-06-01
Monochorionic twin pregnancies can be complicated by twin-to-twin transfusion syndrome (TTTS). The best treatment option for TTTS is fetoscopic laser coagulation of the vascular anastomoses between donor and recipient. After laser therapy, up to 33% residual anastomoses were seen. These residual anastomoses can cause twin anemia polycythemia sequence (TAPS) and recurrent TTTS. In order to reduce the number of residual anastomoses and their complications, a new technique, the Solomon technique, in which the whole vascular equator is coagulated, was introduced. The Solomon technique showed a reduction of recurrent TTTS compared to the selective technique. The incidence of recurrent TTTS after the Solomon technique ranged from 0% to 3.9% compared to 5.3-8.5% after the selective technique. The incidence of TAPS after the Solomon technique ranged from 0% to 2.9% compared to 4.2-15.6% after the selective technique. The Solomon technique may improve dual survival rates ranging from 64% to 85% compared to 46-76% for the selective technique. There was no difference reported in procedure-related complications such as intrauterine infection and preterm premature rupture of membranes. The Solomon technique significantly reduced the incidence of TAPS and recurrent TTTS and may improve survival and neonatal outcome, without identifiable adverse outcome or complications; therefore, the Solomon technique is recommended for the treatment of TTTS.
Pargett, Michael; Umulis, David M
2013-07-15
Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available or even acquirable are not quantitative. Data that is not strictly quantitative cannot be used by classical, quantitative, model-based analyses that measure a difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended to both inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method to use for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data.
NASA Technical Reports Server (NTRS)
Engwirda, Darren
2017-01-01
An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
NASA Astrophysics Data System (ADS)
Miyagawa, Chihiro; Kobayashi, Takumi; Taishi, Toshinori; Hoshikawa, Keigo
2014-09-01
Based on the growth of 3-inch diameter c-axis sapphire using the vertical Bridgman (VB) technique, numerical simulations were made and used to guide the growth of a 6-inch diameter sapphire. A 2D model of the VB hot-zone was constructed, the seeding interface shape of the 3-inch diameter sapphire as revealed by green laser scattering was estimated numerically, and the temperature distributions of two VB hot-zone models designed for 6-inch diameter sapphire growth were numerically simulated to achieve the optimal growth of large crystals. The hot-zone model with one heater was selected and prepared, and 6-inch diameter c-axis sapphire boules were actually grown, as predicted by the numerical results.
3D model assisted fully automated scanning laser Doppler vibrometer measurements
NASA Astrophysics Data System (ADS)
Sels, Seppe; Ribbens, Bart; Bogaerts, Boris; Peeters, Jeroen; Vanlanduit, Steve
2017-12-01
In this paper, a new fully automated scanning laser Doppler vibrometer (LDV) measurement technique is presented. In contrast to existing scanning LDV techniques, which use a 2D camera for the manual selection of sample points, we use a 3D Time-of-Flight camera in combination with a CAD file of the test object to automatically obtain measurements at pre-defined locations. The proposed procedure allows users to test prototypes in a shorter time because physical measurement locations are determined without user interaction. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. The proposed method is illustrated with vibration measurements of an unmanned aerial vehicle.
A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers
USDA-ARS?s Scientific Manuscript database
Spatially distributed soil erosion data are needed to better understand soil erosion processes and validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern on the accuracy of the technique arose due to selective ...
Overprompting Science Students Using Adjunct Study Questions.
ERIC Educational Resources Information Center
Holliday, William G.
1983-01-01
The selective attention model was used to explain effects of overprompting students (N=170) provided with study questions adjunct to a complex flow diagram describing scientific cyclical schema. Strongly prompting students to answers of questions was less effective than an unprompted question treatment, suggesting that prompting techniques be used…
Hyperspectral image visualization based on a human visual model
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Peng, Honghong; Fairchild, Mark D.; Montag, Ethan D.
2008-02-01
Hyperspectral image data can provide very fine spectral resolution with more than 200 bands, yet presents challenges for visualization techniques for displaying such rich information on a tristimulus monitor. This study developed a visualization technique by taking advantage of both the consistent natural appearance of a true color image and the feature separation of a PCA image based on a biologically inspired visual attention model. The key part is to extract the informative regions in the scene. The model takes into account human contrast sensitivity functions and generates a topographic saliency map for both images. This is accomplished using a set of linear "center-surround" operations simulating visual receptive fields as the difference between fine and coarse scales. A difference map between the saliency map of the true color image and that of the PCA image is derived and used as a mask on the true color image to select a small number of interesting locations where the PCA image has more salient features than available in the visible bands. The resulting representations preserve hue for vegetation, water, road etc., while the selected attentional locations may be analyzed by more advanced algorithms.
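The center-surround operation described above can be illustrated with a short sketch: a difference-of-Gaussians saliency map computed on an intensity channel, with fine-minus-coarse differences summed over a couple of scales. The scales, the normalisation and the masking rule in the usage comment are illustrative assumptions, not the authors' implementation.

```python
# Minimal center-surround saliency sketch (difference-of-Gaussians), illustrating
# the "fine minus coarse scale" operation described above. Scales and normalisation
# are illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(image, center_sigmas=(1, 2), surround_factor=4):
    """Return a crude topographic saliency map for a 2-D intensity image."""
    sal = np.zeros_like(image, dtype=float)
    for sigma in center_sigmas:
        center = gaussian_filter(image.astype(float), sigma)
        surround = gaussian_filter(image.astype(float), sigma * surround_factor)
        sal += np.abs(center - surround)          # center-surround contrast
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)              # normalise to [0, 1]

# Usage idea: mask locations where a PCA band is more salient than the true-colour image.
# true_rgb_gray, pca_gray = ...  (2-D arrays of equal shape)
# difference = saliency_map(pca_gray) - saliency_map(true_rgb_gray)
# attention_mask = difference > np.percentile(difference, 95)
```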
Data-driven discovery of partial differential equations.
Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan
2017-04-01
We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
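A minimal sketch of the sparse-regression idea is given below: build a library of candidate spatial-derivative terms, regress the time derivative onto it, and sequentially threshold small coefficients. The library contents, the thresholding constants and the diffusion-equation toy data are assumptions for illustration; the published method's Pareto analysis and Lagrangian-measurement handling are not shown.

```python
# Sketch of sparse regression over a library of candidate PDE terms, in the spirit
# of the method described above. The sequential thresholding rule and the choice of
# candidate terms are illustrative assumptions.
import numpy as np

def build_library(u, dx, dt):
    """Candidate terms Theta = [u, u_x, u_xx, u*u_x] and the time derivative u_t."""
    u_t  = np.gradient(u, dt, axis=1)
    u_x  = np.gradient(u, dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    Theta = np.column_stack([u.ravel(), u_x.ravel(), u_xx.ravel(), (u * u_x).ravel()])
    return Theta, u_t.ravel()

def stls(Theta, u_t, lam=1e-5, threshold=0.1, iters=10):
    """Sequentially thresholded ridge regression: small coefficients are zeroed."""
    xi = np.linalg.lstsq(Theta.T @ Theta + lam * np.eye(Theta.shape[1]),
                         Theta.T @ u_t, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], u_t, rcond=None)[0]
    return xi

# Toy data satisfying the diffusion equation u_t = 0.5 * u_xx (two decaying modes).
x = np.linspace(0, 1, 128); t = np.linspace(0, 0.1, 100)
u = (np.sin(np.pi * x)[:, None] * np.exp(-0.5 * np.pi**2 * t)[None, :]
     + np.sin(2 * np.pi * x)[:, None] * np.exp(-2 * np.pi**2 * t)[None, :])
Theta, u_t = build_library(u, x[1] - x[0], t[1] - t[0])
print(stls(Theta, u_t))   # expect a dominant weight (~0.5) on the u_xx column
```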
Prediction of siRNA potency using sparse logistic regression.
Hu, Wei; Hu, John
2014-06-01
RNA interference (RNAi) can modulate gene expression at post-transcriptional as well as transcriptional levels. Short interfering RNA (siRNA) serves as a trigger for the RNAi gene inhibition mechanism, and therefore is a crucial intermediate step in RNAi. There have been extensive studies to identify the sequence characteristics of potent siRNAs. One such study built a linear model using LASSO (Least Absolute Shrinkage and Selection Operator) to measure the contribution of each siRNA sequence feature. This model is simple and interpretable, but it requires a large number of nonzero weights. We have introduced a novel technique, sparse logistic regression, to build a linear model using single-position specific nucleotide compositions which has the same prediction accuracy as the linear model based on LASSO. The weights in our new model share the same general trend as those in the previous model, but have only 25 nonzero weights out of a total of 84 weights, a 54% reduction compared to the previous model. Contrary to the linear model based on LASSO, our model suggests that only a few positions are influential on the efficacy of the siRNA, which are the 5' and 3' ends and the seed region of siRNA sequences. We also employed sparse logistic regression to build a linear model using dual-position specific nucleotide compositions, a task LASSO is not able to accomplish well due to its high dimensional nature. Our results demonstrate the superiority of sparse logistic regression as a technique for both feature selection and regression over LASSO in the context of siRNA design.
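As a hedged illustration of the sparse logistic regression step, the sketch below fits an L1-penalised logistic model to one-hot, position-specific nucleotide features; the 21-position/84-feature encoding and the regularisation strength C are assumptions, not the authors' exact setup.

```python
# Sketch of sparse (L1-penalised) logistic regression on single-position nucleotide
# features. The encoding (one-hot over 21 positions x 4 bases = 84 features) and the
# C value are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

BASES = "ACGU"

def encode(sirna_seqs):
    """One-hot encode 21-nt siRNA sequences into 84 position-specific features."""
    X = np.zeros((len(sirna_seqs), 21 * 4))
    for i, seq in enumerate(sirna_seqs):
        seq = seq.upper().replace("T", "U")
        for pos, base in enumerate(seq[:21]):
            X[i, pos * 4 + BASES.index(base)] = 1.0
    return X

# X, y = encode(sequences), potency labels (1 = potent, 0 = not potent)
# clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
# nonzero = np.flatnonzero(clf.coef_)        # the few influential position/base weights
# print(len(nonzero), "nonzero weights out of", clf.coef_.size)
```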
Kenny, Joseph P.; Janssen, Curtis L.; Gordon, Mark S.; ...
2008-01-01
Cutting-edge scientific computing software is complex, increasingly involving the coupling of multiple packages to combine advanced algorithms or simulations at multiple physical scales. Component-based software engineering (CBSE) has been advanced as a technique for managing this complexity, and complex component applications have been created in the quantum chemistry domain, as well as several other simulation areas, using the component model advocated by the Common Component Architecture (CCA) Forum. While programming models do indeed enable sound software engineering practices, the selection of programming model is just one building block in a comprehensive approach to large-scale collaborative development which must also address interface and data standardization, and language and package interoperability. We provide an overview of the development approach utilized within the Quantum Chemistry Science Application Partnership, identifying design challenges, describing the techniques which we have adopted to address these challenges and highlighting the advantages which the CCA approach offers for collaborative development.
Animal models of cartilage repair
Cook, J. L.; Hung, C. T.; Kuroki, K.; Stoker, A. M.; Cook, C. R.; Pfeiffer, F. M.; Sherman, S. L.; Stannard, J. P.
2014-01-01
Cartilage repair in terms of replacement, or regeneration of damaged or diseased articular cartilage with functional tissue, is the ‘holy grail’ of joint surgery. A wide spectrum of strategies for cartilage repair currently exists and several of these techniques have been reported to be associated with successful clinical outcomes for appropriately selected indications. However, based on respective advantages, disadvantages, and limitations, no single strategy, or even combination of strategies, provides surgeons with viable options for attaining successful long-term outcomes in the majority of patients. As such, development of novel techniques and optimisation of current techniques need to be, and are, the focus of a great deal of research from the basic science level to clinical trials. Translational research that bridges scientific discoveries to clinical application involves the use of animal models in order to assess safety and efficacy for regulatory approval for human use. This review article provides an overview of animal models for cartilage repair. Cite this article: Bone Joint Res 2014;4:89–94. PMID:24695750
Optimization of investment portfolio weight of stocks affected by market index
NASA Astrophysics Data System (ADS)
Azizah, E.; Rusyaman, E.; Supian, S.
2017-01-01
Stock price assessment, selection of the optimum combination, and measuring the risk of a portfolio investment are important issues for investors. In this paper, a single index model is used for the assessment of the stock price, and an optimization model is formulated using the Lagrange multiplier technique to determine the proportion of assets to be invested. The level of risk is estimated by using variance. These models are used to analyse the stock price data of Lippo Bank and Bumi Putera.
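A minimal sketch of the two ingredients named above: a covariance matrix assembled from a single-index model and minimum-variance weights obtained from the Lagrange-multiplier solution of the constrained problem. The two-asset betas and variances are invented for illustration.

```python
# Sketch: minimum-variance portfolio weights via a Lagrange-multiplier solution, with
# the covariance matrix built from a single-index model
# (Sigma = var_mkt * beta beta' + diag(residual variances)).
# The betas, variances and the two-asset setup are illustrative assumptions.
import numpy as np

beta    = np.array([1.1, 0.8])        # sensitivities to the market index
var_mkt = 0.04                        # variance of the market index return
var_res = np.array([0.02, 0.03])      # residual (idiosyncratic) variances

Sigma = var_mkt * np.outer(beta, beta) + np.diag(var_res)

# Minimise w' Sigma w subject to sum(w) = 1  ->  w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
ones = np.ones(len(beta))
w = np.linalg.solve(Sigma, ones)
w /= ones @ w

print("weights:", w, "portfolio variance:", w @ Sigma @ w)
```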
NASA Astrophysics Data System (ADS)
Tautz-Weinert, J.; Watson, S. J.
2016-09-01
Effective condition monitoring techniques for wind turbines are needed to improve maintenance processes and reduce operational costs. Normal behaviour modelling of temperatures with information from other sensors can help to detect wear processes in drive trains. In a case study, modelling of bearing and generator temperatures is investigated with operational data from the SCADA systems of more than 100 turbines. The focus is here on automated training and testing on a farm level to enable an on-line system, which will detect failures without human interpretation. Modelling based on linear combinations, artificial neural networks, adaptive neuro-fuzzy inference systems, support vector machines and Gaussian process regression is compared. The selection of suitable modelling inputs is discussed with cross-correlation analyses and a sensitivity study, which reveals that the investigated modelling techniques react in different ways to an increased number of inputs. The case study highlights advantages of modelling with linear combinations and artificial neural networks in a feedforward configuration.
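The normal-behaviour idea can be sketched as follows: fit a regressor (here a small feed-forward neural network, one of the families compared above) to temperatures from a known-healthy period and monitor residuals on new SCADA data. The input channels, the network size and the 3-sigma alarm rule are assumptions.

```python
# Sketch of a normal-behaviour model for a bearing temperature: train a regressor on
# healthy-period SCADA data, then monitor residuals on new data. Input channels,
# network size and the 3-sigma alarm rule are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X_healthy: columns such as [power, rotor_speed, ambient_temp, nacelle_temp]
# y_healthy: measured bearing temperature during a known-healthy period
# X_new, y_new: the same channels during the monitored period
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                                   random_state=0))
# model.fit(X_healthy, y_healthy)
# residual_scale = np.std(y_healthy - model.predict(X_healthy))
# residuals = y_new - model.predict(X_new)
# alarms = np.abs(residuals) > 3 * residual_scale   # possible wear if persistent
```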
Microscale Patterning of Thermoplastic Polymer Surfaces by Selective Solvent Swelling
Rahmanian, Omid; Chen, Chien-Fu; DeVoe, Don L.
2012-01-01
A new method for the fabrication of microscale features in thermoplastic substrates is presented. Unlike traditional thermoplastic microfabrication techniques, in which bulk polymer is displaced from the substrate by machining or embossing, a unique process termed orogenic microfabrication has been developed in which selected regions of a thermoplastic surface are raised from the substrate by an irreversible solvent swelling mechanism. The orogenic technique allows thermoplastic surfaces to be patterned using a variety of masking methods, resulting in three-dimensional features that would be difficult to achieve through traditional microfabrication methods. Using cyclic olefin copolymer as a model thermoplastic material, several variations of this process are described to realize growth heights ranging from several nanometers to tens of microns, with patterning techniques include direct photoresist masking, patterned UV/ozone surface passivation, elastomeric stamping, and noncontact spotting. Orogenic microfabrication is also demonstrated by direct inkjet printing as a facile photolithography-free masking method for rapid desktop thermoplastic microfabrication. PMID:22900539
NASA Astrophysics Data System (ADS)
Ishii, Masashi; Crowe, Iain F.; Halsall, Matthew P.; Hamilton, Bruce; Hu, Yongfeng; Sham, Tsun-Kong; Harako, Susumu; Zhao, Xin-Wei; Komuro, Shuji
2013-10-01
The local structure of luminescent Sm dopants was investigated using an X-ray absorption fine-structure technique with X-ray-excited optical luminescence. Because this technique evaluates X-ray absorption from luminescence, only optically active sites are analyzed. The Sm L3 near-edge spectrum contains split 5d states and a shake-up transition that are specific to luminescent Sm. Theoretical calculations using cluster models identified an atomic-scale distortion that can reproduce the split 5d states. The model with C4v local symmetry and compressive bond length of Sm-O of a six-fold oxygen (SmO6) cluster is most consistent with the experimental results.
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
In silico modeling techniques for predicting the tertiary structure of human H4 receptor.
Zaid, Hilal; Raiyn, Jamal; Osman, Midhat; Falah, Mizied; Srouji, Samer; Rayan, Anwar
2016-01-01
First cloned in 2000, the human Histamine H4 Receptor (hH4R) is the last member of the histamine receptor family discovered so far; it belongs to the GPCR super-family and is involved in a wide variety of immunological and inflammatory responses. Potential hH4R antagonists are proposed to have therapeutic value for the treatment of allergies, inflammation, asthma and colitis. So far, no hH4R ligands have been successfully introduced to the pharmaceutical market, which creates a strong demand for new selective ligands to be developed. In silico techniques and structure-based modeling are likely to facilitate the achievement of this goal. In this review paper we attempt to cover the fundamental concepts of hH4R structure modeling and its implementations in drug discovery and development, especially those that have been experimentally tested, and to highlight some ideas that are currently being discussed regarding the dynamic nature of hH4R and GPCRs, with respect to computerized techniques for 3-D structure modeling.
Statistical approach for selection of biologically informative genes.
Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N
2018-05-20
Selection of informative genes from high dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been adjudged through post selection classification accuracy computed through a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, i.e. Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique with 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also found to be quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to select statistical techniques for selecting informative genes from high dimensional expression data for breeding and system biology studies. Published by Elsevier B.V.
Collins, Lisa M.; Part, Chérie E.
2013-01-01
Simple Summary In this review paper we discuss the different modeling techniques that have been used in animal welfare research to date. We look at what questions they have been used to answer, the advantages and pitfalls of the methods, and how future research can best use these approaches to answer some of the most important upcoming questions in farm animal welfare. Abstract The use of models in the life sciences has greatly expanded in scope and advanced in technique in recent decades. However, the range, type and complexity of models used in farm animal welfare is comparatively poor, despite the great scope for use of modeling in this field of research. In this paper, we review the different modeling approaches used in farm animal welfare science to date, discussing the types of questions they have been used to answer, the merits and problems associated with the method, and possible future applications of each technique. We find that the most frequently published types of model used in farm animal welfare are conceptual and assessment models; two types of model that are frequently (though not exclusively) based on expert opinion. Simulation, optimization, scenario, and systems modeling approaches are rarer in animal welfare, despite being commonly used in other related fields. Finally, common issues such as a lack of quantitative data to parameterize models, and model selection and validation are discussed throughout the review, with possible solutions and alternative approaches suggested. PMID:26487411
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier Transform (FT), the methods select significant decomposed signals to be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
Modelling and simulation techniques for membrane biology.
Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V
2007-07-01
One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.
Discovering Communicable Models from Earth Science Data
NASA Technical Reports Server (NTRS)
Schwabacher, Mark; Langley, Pat; Potter, Christopher; Klooster, Steven; Torregrosa, Alicia
2002-01-01
This chapter describes how we used regression rules to improve upon results previously published in the Earth science literature. In such a scientific application of machine learning, it is crucially important for the learned models to be understandable and communicable. We recount how we selected a learning algorithm to maximize communicability, and then describe two visualization techniques that we developed to aid in understanding the model by exploiting the spatial nature of the data. We also report how evaluating the learned models across time let us discover an error in the data.
Wab-InSAR: a new wavelet based InSAR time series technique applied to volcanic and tectonic areas
NASA Astrophysics Data System (ADS)
Walter, T. R.; Shirzaei, M.; Nankali, H.; Roustaei, M.
2009-12-01
Modern geodetic techniques such as InSAR and GPS provide valuable observations of the deformation field. Because of the variety of environmental interferences (e.g., atmosphere, topography distortion) and incompleteness of the models (assumption of the linear model for deformation), those observations are usually tainted by various systematic and random errors. Therefore we develop and test new methods to identify and filter unwanted periodic or episodic artifacts to obtain accurate and precise deformation measurements. Here we present and implement a new wavelet based InSAR (Wab-InSAR) time series approach. Because wavelets are excellent tools for identifying hidden patterns and capturing transient signals, we utilize wavelet functions for reducing the effect of atmospheric delay and digital elevation model inaccuracies. Wab-InSAR is a model free technique, reducing digital elevation model errors in individual interferograms using a 2D spatial Legendre polynomial wavelet filter. Atmospheric delays are reduced using a 3D spatio-temporal wavelet transform algorithm and a novel technique for pixel selection. We apply Wab-InSAR to several targets, including volcano deformation processes at Hawaii Island, and mountain building processes in Iran. Both targets are chosen to investigate large and small amplitude signals, variable and complex topography and atmospheric effects. In this presentation we explain different steps of the technique, validate the results by comparison to other high resolution processing methods (GPS, PS-InSAR, SBAS) and discuss the geophysical results.
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of the input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model with principal component feature vector values of at least 90% cumulative variance are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
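A minimal sketch of the PCA step described above, retaining components up to roughly 90% cumulative variance before classification; the k-nearest-neighbour classifier standing in for the olfactory model is an assumption.

```python
# Sketch of the PCA feature-selection step: keep the leading principal components
# explaining ~90% of the variance before classification. The k-NN classifier standing
# in for the bionic olfactory model is an illustrative assumption.
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

pipeline = make_pipeline(StandardScaler(),
                         PCA(n_components=0.90),   # cumulative-variance target
                         KNeighborsClassifier(n_neighbors=3))
# pipeline.fit(X_train, y_train)                   # sensor-array feature vectors
# print(pipeline.named_steps["pca"].n_components_, "components retained")
# print("accuracy:", pipeline.score(X_test, y_test))
```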
Evolutionary optimization of radial basis function classifiers for data mining applications.
Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard
2005-10-01
In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.
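A much-simplified sketch of the idea, evolutionary selection of a feature subset with an RBF-kernel classifier inside the fitness function, is shown below; an RBF SVM stands in for the RBF network, and the population size, mutation rate, crossover scheme and feature-count penalty are all illustrative assumptions rather than the accelerated EA of the paper.

```python
# Sketch of evolutionary feature selection with an RBF-kernel classifier in the
# fitness loop (an RBF SVM stands in for the RBF network). All EA settings are
# illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y, penalty=0.01):
    if mask.sum() == 0:
        return -np.inf
    acc = cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=3).mean()
    return acc - penalty * mask.sum()          # soft constraint on feature count

def evolve(X, y, pop_size=20, generations=30, p_mut=0.05, rng=np.random.default_rng(0)):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5      # random boolean feature masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
        mates = parents.copy()
        rng.shuffle(mates)
        cut = n // 2
        children = np.hstack([parents[:, :cut], mates[:, cut:]])  # one-point crossover
        children ^= rng.random(children.shape) < p_mut            # bit-flip mutation
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return best                                # boolean mask of selected features

# selected = evolve(X, y); clf = SVC(kernel="rbf").fit(X[:, selected], y)
```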
A top-down approach in control engineering third-level teaching: The case of hydrogen-generation
NASA Astrophysics Data System (ADS)
Setiawan, Eko; Habibi, M. Afnan; Fall, Cheikh; Hodaka, Ichijo
2017-09-01
This paper presents a top-down approach in control engineering third-level teaching. The paper shows the control engineering solution for an issue of practical implementation in order to motivate students. The proposed strategy focuses on only one control engineering technique so as to guide students correctly. The proposed teaching steps are 1) defining the problem, 2) listing the acquired knowledge or required skills, 3) selecting one control engineering technique, 4) arranging the order of teaching: problem introduction, implementation of the control engineering technique, explanation of the system block diagram, model derivation, and controller design, and 5) enriching knowledge with other control techniques. The presented approach highlights hardware implementation and the use of software simulation as a self-learning tool for students.
Sariyar, Murat; Hoffmann, Isabell; Binder, Harald
2014-02-26
Molecular data, e.g. arising from microarray technology, is often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interactions terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge are interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful, when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
Orbital selective directional conductor in the two-orbital Hubbard model
Mukherjee, Anamitra; Patel, Niravkumar D.; Moreo, Adriana; ...
2016-02-29
Using a recently developed many-body technique that allows for the incorporation of thermal effects, the rich phase diagram of a two-dimensional two-orbital (degenerate dxz and dyz) Hubbard model is presented, varying temperature and the repulsion U. The main result is the finding at intermediate U of an antiferromagnetic orbital-selective state where an effective dimensional reduction renders one direction insulating and the other metallic. Possible realizations of this state are discussed. Additionally, we study nematicity above the Néel temperature. After a careful finite-size scaling analysis, the nematicity temperature window appears to survive in the bulk limit, although it is very narrow.
Ellsworth C. Dougherty: A Pioneer in the Selection of Caenorhabditis elegans as a Model Organism
Ferris, Howard
2015-01-01
Ellsworth Dougherty (1921–1965) was a man of impressive intellectual dimensions and interests; in a relatively short career he contributed enormously as researcher and scholar to the biological knowledge base for selection of Caenorhabditis elegans as a model organism in neurobiology, genetics, and molecular biology. He helped guide the choice of strains that were eventually used, and, in particular, he developed the methodology and understanding for the nutrition and axenic culture of nematodes and other organisms. Dougherty insisted upon a concise terminology for culture techniques and coined descriptive neologisms that were justified by their linguistic roots. Among other contributions, he refined the classification system for the Protista. PMID:26272995
An Evaluation of Understandability of Patient Journey Models in Mental Health.
Percival, Jennifer; McGregor, Carolyn
2016-07-28
There is a significant trend toward implementing health information technology to reduce administrative costs and improve patient care. Unfortunately, little awareness exists of the challenges of integrating information systems with existing clinical practice. The systematic integration of clinical processes with information system and health information technology can benefit the patients, staff, and the delivery of care. This paper presents a comparison of the degree of understandability of patient journey models. In particular, the authors demonstrate the value of a relatively new patient journey modeling technique called the Patient Journey Modeling Architecture (PaJMa) when compared with traditional manufacturing based process modeling tools. The paper also presents results from a small pilot case study that compared the usability of 5 modeling approaches in a mental health care environment. Five business process modeling techniques were used to represent a selected patient journey. A mix of both qualitative and quantitative methods was used to evaluate these models. Techniques included a focus group and survey to measure usability of the various models. The preliminary evaluation of the usability of the 5 modeling techniques has shown increased staff understanding of the representation of their processes and activities when presented with the models. Improved individual role identification throughout the models was also observed. The extended version of the PaJMa methodology provided the most clarity of information flows for clinicians. The extended version of PaJMa provided a significant improvement in the ease of interpretation for clinicians and increased the engagement with the modeling process. The use of color and its effectiveness in distinguishing the representation of roles was a key feature of the framework not present in other modeling approaches. Future research should focus on extending the pilot case study to a more diversified group of clinicians and health care support workers.
Unified selective sorting approach to analyse multi-electrode extracellular data
NASA Astrophysics Data System (ADS)
Veerabhadrappa, R.; Lim, C. P.; Nguyen, T. T.; Berk, M.; Tye, S. J.; Monaghan, P.; Nahavandi, S.; Bhatti, A.
2016-06-01
Extracellular data analysis has become a quintessential method for understanding the neurophysiological responses to stimuli. This demands stringent techniques owing to the complicated nature of the recording environment. In this paper, we highlight the challenges in extracellular multi-electrode recording and data analysis as well as the limitations pertaining to some of the currently employed methodologies. To address some of the challenges, we present a unified algorithm in the form of selective sorting. Selective sorting is modelled around a hypothesized generative model, which addresses the natural phenomenon of spikes triggered by an intricate neuronal population. The algorithm incorporates Cepstrum of Bispectrum, ad hoc clustering algorithms, wavelet transforms, least squares and correlation concepts which strategically tailor a sequence to characterize and form distinctive clusters. Additionally, we demonstrate the influence of noise-modelled wavelets to sort overlapping spikes. The algorithm is evaluated using both raw and synthesized data sets with different levels of complexity and the performances are tabulated for comparison using widely accepted qualitative and quantitative indicators.
NASA Astrophysics Data System (ADS)
Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.
2016-05-01
This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed-loop is followed (prior to implementation) supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios avoiding heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates significant speed-up in the required execution time when compared to its software-based counterpart model.
Supervised learning for infection risk inference using pathology data.
Hernandez, Bernard; Herrero, Pau; Rawson, Timothy Miles; Moore, Luke S P; Evans, Benjamin; Toumazou, Christofer; Holmes, Alison H; Georgiou, Pantelis
2017-12-08
Antimicrobial Resistance is threatening our ability to treat common infectious diseases and overuse of antimicrobials to treat human infections in hospitals is accelerating this process. Clinical Decision Support Systems (CDSSs) have been proven to enhance quality of care by promoting change in prescription practices through antimicrobial selection advice. However, bypassing an initial assessment to determine the existence of an underlying disease that justifies the need of antimicrobial therapy might lead to indiscriminate and often unnecessary prescriptions. From pathology laboratory tests, six biochemical markers were selected and combined with microbiology outcomes from susceptibility tests to create a unique dataset with over one and a half million daily profiles to perform infection risk inference. Outliers were discarded using the inter-quartile range rule and several sampling techniques were studied to tackle the class imbalance problem. The first phase selects the most effective and robust model during training using ten-fold stratified cross-validation. The second phase evaluates the final model after isotonic calibration in scenarios with missing inputs and imbalanced class distributions. More than 50% of infected profiles have daily requested laboratory tests for the six biochemical markers with very promising infection inference results: area under the receiver operating characteristic curve (0.80-0.83), sensitivity (0.64-0.75) and specificity (0.92-0.97). Standardization consistently outperforms normalization and sensitivity is enhanced by using the SMOTE sampling technique. Furthermore, models operated without noticeable loss in performance if at least four biomarkers were available. The selected biomarkers comprise enough information to perform infection risk inference with a high degree of confidence even in the presence of incomplete and imbalanced data. Since they are commonly available in hospitals, Clinical Decision Support Systems could benefit from these findings to assist clinicians in deciding whether or not to initiate antimicrobial therapy to improve prescription practices.
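A hedged sketch of the pipeline outlined above: standardise the biomarkers, oversample the minority class with SMOTE, fit a classifier, calibrate it with isotonic regression and score it with stratified cross-validation. The logistic-regression base model and all parameter values are assumptions.

```python
# Sketch of the described steps: standardisation, SMOTE oversampling, classification,
# isotonic calibration and stratified cross-validation. The logistic-regression base
# model and the parameter values are illustrative assumptions.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline          # pipeline aware of samplers
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler

base = make_pipeline(StandardScaler(),
                     SMOTE(random_state=0),
                     LogisticRegression(max_iter=1000))
calibrated = CalibratedClassifierCV(base, method="isotonic", cv=5)

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
# auc = cross_val_score(calibrated, X, y, scoring="roc_auc", cv=cv)  # X: 6 biomarkers
# print("AUROC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
```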
Spectral ageing in the era of big data: integrated versus resolved models
NASA Astrophysics Data System (ADS)
Harwood, Jeremy J.
2017-04-01
Continuous injection models of spectral ageing have long been used to determine the age of radio galaxies from their integrated spectrum; however, many questions about their reliability remain unanswered. With various large area surveys imminent (e.g. LOw Frequency ARray, MeerKAT, Murchison Widefield Array) and planning for the next generation of radio interferometers well underway (e.g. next generation VLA, Square Kilometre Array), investigations of radio galaxy physics are set to shift away from studies of individual sources to the population as a whole. Determining if and how integrated models of spectral ageing can be applied in the era of big data is therefore crucial. In this paper, I compare classical integrated models of spectral ageing to recent well-resolved studies that use modern analysis techniques on small spatial scales to determine their robustness and validity as a source selection method. I find that integrated models are unable to recover key parameters and, even when known a priori, provide a poor, frequency-dependent description of a source's spectrum. I show a disparity of up to a factor of 6 in age between the integrated and resolved methods but suggest, even with these inconsistencies, such models still provide a potential method of candidate selection in the search for remnant radio galaxies and in providing a cleaner selection of high redshift radio galaxies in z - α selected samples.
Architectural-level power estimation and experimentation
NASA Astrophysics Data System (ADS)
Ye, Wu
With the emergence of a plethora of embedded and portable applications and ever increasing integration levels, power dissipation of integrated circuits has moved to the forefront as a design constraint. Recent years have also seen a significant trend towards designs starting at the architectural (or RT) level. These demand accurate yet fast RT-level power estimation methodologies and tools. This thesis addresses issues and experiments associated with architectural level power estimation. An execution driven, cycle-accurate RT level power simulator, SimplePower, was developed using transition-sensitive energy models. It is based on the architecture of a five-stage pipelined RISC datapath for both 0.35 μm and 0.8 μm technologies and can execute the integer subset of the instruction set of SimpleScalar. SimplePower measures the energy consumed in the datapath, memory and on-chip buses. During the development of SimplePower, a partitioning power modeling technique was proposed to model the energy consumed in complex functional units. The accuracy of this technique was validated with HSPICE simulation results for a register file and a shifter. A novel, selectively gated pipeline register optimization technique was proposed to reduce the datapath energy consumption. It uses the decoded control signals to selectively gate the data fields of the pipeline registers. Simulation results show that this technique can reduce the datapath energy consumption by 18--36% for a set of benchmarks. A low-level back-end compiler optimization, register relabeling, was applied to reduce the on-chip instruction cache data bus switch activities. Its impact was evaluated by SimplePower. Results show that it can reduce the energy consumed in the instruction data buses by 3.55--16.90%. A quantitative evaluation was conducted for the impact of six state-of-the-art high-level compilation techniques on both datapath and memory energy consumption. The experimental results provide a valuable insight for designers to develop future power-aware compilation frameworks for embedded systems.
How motivation affects academic performance: a structural equation modelling analysis.
Kusurkar, R A; Ten Cate, Th J; Vos, C M P; Westers, P; Croiset, G
2013-03-01
Few studies in medical education have studied effect of quality of motivation on performance. Self-Determination Theory based on quality of motivation differentiates between Autonomous Motivation (AM) that originates within an individual and Controlled Motivation (CM) that originates from external sources. To determine whether Relative Autonomous Motivation (RAM, a measure of the balance between AM and CM) affects academic performance through good study strategy and higher study effort and compare this model between subgroups: males and females; students selected via two different systems namely qualitative and weighted lottery selection. Data on motivation, study strategy and effort was collected from 383 medical students of VU University Medical Center Amsterdam and their academic performance results were obtained from the student administration. Structural Equation Modelling analysis technique was used to test a hypothesized model in which high RAM would positively affect Good Study Strategy (GSS) and study effort, which in turn would positively affect academic performance in the form of grade point averages. This model fit well with the data, Chi square = 1.095, df = 3, p = 0.778, RMSEA model fit = 0.000. This model also fitted well for all tested subgroups of students. Differences were found in the strength of relationships between the variables for the different subgroups as expected. In conclusion, RAM positively correlated with academic performance through deep strategy towards study and higher study effort. This model seems valid in medical education in subgroups such as males, females, students selected by qualitative and weighted lottery selection.
NASA Astrophysics Data System (ADS)
Mao, Zhiyi; Shan, Ruifeng; Wang, Jiajun; Cai, Wensheng; Shao, Xueguang
2014-07-01
Polyphenols in plant samples have been extensively studied because phenolic compounds are ubiquitous in plants and can be used as antioxidants in promoting human health. A method for rapid determination of three phenolic compounds (chlorogenic acid, scopoletin and rutin) in plant samples using near-infrared diffuse reflectance spectroscopy (NIRDRS) is studied in this work. Partial least squares (PLS) regression was used for building the calibration models, and the effects of spectral preprocessing and variable selection on the models are investigated for optimization of the models. The results show that individual spectral preprocessing and variable selection has no or slight influence on the models, but the combination of the techniques can significantly improve the models. The combination of continuous wavelet transform (CWT) for removing the variant background, multiplicative scatter correction (MSC) for correcting the scattering effect and randomization test (RT) for selecting the informative variables was found to be the best way for building the optimal models. For validation of the models, the polyphenol contents in an independent sample set were predicted. The correlation coefficients between the predicted values and the contents determined by high performance liquid chromatography (HPLC) analysis are as high as 0.964, 0.948 and 0.934 for chlorogenic acid, scopoletin and rutin, respectively.
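The scatter-correction-plus-calibration chain can be sketched as below: multiplicative scatter correction (MSC) of the spectra followed by PLS regression. The number of latent variables is an assumption, and the CWT background removal and randomization-test variable selection steps are omitted for brevity.

```python
# Sketch of a scatter-corrected PLS calibration in the spirit of the workflow above:
# multiplicative scatter correction (MSC) of the NIR spectra followed by PLS
# regression. The number of latent variables is an illustrative assumption; the CWT
# and randomization-test steps are not reproduced here.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on the mean spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected, ref

# X_train, X_test: NIR diffuse reflectance spectra; y_train: HPLC reference contents
# Xc_train, ref = msc(X_train)
# Xc_test, _ = msc(X_test, reference=ref)
# pls = PLSRegression(n_components=8).fit(Xc_train, y_train)
# y_pred = pls.predict(Xc_test)
```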
NASA Astrophysics Data System (ADS)
Ban, Sang-Woo; Lee, Minho
2008-04-01
Knowledge-based clustering and autonomous mental development remain a high-priority research topic, for which the learning techniques of neural networks are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data that can represent knowledge regarding objects and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in the brain. A stereo saliency map model can selectively decide salient object areas by additionally considering a local symmetry feature. The incremental object perception model makes clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, which is implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features for a selected object are used as inputs of the GFTART. The clustered information is relevant to describe specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data have demonstrated the validity of this approach.
A LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR (LASSO) FOR NONLINEAR SYSTEM IDENTIFICATION
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Lofberg, Johan; Brenner, Martin J.
2006-01-01
Identification of parametric nonlinear models involves estimating unknown parameters and detecting its underlying structure. Structure computation is concerned with selecting a subset of parameters to give a parsimonious description of the system which may afford greater insight into the functionality of the system or a simpler controller design. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear systems. The LASSO minimises the residual sum of squares by the addition of an ℓ1 penalty term on the parameter vector of the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. The performance of this LASSO structure detection method was evaluated by using it to estimate the structure of a nonlinear polynomial model. Applicability of the method to more complex systems such as those encountered in aerospace applications was shown by identifying a parsimonious system description of the F/A-18 Active Aeroelastic Wing using flight test data.
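A minimal sketch of LASSO structure detection on a polynomial model: build candidate regressors, fit with an ℓ1 penalty and keep the terms with nonzero coefficients. The toy quadratic system and the penalty weight are assumptions.

```python
# Sketch of LASSO structure detection for a polynomial (pseudolinear regression) model:
# build candidate regressors, fit with an l1 penalty and keep the terms with nonzero
# coefficients. The toy second-order system and the penalty weight are assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
u = rng.normal(size=500)                                   # input signal
y = 2.0 * u + 0.5 * u**2 + 0.05 * rng.normal(size=500)     # "true" sparse polynomial system

Phi = PolynomialFeatures(degree=4, include_bias=False).fit_transform(u[:, None])
model = Lasso(alpha=0.01).fit(Phi, y)

selected = np.flatnonzero(model.coef_)                     # indices of retained terms
print("selected terms:", selected, "coefficients:", model.coef_[selected])
```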
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
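One common way to carry out the order selection discussed above is an information-criterion search, sketched below with an AIC grid search over ARMA(p, q) orders; the toy AR(2) data and the use of AIC are assumptions, and the parameter-constraining technique of the paper is not reproduced.

```python
# Sketch of ARMA model-order selection by information criterion. The AIC criterion,
# the search grid and the toy AR(2) data are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(2, 500):                            # simulate a simple AR(2) process
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

best = None
for p in range(4):
    for q in range(3):
        res = ARIMA(y, order=(p, 0, q)).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)
print("selected (p, q):", best[1:], "AIC:", round(best[0], 1))
```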
NASA Astrophysics Data System (ADS)
Criales Escobar, Luis Ernesto
One of the most frequently evolving areas of research is the utilization of lasers for micro-manufacturing and additive manufacturing purposes. The use of the laser beam as a tool for manufacturing arises from the need for flexible and rapid manufacturing at a low-to-mid cost. Laser micro-machining provides an advantage over mechanical micro-machining due to the faster production times of large batch sizes and the high costs associated with specific tools. Laser based additive manufacturing enables processing of powder metals for direct and rapid fabrication of products. Therefore, laser processing can be viewed as a fast, flexible, and cost-effective approach compared to traditional manufacturing processes. Two types of laser processing techniques are studied: laser ablation of polymers for micro-channel fabrication and selective laser melting of metal powders. Initially, a feasibility study for laser-based micro-channel fabrication of poly(dimethylsiloxane) (PDMS) via experimentation is presented. In particular, the effectiveness of utilizing a nanosecond-pulsed laser as the energy source for laser ablation is studied. The results are analyzed statistically and a relationship between process parameters and micro-channel dimensions is established. Additionally, a process model is introduced for predicting channel depth. Model outputs are compared and analyzed to experimental results. The second part of this research focuses on a physics-based FEM approach for predicting the temperature profile and melt pool geometry in selective laser melting (SLM) of metal powders. Temperature profiles are calculated for a moving laser heat source to understand the temperature rise due to heating during SLM. Based on the predicted temperature distributions, melt pool geometry, i.e. the locations at which melting of the powder material occurs, is determined. Simulation results are compared against data obtained from experimental Inconel 625 test coupons fabricated at the National Institute of Standards and Technology via response surface methodology techniques. The main goal of this research is to develop a comprehensive predictive model with which the effect of powder material properties and laser process parameters on the built quality and integrity of SLM-produced parts can be better understood. By optimizing process parameters, SLM as an additive manufacturing technique is not only possible, but also practical and reproducible.
Hydrologic Model Selection using Markov chain Monte Carlo methods
NASA Astrophysics Data System (ADS)
Marshall, L.; Sharma, A.; Nott, D.
2002-12-01
Estimation of parameter uncertainty (and in turn model uncertainty) allows assessment of the risk in likely applications of hydrological models. Bayesian statistical inference provides an ideal means of assessing parameter uncertainty whereby prior knowledge about the parameter is combined with information from the available data to produce a probability distribution (the posterior distribution) that describes uncertainty about the parameter and serves as a basis for selecting appropriate values for use in modelling applications. Widespread use of Bayesian techniques in hydrology has been hindered by difficulties in summarizing and exploring the posterior distribution. These difficulties have been largely overcome by recent advances in Markov chain Monte Carlo (MCMC) methods that involve random sampling of the posterior distribution. This study presents an adaptive MCMC sampling algorithm which has characteristics that are well suited to model parameters with a high degree of correlation and interdependence, as is often evident in hydrological models. The MCMC sampling technique is used to compare six alternative configurations of a commonly used conceptual rainfall-runoff model, the Australian Water Balance Model (AWBM), using 11 years of daily rainfall runoff data from the Bass river catchment in Australia. The alternative configurations considered fall into two classes - those that consider model errors to be independent of prior values, and those that model the errors as an autoregressive process. Each such class consists of three formulations that represent increasing levels of complexity (and parameterisation) of the original model structure. The results from this study point both to the importance of using Bayesian approaches in evaluating model performance, as well as the simplicity of the MCMC sampling framework that has the ability to bring such approaches within the reach of the applied hydrological community.
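The basic ingredient of such a comparison, random-walk Metropolis sampling of a parameter posterior under a Gaussian error model, can be sketched as follows; the toy "model", flat prior bounds and fixed proposal scale are assumptions, and the adaptive proposal of the study is not reproduced.

```python
# Sketch of random-walk Metropolis sampling of a posterior for model parameters under
# a Gaussian error model. The stand-in "rainfall-runoff model", the flat prior bounds
# and the fixed proposal scale are illustrative assumptions.
import numpy as np

def log_posterior(theta, rain, q_obs, sigma=1.0, bounds=((0, 1), (0, 10))):
    if any(not (lo <= v <= hi) for v, (lo, hi) in zip(theta, bounds)):
        return -np.inf                              # flat prior with bounds
    q_sim = theta[0] * rain + theta[1]              # stand-in for a rainfall-runoff model
    return -0.5 * np.sum((q_obs - q_sim) ** 2) / sigma**2

def metropolis(log_post, theta0, n_iter=5000, step=0.05, rng=np.random.default_rng(0)):
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[-1])
    for _ in range(n_iter):
        proposal = chain[-1] + step * rng.normal(size=len(theta0))
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:      # Metropolis accept/reject
            chain.append(proposal); lp = lp_new
        else:
            chain.append(chain[-1])
    return np.array(chain)

# rain, q_obs = observed daily rainfall and runoff series
# chain = metropolis(lambda th: log_posterior(th, rain, q_obs), theta0=[0.5, 1.0])
# Posterior samples (after burn-in) summarise parameter uncertainty for model comparison.
```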
Satellite-enhanced dynamical downscaling for the analysis of extreme events
NASA Astrophysics Data System (ADS)
Nunes, Ana M. B.
2016-09-01
The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In that case, boundary-forcing techniques can be useful to maintain the large-scale features from the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve dynamics, downscaling via regional modeling still aims for better representation of physical processes. With the purpose of improving dynamics and physical processes in regional downscaling of global reanalysis, the Regional Spectral Model—originally developed at the National Centers for Environmental Prediction—employs a newly reformulated scale-selective bias correction, together with the 3-hourly assimilation of the satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. The two-scheme technique for the dynamical downscaling of global reanalysis can be applied in analyses of environmental disasters and risk assessment, with hourly outputs, and resolution of about 25 km. Here the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin through comparisons with global reanalyses and satellite products available over ocean areas.
Guo, Doudou; Juan, Jiaxiang; Chang, Liying; Zhang, Jingjin; Huang, Danfeng
2017-08-15
Plant-based sensing of water stress can provide a sensitive and direct reference for precision irrigation systems in the greenhouse. However, plant information acquisition, interpretation, and systematic application remain insufficient. This study developed a discrimination method for plant root zone water status in the greenhouse by integrating phenotyping and machine learning techniques. Pakchoi plants were treated at three root zone moisture levels, 40%, 60%, and 80% relative water content. Three classification models, Random Forest (RF), Neural Network (NN), and Support Vector Machine (SVM), were developed and validated in different scenarios, with overall accuracy over 90% for all. The SVM model had the highest value, but it required the longest training time. All models had accuracy over 85% in all scenarios, and more stable performance was observed in the RF model. The simplified SVM model built from the top five most contributing traits had the largest accuracy reduction, 29.5%, while the simplified RF and NN models still maintained approximately 80%. For real-case application, factors such as operation cost, precision requirements, and system reaction time should be considered together in model selection. Our work shows it is promising to discriminate plant root zone water status by implementing phenotyping and machine learning techniques for precision irrigation management.
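A hedged sketch comparing the three classifier families named above with cross-validation on phenotypic traits; the hyperparameters are illustrative assumptions, not the tuned models of the study.

```python
# Sketch comparing the three classifier families named above (Random Forest, SVM and
# a neural network) with cross-validation on plant phenotyping features.
# Hyperparameters are illustrative assumptions, not the tuned models of the study.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

models = {
    "RF":  RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10)),
    "NN":  make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,),
                                                         max_iter=2000, random_state=0)),
}
# X: phenotypic traits, y: water status class (40/60/80% relative water content)
# for name, model in models.items():
#     acc = cross_val_score(model, X, y, cv=5)
#     print(name, "accuracy: %.2f +/- %.2f" % (acc.mean(), acc.std()))
```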
Asgari Dastjerdi, Hoori; Khorasani, Elahe; Yarmohammadian, Mohammad Hossein; Ahmadzade, Mahdiye Sadat
2017-01-01
Background: Medical errors are one of the greatest problems in any healthcare system. The best way to prevent such problems is to identify errors and their root causes. The Failure Mode and Effects Analysis (FMEA) technique is a prospective risk analysis method. This study is a review of risk analysis using the FMEA technique in different hospital wards and departments. Methods: This paper systematically investigated the available databases. After selecting inclusion and exclusion criteria, the related studies were found. This selection was made in two steps. First, the abstracts and titles were investigated by the researchers and, after omitting papers which did not meet the inclusion criteria, 22 papers were finally selected and their texts were thoroughly examined to obtain the results. Results: The examined papers had focused mostly on the process and had been conducted in pediatric wards and radiology departments, and most participants were nursing staff. Many of these papers attempted to express almost all the steps of model implementation; and after implementing the strategies and interventions, the Risk Priority Number (RPN) was calculated to determine the degree of the technique's effect. However, these papers paid less attention to the identification of risk effects. Conclusions: The study revealed that a small number of studies had failed to show the effects of the FMEA technique. In general, however, most of the studies recommended this technique and considered it a useful and efficient method for reducing the number of risks and improving service quality. PMID:28039688
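Since several of the reviewed studies quantify risk through the Risk Priority Number, a minimal worked example is given below (RPN = severity × occurrence × detection, each typically rated 1-10); the failure modes and ratings are invented for illustration.

```python
# Minimal worked example of the Risk Priority Number used in FMEA:
# RPN = severity x occurrence x detection (each typically rated 1-10).
# The failure modes and ratings below are invented for illustration only.
failure_modes = [
    # (description,                 severity, occurrence, detection)
    ("wrong patient identified",           9,          3,         4),
    ("medication dose mislabelled",        8,          4,         5),
    ("imaging request lost",               5,          5,         3),
]

for name, sev, occ, det in sorted(failure_modes,
                                  key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"{name}: RPN = {sev * occ * det}")   # the highest RPN is addressed first
```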
Williams, Jennifer A.; Schmitter-Edgecombe, Maureen; Cook, Diane J.
2016-01-01
Introduction Reducing the amount of testing required to accurately detect cognitive impairment is clinically relevant. The aim of this research was to determine the fewest number of clinical measures required to accurately classify participants as healthy older adult, mild cognitive impairment (MCI) or dementia using a suite of classification techniques. Methods Two variable selection machine learning models (i.e., naive Bayes, decision tree), a logistic regression, and two participant datasets (i.e., clinical diagnosis, clinical dementia rating; CDR) were explored. Participants classified using clinical diagnosis criteria included 52 individuals with dementia, 97 with MCI, and 161 cognitively healthy older adults. Participants classified using CDR included 154 individuals with CDR = 0, 93 individuals with CDR = 0.5, and 25 individuals with CDR = 1.0+. Twenty-seven demographic, psychological, and neuropsychological variables were available for variable selection. Results No significant difference was observed between naive Bayes, decision tree, and logistic regression models for classification of both clinical diagnosis and CDR datasets. Participant classification (70.0 – 99.1%), geometric mean (60.9 – 98.1%), sensitivity (44.2 – 100%), and specificity (52.7 – 100%) were generally satisfactory. Unsurprisingly, the MCI/CDR = 0.5 participant group was the most challenging to classify. Through variable selection only 2 – 9 variables were required for classification and varied between datasets in a clinically meaningful way. Conclusions The current study results reveal that machine learning techniques can accurately classify cognitive impairment and reduce the number of measures required for diagnosis. PMID:26332171
1981-01-01
explanatory variable has been omitted. Ramsey (1974) has developed a rather interesting test for detecting specification errors using estimates of the... Peter. (1979), A Guide to Econometrics, Cambridge, MA: The MIT Press. Ramsey, J.B. (1974), "Classical Model Selection Through Specification Error... Tests," in P. Zarembka, Ed., Frontiers in Econometrics, New York: Academic Press. Theil, Henri. (1971), Principles of Econometrics, New York: John Wiley
NASA Technical Reports Server (NTRS)
Krishnan, G. S.
1997-01-01
A cost-effective model which uses artificial intelligence techniques in the selection and approval of parts is presented. The knowledge acquired from specialists for different part types is represented in a knowledge base in the form of rules and objects. The parts information is stored separately in a database and is isolated from the knowledge base. Validation, verification, and performance issues are highlighted.
NASA Astrophysics Data System (ADS)
Various papers on antennas and propagation are presented. The general topics addressed include: phased arrays; reflector antennas; slant path propagation; propagation data for HF radio systems performance; satellite and earth station antennas; radio propagation in the troposphere; propagation data for HF radio systems performance; microstrip antennas; rain radio meteorology; conformal antennas; horns and feed antennas; low elevation slant path propagation; radio millimeter wave propagation; array antennas; propagation effects on satellite mobile, satellite broadcast, and aeronautical systems; ionospheric irregularities and motions; adaptive antennas; transient response; measurement techniques; clear air radio meteorology; ionospheric and propagation modeling; millimeter wave and lens antennas; electromagnetic theory and numerical techniques; VHF propagation modeling, system planning methods; radio propagation theoretical techniques; scattering and diffraction; transhorizon rain scatter effects; ELF-VHF and broadcast antennas; clear air millimeter propagation; scattering and frequency-selective surfaces; antenna technology; clear air transhorizon propagation.
High-precision buffer circuit for suppression of regenerative oscillation
NASA Technical Reports Server (NTRS)
Tripp, John S.; Hare, David A.; Tcheng, Ping
1995-01-01
Precision analog signal conditioning electronics have been developed for wind tunnel model attitude inertial sensors. This application requires low-noise, stable, microvolt-level DC performance and a high-precision buffered output. Capacitive loading of the operational amplifier output stages due to the wind tunnel analog signal distribution facilities caused regenerative oscillation and consequent rectification bias errors. Oscillation suppression techniques commonly used in audio applications were inadequate to maintain the performance requirements for the measurement of attitude for wind tunnel models. Feedback control theory is applied to develop a suppression technique based on a known compensation (snubber) circuit, which provides superior oscillation suppression with high output isolation and preserves the low-noise low-offset performance of the signal conditioning electronics. A practical design technique is developed to select the parameters for the compensation circuit to suppress regenerative oscillation occurring when typical shielded cable loads are driven.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
NASA Astrophysics Data System (ADS)
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, and questions on global warming and climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of model bias and the reduction of error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of information geometry and statistical techniques, new versions of Kalman filters, and state-of-the-art numerical analysis tools.
NASA Astrophysics Data System (ADS)
Various papers on electromagnetic compatibility are presented. Some of the topics considered include: field-to-wire coupling at 1 to 18 GHz, SHF/EHF field-to-wire coupling model, numerical method for the analysis of coupling to thin wire structures, spread-spectrum system with an adaptive array for combating interference, technique to select the optimum modulation indices for suppression of undesired signals for simultaneous range and data operations, development of a MHz RF leak detector technique for aircraft harness surveillance, and performance of standard aperture shielding techniques at microwave frequencies. Also discussed are: spectrum efficiency of spread-spectrum systems, control of power supply ripple produced sidebands in microwave transistor amplifiers, an intership SATCOM versus radar electromagnetic interference prediction model, considerations in the design of a broadband E-field sensing system, unique bonding methods for spacecraft, and a review of EMC practice for launch vehicle systems.
NASA Astrophysics Data System (ADS)
Stas, Michiel; Dong, Qinghan; Heremans, Stien; Zhang, Beier; Van Orshoven, Jos
2016-08-01
This paper compares two machine learning techniques to predict regional winter wheat yields. The models, based on Boosted Regression Trees (BRT) and Support Vector Machines (SVM), are constructed from Normalized Difference Vegetation Index (NDVI) values derived from low-resolution SPOT VEGETATION satellite imagery. Three types of NDVI-related predictors were used: Single NDVI, Incremental NDVI, and Targeted NDVI. BRT and SVM were first used to select features with high relevance for predicting the yield. Although the exact selections differed between the prefectures, certain periods with high influence scores for multiple prefectures could be identified. The same period of high influence, stretching from March to June, was detected by both machine learning methods. After feature selection, BRT and SVM models were applied to the subset of selected features for actual yield forecasting. Whereas both machine learning methods returned very low prediction errors, BRT seems to slightly but consistently outperform SVM.
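A rough sketch of this two-stage workflow (feature relevance scoring followed by yield regression) is shown below, using scikit-learn's GradientBoostingRegressor as a stand-in for BRT and SVR for SVM; the NDVI predictors, the seasonal window, and the yields are synthetic assumptions rather than the study's data.

```python
# Illustrative sketch only: feature relevance with boosted regression trees,
# then yield regression with BRT and SVM on the selected NDVI periods.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_dekads = 200, 20            # e.g., 20 NDVI composites per season
X = rng.normal(size=(n_samples, n_dekads))
# Assume only the March-June composites (columns 6-12) actually drive yield.
y = X[:, 6:13].sum(axis=1) + 0.3 * rng.normal(size=n_samples)

brt = GradientBoostingRegressor(random_state=1).fit(X, y)
top = np.argsort(brt.feature_importances_)[::-1][:7]   # keep the most influential periods
print("selected NDVI periods:", sorted(top))

for name, model in {"BRT": GradientBoostingRegressor(random_state=1),
                    "SVM": SVR(kernel="rbf", C=10.0)}.items():
    rmse = -cross_val_score(model, X[:, top], y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: cross-validated RMSE = {rmse:.3f}")
```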
Zhang, Xuan; Li, Wei; Yin, Bin; Chen, Weizhong; Kelly, Declan P; Wang, Xiaoxin; Zheng, Kaiyi; Du, Yiping
2013-10-01
Coffee is the most heavily consumed beverage in the world after water, and quality is a key consideration in its commercial trade. Caffeine content, which has a significant effect on the final quality of coffee products, therefore needs to be determined rapidly and reliably by new analytical techniques. The main purpose of this work was to establish a powerful and practical analytical method based on near infrared spectroscopy (NIRS) and chemometrics for quantitative determination of the caffeine content in roasted Arabica coffees. Ground coffee samples covering a wide range of roasting levels were analyzed by NIR; in parallel, their caffeine contents were quantitatively determined by the most commonly used HPLC-UV method to provide reference values. Calibration models were then developed based on chemometric analyses of the NIR spectral data and the reference concentrations of the coffee samples. Partial least squares (PLS) regression was used to construct the models. Furthermore, diverse spectral pretreatment and variable selection techniques were applied in order to obtain robust and reliable reduced-spectrum regression models. Comparing the respective quality of the different models constructed, the application of second derivative pretreatment and stability competitive adaptive reweighted sampling (SCARS) variable selection provided a notably improved regression model, with a root mean square error of cross validation (RMSECV) of 0.375 mg/g and a correlation coefficient (R) of 0.918 at a PLS factor of 7. An independent test set was used to assess the model, giving a root mean square error of prediction (RMSEP) of 0.378 mg/g, a mean relative error of 1.976% and a mean relative standard deviation (RSD) of 1.707%. Thus, the results provided by the high-quality calibration model revealed the feasibility of NIR spectroscopy for at-line prediction of the caffeine content of unknown roasted coffee samples, thanks to its short analysis time of a few seconds and the non-destructive nature of NIRS. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhang, Xuan; Li, Wei; Yin, Bin; Chen, Weizhong; Kelly, Declan P.; Wang, Xiaoxin; Zheng, Kaiyi; Du, Yiping
2013-10-01
Coffee is the most heavily consumed beverage in the world after water, and quality is a key consideration in its commercial trade. Caffeine content, which has a significant effect on the final quality of coffee products, therefore needs to be determined rapidly and reliably by new analytical techniques. The main purpose of this work was to establish a powerful and practical analytical method based on near infrared spectroscopy (NIRS) and chemometrics for quantitative determination of the caffeine content in roasted Arabica coffees. Ground coffee samples covering a wide range of roasting levels were analyzed by NIR; in parallel, their caffeine contents were quantitatively determined by the most commonly used HPLC-UV method to provide reference values. Calibration models were then developed based on chemometric analyses of the NIR spectral data and the reference concentrations of the coffee samples. Partial least squares (PLS) regression was used to construct the models. Furthermore, diverse spectral pretreatment and variable selection techniques were applied in order to obtain robust and reliable reduced-spectrum regression models. Comparing the respective quality of the different models constructed, the application of second derivative pretreatment and stability competitive adaptive reweighted sampling (SCARS) variable selection provided a notably improved regression model, with a root mean square error of cross validation (RMSECV) of 0.375 mg/g and a correlation coefficient (R) of 0.918 at a PLS factor of 7. An independent test set was used to assess the model, giving a root mean square error of prediction (RMSEP) of 0.378 mg/g, a mean relative error of 1.976% and a mean relative standard deviation (RSD) of 1.707%. Thus, the results provided by the high-quality calibration model revealed the feasibility of NIR spectroscopy for at-line prediction of the caffeine content of unknown roasted coffee samples, thanks to its short analysis time of a few seconds and the non-destructive nature of NIRS.
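The chemometric pipeline described in this record (second-derivative pretreatment followed by PLS regression with cross-validated RMSECV) can be sketched as follows; the simulated spectra, the 1700 nm absorption band, and the caffeine values are illustrative assumptions, and the SCARS variable selection step is omitted.

```python
# Hedged sketch of the workflow: Savitzky-Golay second derivative + PLS regression.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_samples, n_wavelengths = 80, 300
wavelengths = np.linspace(1100, 2500, n_wavelengths)            # nm, illustrative grid
caffeine = rng.uniform(8.0, 14.0, n_samples)                    # mg/g, synthetic reference values

# Synthetic spectra: a drifting baseline plus a caffeine-related absorption band
baseline = rng.normal(scale=0.05, size=(n_samples, n_wavelengths)).cumsum(axis=1)
band = np.exp(-((wavelengths - 1700.0) / 30.0) ** 2)
spectra = baseline + 0.02 * caffeine[:, None] * band

# Second-derivative pretreatment along the wavelength axis
d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

pls = PLSRegression(n_components=7)
pred = cross_val_predict(pls, d2, caffeine, cv=10).ravel()
rmsecv = np.sqrt(np.mean((pred - caffeine) ** 2))
print(f"RMSECV = {rmsecv:.3f} mg/g, R = {np.corrcoef(pred, caffeine)[0, 1]:.3f}")
```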
NASA Astrophysics Data System (ADS)
Yang, J.; Astitha, M.; Delle Monache, L.; Alessandrini, S.
2016-12-01
The accuracy of weather forecasts in the Northeast U.S. has become very important in recent years, given the serious and devastating effects of extreme weather events. Despite the use of evolved forecasting tools and techniques strengthened by increased super-computing resources, weather forecasting systems still have their limitations in predicting extreme events. In this study, we examine the combination of analog ensemble and Bayesian regression techniques to improve the prediction of storms that have impacted the NE U.S., mostly defined by the occurrence of high wind speeds (i.e. blizzards, winter storms, hurricanes and thunderstorms). The wind speed, wind direction and temperature predicted by two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) are combined using the mentioned techniques, exploring various ways that those variables influence the minimization of the prediction error (systematic and random). This study is focused on retrospective simulations of 146 storms that affected the NE U.S. in the period 2005-2016. In order to evaluate the techniques, a leave-one-out cross-validation procedure was implemented, with the remaining 145 storms used as the training dataset for each left-out storm. The analog ensemble method selects a set of past observations that correspond to the best analogs of the numerical weather prediction, and these selected observations provide a set of ensemble members. The set of ensemble members can then be used in a deterministic or probabilistic way. In the Bayesian regression framework, optimal variances are estimated for the training partition by minimizing the root mean square error and are applied to the out-of-sample storm. The preliminary results indicate a significant improvement in the statistical metrics of 10-m wind speed for 146 storms using both techniques (20-30% bias and error reduction in all observation-model pairs). In this presentation, we discuss the various combinations of atmospheric predictors and techniques and illustrate how the long record of predicted storms is valuable in the improvement of wind speed prediction.
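The analog ensemble idea can be illustrated with a few lines of NumPy: for a new forecast, the closest historical forecasts are found and their verifying observations become the ensemble members. This is a simplified sketch with synthetic data, not the authors' operational configuration, and the Bayesian regression step is not shown.

```python
# Simplified analog-ensemble sketch (synthetic data, illustrative predictors).
import numpy as np

rng = np.random.default_rng(3)
n_hist = 500
# Historical predictors (e.g., forecast wind speed, direction components, temperature)
hist_forecasts = rng.normal(size=(n_hist, 3))
hist_observed = hist_forecasts[:, 0] * 1.1 + 0.4 * rng.normal(size=n_hist)  # verifying obs

def analog_ensemble(new_forecast, hist_fc, hist_obs, k=20):
    """Return the k observations whose past forecasts best match the new forecast."""
    # Normalize each predictor so the distance is not dominated by one variable.
    scale = hist_fc.std(axis=0)
    d = np.sqrt((((hist_fc - new_forecast) / scale) ** 2).sum(axis=1))
    return hist_obs[np.argsort(d)[:k]]

members = analog_ensemble(np.array([1.5, -0.2, 0.7]), hist_forecasts, hist_observed)
print("deterministic (ensemble mean):", members.mean())
print("probabilistic spread (std):", members.std())
```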
Vivekanandan, T; Sriman Narayana Iyengar, N Ch
2017-11-01
Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, the traditional record maintenance strategy has been replaced in the healthcare system. It is vital to develop a model that is able to handle the huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, which is represented by DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
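As a hedged illustration of differential-evolution-based feature selection, the sketch below evolves binary feature masks with a DE/rand/1-style mutation and scores each mask by the cross-validated accuracy of a small feed-forward network. It is not the authors' modified DE/rand/2/exp strategy, the fuzzy AHP stage is omitted, and the data are synthetic.

```python
# Rough DE-style feature-selection sketch (simplified, synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=200, n_features=20, n_informative=6, random_state=4)

def fitness(mask):
    """Cross-validated accuracy of a small network on the masked feature set."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=4)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_pop, n_feat, F, CR = 10, X.shape[1], 0.8, 0.9
pop = (rng.random((n_pop, n_feat)) > 0.5).astype(float)
scores = np.array([fitness(ind) for ind in pop])

for _ in range(5):                                  # a few DE generations
    for i in range(n_pop):
        a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
        mutant = a + F * (b - c)                    # DE/rand/1-style mutation
        cross = rng.random(n_feat) < CR
        trial = (np.where(cross, mutant, pop[i]) > 0.5).astype(float)  # binarize mask
        s = fitness(trial)
        if s >= scores[i]:                          # greedy selection
            pop[i], scores[i] = trial, s

best = pop[scores.argmax()].astype(bool)
print("selected features:", np.flatnonzero(best), "CV accuracy:", round(scores.max(), 3))
```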
Selectivity/Specificity Improvement Strategies in Surface-Enhanced Raman Spectroscopy Analysis
Wang, Feng; Cao, Shiyu; Yan, Ruxia; Wang, Zewei; Wang, Dan; Yang, Haifeng
2017-01-01
Surface-enhanced Raman spectroscopy (SERS) is a powerful technique for the discrimination, identification, and potential quantification of certain compounds/organisms. However, its real application is challenging due to the multiple interference from the complicated detection matrix. Therefore, selective/specific detection is crucial for the real application of SERS technique. We summarize in this review five selective/specific detection techniques (chemical reaction, antibody, aptamer, molecularly imprinted polymers and microfluidics), which can be applied for the rapid and reliable selective/specific detection when coupled with SERS technique. PMID:29160798
A hybrid feature selection method using multiclass SVM for diagnosis of erythemato-squamous disease
NASA Astrophysics Data System (ADS)
Maryam, Setiawan, Noor Akhmad; Wahyunggoro, Oyas
2017-08-01
The diagnosis of erythemato-squamous disease is a complex problem, and the disease is difficult to detect in dermatology. Besides that, it is a major cause of skin cancer. Data mining implementation in the medical field helps experts diagnose precisely, accurately, and inexpensively. In this research, we use data mining techniques to develop a diagnosis model based on multiclass SVM with a novel hybrid feature selection method to diagnose erythemato-squamous disease. Our hybrid feature selection method, named ChiGA (Chi Square and Genetic Algorithm), uses the advantages of filter and wrapper methods to select the optimal feature subset from the original features. Chi square is used as the filter method to remove redundant features and GA as the wrapper method to select the ideal feature subset, with SVM used as the classifier. Experiments were performed with 10-fold cross-validation on the erythemato-squamous diseases dataset taken from the University of California Irvine (UCI) machine learning database. The experimental results show that the proposed model based on multiclass SVM with Chi Square and GA can give an optimum feature subset. There are 18 optimum features with 99.18% accuracy.
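The filter-plus-wrapper idea behind ChiGA can be sketched as follows: a chi-square filter discards weak features, and a wrapper search with an SVM classifier then picks the final subset. For brevity a greedy forward search stands in here for the genetic algorithm, and the dataset is synthetic rather than the UCI dermatology data.

```python
# Illustrative only: chi-square filter + SVM wrapper (greedy stand-in for GA).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=360, n_features=34, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=5)
X = X - X.min(axis=0)                      # chi2 requires non-negative features

# Filter step: drop clearly irrelevant features with a chi-square test.
filt = SelectKBest(chi2, k=20).fit(X, y)
kept = np.flatnonzero(filt.get_support())

# Wrapper step: greedily add the feature that most improves SVM accuracy.
selected, best_acc = [], 0.0
improved = True
while improved:
    improved = False
    for f in kept:
        if f in selected:
            continue
        acc = cross_val_score(SVC(), X[:, selected + [f]], y, cv=5).mean()
        if acc > best_acc:
            best_acc, best_f, improved = acc, f, True
    if improved:
        selected.append(best_f)

print("selected feature subset:", selected, "accuracy:", round(best_acc, 3))
```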
Morales, Dinora Araceli; Bengoetxea, Endika; Larrañaga, Pedro; García, Miguel; Franco, Yosu; Fresnada, Mónica; Merino, Marisa
2008-05-01
In vitro fertilization (IVF) is a medically assisted reproduction technique that enables infertile couples to achieve successful pregnancy. Given the uncertainty of the treatment, we propose an intelligent decision support system based on supervised classification by Bayesian classifiers to aid the selection of the most promising embryos that will form the batch to be transferred to the woman's uterus. The aim of the supervised classification system is to improve the overall success rate of each IVF treatment in which a batch of embryos is transferred each time, where success is achieved when implantation (i.e. pregnancy) is obtained. For ethical reasons, different legislative restrictions on this technique apply in every country. In Spain, legislation allows a maximum of three embryos to form each transfer batch. As a result, clinicians prefer to select the embryos by non-invasive embryo examination based on simple methods and observation focused on the morphology and dynamics of embryo development after fertilization. This paper proposes the application of Bayesian classifiers to this embryo selection problem in order to provide a decision support system that allows a more accurate selection than with the current procedures, which fully rely on the expertise and experience of embryologists. For this, we propose to take into consideration a reduced subset of feature variables related to embryo morphology and clinical data of patients, and to induce Bayesian classification models from these data. Results obtained applying a filter technique to choose the subset of variables, and the performance of Bayesian classifiers using them, are presented.
NASA Technical Reports Server (NTRS)
Seldner, K.
1976-01-01
The development of control systems for jet engines requires a real-time computer simulation. The simulation provides an effective tool for evaluating control concepts and problem areas prior to actual engine testing. The development and use of a real-time simulation of the Pratt and Whitney F100-PW100 turbofan engine is described. The simulation was used in a multi-variable optimal controls research program using linear quadratic regulator theory. The simulation is used to generate linear engine models at selected operating points and evaluate the control algorithm. To reduce the complexity of the design, it is desirable to reduce the order of the linear model. A technique to reduce the order of the model is discussed. Selected results between high and low order models are compared. The LQR control algorithms can be programmed on a digital computer. This computer will control the engine simulation over the desired flight envelope.
A review of active learning approaches to experimental design for uncovering biological networks
2017-01-01
Various types of biological knowledge describe networks of interactions among elementary entities. For example, transcriptional regulatory networks consist of interactions among proteins and genes. Current knowledge about the exact structure of such networks is highly incomplete, and laboratory experiments that manipulate the entities involved are conducted to test hypotheses about these networks. In recent years, various automated approaches to experiment selection have been proposed. Many of these approaches can be characterized as active machine learning algorithms. Active learning is an iterative process in which a model is learned from data, hypotheses are generated from the model to propose informative experiments, and the experiments yield new data that is used to update the model. This review describes the various models, experiment selection strategies, validation techniques, and successful applications described in the literature; highlights common themes and notable distinctions among methods; and identifies likely directions of future research and open problems in the area. PMID:28570593
Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering
NASA Astrophysics Data System (ADS)
Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.
2016-06-01
This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck, and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
Imaging brain microstructure with diffusion MRI: practicality and applications.
Alexander, Daniel C; Dyrby, Tim B; Nilsson, Markus; Zhang, Hui
2017-11-29
This article gives an overview of microstructure imaging of the brain with diffusion MRI and reviews the state of the art. The microstructure-imaging paradigm aims to estimate and map microscopic properties of tissue using a model that links these properties to the voxel scale MR signal. Imaging techniques of this type are just starting to make the transition from the technical research domain to wide application in biomedical studies. We focus here on the practicalities of both implementing such techniques and using them in applications. Specifically, the article summarizes the relevant aspects of brain microanatomy and the range of diffusion-weighted MR measurements that provide sensitivity to them. It then reviews the evolution of mathematical and computational models that relate the diffusion MR signal to brain tissue microstructure, as well as the expanding areas of application. Next we focus on practicalities of designing a working microstructure imaging technique: model selection, experiment design, parameter estimation, validation, and the pipeline of development of this class of technique. The article concludes with some future perspectives on opportunities in this topic and expectations on how the field will evolve in the short-to-medium term. Copyright © 2017 John Wiley & Sons, Ltd.
McClintock, B.T.; White, Gary C.; Burnham, K.P.; Pryde, M.A.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
In recent years, the mark-resight method for estimating abundance when the number of marked individuals is known has become increasingly popular. By using field-readable bands that may be resighted from a distance, these techniques can be applied to many species, and are particularly useful for relatively small, closed populations. However, due to the different assumptions and general rigidity of the available estimators, researchers must often commit to a particular model without rigorous quantitative justification for model selection based on the data. Here we introduce a nonlinear logit-normal mixed effects model addressing this need for a more generalized framework. Similar to models available for mark-recapture studies, the estimator allows a wide variety of sampling conditions to be parameterized efficiently under a robust sampling design. Resighting rates may be modeled simply or with more complexity by including fixed temporal and random individual heterogeneity effects. Using information theory, the model(s) best supported by the data may be selected from the candidate models proposed. Under this generalized framework, we hope the uncertainty associated with mark-resight model selection will be reduced substantially. We compare our model to other mark-resight abundance estimators when applied to mainland New Zealand robin (Petroica australis) data recently collected in Eglinton Valley, Fiordland National Park and summarize its performance in simulation experiments.
Wei, Xuelei; Dong, Fuhui
2011-12-01
To review recent advances in the research and application of computer aided forming techniques for constructing bone tissue engineering scaffolds. The literature concerning computer aided forming techniques for constructing bone tissue engineering scaffolds in recent years was reviewed extensively and summarized. Several studies over the last decade have focused on computer aided forming techniques for bone scaffold construction using various scaffold materials, based on computer aided design (CAD) and bone scaffold rapid prototyping (RP). CAD includes medical CAD, STL, and reverse design. Reverse design can fully simulate normal bone tissue and could be very useful for CAD. RP techniques include fused deposition modeling, three-dimensional printing, selective laser sintering, three-dimensional bioplotting, and low-temperature deposition manufacturing. These techniques provide a new way to construct bone tissue engineering scaffolds with complex internal structures. With the rapid development of molding and forming techniques, computer aided forming techniques are expected to provide ideal bone tissue engineering scaffolds.
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases version 10 (ICD-10) classification system] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90% for most metrics) using a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in the overall accuracy compared with the existing techniques and four baselines. The proposed system is feasible and practical to use for automatic classification of ICD-10-related cause of death from autopsy reports. The proposed system assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports.
Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali
2017-01-01
Objectives Widespread implementation of electronic databases has improved the accessibility of plaintext clinical information for supplementary use. Numerous machine learning techniques, such as supervised machine learning approaches or ontology-based approaches, have been employed to obtain useful information from plaintext clinical data. This study proposes an automatic multi-class classification system to predict accident-related causes of death from plaintext autopsy reports through expert-driven feature selection with supervised automatic text classification decision models. Methods Accident-related autopsy reports were obtained from one of the largest hospitals in Kuala Lumpur. These reports belong to nine different accident-related causes of death. A master feature vector was prepared by extracting features from the collected autopsy reports using unigrams with lexical categorization. This master feature vector was used to detect the cause of death [according to the International Classification of Diseases version 10 (ICD-10) classification system] through five automated feature selection schemes, the proposed expert-driven approach, five feature subset sizes, and five machine learning classifiers. Model performance was evaluated using precisionM, recallM, F-measureM, accuracy, and area under the ROC curve. Four baselines were used to compare the results with the proposed system. Results Random forest and J48 decision models parameterized using expert-driven feature selection yielded the highest evaluation measures (approaching 85% to 90% for most metrics) using a feature subset size of 30. The proposed system also showed approximately 14% to 16% improvement in the overall accuracy compared with the existing techniques and four baselines. Conclusion The proposed system is feasible and practical to use for automatic classification of ICD-10-related cause of death from autopsy reports. The proposed system assists pathologists in accurately and rapidly determining the underlying cause of death based on autopsy findings. Furthermore, the proposed expert-driven feature selection approach and the findings are generally applicable to other kinds of plaintext clinical reports. PMID:28166263
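A minimal sketch of this kind of pipeline (unigram features, automated feature selection, and a random forest classifier) is given below; the toy report snippets, labels, and subset size are invented for illustration and do not reflect the study's autopsy corpus.

```python
# Hedged sketch: bag-of-words text classification with chi-square feature selection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

reports = [
    "multiple rib fractures and haemothorax after road traffic collision",
    "skull fracture with epidural haemorrhage following motorcycle crash",
    "asphyxia due to submersion, water in airways and lungs",
    "drowning with pulmonary oedema and water in stomach",
    "extensive burns with soot in the airways from house fire",
    "thermal injuries and carbon monoxide inhalation in enclosed fire",
] * 10  # repeated so cross-validation has enough samples
labels = ["transport", "transport", "drowning", "drowning", "fire", "fire"] * 10

pipeline = Pipeline([
    ("unigrams", CountVectorizer(ngram_range=(1, 1))),
    ("select", SelectKBest(chi2, k=30)),           # illustrative feature subset size
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

print("CV accuracy:", cross_val_score(pipeline, reports, labels, cv=5).mean())
```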
Capillary gas chromatography with mass spectrometric detection is the most commonly used technique for analyzing samples from Superfund sites. While the U.S. EPA has developed target lists of compounds for which library mass spectra are available on most mass spectrometer data s...
Statistics and Style. Mathematical Linguistics and Automatic Language Processing No. 6.
ERIC Educational Resources Information Center
Dolezel, Lubomir, Ed.; Bailey, Richard W., Ed.
This collection of 17 articles concerning the application of mathematical models and techniques to the study of literary style is an attempt to overcome the communication barriers that exist between scholars in the various fields that find their meeting ground in statistical stylistics. The articles selected were chosen to represent the best…
Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT
ERIC Educational Resources Information Center
Cheng, Ying; Chang, Hua-Hua; Yi, Qing
2007-01-01
Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
ERIC Educational Resources Information Center
Cavallo, Ann Liberatore
This 1-week study explored the extent to which high school students (n=140) acquired meaningful understanding of selected biological topics (meiosis and the Punnett square method) and the relationship between these topics. This study: (1) examined "mental modeling" as a technique for measuring students' meaningful understanding of the…
The Effects of Two Types of Assertion Training on Self-Assertion, Anxiety and Self Actualization.
ERIC Educational Resources Information Center
Langelier, Regis
The standard assertion training package includes a selection of techniques from behavior therapy such as modeling, behavior rehearsal, and role-playing along with lectures and discussion, bibliotherapy, and audiovisual feedback. The effects of a standard assertion training package with and without videotape feedback on self-report measures of…
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Riddlesworth, Tonya D.; Kollman, Craig; Lass, Jonathan H.; Patel, Sanjay V.; Stulting, R. Doyle; Benetz, Beth Ann; Gal, Robin L.; Beck, Roy W.
2014-01-01
Purpose. We constructed several mathematical models that predict endothelial cell density (ECD) for patients after penetrating keratoplasty (PK) for a moderate-risk condition (principally Fuchs' dystrophy or pseudophakic/aphakic corneal edema). Methods. In a subset (n = 591) of Cornea Donor Study participants, postoperative ECD was determined by a central reading center. Various statistical models were considered to estimate the ECD trend longitudinally over 10 years of follow-up. A biexponential model with and without a logarithm transformation was fit using the Gauss-Newton nonlinear least squares algorithm. To account for correlated data, a log-polynomial model was fit using the restricted maximum likelihood method. A sensitivity analysis for the potential bias due to selective dropout was performed using Bayesian analysis techniques. Results. The three models using a logarithm transformation yield similar trends, whereas the model without the transform predicts higher ECD values. The adjustment for selective dropout turns out to be negligible. However, this is possibly due to the relatively low rate of graft failure in this cohort (19% at 10 years). Fuchs' dystrophy and pseudophakic/aphakic corneal edema (PACE) patients had similar ECD decay curves, with the PACE group having slightly higher cell densities by 10 years. Conclusions. Endothelial cell loss after PK can be modeled via a log-polynomial model, which accounts for the correlated data from repeated measures on the same subject. This model is not significantly affected by the selective dropout due to graft failure. Our findings warrant further study on how this may extend to ECD following endothelial keratoplasty. PMID:25425307
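A biexponential decay of endothelial cell density can be fitted by nonlinear least squares along the lines sketched below; scipy's curve_fit is used here instead of the study's Gauss-Newton routine, and the follow-up times, parameter values, and noise level are simulated assumptions.

```python
# Hedged sketch: biexponential ECD decay fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, a, b, c, d):
    """ECD(t) = a*exp(-b*t) + c*exp(-d*t), with t in years after keratoplasty."""
    return a * np.exp(-b * t) + c * np.exp(-d * t)

rng = np.random.default_rng(6)
t = np.tile(np.arange(0, 11), 30)                        # yearly visits for 30 "patients"
true = biexponential(t, 1800.0, 0.9, 1200.0, 0.05)       # cells/mm^2, illustrative values
ecd = true + rng.normal(scale=80.0, size=t.size)         # add measurement noise

params, cov = curve_fit(biexponential, t, ecd, p0=[2000, 1.0, 1000, 0.1])
print("fitted parameters (a, b, c, d):", np.round(params, 3))
print("predicted ECD at 10 years:", round(biexponential(10, *params), 1))
```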
Evans, T M; LeChevallier, M W; Waarvick, C E; Seidler, R J
1981-01-01
The species of total coliform bacteria isolated from drinking water and untreated surface water by the membrane filter (MF), the standard most-probable-number (S-MPN), and modified most-probable-number (M-MPN) techniques were compared. Each coliform detection technique selected for a different profile of coliform species from both types of water samples. The MF technique indicated that Citrobacter freundii was the most common coliform species in water samples. However, the fermentation tube techniques displayed selectivity towards the isolation of Escherichia coli and Klebsiella. The M-MPN technique selected for more C. freundii and Enterobacter spp. from untreated surface water samples and for more Enterobacter and Klebsiella spp. from drinking water samples than did the S-MPN technique. The lack of agreement between the number of coliforms detected in a water sample by the S-MPN, M-MPN, and MF techniques was a result of the selection for different coliform species by the various techniques. PMID:7013706
Comparison of stream invertebrate response models for bioassessment metric
Waite, Ian R.; Kennen, Jonathan G.; May, Jason T.; Brown, Larry R.; Cuffney, Thomas F.; Jones, Kimberly A.; Orlando, James L.
2012-01-01
We aggregated invertebrate data from various sources to assemble data for modeling in two ecoregions in Oregon and one in California. Our goal was to compare the performance of models developed using multiple linear regression (MLR) techniques with models developed using three relatively new techniques: classification and regression trees (CART), random forest (RF), and boosted regression trees (BRT). We used tolerance of taxa based on richness (RICHTOL) and ratio of observed to expected taxa (O/E) as response variables and land use/land cover as explanatory variables. Responses were generally linear; therefore, there was little improvement to the MLR models when compared to models using CART and RF. In general, the four modeling techniques (MLR, CART, RF, and BRT) consistently selected the same primary explanatory variables for each region. However, results from the BRT models showed significant improvement over the MLR models for each region; increases in R2 from 0.09 to 0.20. The O/E metric that was derived from models specifically calibrated for Oregon consistently had lower R2 values than RICHTOL for the two regions tested. Modeled O/E R2 values were between 0.06 and 0.10 lower for each of the four modeling methods applied in the Willamette Valley and were between 0.19 and 0.36 points lower for the Blue Mountains. As a result, BRT models may indeed represent a good alternative to MLR for modeling species distribution relative to environmental variables.
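A compact way to reproduce this kind of model comparison is to cross-validate each learner on the same predictors and compare R^2, as in the sketch below; the land-cover-style predictors and the mildly nonlinear response are synthetic stand-ins for the invertebrate data.

```python
# Sketch: comparing MLR, CART, random forest, and boosted regression trees.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 250
X = rng.uniform(size=(n, 6))                                  # e.g., % urban, % agriculture, ...
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=n)   # mildly nonlinear response

models = {
    "MLR": LinearRegression(),
    "CART": DecisionTreeRegressor(max_depth=4, random_state=7),
    "RF": RandomForestRegressor(n_estimators=300, random_state=7),
    "BRT": GradientBoostingRegressor(random_state=7),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")
```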
Treatment of atomic and molecular line blanketing by opacity sampling
NASA Technical Reports Server (NTRS)
Johnson, H. R.; Krupp, B. M.
1976-01-01
A sampling technique for treating the radiative opacity of large numbers of atomic and molecular lines in cool stellar atmospheres is subjected to several tests. In this opacity sampling (OS) technique, the global opacity is sampled at only a selected set of frequencies, and at each of these frequencies the total monochromatic opacity is obtained by summing the contribution of every relevant atomic and molecular line. In accord with previous results, we find that the structure of atmospheric models is accurately fixed by the use of 1000 frequency points, and 100 frequency points are adequate for many purposes. The effects of atomic and molecular lines are separately studied. A test model computed using the OS method agrees very well with a model having identical atmospheric parameters, but computed with the giant line (opacity distribution function) method.
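The core of the opacity sampling idea can be written in a few lines: choose a limited set of sample frequencies and, at each one, sum the contribution of every line. The sketch below uses random Lorentzian lines as placeholders for a real atomic and molecular line list.

```python
# Toy opacity-sampling sketch with randomly generated Lorentzian lines.
import numpy as np

rng = np.random.default_rng(8)
n_lines = 50_000
line_nu = rng.uniform(1.0e14, 3.0e14, n_lines)       # line-center frequencies (Hz)
line_strength = rng.lognormal(mean=0.0, sigma=1.5, size=n_lines)
line_width = rng.uniform(1e9, 5e9, n_lines)          # Lorentzian half-widths (Hz)

def total_opacity(nu):
    """Sum the Lorentzian contribution of every line at one sampled frequency."""
    return np.sum(line_strength * (line_width / np.pi)
                  / ((nu - line_nu) ** 2 + line_width ** 2))

# Sample the global opacity at ~1000 frequency points across the band.
sample_nu = np.linspace(1.0e14, 3.0e14, 1000)
kappa = np.array([total_opacity(nu) for nu in sample_nu])
print("mean sampled opacity:", kappa.mean())
```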
Mode Selection Techniques in Variable Mass Flexible Body Modeling
NASA Technical Reports Server (NTRS)
Quiocho, Leslie J.; Ghosh, Tushar K.; Frenkel, David; Huynh, An
2010-01-01
In developing a flexible body spacecraft simulation for the Launch Abort System of the Orion vehicle, when a rapid mass depletion takes place, the dynamics problem with time varying eigenmodes had to be addressed. Three different techniques were implemented, with different trade-offs made between performance and fidelity. A number of technical issues had to be solved in the process. This paper covers the background of the variable mass flexibility problem, the three approaches to simulating it, and the technical issues that were solved in formulating and implementing them.
Development of a Tabletop Model for the Generation of Amorphous/ Microcrystalline Metal Powders
1980-04-30
List-of-figures fragment from the report: voltage characteristics for wetting (si) and non-wetting (AZ 4.5% Cu) EHD spray; schematic of the process of electrohydrodynamic droplet...; transmission electron microscope image of a deposit, fine powders and "matrix" film of Fe-Ni-B-P metallic glass alloy produced by the EHD technique; selected area... transmission electron microscope image of a deposit, fine powders and "matrix" film of Fe-Ni-B-P metallic glass alloy produced by the EHD technique.
Dong, Pei-Pei; Ge, Guang-Bo; Zhang, Yan-Yan; Ai, Chun-Zhi; Li, Guo-Hui; Zhu, Liang-Liang; Luan, Hong-Wei; Liu, Xing-Bao; Yang, Ling
2009-10-16
Seven pairs of epimers and one pair of isomeric metabolites of taxanes, each pair of which have similar structures but different retention behaviors, together with an additional 13 taxanes with different substitutions, were chosen to investigate the quantitative structure-retention relationship (QSRR) of taxanes in ultra fast liquid chromatography (UFLC). The Monte Carlo variable selection (MCVS) method was adopted to choose descriptors. The selected four descriptors were used to build QSRR models with multi-linear regression (MLR) and artificial neural network (ANN) modeling techniques. Both the linear and nonlinear models show good predictive ability, of which the ANN model was better, with determination coefficients R(2) for the training, validation and test sets of 0.9892, 0.9747 and 0.9840, respectively. The results of 100 repetitions of leave-12-out cross validation showed the robustness of this model. All the isomers can be correctly differentiated by this model. According to the selected descriptors, three-dimensional structural information was critical for the recognition of epimers. Hydrophobic interaction was the uppermost factor for retention in UFLC. Molecules' polarizability and polarity properties were also closely correlated with retention behaviors. This QSRR model will be useful for the separation and identification of taxanes, including epimers and metabolites, from botanical or biological samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Shiju; Qian, Wei; Guan, Yubao
2016-06-15
Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques and develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which includes CT images, nine clinical and biological (CB) markers, and outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained DFS and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation (K-fold cross-validation) method, the computed areas under a receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061, when using the QI and CB based classifiers, respectively. By fusion of the scores generated by the two classifiers, AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifier to yield improved prediction accuracy.
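The general recipe (oversampling, feature selection, separate classifiers per feature block, and score-level fusion) can be outlined as below. This is a hedged sketch: it assumes the imbalanced-learn package for SMOTE, uses logistic regression and an ANOVA filter in place of the study's RBFN classifiers and BestFirst search, and runs on synthetic data.

```python
# Sketch: SMOTE + feature selection per block, then simple score fusion.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: 35 quantitative image features + 9 clinical/biological markers
X, y = make_classification(n_samples=94, n_features=44, n_informative=8,
                           weights=[0.8, 0.2], random_state=9)
X_img, X_cb = X[:, :35], X[:, 35:]

idx_train, idx_test = train_test_split(np.arange(len(y)), test_size=0.3,
                                       stratify=y, random_state=9)

def block_scores(X_block, k):
    """Oversample + feature-select on the training split, score the test split."""
    Xtr, ytr = SMOTE(random_state=9).fit_resample(X_block[idx_train], y[idx_train])
    sel = SelectKBest(f_classif, k=k).fit(Xtr, ytr)
    clf = LogisticRegression(max_iter=1000).fit(sel.transform(Xtr), ytr)
    return clf.predict_proba(sel.transform(X_block[idx_test]))[:, 1]

score_img = block_scores(X_img, k=8)
score_cb = block_scores(X_cb, k=5)
fused = (score_img + score_cb) / 2.0                # simple score-level fusion

for name, s in [("image", score_img), ("clinical", score_cb), ("fused", fused)]:
    print(f"{name} AUC = {roc_auc_score(y[idx_test], s):.3f}")
```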
Dual-band frequency selective surface with large band separation and stable performance
NASA Astrophysics Data System (ADS)
Zhou, Hang; Qu, Shao-Bo; Peng, Wei-Dong; Lin, Bao-Qin; Wang, Jia-Fu; Ma, Hua; Zhang, Jie-Qiu; Bai, Peng; Wang, Xu-Hua; Xu, Zhuo
2012-05-01
A new technique of designing a dual-band frequency selective surface with large band separation is presented. This technique is based on a delicately designed topology of L- and Ku-band microwave filters. The two band-pass responses are generated by a capacitively-loaded square-loop frequency selective surface and an aperture-coupled frequency selective surface, respectively. A Faraday cage is located between the two frequency selective surface structures to eliminate undesired couplings. Based on this technique, a dual-band frequency selective surface with large band separation is designed, which possesses large band separation, high selectivity, and stable performance under various incident angles and different polarizations.
NASA Astrophysics Data System (ADS)
Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.
2018-05-01
An automatic building façade thermal texture mapping approach, using uncooled thermal camera data, is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration the local image scale, object incident angle, image viewing angle as well as occlusions. Afterwards, the selected textures can be further corrected using thermal radiant characteristics. Finally, the Gauss filter outperforms the voted-texture strategy at smoothing the seams, thus helping, for instance, to reduce the false alarm rate in the detection of façade thermal leakages. Our approach is evaluated on a building row façade located in Dresden, Germany.
Pisanello, Marco; Della Patria, Andrea; Sileo, Leonardo; Sabatini, Bernardo L; De Vittorio, Massimo; Pisanello, Ferruccio
2015-10-01
Optogenetic approaches to manipulate neural activity have revolutionized the ability of neuroscientists to uncover the functional connectivity underlying brain function. At the same time, the increasing complexity of in vivo optogenetic experiments has increased the demand for new techniques to precisely deliver light into the brain, in particular to illuminate selected portions of the neural tissue. Tapered and nanopatterned gold-coated optical fibers were recently proposed as minimally invasive multipoint light delivery devices, allowing for site-selective optogenetic stimulation in the mammalian brain [Pisanello , Neuron82, 1245 (2014)]. Here we demonstrate that the working principle behind these devices is based on the mode-selective photonic properties of the fiber taper. Using analytical and ray tracing models we model the finite conductance of the metal coating, and show that single or multiple optical windows located at specific taper sections can outcouple only specific subsets of guided modes injected into the fiber.
Optimisation algorithms for ECG data compression.
Haugland, D; Heber, J G; Husøy, J H
1997-07-01
The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
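The flavor of such an exact selection scheme can be conveyed with a small dynamic program that retains K samples so that linear interpolation between them minimizes the total squared reconstruction error; this sketch follows the general idea rather than the paper's network formulation, and the test signal is synthetic.

```python
# Sketch: optimal retained-sample selection for compression via dynamic programming.
import numpy as np

def segment_error(signal, i, j):
    """Squared error of linearly interpolating signal[i..j] from its endpoints."""
    t = np.arange(i, j + 1)
    interp = np.interp(t, [i, j], [signal[i], signal[j]])
    return float(np.sum((signal[i:j + 1] - interp) ** 2))

def optimal_samples(signal, k):
    """Return indices of k retained samples (endpoints included) minimizing error."""
    n = len(signal)
    err = [[segment_error(signal, i, j) if i < j else 0.0 for j in range(n)]
           for i in range(n)]
    INF = float("inf")
    # best[m][j]: minimal error covering signal[0..j] with m retained samples, last at j
    best = np.full((k + 1, n), INF)
    prev = np.zeros((k + 1, n), dtype=int)
    best[1][0] = 0.0
    for m in range(2, k + 1):
        for j in range(1, n):
            for i in range(j):
                cand = best[m - 1][i] + err[i][j]
                if cand < best[m][j]:
                    best[m][j], prev[m][j] = cand, i
    # Backtrack from the last sample to recover the retained indices.
    idx, m, j = [], k, n - 1
    while m >= 1:
        idx.append(j)
        j, m = prev[m][j], m - 1
    return sorted(idx)

ecg = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.05 * np.random.default_rng(10).normal(size=120)
print("retained sample indices:", optimal_samples(ecg, k=20))
```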
NASA Technical Reports Server (NTRS)
Kattan, Michael W.; Hess, Kenneth R.; Kattan, Michael W.
1998-01-01
New computationally intensive tools for medical survival analyses include recursive partitioning (also called CART) and artificial neural networks. A challenge that remains is to better understand the behavior of these techniques in an effort to know when they will be effective tools. Theoretically they may overcome limitations of the traditional multivariable survival technique, the Cox proportional hazards regression model. Experiments were designed to test whether the new tools would, in practice, overcome these limitations. Two datasets in which theory suggests CART and the neural network should outperform the Cox model were selected. The first was a published leukemia dataset manipulated to have a strong interaction that CART should detect. The second was a published cirrhosis dataset with pronounced nonlinear effects that a neural network should fit. Repeated sampling of 50 training and testing subsets was applied to each technique. The concordance index C was calculated as a measure of predictive accuracy by each technique on the testing dataset. In the interaction dataset, CART outperformed Cox (P less than 0.05) with a C improvement of 0.1 (95% CI, 0.08 to 0.12). In the nonlinear dataset, the neural network outperformed the Cox model (P less than 0.05), but by a very slight amount (0.015). As predicted by theory, CART and the neural network were able to overcome limitations of the Cox model. Experiments like these are important to increase our understanding of when one of these new techniques will outperform the standard Cox model. Further research is necessary to predict which technique will do best a priori and to assess the magnitude of superiority.
NASA Astrophysics Data System (ADS)
Nazri, Engku Muhammad; Yusof, Nur Ai'Syah; Ahmad, Norazura; Shariffuddin, Mohd Dino Khairri; Khan, Shazida Jan Mohd
2017-11-01
Prioritizing and deciding which student activities should be selected and conducted to fulfill the aspirations of a university, as translated in its strategic plan, must be executed with transparency and accountability. This is becoming even more crucial, particularly for universities in Malaysia, with the recent budget cut imposed by the Malaysian government. In this paper, we illustrate how a 0-1 integer programming (0-1 IP) model was implemented to select which of the forty activities proposed by the student body of Universiti Utara Malaysia (UUM) should be implemented for the 2017/2018 academic year. Two different models were constructed. The first model was developed to determine the minimum total budget that should be given to the student body by the UUM management to conduct all the activities that can fulfill the minimum targeted number of activities as stated in its strategic plan. The second model, on the other hand, was developed to determine which activities to select based on the total budget already allocated beforehand by the UUM management towards fulfilling the requirements set in its strategic plan. The selection of activities for the second model was also based on the preferences of the members of the student body, whereby the preference value for each activity was determined using the Compromised-Analytical Hierarchy Process. The outputs from both models were compared and discussed. The technique used in this study will be useful and suitable for organizations with key performance indicator-oriented programs and limited budget allocations.
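A toy version of the second model can be written as a 0-1 integer program, for example with the PuLP package as below; the activity costs, preference values, budget, and minimum activity count are invented numbers, not UUM's data.

```python
# Illustrative 0-1 IP: maximize total preference under a budget and a minimum count.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, value

costs      = [5200, 3100, 8700, 1500, 4300, 6100, 2400, 3900]   # hypothetical costs
preference = [0.91, 0.62, 0.85, 0.40, 0.77, 0.69, 0.55, 0.73]   # e.g., AHP-derived weights
budget = 20000
min_activities = 4

x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(costs))]

prob = LpProblem("activity_selection", LpMaximize)
prob += lpSum(preference[i] * x[i] for i in range(len(costs)))          # objective
prob += lpSum(costs[i] * x[i] for i in range(len(costs))) <= budget     # budget cap
prob += lpSum(x) >= min_activities                                      # strategic-plan target

prob.solve()
chosen = [i for i in range(len(costs)) if value(x[i]) > 0.5]
print("selected activities:", chosen)
print("total cost:", sum(costs[i] for i in chosen))
```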
Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current constraint adding, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can largely reduce the unexpected increase of the objective function value inevitably caused by constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including head-neck, prostate, lung and oropharyngeal cases, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and it is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
Yamaguchi, Satoshi; Zhang, Bo; Tomonaga, Takeshi; Seino, Utako; Kanagawa, Akiko; Segawa, Masaru; Nagasaka, Hironori; Suzuki, Akira; Miida, Takashi; Yamada, Sohsuke; Sasaguri, Yasuyuki; Doi, Takefumi; Saku, Keijiro; Okazaki, Mitsuyo; Tochino, Yoshihiro; Hirano, Ken-ichi
2014-01-01
The small intestine (SI) is the second-greatest source of HDL in mice. However, selective evaluation of SI-derived HDL (SI-HDL) has been difficult because even the origin of HDL obtained in vivo from the intestinal lymph duct of anesthetized rodents is uncertain. To shed light on this question, we developed a novel in situ perfusion technique using surgically isolated mouse SI, which prevents possible filtration of plasma HDL into the SI lymph duct. With this method, we studied the characteristics of SI-HDL and the mechanisms underlying its production and regulation. Nascent HDL particles were detected in SI lymph perfusates from WT mice, but not from ABCA1 KO mice. SI-HDL had a high protein content, was smaller than plasma HDL, and was rich in TG and apo AIV compared with HDL in liver perfusates. SI-HDL was increased by high-fat diets and reduced in apo E KO mice. In conclusion, using our in situ perfusion model, which enables the selective evaluation of SI-HDL, we demonstrated that ABCA1 plays an important role in intestinal HDL production and that SI-HDL is small, dense, rich in apo AIV, and regulated by nutritional and genetic factors. PMID:24569139
Knick, Steven T.; Rotenberry, J.T.
1998-01-01
We tested the potential of a GIS mapping technique, using a resource selection model developed for black-tailed jackrabbits (Lepus californicus) and based on the Mahalanobis distance statistic, to track changes in shrubsteppe habitats in southwestern Idaho. If successful, the technique could be used to predict animal use areas, or areas undergoing change, in different regions from the same selection function and variables without additional sampling. We determined the multivariate mean vector of 7 GIS variables that described habitats used by jackrabbits, and then ranked the similarity of all cells in the GIS coverage by their Mahalanobis distance to the mean habitat vector. The resulting map accurately depicted areas where we sighted jackrabbits on verification surveys. We then simulated an increase in shrublands (which are important habitats). Contrary to expectation, the new configurations were classified as having lower similarity relative to the original mean habitat vector. Because the selection function is based on a unimodal mean, any deviation, even if biologically positive, produces larger Mahalanobis distances and lower similarity values. We recommend the Mahalanobis distance technique for mapping animal use areas when animals are distributed optimally, the landscape is well sampled to determine the mean habitat vector, and the distributions of the habitat variables do not change.
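The mapping step can be sketched as follows (synthetic habitat values, not the authors' GIS layers): compute the mean habitat vector from used locations, then rank every map cell by its Mahalanobis distance to that vector:

```python
# Sketch of Mahalanobis-distance habitat mapping with synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_used, n_cells, n_vars = 200, 1000, 7              # used locations, map cells, habitat variables

used = rng.normal(0.0, 1.0, (n_used, n_vars))       # habitat values at locations used by jackrabbits
cells = rng.normal(0.2, 1.2, (n_cells, n_vars))     # habitat values for every cell in the coverage

mu = used.mean(axis=0)                              # mean habitat vector
cov_inv = np.linalg.inv(np.cov(used, rowvar=False)) # inverse covariance of used habitats

diff = cells - mu
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distance per cell

# Smaller distances mean greater similarity to the mean used habitat.
ranking = np.argsort(d2)
print("most similar cells:", ranking[:10])
```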
Removal of virus from boar semen spiked with porcine circovirus type 2.
Blomqvist, Gunilla; Persson, Maria; Wallgren, Margareta; Wallgren, Per; Morrell, Jane M
2011-06-01
The virus porcine circovirus type 2 (PCV2) is associated with several disease entities, including reproductive failure. The objective of this study was to investigate the use of a semen processing technique for eliminating infectious PCV2 from semen. PCV2 was chosen as a model virus because of its small size, its high resistance to inactivation, and its known role as a contaminant of boar semen. Aliquots of ejaculates were spiked with PCV2 and processed by a double processing technique consisting of Single Layer Centrifugation on Androcoll™-P followed by a "swim-up" procedure. Samples were collected from the resulting fractions during the selection process and analyzed for the presence of infectious PCV2. Virus titres were determined with a 50% tissue culture infective dose (TCID50) assay by end-point dilution and with an indirect peroxidase monolayer assay. With an initial infectious virus titre of 3.25-3.82 TCID50/50 μL, the two-step sperm selection method eliminated 2.92±0.23 logs of infectious PCV2, corresponding to more than 99% reduction. Sperm quality was not affected by the selection procedure. Copyright © 2011 Elsevier B.V. All rights reserved.
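As a quick arithmetic check of the reported figures (a sketch, not part of the original study), a log10 reduction of 2.92 corresponds to roughly a 99.9% reduction in infectious titre:

```python
# Convert a log10 reduction in infectious titre to a percentage reduction.
log_reduction = 2.92
percent_reduction = (1 - 10 ** (-log_reduction)) * 100
print(f"{percent_reduction:.2f}%")   # about 99.88%, i.e. "more than 99% reduction"
```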
Fechter, Dominik; Storch, Ilse
2014-01-01
Due to legislative protection, many species, including large carnivores, are currently recolonizing Europe. To address the impending human-wildlife conflicts in advance, predictive habitat models can be used to determine potentially suitable habitat and areas likely to be recolonized. As field data are often limited, quantitative rule-based models or the extrapolation of results from other studies are often the techniques of choice. Using the wolf (Canis lupus) in Germany as a model for habitat generalists, we developed habitat models based on the location and extent of twelve existing wolf home ranges in Eastern Germany, current knowledge of wolf biology, different habitat modeling techniques, and various input data. We analyzed ten different input parameter sets to address the following questions: (1) How do a priori assumptions and different input data or habitat modeling techniques affect the abundance and distribution of potentially suitable wolf habitat and the number of wolf packs in Germany? (2) In a synthesis across input parameter sets, which areas are predicted to be most suitable? (3) Are the existing wolf pack home ranges in Eastern Germany consistent with current knowledge of wolf biology and habitat relationships? Our results indicate that the estimated amount of potentially suitable habitat varies greatly depending on which assumptions about habitat relationships are applied and which modeling techniques are chosen. Depending on a priori assumptions, Germany could accommodate between 154 and 1769 wolf packs. The locations of the existing wolf pack home ranges in Eastern Germany indicate that wolves are able to adapt to areas densely populated by humans, but are limited to areas with low road densities. Our analysis suggests that predictive habitat maps in general should be interpreted with caution, and illustrates the risk habitat modelers run when they rely on a single selection of habitat factors or a single modeling technique. PMID:25029506
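To illustrate why a priori assumptions matter so much, the following sketch (entirely hypothetical thresholds, layer values, and territory size; not the authors' model) shows how two different rule-based parameter sets applied to the same raster layers yield very different amounts of suitable habitat and, hence, very different pack estimates:

```python
# Sketch: sensitivity of rule-based habitat models to a priori parameter sets.
# All thresholds, layer values, and the mean territory size are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
cell_km2 = 1.0                                      # area of one raster cell
road_density = rng.uniform(0, 3, 100_000)           # km of road per km^2, per cell
forest_cover = rng.uniform(0, 1, 100_000)           # forest fraction per cell

parameter_sets = {
    "restrictive": {"max_road": 0.5, "min_forest": 0.5},
    "permissive":  {"max_road": 1.5, "min_forest": 0.2},
}
territory_km2 = 200.0                               # hypothetical mean pack territory size

for name, p in parameter_sets.items():
    suitable = (road_density <= p["max_road"]) & (forest_cover >= p["min_forest"])
    area = suitable.sum() * cell_km2
    print(f"{name}: {area:.0f} km^2 suitable, ~{area / territory_km2:.0f} packs")
```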
The effect of prenatal care on birthweight: a full-information maximum likelihood approach.
Rous, Jeffrey J; Jewell, R Todd; Brown, Robert W
2004-03-01
This paper uses a full-information maximum likelihood estimation procedure, the Discrete Factor Method, to estimate the relationship between birthweight and prenatal care. The technique controls for the potential biases surrounding both the sample selection of the pregnancy-resolution decision and the endogeneity of prenatal care. In addition, we use the actual number of prenatal care visits; other studies have typically measured prenatal care as the month in which care was initiated. We estimate a birthweight production function using 1993 data from the US state of Texas. The results underscore the importance of correcting for these estimation problems. Specifically, a model that does not control for sample selection and endogeneity overestimates the benefit of an additional visit for women who have relatively few visits. This overestimation may indicate 'positive fetal selection', i.e., women who did not abort may have healthier babies. Also, a model that does not control for self-selection and endogeneity predicts that beyond 17 visits an additional visit lowers birthweight, whereas a model that corrects for these estimation problems predicts a positive effect for additional visits. This result reflects mothers with less healthy fetuses making more prenatal care visits, known as 'adverse selection' in prenatal care. Copyright 2003 John Wiley & Sons, Ltd.
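The following simulation (synthetic data; not the Discrete Factor Method used in the paper) illustrates the adverse-selection mechanism described above: when an unobserved health factor drives both the number of prenatal visits and birthweight, a naive regression of birthweight on visits is badly biased:

```python
# Simulation of endogeneity bias in a birthweight regression (illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 10_000
health = rng.normal(size=n)                      # unobserved fetal/maternal health factor
visits = 10 - 2 * health + rng.normal(size=n)    # less healthy pregnancies get more visits
bw = 3200 + 15 * visits + 150 * health + rng.normal(scale=100, size=n)  # true effect: +15 g/visit

naive = sm.OLS(bw, sm.add_constant(visits)).fit()                          # omits health, biased
full = sm.OLS(bw, sm.add_constant(np.column_stack([visits, health]))).fit()

print("naive estimate per visit:", round(naive.params[1], 1))    # biased downward, can turn negative
print("estimate controlling for health:", round(full.params[1], 1))
```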
Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan; Kim, Hae-Young
2014-03-01
This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique, by comparing linear measurements obtained from PUT models and conventional plaster models. Ten plaster models were duplicated from a selected standard master model using conventional impressions, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated along the x-, y-, and z-axes using a non-contact white light scanner. Accuracy was assessed using the mean differences between the two sets of measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. The mean differences between plaster and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96 when comparing plaster and PUT models. The Bland-Altman plots showed good agreement. The accuracy and precision of PUT dental models were acceptable for evaluating the performance of the oral scanner and subtractive RP technology. Given recent improvements in block materials and computerized numeric control milling machines, the subtractive RP method may be a good choice for producing dental arch models.
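For readers unfamiliar with the agreement statistics used here, the following sketch (hypothetical paired measurements in millimetres, not the study's data) computes the mean difference and the Bland-Altman 95% limits of agreement for two measurement methods:

```python
# Sketch: mean difference and Bland-Altman 95% limits of agreement for paired measurements.
import numpy as np

rng = np.random.default_rng(4)
plaster = rng.normal(35.0, 3.0, 60)            # hypothetical plaster-model measurements (mm)
put = plaster + rng.normal(0.2, 0.15, 60)      # hypothetical PUT-model measurements (mm)

diff = put - plaster
mean_diff = diff.mean()                        # accuracy (systematic bias between methods)
sd = diff.std(ddof=1)
loa = (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)  # 95% limits of agreement

print(f"mean difference: {mean_diff:.2f} mm, 95% LoA: {loa[0]:.2f} to {loa[1]:.2f} mm")
```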
Capra, Hervé; Plichard, Laura; Bergé, Julien; Pella, Hervé; Ovidio, Michaël; McNeil, Eric; Lamouroux, Nicolas
2017-02-01
Modeling individual fish habitat selection in highly variable environments such as hydropeaking rivers is required to guide efficient management decisions. We analyzed fish microhabitat selection under the heterogeneous hydraulic and thermal conditions (modeled in two dimensions) of a reach of the large, hydropeaking Rhône River, locally warmed by the cooling system of a nuclear power plant. We used modern fixed acoustic telemetry to survey 18 individual fish (five barbels, six catfish, seven chubs), each signaling its position every 3 s over a three-month period. Fish habitat selection depended on combinations of current microhabitat hydraulics (e.g., velocity, depth), past microhabitat hydraulics (e.g., dewatering risk or maximum velocities during the past 15 days), and, to a lesser extent, substrate and temperature. Mixed-effects habitat selection models indicated that individual effects were often stronger than species-specific effects. In the Rhône, individual fish appear to memorize spatial and temporal environmental changes and to adopt a "least constraining" habitat selection. Avoiding fast-flowing midstream habitats, fish generally live along the banks in areas where the dewatering risk is high; when discharge decreases, however, they select higher velocities while avoiding both dewatering areas and very fast-flowing midstream habitats. Although consistent with available knowledge on static fish habitat selection, our quantitative results demonstrate temporal variation in habitat selection depending on individual behavior and environmental history. The generality of these findings could be tested further using comparative experiments in different environmental configurations. Copyright © 2016 Elsevier B.V. All rights reserved.
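The following sketch is not the authors' mixed-effects analysis; it merely illustrates, on synthetic use-availability data and with scikit-learn's logistic regression, how habitat selection coefficients can be estimated separately for each individual and how strongly they can vary between individuals:

```python
# Sketch: per-individual use-availability logistic regressions on synthetic data,
# illustrating individual variation in habitat selection coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
coefs = {}
for fish in range(6):
    beta_velocity = rng.normal(-1.0, 0.8)        # individual-specific velocity preference
    beta_depth = rng.normal(0.5, 0.3)            # individual-specific depth preference
    velocity = rng.uniform(0, 2, 500)            # m/s at used and available points
    depth = rng.uniform(0.2, 3, 500)             # m at used and available points
    logit = beta_velocity * velocity + beta_depth * depth
    used = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # 1 = used, 0 = available
    X = np.column_stack([velocity, depth])
    coefs[fish] = LogisticRegression().fit(X, used).coef_[0]

for fish, c in coefs.items():
    print(f"fish {fish}: velocity {c[0]:+.2f}, depth {c[1]:+.2f}")
```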