ERIC Educational Resources Information Center
Van Zalk, Maarten Herman Walter; Kerr, Margaret; Branje, Susan J. T.; Stattin, Hakan; Meeus, Wim H. J.
2010-01-01
The authors of this study tested a selection-influence-de-selection model of depression. This model explains friendship influence processes (i.e., friends' depressive symptoms increase adolescents' depressive symptoms) while controlling for two processes: friendship selection (i.e., selection of friends with similar levels of depressive symptoms)…
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
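The neutral baseline against which this excess of segregating sites is measured can be made concrete with a small sketch, assuming NumPy and hypothetical values of the sample size n and the scaled mutation rate theta: Watterson's formula gives the neutral expectation E[S] = theta * sum_{i=1}^{n-1} 1/i, and a toy coalescent simulation without selection reproduces it.

```python
import numpy as np

def expected_segregating_sites(n, theta):
    """Watterson's neutral expectation E[S] = theta * sum_{i=1}^{n-1} 1/i."""
    return theta * sum(1.0 / i for i in range(1, n))

def simulate_segregating_sites(n, theta, rng):
    """Segregating sites on one neutral coalescent genealogy of n genes.

    While k lineages remain, the waiting time (in units of 2N generations) is
    exponential with rate k*(k-1)/2, and neutral mutations fall on the k
    lineages as a Poisson process with rate theta/2 per lineage.
    """
    s, k = 0, n
    while k > 1:
        t = rng.exponential(2.0 / (k * (k - 1)))   # time spent with k lineages
        s += rng.poisson(theta / 2.0 * k * t)      # mutations on total length k*t
        k -= 1
    return s

rng = np.random.default_rng(0)
n, theta = 10, 5.0                                 # hypothetical sample size and 4*N*mu
sims = [simulate_segregating_sites(n, theta, rng) for _ in range(20000)]
print("analytic E[S]:", expected_segregating_sites(n, theta))
print("simulated mean:", np.mean(sims))
```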
IT vendor selection model by using structural equation model & analytical hierarchy process
NASA Astrophysics Data System (ADS)
Maitra, Sarit; Dominic, P. D. D.
2012-11-01
Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can undermine an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers in making the right decisions and selecting the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied to a real-life case study to assess its effectiveness. In addition, a what-if analysis technique will be used for model validation.
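A minimal sketch of the AHP step described above, assuming NumPy and a hypothetical pairwise-comparison matrix for three vendor-selection criteria: the priority weights are taken from the principal eigenvector of the matrix, and a consistency ratio checks the coherence of the judgments.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix for three vendor-selection criteria
# (cost, quality, delivery) on Saaty's 1-9 scale; A[i, j] = importance of i over j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalised priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]              # Saaty's random index
print("weights:", weights.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```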
Multicriteria framework for selecting a process modelling language
NASA Astrophysics Data System (ADS)
Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel
2016-01-01
The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
Process for selecting engineering tools : applied to selecting a SysML tool.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Spain, Mark J.; Post, Debra S.; Taylor, Jeffrey L.
2011-02-01
Process for Selecting Engineering Tools outlines the process and tools used to select a SysML (Systems Modeling Language) tool. The process is general in nature and users could use the process to select most engineering tools and software applications.
NASA Astrophysics Data System (ADS)
Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen
2018-01-01
Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by fusing valued information selectively from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we build a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model based on each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine outputs of the inside-layer SEN sub-models. Then, the outside-layer SEN is constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing the selective information fusion process based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.
Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan
2015-11-01
Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
Models for Selecting Chief State School Officers. Policy Memo Series, No. 1.
ERIC Educational Resources Information Center
Sanchez, Karen L. Van Til; Hall, Gayle C.
The process of selecting a chief state school officer (CSSO) can be a significant means of allocating policymaking power in state educational governance. This paper examines the role of the chief state school officer and explains how that role is influenced by the selection process. Four selection models are described, along with the advantages…
Mental health courts and their selection processes: modeling variation for consistency.
Wolff, Nancy; Fabrikant, Nicole; Belenko, Steven
2011-10-01
Admission into mental health courts is based on a complicated and often variable decision-making process that involves multiple parties representing different expertise and interests. To the extent that eligibility criteria of mental health courts are more suggestive than deterministic, selection bias can be expected. Very little research has focused on the selection processes underpinning problem-solving courts even though such processes may dominate the performance of these interventions. This article describes a qualitative study designed to deconstruct the selection and admission processes of mental health courts. In this article, we describe a multi-stage, complex process for screening and admitting clients into mental health courts. The selection filtering model that is described has three eligibility screening stages: initial, assessment, and evaluation. The results of this study suggest that clients selected by mental health courts are shaped by the formal and informal selection criteria, as well as by the local treatment system.
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this purpose remains an open research question. In this paper we propose a methodological development for model selection which addresses both explanatory variable and adequate model selection issues. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures.
Neural Underpinnings of Decision Strategy Selection: A Review and a Theoretical Model.
Wichary, Szymon; Smolen, Tomasz
2016-01-01
In multi-attribute choice, decision makers use decision strategies to arrive at the final choice. What are the neural mechanisms underlying decision strategy selection? The first goal of this paper is to provide a literature review on the neural underpinnings and cognitive models of decision strategy selection and thus set the stage for a neurocognitive model of this process. The second goal is to outline such a unifying, mechanistic model that can explain the impact of noncognitive factors (e.g., affect, stress) on strategy selection. To this end, we review the evidence for the factors influencing strategy selection, the neural basis of strategy use and the cognitive models of this process. We also present the Bottom-Up Model of Strategy Selection (BUMSS). The model assumes that the use of the rational Weighted Additive strategy and the boundedly rational heuristic Take The Best can be explained by one unifying, neurophysiologically plausible mechanism, based on the interaction of the frontoparietal network, orbitofrontal cortex, anterior cingulate cortex and the brainstem nucleus locus coeruleus. According to BUMSS, there are three processes that form the bottom-up mechanism of decision strategy selection and lead to the final choice: (1) cue weight computation, (2) gain modulation, and (3) weighted additive evaluation of alternatives. We discuss how these processes might be implemented in the brain, and how this knowledge allows us to formulate novel predictions linking strategy use and neural signals.
A Process Model of Principal Selection.
ERIC Educational Resources Information Center
Flanigan, J. L.; And Others
A process model to assist school district superintendents in the selection of principals is presented in this paper. Components of the process are described, which include developing an action plan, formulating an explicit job description, advertising, assessing candidates' philosophy, conducting interview analyses, evaluating response to stress,…
ERIC Educational Resources Information Center
Eignor, Daniel R.; Douglass, James B.
This paper attempts to provide some initial information about the use of a variety of item response theory (IRT) models in the item selection process; its purpose is to compare the information curves derived from the selection of items characterized by several different IRT models and their associated parameter estimation programs. These…
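A minimal sketch of selecting items by their information curves, assuming NumPy, a two-parameter logistic (2PL) model and a hypothetical item bank: the Fisher information of each item at a target ability level is I(theta) = a^2 * P(theta) * (1 - P(theta)), and items are ranked by that value.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b)
items = [(0.8, -1.0), (1.5, 0.0), (2.0, 0.5), (1.2, 1.5)]
theta_target = 0.5                      # ability level where information matters most

info = [item_information_2pl(theta_target, a, b) for a, b in items]
ranked = sorted(range(len(items)), key=lambda i: -info[i])
print("items ranked by information at theta=0.5:", ranked)
```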
Second Generation Crop Yield Models Review
NASA Technical Reports Server (NTRS)
Hodges, T. (Principal Investigator)
1982-01-01
Second generation yield models, including crop growth simulation models and plant process models, may be suitable for large area crop yield forecasting in the yield model development project. Subjective and objective criteria for model selection are defined and models which might be selected are reviewed. Models may be selected to provide submodels as input to other models; for further development and testing; or for immediate testing as forecasting tools. A plant process model may range in complexity from several dozen submodels simulating (1) energy, carbohydrates, and minerals; (2) change in biomass of various organs; and (3) initiation and development of plant organs, to a few submodels simulating key physiological processes. The most complex models cannot be used directly in large area forecasting but may provide submodels which can be simplified for inclusion into simpler plant process models. Both published and unpublished models which may be used for development or testing are reviewed. Several other models, currently under development, may become available at a later date.
Sutton, Steven C; Hu, Mingxiu
2006-05-05
Many mathematical models have been proposed for establishing an in vitro/in vivo correlation (IVIVC). The traditional IVIVC model building process consists of 5 steps: deconvolution, model fitting, convolution, prediction error evaluation, and cross-validation. This is a time-consuming process and typically a few models at most are tested for any given data set. The objectives of this work were to (1) propose a statistical tool to screen models for further development of an IVIVC, (2) evaluate the performance of each model under different circumstances, and (3) investigate the effectiveness of common statistical model selection criteria for choosing IVIVC models. A computer program was developed to explore which model(s) would be most likely to work well with a random variation from the original formulation. The process used Monte Carlo simulation techniques to build IVIVC models. Data-based model selection criteria (Akaike Information Criterion [AIC], R2) and the probability of passing the Food and Drug Administration "prediction error" requirement were calculated. To illustrate this approach, several real data sets representing a broad range of release profiles are used to demonstrate the advantages of this automated process over the traditional approach. The Hixson-Crowell and Weibull models were often preferred over the linear model. When evaluating whether a Level A IVIVC model was possible, the model selection criterion AIC generally selected the best model. We believe that the approach we proposed may be a rapid tool to determine which IVIVC model (if any) is the most applicable.
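A minimal sketch of the model-screening idea, assuming SciPy and a hypothetical dissolution profile: simplified linear, Hixson-Crowell (cube-root) and Weibull release models are fitted to the same fraction-released data and compared by AIC, mirroring the criterion-based selection described above.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1, 2, 3, 4, 6, 8])                       # hypothetical times (h)
f = np.array([0.18, 0.33, 0.55, 0.70, 0.80, 0.91, 0.96])    # fraction released

def linear(t, k):
    return k * t

def hixson_crowell(t, k):                    # cube-root law, F = 1 - (1 - k*t)^3
    return 1.0 - np.clip(1.0 - k * t, 0.0, None) ** 3

def weibull(t, td, beta):
    return 1.0 - np.exp(-(t / td) ** beta)

def aic(y, yhat, n_par):                     # AIC from the residual sum of squares
    n, rss = len(y), np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_par

for name, fn, p0 in [("linear", linear, [0.1]),
                     ("Hixson-Crowell", hixson_crowell, [0.1]),
                     ("Weibull", weibull, [2.0, 1.0])]:
    popt, _ = curve_fit(fn, t, f, p0=p0, maxfev=10000)
    print(f"{name:15s} AIC = {aic(f, fn(t, *popt), len(popt)):7.1f}")
```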
Robust model selection and the statistical classification of languages
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. L.
2012-10-01
In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we will focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we show the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, and thus to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology to estimate a model that represents the main law for each language. Our findings agree with the linguistic conjecture about the rhythm of the languages included in our dataset.
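A toy illustration of the core step, assuming NumPy and simulated order-1 Markov chains (a special case of the variable length Markov chain family): each sample's empirical transition matrix is estimated, and samples are compared by relative entropy so that the contaminated sample stands out from the majority sharing law Q.

```python
import numpy as np
from itertools import product

def transition_matrix(seq, n_symbols):
    """Empirical order-1 transition probabilities with add-one smoothing."""
    counts = np.ones((n_symbols, n_symbols))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def relative_entropy(p, q):
    """KL divergence between two row-stochastic matrices, averaged over rows."""
    return np.mean(np.sum(p * np.log(p / q), axis=1))

rng = np.random.default_rng(1)
law_q = np.array([[0.8, 0.2], [0.3, 0.7]])      # law Q (majority of samples)
law_r = np.array([[0.5, 0.5], [0.5, 0.5]])      # contaminating law

def simulate(P, n):
    x = [0]
    for _ in range(n - 1):
        x.append(rng.choice(2, p=P[x[-1]]))
    return x

samples = [simulate(law_q, 2000) for _ in range(4)] + [simulate(law_r, 2000)]
mats = [transition_matrix(s, 2) for s in samples]

# Pairwise divergences: the contaminated sample stands out from the majority.
for i, j in product(range(5), repeat=2):
    if i < j:
        print(i, j, round(relative_entropy(mats[i], mats[j]), 4))
```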
Coupling Spatiotemporal Community Assembly Processes to Changes in Microbial Metabolism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Emily B.; Crump, Alex R.; Resch, Charles T.
Community assembly processes govern shifts in species abundances in response to environmental change, yet our understanding of assembly remains largely decoupled from ecosystem function. Here, we test hypotheses regarding assembly and function across space and time using hyporheic microbial communities as a model system. We pair sampling of two habitat types through hydrologic fluctuation with null modeling and multivariate statistics. We demonstrate that dual selective pressures assimilate to generate compositional changes at distinct timescales among habitat types, resulting in contrasting associations of Betaproteobacteria and Thaumarchaeota with selection and with seasonal changes in aerobic metabolism. Our results culminate in a conceptual model in which selection from contrasting environments regulates taxon abundance and ecosystem function through time, with increases in function when oscillating selection opposes stable selective pressures. Our model is applicable within both macrobial and microbial ecology and presents an avenue for assimilating community assembly processes into predictions of ecosystem function.
NASA Technical Reports Server (NTRS)
Lien, Mei-Ching; Proctor, Robert W.
2002-01-01
The purpose of this paper was to provide insight into the nature of response selection by reviewing the literature on stimulus-response compatibility (SRC) effects and the psychological refractory period (PRP) effect individually and jointly. The empirical findings and theoretical explanations of SRC effects that have been studied within a single-task context suggest that there are two response-selection routes-automatic activation and intentional translation. In contrast, all major PRP models reviewed in this paper have treated response selection as a single processing stage. In particular, the response-selection bottleneck (RSB) model assumes that the processing of Task 1 and Task 2 comprises two separate streams and that the PRP effect is due to a bottleneck located at response selection. Yet, considerable evidence from studies of SRC in the PRP paradigm shows that the processing of the two tasks is more interactive than is suggested by the RSB model and by most other models of the PRP effect. The major implication drawn from the studies of SRC effects in the PRP context is that response activation is a distinct process from final response selection. Response activation is based on both long-term and short-term task-defined S-R associations and occurs automatically and in parallel for the two tasks. The final response selection is an intentional act required even for highly compatible and practiced tasks and is restricted to processing one task at a time. Investigations of SRC effects and response-selection variables in dual-task contexts should be conducted more systematically because they provide significant insight into the nature of response-selection mechanisms.
Dendrites Enable a Robust Mechanism for Neuronal Stimulus Selectivity.
Cazé, Romain D; Jarvis, Sarah; Foust, Amanda J; Schultz, Simon R
2017-09-01
Hearing, vision, touch: underlying all of these senses is stimulus selectivity, a robust information processing operation in which cortical neurons respond more to some stimuli than to others. Previous models assume that these neurons receive the highest weighted input from an ensemble encoding the preferred stimulus, but dendrites enable other possibilities. Nonlinear dendritic processing can produce stimulus selectivity based on the spatial distribution of synapses, even if the total preferred stimulus weight does not exceed that of nonpreferred stimuli. Using a multi-subunit nonlinear model, we demonstrate that stimulus selectivity can arise from the spatial distribution of synapses. We propose this as a general mechanism for information processing by neurons possessing dendritic trees. Moreover, we show that this implementation of stimulus selectivity increases the neuron's robustness to synaptic and dendritic failure. Importantly, our model can maintain stimulus selectivity for a larger range of loss of synapses or dendrites than an equivalent linear model. We then use a layer 2/3 biophysical neuron model to show that our implementation is consistent with two recent experimental observations: (1) one can observe a mixture of selectivities in dendrites that can differ from the somatic selectivity, and (2) hyperpolarization can broaden somatic tuning without affecting dendritic tuning. Our model predicts that an initially nonselective neuron can become selective when depolarized. In addition to motivating new experiments, the model's increased robustness to synapses and dendrites loss provides a starting point for fault-resistant neuromorphic chip development.
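A toy sketch of the multi-subunit idea with hypothetical numbers: two stimuli drive the same total synaptic weight, but only the stimulus whose synapses are clustered on one dendritic branch crosses the branch threshold, so the subunit model is selective while an equivalent linear model is not.

```python
def linear_response(weights_per_branch):
    """Point-neuron model: the soma simply sums all synaptic weights."""
    return sum(sum(w) for w in weights_per_branch)

def subunit_response(weights_per_branch, threshold=2.0):
    """Each dendritic subunit sums its inputs, applies a threshold
    nonlinearity, and the soma sums the subunit outputs."""
    return sum(max(sum(w) - threshold, 0.0) for w in weights_per_branch)

# Two stimuli activate the same total synaptic weight (4.0), but the
# preferred stimulus clusters its synapses on one branch.
preferred    = [[1.0, 1.0, 1.0, 1.0], [0.0, 0.0]]   # clustered
nonpreferred = [[1.0, 1.0, 0.0, 0.0], [1.0, 1.0]]   # dispersed

for name, stim in [("preferred", preferred), ("nonpreferred", nonpreferred)]:
    print(name, "linear:", linear_response(stim),
          "subunit:", subunit_response(stim))
# The linear responses are identical; only the subunit model is selective.
```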
PopGen Fishbowl: A Free Online Simulation Model of Microevolutionary Processes
ERIC Educational Resources Information Center
Jones, Thomas C.; Laughlin, Thomas F.
2010-01-01
Natural selection and other components of evolutionary theory are known to be particularly challenging concepts for students to understand. To help illustrate these concepts, we developed a simulation model of microevolutionary processes. The model features all the components of Hardy-Weinberg theory, with population size, selection, gene flow,…
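In the spirit of the simulation described, a minimal single-locus Wright-Fisher sketch in NumPy combining the Hardy-Weinberg components listed above (finite population size and drift, selection, and gene flow); all parameter values are hypothetical.

```python
import numpy as np

def simulate_allele_freq(p0=0.5, n=200, s=0.05, m=0.01, p_migrant=0.1,
                         generations=100, seed=0):
    """One-locus, two-allele Wright-Fisher model with selection,
    migration (gene flow) and drift (finite population size n)."""
    rng = np.random.default_rng(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        # Viability selection on genotypes AA : Aa : aa = 1+s : 1 : 1
        w_bar = p**2 * (1 + s) + 2 * p * (1 - p) + (1 - p)**2
        p_sel = (p**2 * (1 + s) + p * (1 - p)) / w_bar
        # Gene flow from a migrant pool with allele frequency p_migrant
        p_mig = (1 - m) * p_sel + m * p_migrant
        # Genetic drift: binomial sampling of 2n gametes
        p = rng.binomial(2 * n, p_mig) / (2 * n)
        trajectory.append(p)
    return trajectory

print(simulate_allele_freq()[-1])   # frequency of allele A after 100 generations
```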
Estimating animal resource selection from telemetry data using point process models
Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.
2013-01-01
To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.
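A minimal sketch of the underlying idea, assuming NumPy and simulated data: telemetry locations are treated as an inhomogeneous Poisson point process whose log-intensity is linear in a habitat covariate, and the intensity is fitted by Poisson regression on gridded counts (a common discretised approximation); the covariate name is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical gridded study area with one habitat covariate per cell (e.g. depth).
n_cells = 500
depth = rng.normal(size=n_cells)
X = np.column_stack([np.ones(n_cells), depth])
beta_true = np.array([1.0, 0.8])                     # log-intensity coefficients

# Telemetry-location counts per cell from an inhomogeneous Poisson process.
counts = rng.poisson(np.exp(X @ beta_true))

# Fit the log-linear intensity by Newton-Raphson (Poisson regression / IRLS).
beta = np.array([np.log(counts.mean()), 0.0])        # safe starting values
for _ in range(25):
    mu = np.exp(X @ beta)
    grad = X.T @ (counts - mu)                       # score
    hess = X.T @ (X * mu[:, None])                   # observed information
    beta += np.linalg.solve(hess, grad)

print("true:", beta_true, "estimated:", beta.round(3))
# A positive covariate coefficient indicates selection for cells with higher values.
```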
Attentional Selection in Object Recognition
1993-02-01
order. It also affects the choice of strategies in both the filtering and arbiter stages. The set...such processing. In Treisman's model this was hidden in the concept of the selection filter. Later computational models of attention tried to...This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for
A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.
Oztürk, Necla; Tozan, Hakan
2015-01-01
Decision making is an important procedure for every organization. The procedure is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made for haemodialysis treatment provided for chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire. A total of 45 questionnaires filled by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software that enables the use of Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
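A toy sketch of the two updating rules compared above, assuming NumPy, a well-mixed population, three hypothetical strategies with fixed fitness values (rather than a full game), and a simple inverse-fitness bias as the "unfit individuals die first" rule.

```python
import numpy as np

def moran_step(pop, fitness, rng, rule, mu=0.05):
    """One Moran update for a well-mixed population (array of strategy indices)."""
    f = fitness[pop]
    if rule == "birth-death":
        parent = rng.choice(len(pop), p=f / f.sum())   # fitness-biased birth
        dead = rng.integers(len(pop))                  # uniform death
    else:  # "death-birth": bias death toward unfit individuals (inverse fitness)
        inv = 1.0 / f
        dead = rng.choice(len(pop), p=inv / inv.sum())
        parent = rng.integers(len(pop))                # uniform birth
    offspring = pop[parent]
    if rng.random() < mu:                              # mutation to a random strategy
        offspring = rng.integers(len(fitness))
    pop[dead] = offspring

rng = np.random.default_rng(3)
fitness = np.array([1.0, 1.2, 0.9])                    # three hypothetical strategies
for rule in ("birth-death", "death-birth"):
    pop = rng.integers(3, size=100)
    for _ in range(100_000):
        moran_step(pop, fitness, rng, rule)
    print(rule, np.bincount(pop, minlength=3) / len(pop))
```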
The coalescent process in models with selection and recombination.
Hudson, R R; Kaplan, N L
1988-11-01
The statistical properties of the process describing the genealogical history of a random sample of genes at a selectively neutral locus which is linked to a locus at which natural selection operates are investigated. It is found that the equations describing this process are simple modifications of the equations describing the process assuming that the two loci are completely linked. Thus, the statistical properties of the genealogical process for a random sample at a neutral locus linked to a locus with selection follow from the results obtained for the selected locus. Sequence data from the alcohol dehydrogenase (Adh) region of Drosophila melanogaster are examined and compared to predictions based on the theory. It is found that the spatial distribution of nucleotide differences between Fast and Slow alleles of Adh is very similar to the spatial distribution predicted if balancing selection operates to maintain the allozyme variation at the Adh locus. The spatial distribution of nucleotide differences between different Slow alleles of Adh do not match the predictions of this simple model very well.
POST-PROCESSING ANALYSIS FOR THC SEEPAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. SUN
This report describes the selection of water compositions for the total system performance assessment (TSPA) model of results from the thermal-hydrological-chemical (THC) seepage model documented in ''Drift-Scale THC Seepage Model'' (BSC 2004 [DIRS 169856]). The selection has been conducted in accordance with ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration'' (BSC 2004 [DIRS 171334]). This technical work plan (TWP) was prepared in accordance with AP-2.27Q, ''Planning for Science Activities''. Section 1.2.3 of the TWP describes planning information pertaining to the technical scope, content, and management of this report. The post-processing analysis for THC seepage (THC-PPA) documented in this report provides a methodology for evaluating the near-field compositions of water and gas around a typical waste emplacement drift as these relate to the chemistry of seepage, if any, into the drift. The THC-PPA inherits the conceptual basis of the THC seepage model, but is an independently developed process. The relationship between the post-processing analysis and other closely related models, together with their main functions in providing seepage chemistry information for the Total System Performance Assessment for the License Application (TSPA-LA), is illustrated in Figure 1-1. The THC-PPA provides a data selection concept and direct input to the physical and chemical environment (P&CE) report that supports the TSPA model. The purpose of the THC-PPA is further discussed in Section 1.2. The data selection methodology of the post-processing analysis (Section 6.2.1) was initially applied to results of the THC seepage model as presented in ''Drift-Scale THC Seepage Model'' (BSC 2004 [DIRS 169856]). Other outputs from the THC seepage model (DTN: LB0302DSCPTHCS.002 [DIRS 161976]) used in the P&CE (BSC 2004 [DIRS 169860], Section 6.6) were also subjected to the same initial selection. The present report serves as a full documentation of this selection and also provides additional analyses in support of the choice of waters selected for further evaluation in ''Engineered Barrier System: Physical and Chemical Environment'' (BSC 2004 [DIRS 169860], Section 6.6). The work scope for the studies presented in this report is described in the TWP (BSC 2004 [DIRS 171334]) and other documents cited above and can be used to estimate water and gas compositions near waste emplacement drifts. Results presented in this report were submitted to the Technical Data Management System (TDMS) under specific data tracking numbers (DTNs) as listed in Appendix A. The major change from previous selection of results from the THC seepage model is that the THC-PPA now considers data selection in space around the modeled waste emplacement drift, tracking the evolution of pore-water and gas-phase composition at the edge of the dryout zone around the drift. This post-processing analysis provides a scientific background for the selection of potential seepage water compositions.
Book Selection, Collection Development, and Bounded Rationality.
ERIC Educational Resources Information Center
Schwartz, Charles A.
1989-01-01
Reviews previously proposed schemes of classical rationality in book selection, describes new approaches to rational choice behavior, and presents a model of book selection based on bounded rationality in a garbage can decision process. The roles of tacit knowledge and symbolic content in the selection process are also discussed. (102 references)…
Selecting the process variables for filament winding
NASA Technical Reports Server (NTRS)
Calius, E.; Springer, G. S.
1986-01-01
A model is described which can be used to determine the appropriate values of the process variables for filament winding cylinders. The process variables which can be selected by the model include the winding speed, fiber tension, initial resin degree of cure, and the temperatures applied during winding, curing, and post-curing. The effects of these process variables on the properties of the cylinder during and after manufacture are illustrated by a numerical example.
Adaptive Modeling Procedure Selection by Data Perturbation.
Zhang, Yongli; Shen, Xiaotong
2015-10-01
Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherent in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy.
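A minimal NumPy sketch of the data-perturbation idea: noise of a chosen size is added to the response, a best-subset selection under an AIC-type criterion is rerun on each perturbed copy, and the variability of the selected subsets serves as an estimate of selection uncertainty; the data and the perturbation size are hypothetical.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, p = 100, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)   # true model uses x0, x2

def best_subset_aic(X, y):
    """Return the predictor subset minimising an AIC-type criterion."""
    best, best_aic = None, np.inf
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            Z = np.column_stack([np.ones(len(y)), X[:, subset]])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            rss = np.sum((y - Z @ beta) ** 2)
            aic = len(y) * np.log(rss / len(y)) + 2 * (k + 1)
            if aic < best_aic:
                best, best_aic = subset, aic
    return best

# Perturb the response and record how often each model is selected.
tau = 0.5 * np.std(y)                 # perturbation size (a tuning choice)
selections = {}
for _ in range(200):
    y_pert = y + rng.normal(scale=tau, size=n)
    m = best_subset_aic(X, y_pert)
    selections[m] = selections.get(m, 0) + 1
print(sorted(selections.items(), key=lambda kv: -kv[1])[:3])
```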
Information-Processing Models and Curriculum Design
ERIC Educational Resources Information Center
Calfee, Robert C.
1970-01-01
"This paper consists of three sections--(a) the relation of theoretical analyses of learning to curriculum design, (b) the role of information-processing models in analyses of learning processes, and (c) selected examples of the application of information-processing models to curriculum design problems." (Author)
On spatial mutation-selection models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de; Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu; Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
2013-11-15
We discuss the selection procedure in the framework of mutation models. We study the regulation for stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of initial Markov process by cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system including the limiting behavior is studied for two types of mutation-selection models.
Razeto-Barry, Pablo; Díaz, Javier; Vásquez, Rodrigo A
2012-06-01
The general theories of molecular evolution depend on relatively arbitrary assumptions about the relative distribution and rate of advantageous, deleterious, neutral, and nearly neutral mutations. The Fisher geometrical model (FGM) has been used to make distributions of mutations biologically interpretable. We explored an FGM-based molecular model to represent molecular evolutionary processes typically studied by nearly neutral and selection models, but in which distributions and relative rates of mutations with different selection coefficients are a consequence of biologically interpretable parameters, such as the average size of the phenotypic effect of mutations and the number of traits (complexity) of organisms. A variant of the FGM-based model that we called the static regime (SR) represents evolution as a nearly neutral process in which substitution rates are determined by a dynamic substitution process in which the population's phenotype remains around a suboptimum equilibrium fitness produced by a balance between slightly deleterious and slightly advantageous compensatory substitutions. As in previous nearly neutral models, the SR predicts a negative relationship between molecular evolutionary rate and population size; however, SR does not have the unrealistic properties of previous nearly neutral models such as the narrow window of selection strengths in which they work. In addition, the SR suggests that compensatory mutations cannot explain the high rate of fixations driven by positive selection currently found in DNA sequences, contrary to what has been previously suggested. We also developed a generalization of SR in which the optimum phenotype can change stochastically due to environmental or physiological shifts, which we called the variable regime (VR). VR models evolution as an interplay between adaptive processes and nearly neutral steady-state processes. When strong environmental fluctuations are incorporated, the process becomes a selection model in which evolutionary rate does not depend on population size, but is critically dependent on the complexity of organisms and mutation size. For SR as well as VR we found that key parameters of molecular evolution are linked by biological factors, and we showed that they cannot be fixed independently by arbitrary criteria, as has usually been assumed in previous molecular evolutionary models.
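A toy sketch of the FGM ingredients referenced above, assuming NumPy and a Gaussian fitness function: random mutations of fixed phenotypic size are drawn in an n-trait space, their selection coefficients are computed for a population at a given distance from the optimum, and the fraction of advantageous mutations falls as complexity (the number of traits) grows.

```python
import numpy as np

def selection_coefficients(n_traits, mut_size, dist_to_opt, n_mut=100_000, seed=5):
    """Selection coefficients of random mutations under a Fisher geometrical model
    with Gaussian fitness w(z) = exp(-|z|^2 / 2)."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n_traits)
    z[0] = dist_to_opt                          # current phenotype
    # Random mutation directions of fixed length mut_size
    d = rng.normal(size=(n_mut, n_traits))
    d *= mut_size / np.linalg.norm(d, axis=1, keepdims=True)
    w0 = np.exp(-np.sum(z ** 2) / 2)
    w1 = np.exp(-np.sum((z + d) ** 2, axis=1) / 2)
    return w1 / w0 - 1.0                        # s > 0: advantageous

for n_traits in (2, 10, 50):                    # organismal "complexity"
    s = selection_coefficients(n_traits, mut_size=0.3, dist_to_opt=1.0)
    print(n_traits, "P(advantageous) =", round(np.mean(s > 0), 3))
# Higher complexity reduces the fraction of advantageous mutations of a given size.
```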
ERIC Educational Resources Information Center
Abar, Caitlin C.; Maggs, Jennifer L.
2010-01-01
Research indicates that social influences impact college students' alcohol consumption; however, how selection processes may serve as an influential factor predicting alcohol use in this population has not been widely addressed. A model of influence and selection processes contributing to alcohol use across the transition to college was examined…
Fantasy-Testing-Assessment: A Proposed Model for the Investigation of Mate Selection.
ERIC Educational Resources Information Center
Nofz, Michael P.
1984-01-01
Proposes a model for mate selection which outlines three modes of interpersonal relating--fantasy, testing, and assessment (FTA). The model is viewed as a more accurate representation of mate selection processes than suggested by earlier theories, and can be used to clarify couples' understandings of their own relationships. (JAC)
NASA Technical Reports Server (NTRS)
1981-01-01
The development of a coal gasification system design and mass and energy balance simulation program for the TVA and other similar facilities is described. The materials-process-product model (MPPM) and the advanced system for process engineering (ASPEN) computer program were selected from available steady state and dynamic models. The MPPM was selected to serve as the basis for development of system level design model structure because it provided the capability for process block material and energy balance and high-level systems sizing and costing. The ASPEN simulation serves as the basis for assessing detailed component models for the system design modeling program. The ASPEN components were analyzed to identify particular process blocks and data packages (physical properties) which could be extracted and used in the system design modeling program. While ASPEN physical properties calculation routines are capable of generating physical properties required for process simulation, not all required physical property data are available, and must be user-entered.
Constraints and Approach for Selecting the Mars Surveyor '01 Landing Site
NASA Technical Reports Server (NTRS)
Golombek, M.; Bridges, N.; Gilmore, M.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.; Smith, J.; Weitz, C.
1999-01-01
There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough and defensible and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities.
Constraints, Approach and Present Status for Selecting the Mars Surveyor 2001 Landing Site
NASA Technical Reports Server (NTRS)
Golombek, M.; Anderson, F.; Bridges, N.; Briggs, G.; Gilmore, M.; Gulick, V.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.;
1999-01-01
There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough, defensible and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities.
The Modular Modeling System (MMS): User's Manual
Leavesley, G.H.; Restrepo, Pedro J.; Markstrom, S.L.; Dixon, M.; Stannard, L.G.
1996-01-01
The Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide the research and operational framework needed to support development, testing, and evaluation of physical-process algorithms and to facilitate integration of user-selected sets of algorithms into operational physical-process models. MMS uses a module library that contains modules for simulating a variety of water, energy, and biogeochemical processes. A model is created by selectively coupling the most appropriate modules from the library to create a 'suitable' model for the desired application. Where existing modules do not provide appropriate process algorithms, new modules can be developed. The MMS user's manual provides installation instructions and a detailed discussion of system concepts, module development, and model development and application using the MMS graphical user interface.
Fermentation process tracking through enhanced spectral calibration modeling.
Triadaphillou, Sophia; Martin, Elaine; Montague, Gary; Norden, Alison; Jeffkins, Paul; Stimpson, Sarah
2007-06-15
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling based on a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. However, because the selected windows are not unique when the algorithm is executed repeatedly, multiple models are constructed and these are then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.
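A minimal sketch of the SWS-plus-stacking idea, assuming scikit-learn and simulated spectra; the window width, the number of windows and the inverse-error weighting used to combine the window-specific PLS models are simplifications of the procedure described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_samples, n_wl = 80, 200                       # spectra with 200 wavelengths
X = rng.normal(size=(n_samples, n_wl))
y = X[:, 50:60].sum(axis=1) + 0.5 * rng.normal(size=n_samples)  # informative band

def random_window(width=20):
    start = rng.integers(0, n_wl - width)
    return slice(start, start + width)

# Build several window-specific PLS models and weight them by CV performance.
models, weights = [], []
for _ in range(15):
    win = random_window()
    pls = PLSRegression(n_components=3)
    score = cross_val_score(pls, X[:, win], y, cv=5,
                            scoring="neg_mean_squared_error").mean()
    pls.fit(X[:, win], y)
    models.append((win, pls))
    weights.append(1.0 / (-score))              # lower error -> higher weight
weights = np.array(weights) / np.sum(weights)

# Stacked prediction: weighted combination of the window models.
def predict(X_new):
    preds = np.column_stack([m.predict(X_new[:, w]).ravel() for w, m in models])
    return preds @ weights

print("stacked RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)).round(3))
```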
The source of dual-task limitations: Serial or parallel processing of multiple response selections?
Marois, René
2014-01-01
Although it is generally recognized that the concurrent performance of two tasks incurs costs, the sources of these dual-task costs remain controversial. The serial bottleneck model suggests that serial postponement of task performance in dual-task conditions results from a central stage of response selection that can only process one task at a time. Cognitive-control models, by contrast, propose that multiple response selections can proceed in parallel, but that serial processing of task performance is predominantly adopted because its processing efficiency is higher than that of parallel processing. In the present study, we empirically tested this proposition by examining whether parallel processing would occur when it was more efficient and financially rewarded. The results indicated that even when parallel processing was more efficient and was incentivized by financial reward, participants still failed to process tasks in parallel. We conclude that central information processing is limited by a serial bottleneck. PMID:23864266
Journal selection decisions: a biomedical library operations research model. I. The framework.
Kraft, D H; Polacsek, R A; Soergel, L; Burns, K; Klair, A
1976-01-01
The problem of deciding which journal titles to select for acquisition in a biomedical library is modeled. The approach taken is based on cost/benefit ratios. Measures of journal worth, methods of data collection, and journal cost data are considered. The emphasis is on the development of a practical process for selecting journal titles, based on the objectivity and rationality of the model, and on the collection of the appropriate data and library statistics in a reasonable manner. The implications of this process for an overall management information system (MIS) for biomedical serials handling are discussed. PMID:820391
Automating an integrated spatial data-mining model for landfill site selection
NASA Astrophysics Data System (ADS)
Abujayyab, Sohaib K. M.; Ahamad, Mohd Sanusi S.; Yahya, Ahmad Shukri; Ahmad, Siti Zubaidah; Aziz, Hamidi Abdul
2017-10-01
An integrated programming environment represents a robust approach to building a valid model for landfill site selection. One of the main challenges in the integrated model is the complicated processing and modelling due to the programming stages and several limitations. An automation process helps avoid these limitations and improves the interoperability between integrated programming environments. This work targets the automation of a spatial data-mining model for landfill site selection by integrating a spatial programming environment (Python-ArcGIS) with a non-spatial environment (MATLAB). The model was constructed using neural networks and is divided into nine stages distributed between MATLAB and Python-ArcGIS. A case study was taken from the north part of Peninsular Malaysia. Twenty-two criteria were selected for use as input data and to build the training and testing datasets. The outcomes show a high accuracy of 98.2% on the testing dataset using 10-fold cross-validation. The automated spatial data-mining model provides a solid platform for decision makers to perform landfill site selection and planning operations on a regional scale.
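A minimal sketch of the neural-network stage, assuming scikit-learn in place of the MATLAB environment and simulated placeholder data for the criterion table: a small network is trained on the criteria and assessed with 10-fold cross-validation, mirroring the evaluation reported above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_sites, n_criteria = 1000, 22                 # placeholder for GIS-derived criteria
X = rng.normal(size=(n_sites, n_criteria))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_sites) > 0).astype(int)  # suitable or not

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                    random_state=0))
scores = cross_val_score(model, X, y, cv=10)   # 10-fold cross-validation accuracy
print("mean CV accuracy:", scores.mean().round(3))
```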
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) that largely results from initial stimulus recognition. We present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theories and their simulatory instantiations (generally, neural network models). Then, with a demonstrative application to a large new data set, we show how the empirical evidence accumulation approach provides parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development.
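A toy racing-accumulator sketch of lexical selection, assuming NumPy: each lexical candidate accumulates noisy evidence at a rate proportional to its activation, and the first accumulator to reach threshold determines the selected word and the response time; activations, threshold and step size are hypothetical.

```python
import numpy as np

def race_trial(activations, threshold=1.0, dt=0.01, noise_sd=0.1, rng=None):
    """One racing-accumulator trial: returns (selected index, response time)."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.zeros(len(activations))
    t = 0.0
    while x.max() < threshold:
        x += np.asarray(activations) * dt + noise_sd * np.sqrt(dt) * rng.normal(size=len(x))
        x = np.maximum(x, 0.0)                   # accumulators cannot go negative
        t += dt
    return int(np.argmax(x)), t

rng = np.random.default_rng(8)
activations = [0.9, 0.6, 0.3]                    # target word and two competitors
trials = [race_trial(activations, rng=rng) for _ in range(2000)]
choices = np.bincount([c for c, _ in trials], minlength=3) / len(trials)
mean_rt = np.mean([t for _, t in trials])
print("choice probabilities:", choices.round(3), "mean RT:", round(mean_rt, 3))
```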
Universal Darwinism As a Process of Bayesian Inference
Campbell, John O.
2016-01-01
Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an “experiment” in the external world environment, and the results of that “experiment” or the “surprise” entailed by predicted and actual outcomes of the “experiment.” Minimization of free energy implies that the implicit measure of “surprise” experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature. PMID:27375438
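One commonly cited form of the equivalence invoked here is that a single generation of selection (the discrete replicator update) has the same algebraic form as Bayes' rule, with relative fitness playing the role of the likelihood. The toy numbers below are arbitrary and only illustrate that the two updates coincide.

```python
# Replicator update  p_i' = p_i * w_i / sum_j p_j * w_j  versus Bayes' rule
# posterior = prior * likelihood / evidence: with fitness cast as likelihood,
# the two computations are identical. Frequencies and fitnesses are made up.
import numpy as np

p = np.array([0.5, 0.3, 0.2])    # "prior": current type frequencies
w = np.array([1.1, 1.0, 0.8])    # "likelihood": relative fitnesses

replicator = p * w / np.sum(p * w)    # one generation of selection
bayes      = p * w / np.sum(p * w)    # one Bayesian update with the same numbers
assert np.allclose(replicator, bayes)
print(replicator.round(3))            # -> [0.545 0.297 0.158]
```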
Beyond perceptual load and dilution: a review of the role of working memory in selective attention
de Fockert, Jan W.
2013-01-01
The perceptual load and dilution models differ fundamentally in terms of the proposed mechanism underlying variation in distractibility during different perceptual conditions. However, both models predict that distracting information can be processed beyond perceptual processing under certain conditions, a prediction that is well-supported by the literature. Load theory proposes that in such cases, where perceptual task aspects do not allow for sufficient attentional selectivity, the maintenance of task-relevant processing depends on cognitive control mechanisms, including working memory. The key prediction is that working memory plays a role in keeping clear processing priorities in the face of potential distraction, and the evidence reviewed and evaluated in a meta-analysis here supports this claim, by showing that the processing of distracting information tends to be enhanced when load on a concurrent task of working memory is high. Low working memory capacity is similarly associated with greater distractor processing in selective attention, again suggesting that the unavailability of working memory during selective attention leads to an increase in distractibility. Together, these findings suggest that selective attention against distractors that are processed beyond perception depends on the availability of working memory. Possible mechanisms for the effects of working memory on selective attention are discussed. PMID:23734139
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors which exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance detection (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MatLab software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin data set was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
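The automatic-relevance idea behind the GPRARD models can be illustrated with an anisotropic kernel that learns one length-scale per descriptor: descriptors that end up with very large length-scales contribute little to the prediction. The sketch uses scikit-learn rather than the authors' MATLAB code, and the descriptor matrix is synthetic.

```python
# Sketch of automatic relevance determination with a Gaussian process: an
# anisotropic RBF kernel fits one length-scale per descriptor; small fitted
# length-scales flag influential descriptors. Synthetic data, not the skin
# permeability dataset used in the study.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))          # e.g. logP, MW, melting point, H-bond donors
y = 1.5 * X[:, 0] - 0.7 * X[:, 2] + 0.1 * rng.normal(size=80)   # only two matter

kernel = RBF(length_scale=np.ones(4)) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gpr.kernel_.k1.length_scale)    # small values mark the relevant descriptors
```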
Application of simulation models for the optimization of business processes
NASA Astrophysics Data System (ADS)
Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří
2016-06-01
The paper deals with the application of modeling and simulation tools to the optimization of business processes, especially the optimization of signal flow in a security company. Simul8 software was selected as the modeling tool; it supports process modeling based on discrete event simulation and enables the creation of a visual model of production and distribution processes.
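Simul8 is a commercial package; the same kind of discrete event model can be sketched with the open-source SimPy library. The queue below (alarms arriving at a security company and being handled by a pool of operators) is a hypothetical illustration with made-up rates, not the process model built in the paper.

```python
# Minimal discrete-event sketch in SimPy: alarms arrive at random, two
# operators handle them, and waiting times are recorded. All rates are
# invented for illustration.
import random
import simpy

def alarm(env, operators, waits):
    arrival = env.now
    with operators.request() as req:
        yield req                                        # wait for a free operator
        waits.append(env.now - arrival)
        yield env.timeout(random.expovariate(1 / 4.0))   # ~4 min handling time

def source(env, operators, waits):
    while True:
        yield env.timeout(random.expovariate(1 / 3.0))   # ~3 min between alarms
        env.process(alarm(env, operators, waits))

random.seed(0)
env = simpy.Environment()
operators = simpy.Resource(env, capacity=2)
waits = []
env.process(source(env, operators, waits))
env.run(until=8 * 60)                                    # one 8-hour shift (minutes)
print(f"mean wait: {sum(waits) / len(waits):.1f} min over {len(waits)} alarms")
```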
Network models of frequency modulated sweep detection.
Skorheim, Steven; Razak, Khaleel; Bazhenov, Maxim
2014-01-01
Frequency modulated (FM) sweeps are common in species-specific vocalizations, including human speech. Auditory neurons selective for the direction and rate of frequency change in FM sweeps are present across species, but the synaptic mechanisms underlying such selectivity are only beginning to be understood. Even less is known about mechanisms of experience-dependent changes in FM sweep selectivity. We present three network models of synaptic mechanisms of FM sweep direction and rate selectivity that explain experimental data: (1) The 'facilitation' model contains frequency selective cells operating as coincidence detectors, summing up multiple excitatory inputs with different time delays. (2) The 'duration tuned' model depends on interactions between delayed excitation and early inhibition. The strength of delayed excitation determines the preferred duration. Inhibitory rebound can reinforce the delayed excitation. (3) The 'inhibitory sideband' model uses frequency selective inputs to a network of excitatory and inhibitory cells. The strength and asymmetry of these connections result in neurons responsive to sweeps in a single direction of sufficient sweep rate. Variations of these properties can explain the diversity of rate-dependent direction selectivity seen across species. We show that the inhibitory sideband model can be trained using spike timing dependent plasticity (STDP) to develop direction selectivity from a non-selective network. These models provide a means to compare the proposed synaptic and spectrotemporal mechanisms of FM sweep processing and can be utilized to explore cellular mechanisms underlying experience- or training-dependent changes in spectrotemporal processing across animal models. Given the analogy between FM sweeps and visual motion, these models can serve a broader function in studying stimulus movement across sensory epithelia.
77 FR 61307 - New Postal Product
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-09
...: Transfer Mail Processing Cost Model for Machinable and Irregular Standard Mail Parcels to the Mail Processing Cost Model for Parcel Select/Parcel Return Service. The Postal Service proposes to move the machinable and irregular cost worksheets contained in the Standard Mail parcel mail processing cost model to...
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
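The key quantities named here, the nonhomogeneous Poisson log-likelihood for a rate template and the generalized likelihood ratio against a simpler template, can be written down compactly. The sketch below uses a piecewise-constant template and synthetic event times; it is not the multiscale dynamic-programming algorithm of the paper.

```python
# Log-likelihood of a nonhomogeneous Poisson process with rate lambda(t):
#   sum_i log lambda(t_i) - integral_0^T lambda(t) dt
# evaluated for a piecewise-constant template, then compared to a flat rate
# via the generalized likelihood ratio. Events and templates are synthetic.
import numpy as np

def nhpp_loglik(events, edges, rates):
    """rates[k] applies on the interval [edges[k], edges[k+1])."""
    idx = np.searchsorted(edges, events, side="right") - 1
    return np.sum(np.log(rates[idx])) - np.sum(rates * np.diff(edges))

T = 10.0
rng = np.random.default_rng(2)
events = np.sort(np.concatenate([rng.uniform(0, 5, 20),      # sparse first half
                                 rng.uniform(5, 10, 60)]))    # dense second half

flat_ll = nhpp_loglik(events, np.array([0.0, T]), np.array([len(events) / T]))
edges = np.array([0.0, 5.0, 10.0])
counts = np.histogram(events, bins=edges)[0]
step_ll = nhpp_loglik(events, edges, counts / np.diff(edges))

glr = 2 * (step_ll - flat_ll)      # generalized likelihood ratio statistic
print(f"GLR = {glr:.1f}")          # large values favour the two-bin template
```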
NASA Astrophysics Data System (ADS)
Bascetin, A.
2007-04-01
The selection of an optimal reclamation method is one of the most important factors in open-pit design and production planning. It also affects economic considerations in open-pit design as a function of plan location and depth. Furthermore, the selection is a complex multi-person, multi-criteria decision problem. The group decision-making process can be improved by applying a systematic and logical approach to assess the priorities based on the inputs of several specialists from different functional areas within the mine company. The analytical hierarchy process (AHP) can be very useful in involving several decision makers with different conflicting objectives to arrive at a consensus decision. In this paper, the selection of an optimal reclamation method using an AHP-based model was evaluated for coal production in an open-pit coal mine located in the Seyitomer region in Turkey. The use of the proposed model indicates that it can be applied to improve group decision making in selecting a reclamation method that satisfies optimal specifications. Also, it is found that the decision process is systematic and that using the proposed model can reduce the time taken to select an optimal method.
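The core AHP computation such studies rely on, deriving criterion weights from a reciprocal pairwise-comparison matrix via its principal eigenvector and checking consistency, is sketched below. The judgements in the matrix are illustrative only, not those elicited for the Seyitomer case study.

```python
# AHP priority vector from a pairwise-comparison matrix, plus the consistency
# ratio. The 3x3 judgement matrix below is an arbitrary example.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])           # reciprocal pairwise judgements

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                               # criterion weights (priority vector)

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)          # consistency index
cr = ci / 0.58                             # Saaty's random index for n = 3
print(w.round(3), f"CR = {cr:.3f}")        # CR < 0.1 is conventionally acceptable
```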
A PROCESS FOR SELECTING INDICATORS FOR MONITORING CONDITIONS OF RANGELAND HEALTH
This paper reports on a process for selecting a suite of indicators that, in combination, can be useful in assessing the ecological conditions of rangelands. Conceptual models that depict the structural and functional properties of ecological processes were used to show the linka...
Moenickes, Sylvia; Höltge, Sibylla; Kreuzig, Robert; Richter, Otto
2011-12-01
Fate monitoring data on anaerobic transformation of the benzimidazole anthelmintics flubendazole (FLU) and fenbendazole (FEN) in liquid pig manure and aerobic transformation and sorption in soil and manured soil under laboratory conditions were used for corresponding fate modeling. Processes considered were reversible and irreversible sequestration, mineralization, and metabolization, from which a set of up to 50 different models, both nested and concurrent, was assembled. Five selection criteria served for model selection after parameter fitting: the coefficient of determination, modeling efficiency, a likelihood ratio test, an information criterion, and a determinability measure. From the set of models selected, processes were classified as essential or sufficient. This strategy to identify process dominance was corroborated through application to data from analogous experiments for sulfadiazine and a comparison with established fate models for this substance. For both FLU and FEN, model selection performed well, including indicating weak data support where observed. For FLU, reversible and irreversible sequestration in a nonextractable fraction was determined. In particular, both the extractable and the nonextractable fraction were equally sufficient sources for irreversible sequestration. For FEN, reversible formation of the extractable sulfoxide metabolite and reversible sequestration of both the parent and the metabolite were generally dominant. Similar to FLU, irreversible sequestration in the nonextractable fraction was determined, for which both the extractable and the nonextractable fraction were equally sufficient sources. Formation of the sulfone metabolite was determined as irreversible, originating from the first metabolite. Copyright © 2011 Elsevier B.V. All rights reserved.
Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations
NASA Astrophysics Data System (ADS)
Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad
2012-11-01
This research paper carries out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on developing SWOT matrices on the basis of the weighted scoring model's findings in order to select an appropriate requirements negotiation model. Finally, the results are simulated with the help of statistical pie charts. On the basis of the simulated results for prevalent negotiation models and approaches, a unified approach for requirements negotiation and stakeholder collaboration is proposed, in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI and opportunity analysis.
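The first tier described above, a weighted scoring model, reduces to multiplying each candidate negotiation model's criterion scores by the criterion weights and ranking the totals. The criteria, weights, candidate names and scores in the sketch are placeholders, not values from the paper.

```python
# Weighted scoring: score each candidate requirements-negotiation model on
# weighted criteria and rank the totals. All names and numbers are placeholders.
weights = {"stakeholder coverage": 0.4, "tool support": 0.25,
           "effort": 0.2, "traceability": 0.15}
scores = {                                  # 1 (poor) .. 5 (excellent)
    "Negotiation model A": {"stakeholder coverage": 5, "tool support": 4,
                            "effort": 3, "traceability": 4},
    "Negotiation model B": {"stakeholder coverage": 3, "tool support": 3,
                            "effort": 4, "traceability": 5},
}
totals = {name: sum(weights[c] * s for c, s in row.items())
          for name, row in scores.items()}
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{total:.2f}  {name}")
```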
A Gambler's Model of Natural Selection.
ERIC Educational Resources Information Center
Nolan, Michael J.; Ostrovsky, David S.
1996-01-01
Presents an activity that highlights the mechanism and power of natural selection. Allows students to think in terms of modeling a biological process and instills an appreciation for a mathematical approach to biological problems. (JRH)
Smith, Philip L; Sewell, David K
2013-07-01
We generalize the integrated system model of Smith and Ratcliff (2009) to obtain a new theory of attentional selection in brief, multielement visual displays. The theory proposes that attentional selection occurs via competitive interactions among detectors that signal the presence of task-relevant features at particular display locations. The outcome of the competition, together with attention, determines which stimuli are selected into visual short-term memory (VSTM). Decisions about the contents of VSTM are made by a diffusion-process decision stage. The selection process is modeled by coupled systems of shunting equations, which perform gated where-on-what pathway VSTM selection. The theory provides a computational account of key findings from attention tasks with near-threshold stimuli. These are (a) the success of the MAX model of visual search and spatial cuing, (b) the distractor homogeneity effect, (c) the double-target detection deficit, (d) redundancy costs in the post-stimulus probe task, (e) the joint item and information capacity limits of VSTM, and (f) the object-based nature of attentional selection. We argue that these phenomena are all manifestations of an underlying competitive VSTM selection process, which arise as a natural consequence of our theory. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Spatial Selection and Local Adaptation Jointly Shape Life-History Evolution during Range Expansion.
Van Petegem, Katrien H P; Boeye, Jeroen; Stoks, Robby; Bonte, Dries
2016-11-01
In the context of climate change and species invasions, range shifts increasingly gain attention because the rates at which they occur in the Anthropocene induce rapid changes in biological assemblages. During range shifts, species experience multiple selection pressures. For poleward expansions in particular, it is difficult to interpret observed evolutionary dynamics because of the joint action of evolutionary processes related to spatial selection and to adaptation toward local climatic conditions. To disentangle the effects of these two processes, we integrated stochastic modeling and data from a common garden experiment, using the spider mite Tetranychus urticae as a model species. By linking the empirical data with those derived from a highly parameterized individual-based model, we infer that both spatial selection and local adaptation contributed to the observed latitudinal life-history divergence. Spatial selection best described variation in dispersal behavior, while variation in development was best explained by adaptation to the local climate. Divergence in life-history traits in species shifting poleward could consequently be jointly determined by contemporary evolutionary dynamics resulting from adaptation to the environmental gradient and from spatial selection. The integration of modeling with common garden experiments provides a powerful tool to study the contribution of these evolutionary processes on life-history evolution during range expansion.
Selection of shuttle payload data processing drivers for the data system new technology study
NASA Technical Reports Server (NTRS)
1976-01-01
An investigation of all payloads in the IBM disciplines and the selection of driver payloads within each discipline are described. The driver payloads were selected on the basis of their data processing requirements. These requirements are measured by a weighting scheme. The total requirements for each discipline are estimated by use of the technology payload model. The driver selection process, which was both a payload-by-payload comparison and a comparison of expected groupings of payloads, was examined.
Five Guidelines for Selecting Hydrological Signatures
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Westerberg, I.; Branger, F.
2017-12-01
Hydrological signatures are index values derived from observed or modeled series of hydrological data such as rainfall, flow or soil moisture. They are designed to extract relevant information about hydrological behavior, such as to identify dominant processes, and to determine the strength, speed and spatiotemporal variability of the rainfall-runoff response. Hydrological signatures play an important role in model evaluation. They allow us to test whether particular model structures or parameter sets accurately reproduce the runoff generation processes within the watershed of interest. Most modeling studies use a selection of different signatures to capture different aspects of the catchment response, for example evaluating overall flow distribution as well as high and low flow extremes and flow timing. Such studies often choose their own set of signatures, or may borrow subsets of signatures used in multiple other works. The link between signature values and hydrological processes is not always straightforward, leading to uncertainty and variability in hydrologists' signature choices. In this presentation, we aim to encourage a more rigorous approach to hydrological signature selection, which considers the ability of signatures to represent hydrological behavior and underlying processes for the catchment and application in question. To this end, we propose a set of guidelines for selecting hydrological signatures. We describe five criteria that any hydrological signature should conform to: Identifiability, Robustness, Consistency, Representativeness, and Discriminatory Power. We describe an example of the design process for a signature, assessing possible signature designs against the guidelines above. Due to their ubiquity, we chose a signature related to the Flow Duration Curve, selecting the FDC mid-section slope as a proposed signature to quantify catchment overall behavior and flashiness. We demonstrate how assessment against each guideline could be used to compare or choose between alternative signature definitions. We believe that reaching a consensus on selection criteria for hydrological signatures will assist modelers to choose between competing signatures, facilitate comparison between hydrological studies, and help hydrologists to fully evaluate their models.
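For the proposed signature, one widely used definition of the FDC mid-section slope is the slope of the log-transformed flow duration curve between the 33rd and 66th exceedance percentiles; the sketch below computes it for a synthetic daily-flow series. The exact percentile bounds vary between studies and are an assumption of this sketch.

```python
# FDC mid-section slope: slope of ln(Q) between the 33% and 66% exceedance
# probabilities of the flow duration curve. Flows below are synthetic.
import numpy as np

def fdc_midslope(flows, lo=0.33, hi=0.66):
    q = np.sort(np.asarray(flows, dtype=float))[::-1]        # descending flows
    exceed = np.arange(1, len(q) + 1) / (len(q) + 1)         # exceedance probability
    q_lo, q_hi = np.interp([lo, hi], exceed, q)
    return (np.log(q_lo) - np.log(q_hi)) / (hi - lo)

rng = np.random.default_rng(3)
flows = np.exp(rng.normal(loc=1.0, scale=0.8, size=365))     # lognormal daily flows
print(f"FDC mid-slope: {fdc_midslope(flows):.2f}")            # flashier = steeper
```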
Mincarone, Pierpaolo; Leo, Carlo Giacomo; Trujillo-Martín, Maria Del Mar; Manson, Jan; Guarino, Roberto; Ponzini, Giuseppe; Sabina, Saverio
2018-04-01
The importance of working toward quality improvement in healthcare implies an increasing interest in analysing, understanding and optimizing process logic and sequences of activities embedded in healthcare processes. Their graphical representation promotes faster learning, higher retention and better compliance. The study identifies standardized graphical languages and notations applied to patient care processes and investigates their usefulness in the healthcare setting. Peer-reviewed literature up to 19 May 2016 was searched, and the information was complemented by a questionnaire sent to the authors of selected studies. The systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. Five authors extracted the results of the selected studies. Ten articles met the inclusion criteria. One notation and one language for healthcare process modelling were identified with an application to patient care processes: Business Process Model and Notation, and Unified Modeling Language™. One of the authors of every selected study completed the questionnaire. Users' comprehensibility and facilitation of inter-professional analysis of processes were recognized, in the completed questionnaires, as major strengths of process modelling in healthcare. Both the notation and the language could increase the clarity of presentation thanks to their visual properties, the capacity to easily manage macro and micro scenarios, and the possibility of clearly and precisely representing the process logic. Both could increase the applicability of guidelines/pathways by representing complex scenarios through charts and algorithms, hence contributing to reducing unjustified practice variations which negatively impact quality of care and patient safety.
Constraints, Approach, and Status of Mars Surveyor 2001 Landing Site Selection
NASA Technical Reports Server (NTRS)
Golombek, M.; Bridges, N.; Briggs, G.; Gilmore, M.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.; Smith, J.; Soderblom, L.
1999-01-01
There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough and defensible and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities. Additional information is contained in the original extended abstract.
Burnham, Bryan R
2018-05-03
During visual search, both top-down factors and bottom-up properties contribute to the guidance of visual attention, but selection history can influence attention independent of bottom-up and top-down factors. For example, priming of pop-out (PoP) is the finding that search for a singleton target is faster when the target and distractor features repeat than when those features trade roles between trials. Studies have suggested that such priming (selection history) effects on pop-out search manifest either early, by biasing the selection of the preceding target feature, or later in processing, by facilitating response and target retrieval processes. The present study was designed to examine the influence of selection history on pop-out search by introducing a speed-accuracy trade-off manipulation in a pop-out search task. Ratcliff diffusion modeling (RDM) was used to examine how selection history influenced both attentional bias and response execution processes. The results support the hypothesis that selection history biases attention toward the preceding target's features on the current trial and also influences selection of the response to the target.
76 FR 296 - Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
... part would update the mail processing portion of the Parcel Select/Parcel Return Service cost models...) processing cost model that was filed as Proposal Seven on September 8, 2010. Proposal Thirteen at 1. These... develop the Standard Mail/non-flat machinable (NFM) mail processing cost model. It also proposes to use...
Sale, Mark; Sherer, Eric A
2015-01-01
The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792
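The genetic-algorithm alternative discussed in this review can be caricatured as follows: each candidate model is a bit string saying which features (for example, covariates) are included, "fitness" comes from fitting and scoring the model, and selection, crossover and mutation explore the model space. The fitness function below is a toy stand-in, not an actual pharmacometric model fit.

```python
# Toy genetic algorithm over model structures encoded as bit strings. In a
# real application fitness() would fit the candidate model and return, e.g.,
# a penalized objective-function value; here it is a synthetic stand-in.
import random

N_FEATURES, POP, GENERATIONS = 8, 20, 30
TRUE = [1, 0, 1, 0, 0, 1, 0, 0]              # pretend these features matter

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TRUE)) - 0.1 * sum(bits)

def evolve(seed=4):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.1:                 # point mutation
                i = random.randrange(N_FEATURES)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())     # best feature-inclusion pattern found
```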
Epistasis can accelerate adaptive diversification in haploid asexual populations.
Griswold, Cortland K
2015-03-07
A fundamental goal of the biological sciences is to determine processes that facilitate the evolution of diversity. These processes can be separated into ecological, physiological, developmental and genetic. An ecological process that facilitates diversification is frequency-dependent selection caused by competition. Models of frequency-dependent adaptive diversification have generally assumed a genetic basis of phenotype that is non-epistatic. Here, we present a model that indicates diversification is accelerated by an epistatic basis of phenotype in combination with a competition model that invokes frequency-dependent selection. Our model makes use of a genealogical model of epistasis and insights into the effects of balancing selection on the genealogical structure of a population to understand how epistasis can facilitate diversification. The finding that epistasis facilitates diversification may be informative with respect to empirical results that indicate an epistatic basis of phenotype in experimental bacterial populations that experienced adaptive diversification. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
A road map for multi-way calibration models.
Escandar, Graciela M; Olivieri, Alejandro C
2017-08-07
A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.
Coupling Spatiotemporal Community Assembly Processes to Changes in Microbial Metabolism.
Graham, Emily B; Crump, Alex R; Resch, Charles T; Fansler, Sarah; Arntzen, Evan; Kennedy, David W; Fredrickson, Jim K; Stegen, James C
2016-01-01
Community assembly processes generate shifts in species abundances that influence ecosystem cycling of carbon and nutrients, yet our understanding of assembly remains largely separate from ecosystem-level functioning. Here, we investigate relationships between assembly and changes in microbial metabolism across space and time in hyporheic microbial communities. We pair sampling of two habitat types (i.e., attached and planktonic) through seasonal and sub-hourly hydrologic fluctuation with null modeling and temporally explicit multivariate statistics. We demonstrate that multiple selective pressures, imposed by sediment and porewater physicochemistry, integrate to generate changes in microbial community composition at distinct timescales among habitat types. These changes in composition are reflective of contrasting associations of Betaproteobacteria and Thaumarchaeota with ecological selection and with seasonal changes in microbial metabolism. We present a conceptual model based on our results in which metabolism increases when oscillating selective pressures oppose temporally stable selective pressures. Our conceptual model is pertinent to both macrobial and microbial systems experiencing multiple selective pressures and presents an avenue for assimilating community assembly processes into predictions of ecosystem-level functioning.
Foveal analysis and peripheral selection during active visual sampling
Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.
2014-01-01
Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
Hodge, N. E.; Ferencz, R. M.; Vignes, R. M.
2016-05-30
Selective laser melting (SLM) is an additive manufacturing process in which multiple, successive layers of metal powders are heated via laser in order to build a part. Modeling of SLM requires consideration of the complex interaction between heat transfer and solid mechanics. The present work describes the authors' initial efforts to validate their first-generation model. In particular, the comparison of model-generated solid mechanics results, including both deformation and stresses, is presented. Additionally, results of various perturbations of the process parameters and modeling strategies are discussed.
Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection
NASA Astrophysics Data System (ADS)
Harwati
2017-06-01
Supplier selection is a decision with many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria, which are easy and simple to use, can be applied to select suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and services (weight 0.1). A real-case simulation shows that the simple model provides the same decision as a more complex model.
The Genealogy of Samples in Models with Selection
Neuhauser, C.; Krone, S. M.
1997-01-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case. PMID:9071604
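The neutral baseline against which the ancestral selection graph is compared is Kingman's coalescent, where with k lineages the time to the next coalescence is exponential with rate k(k-1)/2 (time measured in the usual coalescent units). The sketch below simulates only this neutral genealogy; it does not build the branching-and-coalescing selection graph.

```python
# Kingman coalescent waiting times for a sample of n genes: with k lineages
# the next coalescence occurs after an Exponential(rate = k*(k-1)/2) time.
import numpy as np

def kingman_times(n, rng):
    """Waiting times while k = n, n-1, ..., 2 lineages remain."""
    return np.array([rng.exponential(scale=2.0 / (k * (k - 1)))
                     for k in range(n, 1, -1)])

rng = np.random.default_rng(5)
n = 10
times = kingman_times(n, rng)
height = times.sum()                                         # time to the MRCA
length = sum(k * t for k, t in zip(range(n, 1, -1), times))  # total branch length
print(f"tree height ~ {height:.2f} (E = {2 * (1 - 1 / n):.2f}), "
      f"total length ~ {length:.2f}")
```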
The Abstract Selection Task: New Data and an Almost Comprehensive Model
ERIC Educational Resources Information Center
Klauer, Karl Christoph; Stahl, Christoph; Erdfelder, Edgar
2007-01-01
A complete quantitative account of P. Wason's (1966) abstract selection task is proposed. The account takes the form of a mathematical model. It is assumed that some response patterns are caused by inferential reasoning, whereas other responses reflect cognitive processes that affect each card selection separately and independently of other card…
Jung, Won-Mo; Park, In-Soo; Lee, Ye-Seul; Kim, Chang-Eop; Lee, Hyangsook; Hahm, Dae-Hyun; Park, Hi-Joon; Jang, Bo-Hyoung; Chae, Younbyoung
2018-04-12
Comprehension of the medical diagnoses of doctors and treatment of diseases is important to understand the underlying principle in selecting appropriate acupoints. The pattern recognition process that pertains to symptoms and diseases and informs acupuncture treatment in a clinical setting was explored. A total of 232 clinical records were collected using a Charting Language program. The relationship between symptom information and selected acupoints was trained using an artificial neural network (ANN). A total of 11 hidden nodes with the highest average precision score were selected through a tenfold cross-validation. Our ANN model could predict the selected acupoints based on symptom and disease information with an average precision score of 0.865 (precision, 0.911; recall, 0.811). This model is a useful tool for diagnostic classification or pattern recognition and for the prediction and modeling of acupuncture treatment based on clinical data obtained in a real-world setting. The relationship between symptoms and selected acupoints could be systematically characterized through knowledge discovery processes, such as pattern identification.
NASA Astrophysics Data System (ADS)
Gesing, Adam J.; Das, Subodh K.
2017-02-01
With United States Department of Energy Advanced Research Project Agency funding, experimental proof-of-concept was demonstrated for the RE-12™ electrorefining process for extracting a desired amount of Mg from recycled scrap secondary Al molten alloys. The key enabling technology for this process was the selection of a suitable electrolyte composition and operating temperature. The selection was made using the FactSage thermodynamic modeling software and the light metal, molten salt, and oxide thermodynamic databases. Modeling allowed prediction of the chemical equilibria and of the impurity contents in the anode and cathode products and in the electrolyte. FactSage also provided data on the physical properties of the electrolyte and the molten metal phases, including electrical conductivity and density of the molten phases. Further modeling permitted selection of electrode and cell construction materials chemically compatible with the combination of molten metals and the electrolyte.
Modeling Dynamic Food Choice Processes to Understand Dietary Intervention Effects.
Marcum, Christopher Steven; Goldring, Megan R; McBride, Colleen M; Persky, Susan
2018-02-17
Meal construction is largely governed by nonconscious and habit-based processes that can be represented as a collection of individual, micro-level food choices that eventually give rise to a final plate. Despite this, dietary behavior intervention research rarely captures these micro-level food choice processes, instead measuring outcomes at aggregated levels. This is due in part to a dearth of analytic techniques to model these dynamic time-series events. The current article addresses this limitation by applying a generalization of the relational event framework to model micro-level food choice behavior following an educational intervention. Relational event modeling was used to model the food choices that 221 mothers made for their child following receipt of an information-based intervention. Participants were randomized to receive either (a) control information; (b) childhood obesity risk information; (c) childhood obesity risk information plus a personalized family history-based risk estimate for their child. Participants then made food choices for their child in a virtual reality-based food buffet simulation. Micro-level aspects of the built environment, such as the ordering of each food in the buffet, were influential. Other dynamic processes such as choice inertia also influenced food selection. Among participants receiving the strongest intervention condition, choice inertia decreased and the overall rate of food selection increased. Modeling food selection processes can elucidate the points at which interventions exert their influence. Researchers can leverage these findings to gain insight into nonconscious and uncontrollable aspects of food selection that influence dietary outcomes, which can ultimately improve the design of dietary interventions.
Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs
ERIC Educational Resources Information Center
Guasch, Marc; Haro, Juan; Boada, Roger
2017-01-01
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…
A theory of germinal center B cell selection, division, and exit.
Meyer-Hermann, Michael; Mohr, Elodie; Pelletier, Nadége; Zhang, Yang; Victora, Gabriel D; Toellner, Kai-Michael
2012-07-26
High-affinity antibodies are generated in germinal centers in a process involving mutation and selection of B cells. Information processing in germinal center reactions has been investigated in a number of recent experiments. These have revealed cell migration patterns, asymmetric cell divisions, and cell-cell interaction characteristics, used here to develop a theory of germinal center B cell selection, division, and exit (the LEDA model). According to this model, B cells selected by T follicular helper cells on the basis of successful antigen processing always return to the dark zone for asymmetric division, and acquired antigen is inherited by one daughter cell only. Antigen-retaining B cells differentiate to plasma cells and leave the germinal center through the dark zone. This theory has implications for the functioning of germinal centers because compared to previous models, high-affinity antibodies appear one day earlier and the amount of derived plasma cells is considerably larger. Copyright © 2012 The Authors. Published by Elsevier Inc. All rights reserved.
Archaeological data reveal slow rates of evolution during plant domestication.
Purugganan, Michael D; Fuller, Dorian Q
2011-01-01
Domestication is an evolutionary process of species divergence in which morphological and physiological changes result from the cultivation/tending of plant or animal species by a mutualistic partner, most prominently humans. Darwin used domestication as an analogy to evolution by natural selection although there is strong debate on whether this process of species evolution by human association is an appropriate model for evolutionary study. There is a presumption that selection under domestication is strong and most models assume rapid evolution of cultivated species. Using archaeological data for 11 species from 60 archaeological sites, we measure rates of evolution in two plant domestication traits--nonshattering and grain/seed size increase. Contrary to previous assumptions, we find the rates of phenotypic evolution during domestication are slow, and significantly lower or comparable to those observed among wild species subjected to natural selection. Our study indicates that the magnitudes of the rates of evolution during the domestication process, including the strength of selection, may be similar to those measured for wild species. This suggests that domestication may be driven by unconscious selection pressures similar to that observed for natural selection, and the study of the domestication process may indeed prove to be a valid model for the study of evolutionary change. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.
2016-02-01
The specification of tolerances has a significant impact on product quality and final production cost. The company should carefully pay attention to component or product tolerances so that it can produce a good-quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. But before getting into the selection process, the company must first analyse whether a component should be made in house (make), purchased from a supplier (buy), or sourced using a combination of both. This paper discusses an optimization model of process and supplier selection that minimizes the manufacturing costs and the fuzzy quality loss. This model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used in this paper to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example problem that uses a simple assembly product consisting of three components. A metaheuristic approach was implemented in the OptQuest software from Oracle Crystal Ball in order to obtain the optimal solution of the numerical example.
NASA Astrophysics Data System (ADS)
Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao
A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor, TOMBO. Robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed that reduces the enormous computational costs and memory accesses required for depth-map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.
Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...
A Unifying Mechanistic Model of Selective Attention in Spiking Neurons
Bobier, Bruce; Stewart, Terrence C.; Eliasmith, Chris
2014-01-01
Visuospatial attention produces myriad effects on the activity and selectivity of cortical neurons. Spiking neuron models capable of reproducing a wide variety of these effects remain elusive. We present a model called the Attentional Routing Circuit (ARC) that provides a mechanistic description of selective attentional processing in cortex. The model is described mathematically and implemented at the level of individual spiking neurons, with the computations for performing selective attentional processing being mapped to specific neuron types and laminar circuitry. The model is used to simulate three studies of attention in macaque, and is shown to quantitatively match several observed forms of attentional modulation. Specifically, ARC demonstrates that with shifts of spatial attention, neurons may exhibit shifting and shrinking of receptive fields; increases in responses without changes in selectivity for non-spatial features (i.e., response gain); and that the effect on contrast-response functions is better explained as a response-gain effect than as a contrast-gain effect. Unlike past models, ARC embodies a single mechanism that unifies the above forms of attentional modulation, is consistent with a wide array of available data, and makes several specific and quantifiable predictions. PMID:24921249
Design Of Computer Based Test Using The Unified Modeling Language
NASA Astrophysics Data System (ADS)
Tedyyana, Agus; Danuri; Lidyawati
2017-12-01
The admission selection at Politeknik Negeri Bengkalis through interest and talent search (PMDK), the joint admission test for state polytechnics (SB-UMPN) and the independent track (UM-Polbeng) was conducted using a Paper-Based Test (PBT). The Paper-Based Test model has some weaknesses: it wastes too much paper, questions can leak to the public, and test results can be manipulated. This research aimed to create a Computer-Based Test (CBT) model by using the Unified Modeling Language (UML), which consists of use case diagrams, activity diagrams and sequence diagrams. During the design of the application, attention was paid to the process of password-protecting the test questions before they are displayed, through an encryption and decryption process; the RSA cryptography algorithm was used for this purpose. The questions drawn from the question banks were then randomized by using the Fisher-Yates shuffle method. The network architecture used in the Computer-Based Test application was a client-server model over a Local Area Network (LAN). The result of the design was a Computer-Based Test application for admission selection at Politeknik Negeri Bengkalis.
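The question-randomization step named above is the Fisher-Yates shuffle; a short sketch is given below (the RSA encryption of the question bank is a separate step and is not shown). The question identifiers are hypothetical.

```python
# Fisher-Yates shuffle: walk the list from the end, swapping each position
# with a uniformly chosen earlier (or same) position, giving an unbiased
# random permutation of the question order.
import random

def fisher_yates(items, rng=random):
    items = list(items)
    for i in range(len(items) - 1, 0, -1):
        j = rng.randint(0, i)                # uniform index in [0, i]
        items[i], items[j] = items[j], items[i]
    return items

question_ids = list(range(1, 11))            # hypothetical bank of 10 questions
print(fisher_yates(question_ids))
```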
An Evaluation Model To Select an Integrated Learning System in a Large, Suburban School District.
ERIC Educational Resources Information Center
Curlette, William L.; And Others
The systematic evaluation process used in Georgia's DeKalb County School System to purchase comprehensive instructional software--an integrated learning system (ILS)--is described, and the decision-making model for selection is presented. Selection and implementation of an ILS were part of an instructional technology plan for the DeKalb schools…
Direction selectivity of blowfly motion-sensitive neurons is computed in a two-stage process.
Borst, A; Egelhaaf, M
1990-01-01
Direction selectivity of motion-sensitive neurons is generally thought to result from the nonlinear interaction between the signals derived from adjacent image points. Modeling of motion-sensitive networks, however, reveals that such elements may still respond to motion in a rather poor directionally selective way. Direction selectivity can be significantly enhanced if the nonlinear interaction is followed by another processing stage in which the signals of elements with opposite preferred directions are subtracted from each other. Our electrophysiological experiments in the fly visual system suggest that here direction selectivity is acquired in such a two-stage process. PMID:2251278
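The two-stage scheme described here, a nonlinear interaction between adjacent image points followed by subtraction of units with opposite preferred directions, corresponds to an opponent correlation-type detector. The toy sketch below applies it to a dot moving across a one-dimensional pixel array; the stimulus and delay are arbitrary, and the code is only meant to show why the subtraction stage sharpens direction selectivity, not to reproduce the fly circuit.

```python
# Stage 1: multiply a delayed input with its undelayed neighbour (one half-
# detector per direction). Stage 2: subtract the two half-detectors, i.e. the
# units with opposite preferred directions. Opposite motion directions then
# give responses of opposite sign.
import numpy as np

def moving_dot(n_pix=20, n_t=60, step=1):
    s = np.zeros((n_t, n_pix))
    for t in range(n_t):
        s[t, (t * step) % n_pix] = 1.0
    return s

def delayed(x, tau=1):
    d = np.zeros_like(x)
    d[tau:] = x[:-tau]
    return d

def two_stage_response(stim):
    a, b = stim[:, :-1], stim[:, 1:]          # adjacent image points
    half_right = delayed(a) * b               # stage 1, tuned to rightward motion
    half_left = a * delayed(b)                # stage 1, tuned to leftward motion
    return float((half_right - half_left).sum())   # stage 2: opponent subtraction

rightward = moving_dot()
leftward = rightward[:, ::-1]
print(two_stage_response(rightward), two_stage_response(leftward))  # opposite signs
```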
Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry
2018-06-19
Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
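The AIC comparison in the second step can be written out explicitly. Under a Gaussian error assumption (an assumption of this sketch, not a statement about the paper's likelihood), AIC = n·ln(RSS/n) + 2k up to an additive constant, and the parameter subset with the lowest AIC is retained. The candidate subsets, parameter names and residual sums of squares below are invented.

```python
# AIC-based choice among refits that re-estimate different subsets of the
# influential parameters. The subsets, names and RSS values are hypothetical.
import numpy as np

def aic(rss, n_obs, n_params):
    return n_obs * np.log(rss / n_obs) + 2 * n_params

n_obs = 120
candidates = {                    # re-estimated parameter subset -> residual sum of squares
    ("leaf_expansion_rate",): 34.1,
    ("leaf_expansion_rate", "sink_strength"): 25.6,
    ("leaf_expansion_rate", "sink_strength", "radiation_use_eff"): 24.9,
}

scores = {s: aic(rss, n_obs, len(s)) for s, rss in candidates.items()}
for subset, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"AIC {score:7.1f}  k={len(subset)}  {subset}")
print("selected:", min(scores, key=scores.get))
```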
Selecting Models for Measuring Change When True Experimental Conditions Do Not Exist.
ERIC Educational Resources Information Center
Fortune, Jim C.; Hutson, Barbara A.
1984-01-01
Measuring change when true experimental conditions do not exist is a difficult process. This article reviews the artifacts of change measurement in evaluations and quasi-experimental designs, delineates considerations in choosing a model to measure change under nonideal conditions, and suggests ways to organize models to facilitate selection.…
Modeling selective attention using a neuromorphic analog VLSI device.
Indiveri, G
2000-12-01
Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.
The Multilingual Lexicon: Modelling Selection and Control
ERIC Educational Resources Information Center
de Bot, Kees
2004-01-01
In this paper an overview of research on the multilingual lexicon is presented as the basis for a model for processing multiple languages. With respect to specific issues relating to the processing of more than two languages, it is suggested that there is no need to develop a specific model for such multilingual processing, but at the same time we…
ERIC Educational Resources Information Center
Brysbaert, Marc; Duyck, Wouter
2010-01-01
The Revised Hierarchical Model (RHM) of bilingual language processing dominates current thinking on bilingual language processing. Recently, basic tenets of the model have been called into question. First, there is little evidence for separate lexicons. Second, there is little evidence for language selective access. Third, the inclusion of…
Oliveira, Roberta B; Pereira, Aledir S; Tavares, João Manuel R S
2017-10-01
The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions. Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure. The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity. The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results. Copyright © 2017 Elsevier B.V. All rights reserved.
The Role of Short-Term Memory in Operator Workload
1988-04-01
Craik, F.I. and Lockhart, R.S., 1972, Levels of processing: A framework for memory research... examples of postcategorical selection models. Precategorical Selection Models: Sperling's Model. Unlike Craik and Lockhart's levels-of-processing model... subdivided to contain a primary memory unit and a mechanism for the direction of conscious attention. Levels-of-Processing: Craik and Lockhart's levels
NASA Astrophysics Data System (ADS)
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. Calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection models: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of models. The pre-treatment based on multiplicative scatter correction (MSC) and the mean centered data were selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm⁻¹). This model produced an RMSECV of 400 mg kg⁻¹ S and an RMSEP of 420 mg kg⁻¹ S, with a correlation coefficient of 0.990.
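The core of interval-based variable selection is scoring each contiguous spectral window by cross-validated error. The sketch below, assuming scikit-learn, follows the spirit of iPLS; the toy spectra are synthetic, and it is not the authors' implementation. siPLS would extend this by scoring combinations of the best-ranked intervals rather than single intervals.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def interval_rmsecv(X, y, n_intervals=20, n_components=3, cv=5):
    """Score each contiguous spectral interval by cross-validated RMSE (iPLS spirit)."""
    bounds = np.linspace(0, X.shape[1], n_intervals + 1, dtype=int)
    scores = []
    for k in range(n_intervals):
        Xi = X[:, bounds[k]:bounds[k + 1]]
        ncomp = min(n_components, Xi.shape[1], X.shape[0] - 1)
        y_cv = cross_val_predict(PLSRegression(n_components=ncomp), Xi, y, cv=cv)
        rmsecv = float(np.sqrt(np.mean((y - np.ravel(y_cv)) ** 2)))
        scores.append((rmsecv, k, (int(bounds[k]), int(bounds[k + 1]))))
    return sorted(scores)  # lowest RMSECV (most informative intervals) first

# Toy usage: only columns 80-89 carry information about y.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
X[:, 80:90] += rng.normal(size=(40, 1)) * 2.0
y = X[:, 80:90].mean(axis=1)
print(interval_rmsecv(X, y)[:3])  # the informative interval should rank near the top
```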
Watershed Simulation of Nutrient Processes
In this presentation, nitrogen processes simulated in watershed models were reviewed and compared. Furthermore, current researches on nitrogen losses from agricultural fields were also reviewed. Finally, applications with those models were reviewed and selected successful and u...
Felo, Michael; Christensen, Brandon; Higgins, John
2013-01-01
The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.
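The qualitative crossover described above can be illustrated with a deliberately simple per-batch cost comparison. All numbers below are hypothetical placeholders chosen only to reproduce the shape of the trade-off (filtration cost scaling with volume versus amortised centrifuge capital plus a smaller polishing filter); they are not the paper's cost data.

```python
def clarification_cost(bioreactor_volume_l, filter_capacity_l_per_m2=100.0,
                       filter_cost_per_m2=400.0, centrifuge_fixed_cost=280000.0,
                       centrifuge_batches_per_year=50, polish_filter_fraction=0.3):
    """Very rough per-batch clarification costs for two options (illustrative numbers only)."""
    # Option 1: multi-stage depth filtration, cost scales with required filter area.
    depth_area_m2 = bioreactor_volume_l / filter_capacity_l_per_m2
    depth_filtration = depth_area_m2 * filter_cost_per_m2

    # Option 2: centrifugation (amortised capital) plus a smaller polishing depth filter.
    centrifugation = (centrifuge_fixed_cost / centrifuge_batches_per_year
                      + polish_filter_fraction * depth_filtration)
    return {"depth_filtration": depth_filtration, "centrifugation": centrifugation}

for volume in (500, 2000, 5000, 15000):
    print(volume, "L:", clarification_cost(volume))
```

With these made-up parameters the costs cross near 2,000 L, mirroring the pattern reported in the abstract.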
Long-term care information systems: an overview of the selection process.
Nahm, Eun-Shim; Mills, Mary Etta; Feege, Barbara
2006-06-01
Under the current Medicare Prospective Payment System method and the ever-changing managed care environment, the long-term care information system is vital to providing quality care and to surviving in business. system selection process should be an interdisciplinary effort involving all necessary stakeholders for the proposed system. The system selection process can be modeled following the Systems Developmental Life Cycle: identifying problems, opportunities, and objectives; determining information requirements; analyzing system needs; designing the recommended system; and developing and documenting software.
A Primer on High-Throughput Computing for Genomic Selection
Wu, Xiao-Lin; Beissinger, Timothy M.; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J. M.; Weigel, Kent A.; Gatti, Natalia de Leon; Gianola, Daniel
2011-01-01
High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high-throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin–Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized genetic gain). Eventually, HTC may change our view of data analysis as well as decision-making in the post-genomic era of selection programs in animals and plants, or in the study of complex diseases in humans. PMID:22303303
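The throughput gain described comes from evaluating several traits concurrently instead of sequentially. A minimal sketch of that idea on a single multi-core machine is given below, with ridge regression as a generic stand-in for a genomic prediction model and a toy SNP matrix; on a real cluster the same pattern would be expressed through a batch scheduler rather than Python's multiprocessing.

```python
import numpy as np
from multiprocessing import Pool
from sklearn.linear_model import Ridge

def fit_one_trait(args):
    """Train a stand-in genomic prediction model (ridge regression) for one trait."""
    markers, phenotypes, trait_name = args
    model = Ridge(alpha=1.0).fit(markers, phenotypes)
    return trait_name, model.coef_

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    markers = rng.integers(0, 3, size=(500, 2000)).astype(float)  # toy SNP genotypes (0/1/2)
    jobs = [(markers, rng.normal(size=500), f"trait_{t}") for t in range(4)]
    with Pool(processes=4) as pool:  # traits evaluated concurrently, not sequentially
        for trait, coefs in pool.map(fit_one_trait, jobs):
            print(trait, coefs[:3])
```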
Stock, Ann-Kathrin; Hoffmann, Sven; Beste, Christian
2017-09-01
Effects of binge drinking on cognitive control and response selection are increasingly recognized in research on alcohol (ethanol) effects. Yet, little is known about how those processes are modulated by hangover effects. Given that acute intoxication and hangover seem to be characterized by partly divergent effects and mechanisms, further research on this topic is needed. In the current study, we hence investigated this with a special focus on potentially differential effects of alcohol intoxication and subsequent hangover on sub-processes involved in the decision to select a response. We do so combining drift diffusion modeling of behavioral data with neurophysiological (EEG) data. Opposed to common sense, the results do not show an impairment of all assessed measures. Instead, they show specific effects of high dose alcohol intoxication and hangover on selective drift diffusion model and EEG parameters (as compared to a sober state). While the acute intoxication induced by binge-drinking decreased the drift rate, it was increased by the subsequent hangover, indicating more efficient information accumulation during hangover. Further, the non-decisional processes of information encoding decreased with intoxication, but not during hangover. These effects were reflected in modulations of the N2, P1 and N1 event-related potentials, which reflect conflict monitoring, perceptual gating and attentional selection processes, respectively. As regards the functional neuroanatomical architecture, the anterior cingulate cortex (ACC) as well as occipital networks seem to be modulated. Even though alcohol is known to have broad neurobiological effects, its effects on cognitive processes are rather specific. © 2016 Society for the Study of Addiction.
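For readers unfamiliar with the drift diffusion framework used above, the sketch below simulates a generic two-boundary diffusion process (Euler-Maruyama) and shows how a higher drift rate translates into faster, more accurate responses. The parameter values are arbitrary illustrations, not the study's estimates.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, non_decision=0.3, dt=0.001,
                 noise_sd=1.0, n_trials=2000, seed=1):
    """Simulate a two-boundary drift diffusion model; return mean accuracy and mean RT."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()  # evidence accumulation
            t += dt
        rts.append(t + non_decision)   # non-decision time covers encoding and motor output
        correct.append(x >= boundary)  # upper boundary = correct response
    return float(np.mean(correct)), float(np.mean(rts))

# Higher drift rate -> more efficient information accumulation (higher accuracy, faster RTs).
for v in (0.5, 1.0, 2.0):
    print("drift", v, "->", simulate_ddm(v))
```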
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. Extracting features for LID is, according to the literature, a mature process in which the standard features for LID have already been developed, using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and ending with the i-vector based framework. However, the process of learning based on extracted features remains to be improved (i.e. optimised) to capture all embedded knowledge in the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated for LID with datasets created from eight different languages. The results of the study showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) over the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, compared to only 95.00% for SA-ELM LID.
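The baseline ELM that SA-ELM and ESA-ELM improve upon is simple to state: input-to-hidden weights are drawn at random and never trained, and only the hidden-to-output weights are solved in closed form. A minimal numpy sketch of that baseline is given below; the feature matrix is a random stand-in, and none of the optimisation described in the abstract is included.

```python
import numpy as np

class BasicELM:
    """Single-hidden-layer ELM: random input weights, output weights via least squares."""

    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random, never trained
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y_onehot  # closed-form output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Toy usage with random "i-vector-like" features for 3 hypothetical languages.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))
labels = rng.integers(0, 3, size=300)
Y = np.eye(3)[labels]
print("training accuracy:", (BasicELM().fit(X, Y).predict(X) == labels).mean())
```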
Modeling of plant in vitro cultures: overview and estimation of biotechnological processes.
Maschke, Rüdiger W; Geipel, Katja; Bley, Thomas
2015-01-01
Plant cell and tissue cultivations are of growing interest for the production of structurally complex and expensive plant-derived products, especially in pharmaceutical production. Problems with up-scaling, low yields, and high-priced process conditions result in an increased demand for models to provide comprehension, simulation, and optimization of production processes. In the last 25 years, many models have evolved in plant biotechnology; the majority of them are specialized models for a few selected products or nutritional conditions. In this article we review, delineate, and discuss the concepts and characteristics of the most commonly used models. Therefore, the authors focus on models for plant suspension and submerged hairy root cultures. The article includes a short overview of modeling and mathematics and integrated parameters, as well as the application scope for each model. The review is meant to help researchers better understand and utilize the numerous models published for plant cultures, and to select the most suitable model for their purposes. © 2014 Wiley Periodicals, Inc.
Schleuning, Matthias; Farwig, Nina; Peters, Marcell K; Bergsdorf, Thomas; Bleher, Bärbel; Brandl, Roland; Dalitz, Helmut; Fischer, Georg; Freund, Wolfram; Gikungu, Mary W; Hagen, Melanie; Garcia, Francisco Hita; Kagezi, Godfrey H; Kaib, Manfred; Kraemer, Manfred; Lung, Tobias; Naumann, Clas M; Schaab, Gertrud; Templin, Mathias; Uster, Dana; Wägele, J Wolfgang; Böhning-Gaese, Katrin
2011-01-01
Forest fragmentation and selective logging are two main drivers of global environmental change and modify biodiversity and environmental conditions in many tropical forests. The consequences of these changes for the functioning of tropical forest ecosystems have rarely been explored in a comprehensive approach. In a Kenyan rainforest, we studied six animal-mediated ecosystem processes and recorded species richness and community composition of all animal taxa involved in these processes. We used linear models and a formal meta-analysis to test whether forest fragmentation and selective logging affected ecosystem processes and biodiversity and used structural equation models to disentangle direct from biodiversity-related indirect effects of human disturbance on multiple ecosystem processes. Fragmentation increased decomposition and reduced antbird predation, while selective logging consistently increased pollination, seed dispersal and army-ant raiding. Fragmentation modified species richness or community composition of five taxa, whereas selective logging did not affect any component of biodiversity. Changes in the abundance of functionally important species were related to lower predation by antbirds and higher decomposition rates in small forest fragments. The positive effects of selective logging on bee pollination, bird seed dispersal and army-ant raiding were direct, i.e. not related to changes in biodiversity, and were probably due to behavioural changes of these highly mobile animal taxa. We conclude that animal-mediated ecosystem processes respond in distinct ways to different types of human disturbance in Kakamega Forest. Our findings suggest that forest fragmentation affects ecosystem processes indirectly by changes in biodiversity, whereas selective logging influences processes directly by modifying local environmental conditions and resource distributions. The positive to neutral effects of selective logging on ecosystem processes show that the functionality of tropical forests can be maintained in moderately disturbed forest fragments. Conservation concepts for tropical forests should thus include not only remaining pristine forests but also functionally viable forest remnants.
Wan, Jian; Chen, Yi-Chieh; Morris, A Julian; Thennadil, Suresh N
2017-07-01
Near-infrared (NIR) spectroscopy is being widely used in various fields ranging from pharmaceutics to the food industry for analyzing chemical and physical properties of the substances concerned. Its advantages over other analytical techniques include available physical interpretation of spectral data, nondestructive nature and high speed of measurements, and little or no need for sample preparation. The successful application of NIR spectroscopy relies on three main aspects: pre-processing of spectral data to eliminate nonlinear variations due to temperature, light scattering effects and many others, selection of those wavelengths that contribute useful information, and identification of suitable calibration models using linear/nonlinear regression . Several methods have been developed for each of these three aspects and many comparative studies of different methods exist for an individual aspect or some combinations. However, there is still a lack of comparative studies for the interactions among these three aspects, which can shed light on what role each aspect plays in the calibration and how to combine various methods of each aspect together to obtain the best calibration model. This paper aims to provide such a comparative study based on four benchmark data sets using three typical pre-processing methods, namely, orthogonal signal correction (OSC), extended multiplicative signal correction (EMSC) and optical path-length estimation and correction (OPLEC); two existing wavelength selection methods, namely, stepwise forward selection (SFS) and genetic algorithm optimization combined with partial least squares regression for spectral data (GAPLSSP); four popular regression methods, namely, partial least squares (PLS), least absolute shrinkage and selection operator (LASSO), least squares support vector machine (LS-SVM), and Gaussian process regression (GPR). The comparative study indicates that, in general, pre-processing of spectral data can play a significant role in the calibration while wavelength selection plays a marginal role and the combination of certain pre-processing, wavelength selection, and nonlinear regression methods can achieve superior performance over traditional linear regression-based calibration.
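One cell of such a comparison, a given pre-processing method paired with a given regression method and scored by cross-validated RMSE, can be sketched compactly. The example below assumes scikit-learn, uses MSC as the pre-processing step and compares PLS against Gaussian process regression on synthetic two-band "spectra"; it is illustrative only and is not tied to the benchmark data sets of the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_predict

def msc(spectra, reference=None):
    """Multiplicative scatter correction: regress each spectrum on the mean spectrum."""
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        corrected[i] = (s - intercept) / slope
    return corrected

def rmsecv(model, X, y, cv=5):
    pred = cross_val_predict(model, X, y, cv=cv)
    return float(np.sqrt(np.mean((y - np.ravel(pred)) ** 2)))

# Synthetic spectra: two Gaussian bands, multiplicative scatter and additive baseline.
rng = np.random.default_rng(0)
wav = np.linspace(0, 1, 200)
band1 = np.exp(-((wav - 0.3) ** 2) / 0.01)
band2 = np.exp(-((wav - 0.7) ** 2) / 0.01)
c1 = rng.uniform(0.5, 1.5, 60)                      # analyte of interest
c2 = rng.uniform(0.5, 1.5, 60)                      # interfering component
scatter = rng.uniform(0.8, 1.2, size=(60, 1))
offset = rng.uniform(-0.1, 0.1, size=(60, 1))
X = scatter * (c1[:, None] * band1 + c2[:, None] * band2) + offset \
    + rng.normal(scale=0.01, size=(60, 200))
Xp = msc(X)

print("PLS :", rmsecv(PLSRegression(n_components=5), Xp, c1))
print("GPR :", rmsecv(GaussianProcessRegressor(normalize_y=True), Xp, c1))
```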
Analysis of acoustic emission signals and monitoring of machining processes
Govekar; Gradisek; Grabec
2000-03-01
Monitoring of a machining process on the basis of sensor signals requires a selection of informative inputs in order to reliably characterize and model the process. In this article, a system for selection of informative characteristics from signals of multiple sensors is presented. For signal analysis, methods of spectral analysis and methods of nonlinear time series analysis are used. With the aim of modeling relationships between signal characteristics and the corresponding process state, an adaptive empirical modeler is applied. The application of the system is demonstrated by characterization of different parameters defining the states of a turning machining process, such as: chip form, tool wear, and onset of chatter vibration. The results show that, in spite of the complexity of the turning process, the state of the process can be well characterized by just a few proper characteristics extracted from a representative sensor signal. The process characterization can be further improved by joining characteristics from multiple sensors and by application of chaotic characteristics.
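The first step described, extracting a few informative characteristics from a sensor signal, can be illustrated with simple spectral features computed via the FFT. The signal and the choice of features below are illustrative, not those of the study.

```python
import numpy as np

def spectral_features(signal, fs):
    """A few simple signal characteristics: dominant frequency, spectral centroid
    and high-band power ratio (illustrative choices)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    centroid = float(np.sum(freqs * power) / np.sum(power))
    dominant = float(freqs[np.argmax(power)])
    high_band = float(power[freqs > fs / 4].sum() / power.sum())
    return {"dominant_hz": dominant, "centroid_hz": centroid, "high_band_ratio": high_band}

fs = 50_000  # toy acoustic-emission sampling rate in Hz
t = np.arange(0, 0.1, 1.0 / fs)
signal = np.sin(2 * np.pi * 3000 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
print(spectral_features(signal, fs))
```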
Simulating natural selection in landscape genetics
E. L. Landguth; S. A. Cushman; N. Johnson
2012-01-01
Linking landscape effects to key evolutionary processes through individual organism movement and natural selection is essential to provide a foundation for evolutionary landscape genetics. Of particular importance is determining how spatially explicit, individual-based models differ from classic population genetics and evolutionary ecology models based on ideal...
HYBRID SNCR-SCR TECHNOLOGIES FOR NOX CONTROL: MODELING AND EXPERIMENT
The hybrid process of homogeneous gas-phase selective non-catalytic reduction (SNCR) followed by selective catalytic reduction (SCR) of nitric oxide (NO) was investigated through experimentation and modeling. Measurements, using NO-doped flue gas from a gas-fired 29 kW test combu...
NASA Astrophysics Data System (ADS)
Li, Zhanjie; Yu, Jingshan; Xu, Xinyi; Sun, Wenchao; Pang, Bo; Yue, Jiajia
2018-06-01
Hydrological models are important and effective tools for detecting complex hydrological processes. Different models have different strengths when capturing the various aspects of hydrological processes. Relying on a single model usually leads to simulation uncertainties. Ensemble approaches, based on multi-model hydrological simulations, can improve application performance over single models. In this study, the upper Yalongjiang River Basin was selected as a case study. Three commonly used hydrological models (SWAT, VIC, and BTOPMC) were selected and used for independent simulations with the same input and initial values. Then, the BP neural network method was employed to combine the results from the three models. The results show that the accuracy of the BP ensemble simulation is better than that of the single models.
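The combination step amounts to training a small backpropagation (BP) network that maps the three models' simulated discharges onto observations over a calibration period. The sketch below illustrates that step with scikit-learn's MLPRegressor and synthetic stand-ins for the SWAT, VIC and BTOPMC outputs; it is not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
observed = rng.gamma(shape=2.0, scale=50.0, size=400)  # toy daily discharge series
# Hypothetical stand-ins for SWAT, VIC and BTOPMC simulations of the same days.
sims = np.column_stack([observed * f + rng.normal(scale=20, size=400)
                        for f in (0.8, 1.1, 0.95)])

# BP ensemble: learn a nonlinear weighting that merges the three simulations.
ensemble = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ensemble.fit(sims[:300], observed[:300])               # calibration period
rmse = np.sqrt(np.mean((ensemble.predict(sims[300:]) - observed[300:]) ** 2))
print("ensemble RMSE on held-out period:", round(float(rmse), 2))
```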
ERIC Educational Resources Information Center
Raaijmakers, Steven F.; Baars, Martine; Schaap, Lydia; Paas, Fred; van Merriënboer, Jeroen; van Gog, Tamara
2018-01-01
Self-assessment and task-selection skills are crucial in self-regulated learning situations in which students can choose their own tasks. Prior research suggested that training with video modeling examples, in which another person (the model) demonstrates and explains the cyclical process of problem-solving task performance, self-assessment, and…
Medical student selection and society: Lessons we learned from sociological theories.
Yaghmaei, Minoo; Yazdani, Shahram; Ahmady, Soleiman
2016-01-01
The aim of this study was to show the interaction between the society, applicants and medical schools in terms of medical student selection. In this study, trends in implementing social factors in the selection process were highlighted. These social factors were explored through functionalism and conflict theories, each focusing on different categories of social factors. While functionalist theorists pay attention to diversity in the selection process, conflict theorists highlight the importance of socio-economic class. Although both theories believe in sorting, their different views are reflected in their sorting strategies. Both theories emphasize the importance of the person-society relationship in motivation to enter university. Furthermore, the impacts of social goals on the selection policies are derived from both theories. Theories in the sociology of education offer an approach to student selection that acknowledges and supports complexity, plurality of approaches and innovative means of selection. Medical student selection does not focus solely on individual assessment and qualification; it is also a social and collective process that includes all the influences and interactions between medical schools and the society. A sociological perspective on medical student selection proposes a model that encompasses both the individual and the society. In this model, the selection methods should meet the criteria of merit at the individual level, while the selection policies should aim at the society goals at the institutional level.
Selective attention in multi-chip address-event systems.
Bartolozzi, Chiara; Indiveri, Giacomo
2009-01-01
Selective attention is the strategy used by biological systems to cope with the inherent limits in their available computational resources, in order to efficiently process sensory information. The same strategy can be used in artificial systems that have to process vast amounts of sensory data with limited resources. In this paper we present a neuromorphic VLSI device, the "Selective Attention Chip" (SAC), which can be used to implement these models in multi-chip address-event systems. We also describe a real-time sensory-motor system, which integrates the SAC with a dynamic vision sensor and a robotic actuator. We present experimental results from each component in the system, and demonstrate how the complete system implements a real-time stimulus-driven selective attention model.
Bias Reduction in Quasi-Experiments with Little Selection Theory but Many Covariates
ERIC Educational Resources Information Center
Steiner, Peter M.; Cook, Thomas D.; Li, Wei; Clark, M. H.
2015-01-01
In observational studies, selection bias will be completely removed only if the selection mechanism is ignorable, namely, all confounders of treatment selection and potential outcomes are reliably measured. Ideally, well-grounded substantive theories about the selection process and outcome-generating model are used to generate the sample of…
Models of Cultural Niche Construction with Selection and Assortative Mating
Feldman, Marcus W.
2012-01-01
Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits. PMID:22905167
Information processing. [in human performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Flach, John M.
1988-01-01
Theoretical models of sensory-information processing by the human brain are reviewed from a human-factors perspective, with a focus on their implications for aircraft and avionics design. The topics addressed include perception (signal detection and selection), linguistic factors in perception (context provision, logical reversals, absence of cues, and order reversals), mental models, and working and long-term memory. Particular attention is given to decision-making problems such as situation assessment, decision formulation, decision quality, selection of action, the speed-accuracy tradeoff, stimulus-response compatibility, stimulus sequencing, dual-task performance, task difficulty and structure, and factors affecting multiple task performance (processing modalities, codes, and stages).
NASA Astrophysics Data System (ADS)
Xiang, Lin
This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on natural selection implemented in a charter school in a major California city during the spring semester of 2009. Eight 8th grade students, two boys and six girls, participated in this study. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned over 7 weeks and was comprised of two study phases. In phase one the subject students learned natural selection in the science classroom and how to do programming in NetLogo, an ABPM tool, in a computer lab; in phase two, the subject students were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected in this study. The data resources included (1) pre and post test questionnaire, (2) student in-class worksheet, (3) programming planning sheet, (4) code-conception matching sheet, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interview, and (8) investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress on understanding adaptation phenomena and natural selection at the end of ABPM-supported MBI learning but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt to the environment. Besides, their models of natural selection appeared to be incomplete and many relationships among the model ideas had not been well established by the end of the study. Most of them did not treat the natural selection model as a whole but only focused on some ideas within the model. Very few of them could scientifically apply the natural selection model to interpret other evolutionary phenomena. The findings about participating students' programming processes revealed these processes were composed of consecutive programming cycles. The cycle typically included posing a task, constructing and running program codes, and examining the resulting simulation. Students held multiple ideas and applied various programming strategies in these cycles. Students were involved in MBI at each step of a cycle. Three types of ideas, six programming strategies and ten MBI actions were identified out of the processes. The relationships among these ideas, strategies and actions were also identified and described. Findings suggested that ABPM activities could support MBI by (1) exposing students' personal models and understandings, (2) provoking and supporting a series of model-based inquiry activities, such as elaborating target phenomena, abstracting patterns, and revising conceptual models, and (3) provoking and supporting tangible and productive conversations among students, as well as between the instructor and students.
Findings also revealed three programming behaviors that appeared to impede productive MBI, including (1) solely phenomenon-orientated programming, (2) transplanting program codes, and (3) blindly running procedures. Based on the findings, I propose a general modeling process in ABPM activities, summarize the ways in which MBI can be supported in ABPM activities and constrained by multiple factors, and suggest the implications of this study for future ABPM-assisted science instructional design and research.
Cárdenas, V; Cordobés, M; Blanco, M; Alcalà, M
2015-10-10
The pharmaceutical industry is subject to stringent regulations on quality control of its products because quality is critical to both the production process and consumer safety. Within the framework of "process analytical technology" (PAT), a complete understanding of the process and stepwise monitoring of manufacturing are required. Near infrared spectroscopy (NIRS) combined with chemometrics has lately proved efficient, useful and robust for pharmaceutical analysis. One crucial step in developing effective NIRS-based methodologies is selecting an appropriate calibration set to construct models affording accurate predictions. In this work, we developed calibration models for a pharmaceutical formulation during its three manufacturing stages: blending, compaction and coating. A novel methodology is proposed for selecting the calibration set - the "process spectrum" - into which physical changes in the samples at each stage are algebraically incorporated. We also established a "model space" defined by Hotelling's T(2) and Q-residuals statistics for outlier identification (inside/outside the defined space) in order to select objectively the factors to be used in calibration set construction. The results confirm the efficacy of the proposed methodology for stepwise pharmaceutical quality control and its relevance as a guideline for implementing this simple and fast methodology in the pharma industry. Copyright © 2015 Elsevier B.V. All rights reserved.
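A PCA-based "model space" of the kind described can be delimited with Hotelling's T2 (distance within the model plane) and Q residuals (distance off the model plane). The sketch below computes both statistics for a set of toy spectra and flags samples outside simple empirical limits; the thresholds and data are hypothetical, and in practice F- and chi-square-based limits are commonly used instead of percentiles.

```python
import numpy as np
from sklearn.decomposition import PCA

def t2_and_q(X, n_components=3):
    """Hotelling's T2 and Q (squared reconstruction residual) for each sample."""
    pca = PCA(n_components=n_components).fit(X)
    scores = pca.transform(X)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)   # within-model distance
    residuals = X - pca.inverse_transform(scores)                # off-model part
    q = np.sum(residuals ** 2, axis=1)
    return t2, q

rng = np.random.default_rng(3)
spectra = rng.normal(size=(50, 120)).cumsum(axis=1)  # toy calibration spectra
t2, q = t2_and_q(spectra)
inside = (t2 < np.percentile(t2, 95)) & (q < np.percentile(q, 95))
print("samples inside the model space:", int(inside.sum()), "of", len(inside))
```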
Ivezic, Nenad; Potok, Thomas E.
2003-09-30
A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
"That in your hands". A comprehensive process analysis of a significant event in psychotherapy.
Elliott, R
1983-05-01
This article illustrates a new approach to the study of change processes in psychotherapy. The approach involves selecting significant change events and analyzing them according to the Comprehensive Process Model. In this model, client and therapist behaviors are analyzed for content, interpersonal action, style and response quality by using information derived from Interpersonal Process Recall, client and therapist objective process ratings and qualitative analyses. The event selected for analysis in this paper was rated by client and therapist as significantly helpful. The focal therapist response was a reflective-interpretive intervention in which the therapist collaboratively and evocatively expanded the client's implicit meanings. The event involved working through an earlier insight and realization of progress by the client. The event suggests an association between subjective "felt shifts" and public "process shifts" in client in-therapy behaviors. A model, consistent with Gendlin's experiential psychotherapy (1970), is offered to describe the change process which occurred in this event.
NASA Astrophysics Data System (ADS)
Ranatunga, T.
2017-12-01
Modeling the fate and transport of fecal bacteria in a watershed is a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used to estimate the bacteria source loading from land categories. Major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads from the SELECT model were input to the SWAT model to simulate bacteria transport over land and in-stream. The calibrated SWAT model was then used to estimate in-stream indicator bacteria concentrations for future years based on regional land use, population and household forecasts (up to 2040). Based on the reductions required to meet the in-stream water quality standards, the corresponding required source load reductions were estimated.
MPTinR: analysis of multinomial processing tree models in R.
Singmann, Henrik; Kellen, David
2013-06-01
We introduce MPTinR, a software package developed for the analysis of multinomial processing tree (MPT) models. MPT models represent a prominent class of cognitive measurement models for categorical data with applications in a wide variety of fields. MPTinR is the first software for the analysis of MPT models in the statistical programming language R, providing a modeling framework that is more flexible than standalone software packages. MPTinR also introduces important features such as (1) the ability to calculate the Fisher information approximation measure of model complexity for MPT models, (2) the ability to fit models for categorical data outside the MPT model class, such as signal detection models, (3) a function for model selection across a set of nested and nonnested candidate models (using several model selection indices), and (4) multicore fitting. MPTinR is available from the Comprehensive R Archive Network at http://cran.r-project.org/web/packages/MPTinR/ .
Modeling HIV-1 Drug Resistance as Episodic Directional Selection
Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L.; Scheffler, Konrad
2012-01-01
The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance. PMID:22589711
A CLIPS-based expert system for the evaluation and selection of robots
NASA Technical Reports Server (NTRS)
Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.
1994-01-01
This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots is used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.
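The original system was implemented as CLIPS rules; the minimal Python stand-in below conveys the flavour of rule-based screening of candidate robots against task requirements. All robot records, requirement values and rule thresholds are hypothetical.

```python
# Hypothetical robot records and task requirements; the real system used CLIPS rules.
robots = [
    {"name": "R1", "payload_kg": 10, "repeatability_mm": 0.05, "reach_mm": 1400, "cost": 60000},
    {"name": "R2", "payload_kg": 4,  "repeatability_mm": 0.02, "reach_mm": 900,  "cost": 35000},
    {"name": "R3", "payload_kg": 20, "repeatability_mm": 0.10, "reach_mm": 1800, "cost": 90000},
]
task = {"payload_kg": 8, "repeatability_mm": 0.06, "reach_mm": 1200, "budget": 80000}

rules = [  # each rule: (description, predicate over (robot, task))
    ("payload is sufficient",        lambda r, t: r["payload_kg"] >= t["payload_kg"]),
    ("repeatability is fine enough", lambda r, t: r["repeatability_mm"] <= t["repeatability_mm"]),
    ("reach covers the workspace",   lambda r, t: r["reach_mm"] >= t["reach_mm"]),
    ("within budget",                lambda r, t: r["cost"] <= t["budget"]),
]

for robot in robots:
    failed = [desc for desc, ok in rules if not ok(robot, task)]
    print(robot["name"], "-> qualified" if not failed else f"-> rejected ({'; '.join(failed)})")
```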
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.
Data mining and statistical inference in selective laser melting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, Chandrika
Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
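A data-driven surrogate of the kind mentioned can be illustrated with a Gaussian process fitted to a handful of (laser power, scan speed) to relative-density observations and then queried cheaply across the design space. The training data below are synthetic placeholders, and the kernel choice is an assumption; this is not the paper's model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
# Hypothetical training runs: (laser power [W], scan speed [mm/s]) -> relative density.
X = np.column_stack([rng.uniform(150, 400, 30), rng.uniform(500, 2000, 30)])
density = (0.99
           - 2e-6 * (X[:, 0] - 300.0) ** 2
           - 2e-8 * (X[:, 1] - 1200.0) ** 2
           + rng.normal(scale=0.002, size=30))

surrogate = GaussianProcessRegressor(
    kernel=RBF(length_scale=[100.0, 500.0]) + WhiteKernel(), normalize_y=True
).fit(X, density)

# Cheap exploration of the design space instead of running new builds or simulations.
grid = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(150, 400, 40),
                                                       np.linspace(500, 2000, 40))])
mean, std = surrogate.predict(grid, return_std=True)
best = grid[np.argmax(mean)]
print("predicted best (power, speed):", best, "+/- uncertainty:", float(std[np.argmax(mean)]))
```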
Model of the best-of-N nest-site selection process in honeybees.
Reina, Andreagiovanni; Marshall, James A R; Trianni, Vito; Bose, Thomas
2017-05-01
The ability of a honeybee swarm to select the best nest site plays a fundamental role in determining the future colony's fitness. To date, the nest-site selection process has mostly been modeled and theoretically analyzed for the case of binary decisions. However, when the number of alternative nests is larger than two, the decision-process dynamics qualitatively change. In this work, we extend previous analyses of a value-sensitive decision-making mechanism to a decision process among N nests. First, we present the decision-making dynamics in the symmetric case of N equal-quality nests. Then, we generalize our findings to a best-of-N decision scenario with one superior nest and N-1 inferior nests, previously studied empirically in bees and ants. Whereas previous binary models highlighted the crucial role of inhibitory stop-signaling, the key parameter in our new analysis is the relative time invested by swarm members in individual discovery and in signaling behaviors. Our new analysis reveals conflicting pressures on this ratio in symmetric and best-of-N decisions, which could be solved through a time-dependent signaling strategy. Additionally, our analysis suggests how ecological factors determining the density of suitable nest sites may have led to selective pressures for an optimal stable signaling ratio.
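Decision dynamics of this kind are typically written as ODEs for the fractions of the swarm committed to each option, driven by discovery, recruitment, abandonment and cross-inhibitory stop-signalling. The sketch below integrates a generic value-sensitive form of such equations for one superior and three inferior sites; the equations and rate constants are an illustrative approximation inspired by this class of models, not the authors' exact system or parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

def decision_dynamics(t, psi, qualities, k_discover=0.3, k_recruit=0.5,
                      k_abandon=0.2, k_inhibit=0.6):
    """Generic value-sensitive best-of-N commitment dynamics (illustrative only)."""
    psi = np.clip(psi, 0.0, 1.0)
    uncommitted = max(0.0, 1.0 - psi.sum())
    dpsi = np.empty_like(psi)
    for i, v in enumerate(qualities):
        cross_inhibition = k_inhibit * np.sum(np.delete(qualities * psi, i))
        dpsi[i] = (k_discover * v * uncommitted             # independent discovery
                   + k_recruit * v * psi[i] * uncommitted   # recruitment (dance-like signalling)
                   - k_abandon / v * psi[i]                 # spontaneous abandonment
                   - psi[i] * cross_inhibition)             # stop-signalling from rival options
    return dpsi

qualities = np.array([1.0, 0.7, 0.7, 0.7])  # one superior nest and N-1 inferior nests
sol = solve_ivp(decision_dynamics, (0, 60), np.zeros(4), args=(qualities,))
print("final committed fractions:", np.round(sol.y[:, -1], 3))
```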
Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda
2015-01-01
To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of current selection and recruitment processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with 4 components: (1) literature review of selection and recruitment of newly qualified nurses; (2) literature review of a public sector profession's selection and recruitment processes; (3) survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) qualitative study about recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected using a survey instrument from thirty-one (n = 31) individuals in health providers in London who had responsibility for the selection and recruitment of newly qualified nurses. Of the providers who took part, six (n = 6) were purposively selected to be interviewed qualitatively. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Cariapa, Vikram
1993-01-01
The trend in the modern global economy towards free market policies has motivated companies to use rapid prototyping technologies not only to reduce product development cycle time but also to maintain their competitive edge. A rapid prototyping technology is one which combines computer-aided design with computer-controlled tracking of a focussed high-energy source (e.g., lasers, heat) on modern ceramic powders, metallic powders, plastics or photosensitive liquid resins in order to produce prototypes or models. At present, except for the process of shape melting, most rapid prototyping processes generate products that are only dimensionally similar to those of the desired end product. There is an urgent need, therefore, to enhance the understanding of the characteristics of these processes in order to realize their potential for production. Currently, the commercial market is dominated by four rapid prototyping processes, namely selective laser sintering, stereolithography, fused deposition modelling and laminated object manufacturing. This phase of the research has focussed on the selective laser sintering and stereolithography rapid prototyping processes. A theoretical model for these processes is under development. Different rapid prototyping sites supplied test specimens (based on ASTM 638-84, Type I) that have been measured and tested to provide a database on surface finish, dimensional variation and ultimate tensile strength. Further plans call for developing and verifying the theoretical models through carefully designed experiments. This will be a joint effort between NASA and other prototyping centers to generate a larger database, thus encouraging more widespread usage by product designers.
Bornhorst, Ellen R; Tang, Juming; Sablani, Shyam S; Barbosa-Cánovas, Gustavo V; Liu, Fang
2017-07-01
Development and selection of model foods is a critical part of microwave thermal process development, simulation validation, and optimization. Previously developed model foods for pasteurization process evaluation utilized Maillard reaction products as the time-temperature integrators, which resulted in similar temperature sensitivity among the models. The aim of this research was to develop additional model foods based on different time-temperature integrators, determine their dielectric properties and color change kinetics, and validate the optimal model food in hot water and microwave-assisted pasteurization processes. Color, quantified using the a* value, was selected as the time-temperature indicator for green pea and garlic puree model foods. Results showed 915 MHz microwaves had a greater penetration depth into the green pea model food than the garlic. a* value reaction rates for the green pea model were approximately 4 times slower than in the garlic model food; slower reaction rates were preferred for the application of model food in this study, that is, quality evaluation for a target process of 90 °C for 10 min at the cold spot. Pasteurization validation used the green pea model food, and results showed that there were quantifiable differences between the color of the unheated control, hot water pasteurization, and microwave-assisted thermal pasteurization system. Both model foods developed in this research could be utilized for quality assessment and optimization of various thermal pasteurization processes. © 2017 Institute of Food Technologists®.
Juliano, Pablo; Knoerzer, Kai; Fryer, Peter J; Versteeg, Cornelis
2009-01-01
High-pressure, high-temperature (HPHT) processing is effective for microbial spore inactivation using mild preheating, followed by rapid volumetric compression heating and cooling on pressure release, enabling much shorter processing times than conventional thermal processing for many food products. A computational thermal fluid dynamic (CTFD) model has been developed to model all processing steps, including the vertical pressure vessel, an internal polymeric carrier, and food packages in an axis-symmetric geometry. Heat transfer and fluid dynamic equations were coupled to four selected kinetic models for the inactivation of C. botulinum: the traditional first-order kinetic model, the Weibull model, an nth-order model, and a combined discrete log-linear nth-order model. The models were solved to compare the resulting microbial inactivation distributions. The initial temperature of the system was set to 90 °C and pressure was selected at 600 MPa, holding for 220 s, with a target temperature of 121 °C. A representation of the extent of microbial inactivation throughout all processing steps was obtained for each microbial model. Comparison of the models showed that the conventional thermal processing kinetics (not accounting for pressure) required shorter holding times to achieve a 12D reduction of C. botulinum spores than the other models. The temperature distribution inside the vessel resulted in a more uniform inactivation distribution when using a Weibull or an nth-order kinetics model than when using log-linear kinetics. The CTFD platform could illustrate the inactivation extent and uniformity provided by the microbial models. The platform is expected to be useful for evaluating models fitted to new C. botulinum inactivation data at varying conditions of pressure and temperature, as an aid for regulatory filing of the technology as well as in process and equipment design.
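For reference, the inactivation kinetics named above are commonly written in the following generic forms; the symbols and parameterizations are textbook conventions rather than the exact formulations fitted in that study.

```latex
% Common textbook forms of the named spore-inactivation kinetics
% (generic parameterizations, not necessarily those used in the study):
\begin{align}
  \log_{10}\frac{N(t)}{N_0} &= -\frac{t}{D_T}        && \text{(log-linear, first order)}\\
  \log_{10}\frac{N(t)}{N_0} &= -b\,t^{\,n}           && \text{(Weibull)}\\
  \frac{dN}{dt} &= -k\,N^{\,n}                       && \text{($n$th-order)}
\end{align}
```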
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; de Azevedo Mello, Paola; Ferrão, Marco Flores; de Fátima Pereira dos Santos, Maria; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectra divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm⁻¹). This model produced an RMSECV of 400 mg kg⁻¹ S and an RMSEP of 420 mg kg⁻¹ S, showing a correlation coefficient of 0.990. Copyright © 2011 Elsevier B.V. All rights reserved.
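A minimal sketch of interval-based variable selection in the spirit of iPLS/siPLS is shown below, using scikit-learn's PLSRegression: the spectrum is split into 20 intervals and the best combination of 3 intervals is picked by cross-validated RMSE. The spectral grid and response data are synthetic stand-ins, not the FT-IR/ATR data of the study.

```python
# Sketch of interval-based PLS variable selection in the spirit of iPLS/siPLS
# (synthetic spectra and response; not the study's data or exact algorithm).
import numpy as np
from itertools import combinations
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 200))                          # 40 spectra x 200 channels
y = X[:, 150:170].sum(axis=1) + rng.normal(0, 0.1, 40)  # "sulfur" depends on one region

intervals = np.array_split(np.arange(X.shape[1]), 20)   # split spectrum into 20 intervals

def rmsecv(cols):
    pls = PLSRegression(n_components=3)
    scores = cross_val_score(pls, X[:, cols], y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

# siPLS-style search: best combination of 3 intervals by cross-validated RMSE
best = min(combinations(range(20), 3),
           key=lambda c: rmsecv(np.concatenate([intervals[i] for i in c])))
print("selected intervals:", best)
```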
Dotan, Dror; Friedmann, Naama
2018-04-01
We propose a detailed cognitive model of multi-digit number reading. The model postulates separate processes for visual analysis of the digit string and for oral production of the verbal number. Within visual analysis, separate sub-processes encode the digit identities and the digit order, and additional sub-processes encode the number's decimal structure: its length, the positions of 0, and the way it is parsed into triplets (e.g., 314987 → 314,987). Verbal production consists of a process that generates the verbal structure of the number, and another process that retrieves the phonological forms of each number word. The verbal number structure is first encoded in a tree-like structure, similarly to syntactic trees of sentences, and then linearized to a sequence of number-word specifiers. This model is based on an investigation of the number processing abilities of seven individuals with different selective deficits in number reading. We report participants with impairment in specific sub-processes of the visual analysis of digit strings - in encoding the digit order, in encoding the number length, or in parsing the digit string to triplets. Other participants were impaired in verbal production, making errors in the number structure (shifts of digits to another decimal position, e.g., 3,040 → 30,004). Their selective deficits yielded several dissociations: first, we found a double dissociation between visual analysis deficits and verbal production deficits. Second, several dissociations were found within visual analysis: a double dissociation between errors in digit order and errors in the number length; a dissociation between order/length errors and errors in parsing the digit string into triplets; and a dissociation between the processing of different digits - impaired order encoding of the digits 2-9, without errors in the 0 position. Third, within verbal production, a dissociation was found between digit shifts and substitutions of number words. A selective deficit in any of the processes described by the model would cause difficulties in number reading, which we propose to term "dysnumeria". Copyright © 2017 Elsevier Ltd. All rights reserved.
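As a tiny illustration of the triplet-parsing sub-process mentioned above (e.g., 314987 → 314,987), the snippet below groups a digit string into triplets from the right. It is purely illustrative and is not part of the cognitive model itself.

```python
# Illustration of the triplet-parsing sub-process, e.g. "314987" -> "314,987"
# (illustrative only; not a component of the cognitive model).
def parse_triplets(digits: str) -> str:
    groups = []
    while digits:
        groups.append(digits[-3:])   # peel off triplets from the right
        digits = digits[:-3]
    return ",".join(reversed(groups))

assert parse_triplets("314987") == "314,987"
assert parse_triplets("3040") == "3,040"
```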
A randomised approach for NARX model identification based on a multivariate Bernoulli distribution
NASA Astrophysics Data System (ADS)
Bianchi, F.; Falsone, A.; Prandini, M.; Piroddi, L.
2017-04-01
The identification of polynomial NARX models is typically performed by incremental model building techniques. These methods assess the importance of each regressor based on the evaluation of partial individual models, which may ultimately lead to erroneous model selections. A more robust assessment of the significance of a specific model term can be obtained by considering ensembles of models, as done by the RaMSS algorithm. In that context, the identification task is formulated in a probabilistic fashion and a Bernoulli distribution is employed to represent the probability that a regressor belongs to the target model. Then, samples of the model distribution are collected to gather reliable information to update it, until convergence to a specific model. The basic RaMSS algorithm employs multiple independent univariate Bernoulli distributions associated with the different candidate model terms, thus overlooking the correlations between different terms, which are typically important in the selection process. Here, a multivariate Bernoulli distribution is employed, in which the sampling of a given term is conditioned on the sampling of the others. The added complexity inherent in considering the regressor correlation properties is more than compensated by the achievable improvements in terms of accuracy of the model selection process.
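The following sketch shows one crude way to sample candidate NARX term subsets when the inclusion of a term is conditioned on the terms already sampled, as a stand-in for the multivariate Bernoulli idea described above. The coupling rule, probabilities and sizes are invented for illustration and do not reproduce the paper's update equations.

```python
# Sketch of model-term sampling with correlated inclusion probabilities
# (a crude stand-in for the multivariate Bernoulli distribution; the coupling
#  rule and all numbers are invented for illustration).
import numpy as np

rng = np.random.default_rng(0)
n_terms = 8
mu = np.full(n_terms, 0.5)                                      # marginal inclusion probabilities
corr = 0.2 * (np.ones((n_terms, n_terms)) - np.eye(n_terms))    # pairwise coupling strengths

def sample_model():
    included = np.zeros(n_terms, dtype=bool)
    for j in rng.permutation(n_terms):          # inclusion of term j conditioned on others
        shift = corr[j, included].sum() - corr[j, ~included].sum()
        p = np.clip(mu[j] + shift, 0.01, 0.99)
        included[j] = rng.random() < p
    return included

models = [sample_model() for _ in range(100)]    # ensemble of candidate NARX structures
print("empirical inclusion frequencies:", np.mean(models, axis=0))
```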
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law (α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Bocedi, Greta; Reid, Jane M
2015-01-01
Explaining the evolution and maintenance of polyandry remains a key challenge in evolutionary ecology. One appealing explanation is the sexually selected sperm (SSS) hypothesis, which proposes that polyandry evolves due to indirect selection stemming from positive genetic covariance with male fertilization efficiency, and hence with a male's success in postcopulatory competition for paternity. However, the SSS hypothesis relies on verbal analogy with “sexy-son” models explaining coevolution of female preferences for male displays, and explicit models that validate the basic SSS principle are surprisingly lacking. We developed analogous genetically explicit individual-based models describing the SSS and “sexy-son” processes. We show that the analogy between the two is only partly valid, such that the genetic correlation arising between polyandry and fertilization efficiency is generally smaller than that arising between preference and display, resulting in less reliable coevolution. Importantly, indirect selection was too weak to cause polyandry to evolve in the presence of negative direct selection. Negatively biased mutations on fertilization efficiency did not generally rescue runaway evolution of polyandry unless realized fertilization was highly skewed toward a single male, and coevolution was even weaker given random mating order effects on fertilization. Our models suggest that the SSS process is, on its own, unlikely to generally explain the evolution of polyandry. PMID:25330405
NASA Astrophysics Data System (ADS)
Krawczyk, Piotr; Badyda, Krzysztof
2011-12-01
The paper presents key assumptions of the mathematical model which describes heat and mass transfer phenomena in a solar sewage drying process, as well as techniques used for solving this model with the Fluent computational fluid dynamics (CFD) software. Special attention was paid to implementation of boundary conditions on the sludge surface, which is a physical boundary between the gaseous phase - air, and solid phase - dried matter. Those conditions allow to model heat and mass transfer between the media during first and second drying stages. Selection of the computational geometry is also discussed - it is a fragment of the entire drying facility. Selected modelling results are presented in the final part of the paper.
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling approach for product quality prediction of mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to tune model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
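A toy illustration of the minimum-entropy idea follows: choose the model hyperparameter whose modeling-error distribution (estimated with a kernel density) has the lowest entropy, i.e., the most concentrated errors. The model below is a generic kernel ridge regressor on synthetic data, not the authors' LS-SVM implementation, and the grid of hyperparameters is arbitrary.

```python
# Toy illustration of minimum-entropy model parameter selection:
# pick the hyperparameter whose modeling-error PDF has the lowest entropy.
# (Synthetic data and a generic regressor; not the authors' LS-SVM code.)
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 200)

def error_entropy(alpha):
    model = KernelRidge(kernel="rbf", alpha=alpha, gamma=1.0).fit(X, y)
    err = y - model.predict(X)
    kde = gaussian_kde(err)                        # estimate the error PDF
    grid = np.linspace(err.min(), err.max(), 200)
    dx = grid[1] - grid[0]
    p = kde(grid)
    p /= p.sum() * dx                              # normalise the density on the grid
    return -np.sum(p * np.log(p + 1e-12)) * dx     # differential entropy (Riemann sum)

alphas = [1e-3, 1e-2, 1e-1, 1.0]
best = min(alphas, key=error_entropy)              # smallest-entropy (most peaked) errors
print("selected alpha:", best)
```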
NASA Astrophysics Data System (ADS)
Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.
2013-12-01
The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to which degree the posterior mean outperforms the prior mean and all individual posterior models, how informative the data types were for reducing prediction uncertainty of evapotranspiration and deep drainage, and how well the model structure can be identified based on the different data types and subsets. We further analyze the impact of measurement uncertainty and systematic model errors on the effective sample size of the BF and the resulting model weights.
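A small numerical illustration of BMA weights is given below: each model's evidence is approximated by the average likelihood over samples from its parameter prior, and the evidences are normalised into posterior model weights. The two stand-in models and all numbers are invented; this is not the CERES/SUCROS/GECROS/SPASS setup or the Bootstrap Filter conditioning used in the study.

```python
# Minimal BMA-weight illustration (stand-in models, not the study's crop models):
# approximate each model's evidence by Monte Carlo over its parameter prior,
# then normalise the evidences into posterior model weights.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
obs = 2.0 * t + rng.normal(0, 0.1, t.size)          # synthetic "measurements"
sigma = 0.1

models = {
    "linear":    lambda p: p[0] * t,
    "quadratic": lambda p: p[0] * t + p[1] * t**2,
}

def log_evidence(model, n_dim, n_samples=5000):
    params = rng.normal(0, 2, size=(n_samples, n_dim))        # prior samples
    ll = np.array([-0.5 * np.sum((obs - model(p)) ** 2) / sigma**2 for p in params])
    return np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()  # log mean likelihood

logZ = np.array([log_evidence(models["linear"], 1),
                 log_evidence(models["quadratic"], 2)])
weights = np.exp(logZ - logZ.max())
weights /= weights.sum()
print("posterior model weights:", weights)
```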
SSL: A Theory of How People Learn to Select Strategies
ERIC Educational Resources Information Center
Rieskamp, Jorg; Otto, Philipp E.
2006-01-01
The assumption that people possess a repertoire of strategies to solve the inference problems they face has been raised repeatedly. However, a computational model specifying how people select strategies from their repertoire is still lacking. The proposed strategy selection learning (SSL) theory predicts a strategy selection process on the basis…
Discrimination of correlated and entangling quantum channels with selective process tomography
Dumitrescu, Eugene; Humble, Travis S.
2016-10-10
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
Computer modeling of lung cancer diagnosis-to-treatment process
Ju, Feng; Lee, Hyo Kyung; Osarogiagbon, Raymond U.; Yu, Xinhua; Faris, Nick
2015-01-01
We introduce an example of a rigorous, quantitative method for quality improvement in lung cancer care-delivery. Computer process modeling methods are introduced for lung cancer diagnosis, staging and treatment selection process. Two types of process modeling techniques, discrete event simulation (DES) and analytical models, are briefly reviewed. Recent developments in DES are outlined and the necessary data and procedures to develop a DES model for lung cancer diagnosis, leading up to surgical treatment process are summarized. The analytical models include both Markov chain model and closed formulas. The Markov chain models with its application in healthcare are introduced and the approach to derive a lung cancer diagnosis process model is presented. Similarly, the procedure to derive closed formulas evaluating the diagnosis process performance is outlined. Finally, the pros and cons of these methods are discussed. PMID:26380181
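To make the Markov-chain style of process model concrete, the sketch below propagates a patient cohort through a small chain of care-delivery states. The states and transition probabilities are made up for illustration and are not taken from the paper.

```python
# Small Markov-chain illustration of a diagnosis-to-treatment process
# (states and transition probabilities are made up for illustration).
import numpy as np

states = ["referral", "diagnosis", "staging", "treatment", "lost_to_follow_up"]
P = np.array([
    [0.0, 0.85, 0.00, 0.00, 0.15],   # referral
    [0.0, 0.00, 0.80, 0.00, 0.20],   # diagnosis
    [0.0, 0.00, 0.00, 0.90, 0.10],   # staging
    [0.0, 0.00, 0.00, 1.00, 0.00],   # treatment (absorbing)
    [0.0, 0.00, 0.00, 0.00, 1.00],   # lost to follow-up (absorbing)
])

dist = np.array([1.0, 0, 0, 0, 0])    # everyone starts at referral
for _ in range(10):                   # propagate the cohort through 10 steps
    dist = dist @ P
print(dict(zip(states, dist.round(3))))   # fraction reaching treatment vs lost
```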
NASA Astrophysics Data System (ADS)
Peng, Hong-Gang; Wang, Jian-Qiang
2017-11-01
In recent years, sustainable energy crops have become an important energy development strategy topic in many countries. Selecting the most sustainable energy crop is a significant problem that must be addressed during any biofuel production process. The focus of this study is the development of an innovative multi-criteria decision-making (MCDM) method to handle sustainable energy crop selection problems. Given that various uncertain data are encountered in the evaluation of sustainable energy crops, linguistic intuitionistic fuzzy numbers (LIFNs) are introduced to represent the information necessary to the evaluation process. Processing qualitative concepts requires the effective support of reliable tools; a cloud model can then be used to deal with linguistic intuitionistic information. First, LIFNs are converted and a novel concept of the linguistic intuitionistic cloud (LIC) is proposed. The operations, score function and similarity measurement of the LICs are defined. Subsequently, the linguistic intuitionistic cloud density-prioritised weighted Heronian mean operator is developed, which serves as the basis for the construction of an applicable MCDM model for sustainable energy crop selection. Finally, an illustrative example is provided to demonstrate the proposed method, and its feasibility and validity are further verified by comparing it with other existing methods.
Modeling of sorption processes on solid-phase ion-exchangers
NASA Astrophysics Data System (ADS)
Dorofeeva, Ludmila; Kuan, Nguyen Anh
2018-03-01
Research on the separation of alkaline elements on solid-phase ion-exchangers was carried out to determine the selectivity coefficients and the height of an equivalent theoretical stage for both continuous and stepwise filling of the column with ionite. On inorganic selective sorbents, an increase in the isotope enrichment factor of up to 0.0127 was obtained. In addition, parametric models were obtained that adequately describe the dependence of the pressure difference and of the expansion of the ion-exchange layer on flow rate and temperature. Depending on the type of selective material, the concentration factor under optimal process conditions varies in the range 1.021-1.092. The calculated results show agreement with experimental data.
ERIC Educational Resources Information Center
Melinger, Alissa; Rahman, Rasha Abdel
2013-01-01
In this study, we present 3 picture-word interference (PWI) experiments designed to investigate whether lexical selection processes are competitive. We focus on semantic associative relations, which should interfere according to competitive models but not according to certain noncompetitive models. In a modified version of the PWI paradigm,…
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2005-01-01
A system and method for monitoring an apparatus or process asset including creating a process model comprised of a plurality of process submodels each correlative to at least one training data subset partitioned from an unpartitioned training data set and each having an operating mode associated thereto; acquiring a set of observed signal data values from the asset; determining an operating mode of the asset for the set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a set of estimated signal data values from the selected process submodel for the determined operating mode; and determining asset status as a function of the calculated set of estimated signal data values for providing asset surveillance and/or control.
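The mode-partitioned estimation idea in the abstract above can be sketched as follows: partition training data by operating mode, fit one submodel per mode, pick the submodel matching the observed mode at run time, and flag the asset when the observed signal deviates from the submodel's estimate. Everything below (signals, modes, the linear estimator, the alarm threshold) is invented for illustration, not the patented implementation.

```python
# Sketch of the mode-partitioned surveillance idea described above
# (synthetic signals; a simple per-mode linear estimator chosen for brevity).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
mode = rng.integers(0, 2, 500)                        # operating mode label per sample
x = rng.normal(size=(500, 3))
y = np.where(mode == 0, x @ [1, 0, 0], x @ [0, 2, 1]) + rng.normal(0, 0.05, 500)

# one submodel per operating mode, trained on its partition of the training data
submodels = {m: LinearRegression().fit(x[mode == m], y[mode == m]) for m in (0, 1)}

def surveil(x_obs, mode_obs, y_obs, threshold=0.3):
    y_est = submodels[mode_obs].predict(x_obs.reshape(1, -1))[0]    # estimated signal value
    return "alarm" if abs(y_obs - y_est) > threshold else "normal"  # asset status

print(surveil(np.array([0.5, -1.0, 0.2]), mode_obs=1, y_obs=5.0))   # large residual -> "alarm"
```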
Relapse Model among Iranian Drug Users: A Qualitative Study.
Jalali, Amir; Seyedfatemi, Naiemeh; Peyrovi, Hamid
2015-01-01
Relapse is a common problem in drug users' rehabilitation programs and is reported throughout the country. An in-depth study of patients' experiences can be used to explore the relapse process among drug users. Therefore, this study proposes a model of the relapse process among Iranian drug users. In this qualitative study with a grounded theory approach, 22 participants with rich information about the phenomenon under study were selected using purposive, snowball and theoretical sampling methods. After obtaining informed consent, data were collected through face-to-face, in-depth, semi-structured interviews. All interviews were analyzed in three coding stages: axial, selective and open coding. Nine main categories emerged, including avoidance of drugs, concerns about being accepted, family atmosphere, social conditions, mental challenge, self-management, self-deception, use and remorse, together with a main category, feeling of loss, as the core variable. Mental challenge has two subcategories: evoking pleasure and craving. The relapse model is a dynamic and systematic process extending from cycles of drug avoidance to remorse, with feeling of loss as the core variable. The relapse process is dynamic and systematic and needs effective control. Determining the relapse model as a clear process could be helpful in clinical sessions. The results of this research depict the relapse process among Iranian drug users as a conceptual model.
Personalized Offline and Pseudo-Online BCI Models to Detect Pedaling Intent
Rodríguez-Ugarte, Marisol; Iáñez, Eduardo; Ortíz, Mario; Azorín, Jose M.
2017-01-01
The aim of this work was to design a personalized BCI model to detect pedaling intention through EEG signals. The approach sought to select the best among many possible BCI models for each subject. The choice was between different processing windows, feature extraction algorithms and electrode configurations. Moreover, data was analyzed offline and pseudo-online (in a way suitable for real-time applications), with a preference for the latter case. A process for selecting the best BCI model was described in detail. Results for the pseudo-online processing with the best BCI model of each subject were on average 76.7% of true positive rate, 4.94 false positives per minute and 55.1% of accuracy. The personalized BCI model approach was also found to be significantly advantageous when compared to the typical approach of using a fixed feature extraction algorithm and electrode configuration. The resulting approach could be used to more robustly interface with lower limb exoskeletons in the context of the rehabilitation of stroke patients. PMID:28744212
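One way to realize the per-subject model selection described above is a simple grid search over candidate processing windows and electrode subsets scored by cross-validation, as sketched below. The EEG-like data, the log-variance feature extractor and the candidate grid are invented stand-ins, not the authors' pipeline.

```python
# Sketch of per-subject BCI model selection over candidate windows and electrodes
# (synthetic EEG-like data; not the authors' pipeline).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
eeg = rng.normal(size=(200, 8, 500))           # trials x channels x samples
labels = rng.integers(0, 2, 200)               # pedaling intent vs rest (toy labels)

def log_variance(x, win):                      # feature extractor over a processing window
    return np.log(x[:, :, :win].var(axis=2))

candidates = [(win, chans) for win in (125, 250, 500)
              for chans in ([0, 1, 2, 3], list(range(8)))]

def score(cand):
    win, chans = cand
    feats = log_variance(eeg[:, chans, :], win)
    return cross_val_score(SVC(kernel="linear"), feats, labels, cv=5).mean()

best = max(candidates, key=score)              # personalised window / electrode configuration
print("selected window, channels:", best)
```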
Model Identification of Integrated ARMA Processes
ERIC Educational Resources Information Center
Stadnytska, Tetiana; Braun, Simone; Werner, Joachim
2008-01-01
This article evaluates the Smallest Canonical Correlation Method (SCAN) and the Extended Sample Autocorrelation Function (ESACF), automated methods for the Autoregressive Integrated Moving-Average (ARIMA) model selection commonly available in current versions of SAS for Windows, as identification tools for integrated processes. SCAN and ESACF can…
Administrative Decision Making and Resource Allocation.
ERIC Educational Resources Information Center
Sardy, Susan; Sardy, Hyman
This paper considers selected aspects of the systems analysis of administrative decisionmaking regarding resource allocations in an educational system. A model of the instructional materials purchase system is presented. The major components of this model are: environment, input, decision process, conversion structure, conversion process, output,…
Processing demands in belief-desire reasoning: inhibition or general difficulty?
Friedman, Ori; Leslie, Alan M
2005-05-01
Most 4-year-olds can predict the behavior of a person who wants an object but is mistaken about its location. More difficult is predicting behavior when the person is mistaken about location and wants to avoid the object. We tested between two explanations for children's difficulties with avoidance false belief: the Selection Processing model of inhibitory processing and a General Difficulty account. Children were presented with a false belief task and a control task, in which belief attribution was as difficult as in the false belief task. Predicting behavior in light of the character's desire to avoid the object added more difficulty in the false belief task. This finding is consistent with the Selection Processing model, but not with the General Difficulty account.
NASA Astrophysics Data System (ADS)
Zhang, Zhifen; Chen, Huabin; Xu, Yanling; Zhong, Jiyong; Lv, Na; Chen, Shanben
2015-08-01
Multisensory data fusion-based online welding quality monitoring has gained increasing attention in intelligent welding processes. This paper mainly focuses on the automatic detection of typical welding defects for Al alloys in gas tungsten arc welding (GTAW) by means of analyzing arc spectrum, sound and voltage signals. Based on the developed algorithms in the time and frequency domains, 41 feature parameters were successively extracted from these signals to characterize the welding process and seam quality. Then, the proposed feature selection approach, i.e., a hybrid Fisher-based filter and wrapper, was successfully utilized to evaluate the sensitivity of each feature and reduce the feature dimensions. Finally, an optimal feature subset with 19 features was selected to obtain the highest accuracy, i.e., 94.72%, using the established classification model. This study provides a guideline for feature extraction, selection and dynamic modeling based on heterogeneous multisensory data to achieve a reliable online defect detection system in arc welding.
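A minimal illustration of a Fisher-score filter followed by a wrapper step, in the spirit of the hybrid selection described above, is sketched below. The 41-feature data, the injected informative features and the classifier are synthetic stand-ins, not the welding dataset or the authors' exact procedure.

```python
# Minimal Fisher-score filter + wrapper feature selection, in the spirit of the
# hybrid approach described above (synthetic data, generic classifier).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 41))                        # 41 candidate features
y = rng.integers(0, 2, 300)
X[:, 5] += 1.5 * y                                    # make a few features informative
X[:, 12] += 1.0 * y

def fisher_score(x, y):
    m0, m1 = x[y == 0].mean(), x[y == 1].mean()
    v0, v1 = x[y == 0].var(), x[y == 1].var()
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

ranking = np.argsort([-fisher_score(X[:, j], y) for j in range(X.shape[1])])  # filter stage

def cv_acc(k):                                        # wrapper stage: evaluate top-k subsets
    cols = ranking[:k]
    return cross_val_score(SVC(kernel="rbf"), X[:, cols], y, cv=5).mean()

best_k = max(range(1, 30), key=cv_acc)
print("optimal subset size:", best_k)
```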
Furlanello, Cesare; Serafini, Maria; Merler, Stefano; Jurman, Giuseppe
2003-11-06
We describe the E-RFE method for gene ranking, which is useful for the identification of markers in the predictive classification of array data. The method supports a practical modeling scheme designed to avoid the construction of classification rules based on the selection of too small gene subsets (an effect known as the selection bias, in which the estimated predictive errors are too optimistic due to testing on samples already considered in the feature selection process). With E-RFE, we speed up the recursive feature elimination (RFE) with SVM classifiers by eliminating chunks of uninteresting genes using an entropy measure of the SVM weights distribution. An optimal subset of genes is selected according to a two-strata model evaluation procedure: modeling is replicated by an external stratified-partition resampling scheme, and, within each run, an internal K-fold cross-validation is used for E-RFE ranking. Also, the optimal number of genes can be estimated according to the saturation of Zipf's law profiles. Without a decrease of classification accuracy, E-RFE allows a speed-up factor of 100 with respect to standard RFE, while improving on alternative parametric RFE reduction strategies. Thus, a process for gene selection and error estimation is made practical, ensuring control of the selection bias, and providing additional diagnostic indicators of gene importance.
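For context, the baseline procedure that E-RFE accelerates, recursive feature elimination with a linear SVM, can be run with scikit-learn as shown below. This sketch uses synthetic data and does not reproduce the entropy-based chunk elimination or the two-strata evaluation scheme of the paper.

```python
# Baseline SVM-RFE illustration using scikit-learn (the standard RFE that E-RFE
# speeds up; the entropy-based chunk elimination itself is not reproduced here).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# synthetic "array data": few samples, many features
X, y = make_classification(n_samples=60, n_features=500, n_informative=10, random_state=0)

svc = SVC(kernel="linear", C=1.0)
rfe = RFE(estimator=svc, n_features_to_select=20, step=0.1)   # drop 10% of features per iteration
rfe.fit(X, y)

selected = np.where(rfe.support_)[0]
print("selected feature indices:", selected)
```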
Enviro-HIRLAM Applicability for Black Carbon Studies in Arctic
NASA Astrophysics Data System (ADS)
Nuterman, Roman; Mahura, Alexander; Baklanov, Alexander; Kurganskiy, Alexander; Amstrup, Bjarne; Kaas, Eigil
2015-04-01
One of the main aims of the Nordic CarboNord project ("Impact of black carbon on air quality and climate in Northern Europe and Arctic") is to provide new information on the distribution and effects of black carbon in Northern Europe and the Arctic. This can be done by assessing the robustness of model predictions of long-range black carbon distribution and its relation to climate change and forcing. In our study, the online integrated meteorology-chemistry/aerosol model - Enviro-HIRLAM (Environment - HIgh Resolution Limited Area Model) - is used. This study focuses, first, on the adaptation of the Enviro-HIRLAM model (model setup, domain for the Northern Hemisphere and Arctic region, emissions, boundary conditions, refined aerosol microphysics and chemistry, cloud-aerosol interaction processes) and on the selection of the most unfavorable weather and air pollution episodes for the Arctic region. Simulations of interactions between black carbon and meteorological processes in northern conditions will be performed for the selected episodes (on DMI's HPC CRAY-XT5 supercomputer), followed by long-term simulations at regional scale for selected winter and summer months. Modelling results will be compared on diurnal and monthly bases against observations for key meteorological parameters (such as air temperature, wind speed, relative humidity, and precipitation) as well as aerosol concentration. Finally, black carbon atmospheric transport, dispersion, and deposition patterns at different spatio-temporal scales, physical-chemical processes and transformations of black-carbon-containing aerosols, and the interactions and effects between black carbon and meteorological processes in Arctic weather conditions will be evaluated.
Thermo-optical Modelling of Laser Matter Interactions in Selective Laser Melting Processes.
NASA Astrophysics Data System (ADS)
Vinnakota, Raj; Genov, Dentcho
Selective laser melting (SLM) is one of the promising advanced manufacturing techniques, providing an ideal platform for manufacturing components with zero geometric constraints. Coupling the electromagnetic and thermodynamic processes involved in SLM, and developing a comprehensive theoretical model of them, is of great importance, since it can provide significant improvements in the printing process by revealing the optimal parametric space of applied laser power, scan velocity, powder material, layer thickness and porosity. Here, we present a self-consistent thermo-optical model which simultaneously solves Maxwell's equations and the heat transfer equation, providing insight into the electromagnetic energy released in the powder bed and the concurrent thermodynamics of particle temperature rise and the onset of melting. The numerical calculations are compared with a developed analytical model of the SLM process, providing insight into the dynamics between laser-facilitated Joule heating and the radiation-mitigated rise in temperature. These results provide guidelines toward improved energy efficiency and optimization of SLM process scan rates. The current work is funded by the NSF EPSCoR CIMM project under award #OIA-1541079.
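For orientation, such coupled thermo-optical models typically take the generic form below: a frequency-domain wave equation for the laser field and a heat equation whose source is the time-averaged Joule heating. The symbols are generic and this is not necessarily the authors' exact formulation.

```latex
% Generic coupled form of such thermo-optical models (symbols generic;
% not necessarily the authors' exact formulation):
\begin{align}
  \nabla \times \nabla \times \mathbf{E} - k_0^2\,\varepsilon(\mathbf{r},T)\,\mathbf{E} &= 0,\\
  \rho c_p \frac{\partial T}{\partial t} - \nabla\cdot\bigl(\kappa \nabla T\bigr) &= Q_{\mathrm{EM}},
  \qquad Q_{\mathrm{EM}} = \tfrac{1}{2}\,\omega\,\varepsilon_0\,\mathrm{Im}\,\varepsilon\,\lvert\mathbf{E}\rvert^{2}.
\end{align}
```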
Christidi, Foteini; Zalonis, Ioannis; Smyrnis, Nikolaos; Evdokimidis, Ioannis
2012-09-01
The present study investigates selective attention and verbal free recall in amyotrophic lateral sclerosis (ALS) and examines the contribution of selective attention, encoding, consolidation, and retrieval memory processes to patients' verbal free recall. We examined 22 non-demented patients with sporadic ALS and 22 demographically related controls using Stroop Neuropsychological Screening Test (SNST; selective attention) and Rey Auditory Verbal Learning Test (RAVLT; immediate & delayed verbal free recall). The item-specific deficit approach (ISDA) was applied to RAVLT to evaluate encoding, consolidation, and retrieval difficulties. ALS patients performed worse than controls on SNST (p < .001) and RAVLT immediate and delayed recall (p < .001) and showed deficient encoding (p = .001) and consolidation (p = .002) but not retrieval (p = .405). Hierarchical regression analysis revealed that SNST and ISDA indices accounted for: (a) 91.1% of the variance in RAVLT immediate recall, with encoding (p = .016), consolidation (p < .001), and retrieval (p = .032) significantly contributing to the overall model and the SNST alone accounting for 41.6%; and (b) 85.2% of the variance in RAVLT delayed recall, with consolidation (p < .001) and retrieval (p = .008) significantly contributing to the overall model and the SNST alone accounting for 39.8%. Thus, selective attention, encoding, and consolidation, and to a lesser extent of retrieval, influenced both immediate and delayed verbal free recall. Concluding, selective attention and the memory processes of encoding, consolidation, and retrieval should be considered while interpreting patients' impaired free recall. (JINS, 2012, 18, 1-10).
Bayesian Model Selection under Time Constraints
NASA Astrophysics Data System (ADS)
Hoege, M.; Nowak, W.; Illman, W. A.
2017-12-01
Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. However, in practice, the runtime of models is another factor relevant to model weighting and selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue from the fact that more expensive models can be sampled far less often under time constraints than faster models (in direct proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
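The sketch below shows the ingredient the abstract builds on: a Monte Carlo estimate of Bayesian model evidence from prior samples together with a bootstrap standard error, which is larger when a slow model can only be run a few times. The toy likelihood and sample counts are invented, and the exact runtime-aware re-weighting rule of the paper is not reproduced.

```python
# Sketch: Monte Carlo estimate of Bayesian model evidence with a bootstrap
# standard error (few runs -> larger error bar); the paper's re-weighting rule
# itself is not reproduced here. Toy likelihood and numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
obs, sigma = 1.3, 0.5

def likelihood(theta):                         # toy model: predicts the observation directly
    return np.exp(-0.5 * ((obs - theta) / sigma) ** 2)

def bme_with_error(n_samples, n_boot=500):
    L = likelihood(rng.normal(0, 1, n_samples))          # prior samples -> likelihoods
    bme = L.mean()                                        # Monte Carlo evidence estimate
    boot = np.array([rng.choice(L, L.size).mean() for _ in range(n_boot)])
    return bme, boot.std()                                # estimate and its sampling error

print("fast model (10000 runs):", bme_with_error(10000))
print("slow model (   50 runs):", bme_with_error(50))     # fewer runs -> larger error bar
```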
Dion, Kenneth L
2004-07-01
The issues of interpersonal and group processes in long-term spacecrews from the perspectives of social and organizational psychology are considered here. A contrast between the Amundsen vs. Scott expeditions to the South Pole 90 yrs. ago highlights the importance of personnel selection and attention to interpersonal and group dynamics in expeditions to extreme and dangerous environments, such as long-term spaceflights today. Under the rubric of personnel selection, some further psychological "select-in" and "select-out" criteria are suggested, among them implicit measures of human motivation, intergroup attitudes ("implicit" and "explicit" measures of prejudice, social dominance orientation, and right-wing authoritarianism), attachment styles, and dispositional hardiness. The situational interview and the idea of "selection for teams," drawn from current advances in organizational psychology, are recommended for selecting members for future spacecrews. Under the rubrics of interpersonal and group processes, the social relations model is introduced as a technique for modeling and understanding interdependence among spacecrew members and partialling out variance in behavioral and perceptual data into actor/perceiver, partner/target, and relationship components. Group cohesion as a multidimensional construct is introduced, along with a consideration of the groupthink phenomenon and its controversial link to cohesion. Group composition issues are raised with examples concerning cultural heterogeneity and gender composition. Cultural value dimensions, especially power distance and individual-collectivism, should be taken into account at both societal and psychological levels in long-term space missions. Finally, intergroup processes and language issues in crews are addressed. The recategorization induction from the common ingroup identity model is recommended as a possible intervention for overcoming and inhibiting intergroup biases within spacecrews and between space- and groundcrews.
Efficient spiking neural network model of pattern motion selectivity in visual cortex.
Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L
2014-07-01
Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
NASA Astrophysics Data System (ADS)
Olchev, A. V.; Rozinkina, I. A.; Kuzmina, E. V.; Nikitin, M. A.; Rivin, G. S.
2018-01-01
This modeling study intends to estimate the possible influence of forest cover change on regional weather conditions using the non-hydrostatic model COSMO. The central part of the East European Plain was selected as the ‘model region’ for the study. The results of numerical experiments conducted for the warm period of 2010 for the modeling domain covering almost the whole East European Plain showed that deforestation and afforestation processes within the selected model region, with an area of about 10⁵ km², can lead to significant changes in regional weather conditions. The deforestation processes resulted in an increase in air temperature and a reduction in precipitation. The afforestation processes can produce the opposite effects, manifested in decreased air temperature and increased precipitation. Whereas the change in air temperature is observed mainly inside the model region, the changes in precipitation are evident across the entire East European Plain, even in regions situated far from the external boundaries of the model region.
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
NASA Astrophysics Data System (ADS)
Broderick, Scott R.; Santhanam, Ganesh Ram; Rajan, Krishna
2016-08-01
As the size of databases has significantly increased, whether through high throughput computation or through informatics-based modeling, the challenge of selecting the optimal material for specific design requirements has also arisen. Given the multiple, and often conflicting, design requirements, this selection process is not as trivial as sorting the database for a given property value. We suggest that the materials selection process should minimize selector bias, as well as take data uncertainty into account. For this reason, we discuss and apply decision theory for identifying chemical additions to Ni-base alloys. We demonstrate and compare results for both a computational array of chemistries and standard commercial superalloys. We demonstrate how we can use decision theory to select the best chemical additions for enhancing both property and processing, which would not otherwise be easily identifiable. This work is one of the first examples of introducing the mathematical framework of set theory and decision analysis into the domain of the materials selection process.
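As a minimal illustration of the multi-criteria selection step discussed above, the snippet below applies a Pareto-dominance filter to a toy table of candidate chemical additions with two conflicting objectives. The candidate names and scores are invented, and this shows only the non-dominated-set step, not the full decision-theoretic ranking under uncertainty described in the paper.

```python
# Minimal Pareto-dominance filter over a toy table of candidate additions
# (invented numbers; shows only the non-dominated-set step, not the full
# decision-theoretic ranking under data uncertainty described above).
candidates = {"Re": (1.00, 0.20), "Ta": (0.80, 0.60), "W": (0.75, 0.55), "Mo": (0.60, 0.90)}
# objectives: (creep-strength score, processability score) -- both to maximise

def dominated(a, b):                          # does b dominate a?
    return all(bi >= ai for ai, bi in zip(a, b)) and any(bi > ai for ai, bi in zip(a, b))

pareto = [name for name, obj in candidates.items()
          if not any(dominated(obj, other) for other_name, other in candidates.items()
                     if other_name != name)]
print("non-dominated additions:", pareto)     # "W" is dominated by "Ta"; the rest remain
```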
A neuromorphic VLSI device for implementing 2-D selective attention systems.
Indiveri, G
2001-01-01
Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding limited-processing-capacity systems with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top-of-the-atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). The focus is on the uncertainty in aerosol model selection among pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models to such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machines, artificial neural networks and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that yield accurate predictions valid only for the data used and that are too complex to make inferences about the underlying process.
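A rough sketch of the artificial-spectra idea follows: fit models of increasing complexity both to the real spectra and to noise spectra of the same dimensions, and treat the apparent skill on pure noise as a relative overfitting indicator. The index below is a simplification for illustration, not the published NOIS formula, and the data are synthetic.

```python
# Rough sketch of the NOIS idea: compare skill on real spectra against apparent
# skill on artificial noise spectra as model complexity grows.
# (Simplified index for illustration; not the published NOIS formula.)
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, p = 50, 300
X_real = rng.normal(size=(n, p))
y = X_real[:, :5].sum(axis=1) + rng.normal(0, 0.3, n)
X_noise = rng.normal(size=(n, p))                 # artificial spectra: no real signal

def r2_train(X, y, n_comp):
    pls = PLSRegression(n_components=n_comp).fit(X, y)
    pred = pls.predict(X).ravel()
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

for n_comp in (2, 5, 10, 20):
    overfit_index = r2_train(X_noise, y, n_comp)  # "skill" explained on pure noise
    print(n_comp, round(r2_train(X_real, y, n_comp), 2), round(overfit_index, 2))
```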
Design and Development of a User Interface for the Dynamic Model of Software Project Management.
1988-03-01
rectory of the user’s choice for future...the last choice selected. Let us assume for the sake of this tour that the user has selected all eight choices . ESTIMATED ACTUAL PROJECT SIZE DEFINITION...manipulation of varaibles in the * •. TJin~ca model "h ... ser Inter ace for the Dynamica model was designed b in iterative process of prototyping
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hard and widely studied problem, objects classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed. The convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNN, ignore the human visual information processing mechanism used when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we propose a new classification method that combines a visual attention model and CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use CNN to simulate how humans select features, extracting local features from the selected areas. Finally, our classification method not only relies on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages from a biological standpoint. Experimental results demonstrated that our method significantly improved classification performance.
Validating archetypes for the Multiple Sclerosis Functional Composite.
Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin
2014-08-03
Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet sufficiently addressed. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
ERIC Educational Resources Information Center
Cohen, Gillian
1979-01-01
Kinsbourne's attentional model of hemisphere differences is reviewed, and some difficulties inherent in this model are described. Although others have succeeded in identifying some factors that govern effects of selective activation, effects of general activation are uncertain, so the overall outcome of concurrent memory loading is still difficult…
NASA Astrophysics Data System (ADS)
Lee, Y. J.; Bonfanti, C. E.; Trailovic, L.; Etherton, B.; Govett, M.; Stewart, J.
2017-12-01
At present, only a fraction of all satellite observations is ultimately used for model assimilation. The satellite data assimilation process is computationally expensive, and data are often reduced in resolution to allow timely incorporation into the forecast. This problem is only exacerbated by the recent launch of the Geostationary Operational Environmental Satellite (GOES)-16 and by future satellites, which provide a several-orders-of-magnitude increase in data volume. At the NOAA Earth System Research Laboratory (ESRL) we are researching the use of machine learning to improve the initial selection of satellite data to be used in the model assimilation process. In particular, we are investigating the use of deep learning, which is being applied to many image processing and computer vision problems with great success. Through our research, we are using convolutional neural networks to find and mark regions of interest (ROIs), enabling intelligent extraction of observations from satellite observing systems. These targeted observations will be used to improve the quality of data selected for model assimilation and ultimately improve the impact of satellite data on weather forecasts. Our preliminary efforts to identify the ROIs are focused in two areas: applying and comparing state-of-the-art convolutional neural network models using analysis data from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) weather model, and using these results as a starting point to optimize a convolutional neural network model for pattern recognition on the higher-resolution water vapor data from GOES-West and other satellites. This presentation will provide an introduction to our convolutional neural network model for identifying and processing these ROIs, along with the challenges of data preparation, model training, and parameter optimization.
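As a rough illustration of the kind of convolutional model described above (and not the authors' actual network), the following Python/Keras sketch builds a small classifier that labels fixed-size satellite image patches as regions of interest; the layer sizes, the patch-classification framing, and the synthetic data are all assumptions.

```python
# Minimal sketch (assumed architecture): a small CNN that classifies fixed-size
# satellite image patches as "region of interest" or not.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_roi_classifier(patch_size=64, channels=1):
    """Binary patch classifier; ROI patches get label 1."""
    model = keras.Sequential([
        layers.Input(shape=(patch_size, patch_size, channels)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Synthetic stand-in for labelled water-vapour patches (real data would come from
# GFS analyses or GOES imagery, as described in the abstract).
x = np.random.rand(256, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(256,))
model = build_roi_classifier()
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```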
Manufacturing process and material selection in concurrent collaborative design of MEMS devices
NASA Astrophysics Data System (ADS)
Zha, Xuan F.; Du, H.
2003-09-01
In this paper we present a knowledge-intensive approach and system for selecting suitable manufacturing processes and materials for microelectromechanical systems (MEMS) devices in a concurrent collaborative design environment. Fundamental issues in MEMS manufacturing process and material selection, such as the concurrent design framework, manufacturing process and material hierarchies, and the selection strategy, are first addressed. Then, a fuzzy decision support scheme for a multi-criteria decision-making problem is proposed for estimating, ranking and selecting possible manufacturing processes, materials and their combinations. A Web-based prototype advisory system for MEMS manufacturing process and material selection, WebMEMS-MASS, is developed on a client-knowledge server architecture and framework to help the designer find good processes and materials for MEMS devices. The system, one of the important parts of an advanced simulation and modeling tool for MEMS design, is a concept-level process and material selection tool that can be used as a standalone application or as a Java applet via the Web. The running sessions of the system are interlinked with webpages of tutorials and reference pages that explain the facets, fabrication processes and material choices; calculations and reasoning in selection are performed using process capability and material property data from a remote Web-based database and an interactive knowledge base that can be maintained and updated via the Internet. The use of the developed system, including an operation scenario, user support, and integration with a MEMS collaborative design system, is presented. Finally, an illustrative example is provided.
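The paper's fuzzy decision support scheme is not detailed here; as a hedged illustration of the general idea, the sketch below ranks candidate process/material combinations with a triangular-fuzzy weighted sum. All criteria, weights, and ratings are invented for illustration and are not taken from WebMEMS-MASS.

```python
# Minimal sketch of a fuzzy weighted-sum ranking of candidate process/material
# combinations. Triangular fuzzy numbers are given as (low, mode, high).
def tfn_mul(a, b):
    return tuple(x * y for x, y in zip(a, b))

def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def defuzzify(t):
    # centroid of a triangular fuzzy number (l, m, u)
    return sum(t) / 3.0

# Fuzzy criterion weights: e.g. cost, feature resolution, material compatibility.
weights = [(0.2, 0.3, 0.4), (0.3, 0.4, 0.5), (0.2, 0.3, 0.4)]

# Fuzzy ratings of each candidate against each criterion (illustrative values).
candidates = {
    "bulk micromachining / silicon":   [(0.6, 0.7, 0.8), (0.5, 0.6, 0.7), (0.7, 0.8, 0.9)],
    "surface micromachining / polySi": [(0.5, 0.6, 0.7), (0.7, 0.8, 0.9), (0.6, 0.7, 0.8)],
    "LIGA / nickel":                   [(0.3, 0.4, 0.5), (0.8, 0.9, 1.0), (0.4, 0.5, 0.6)],
}

scores = {}
for name, ratings in candidates.items():
    total = (0.0, 0.0, 0.0)
    for w, r in zip(weights, ratings):
        total = tfn_add(total, tfn_mul(w, r))
    scores[name] = defuzzify(total)

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.3f}")
```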
Adaptive Greedy Dictionary Selection for Web Media Summarization.
Cong, Yang; Liu, Ji; Sun, Gan; You, Quanzeng; Li, Yuncheng; Luo, Jiebo
2017-01-01
Initializing an effective dictionary is an indispensable step for sparse representation. In this paper, we focus on the dictionary selection problem, with the objective of selecting a compact subset of basis elements from the original training data instead of learning a new dictionary matrix as dictionary learning models do. We first design a new dictionary selection model via the l2,0 norm. For model optimization, we propose two methods: one is the standard forward-backward greedy algorithm, which is not suitable for large-scale problems; the other is based on the gradient cues at each forward iteration and speeds up the process dramatically. In comparison with the state-of-the-art dictionary selection models, our model is not only more effective and efficient, but also can control the sparsity. To evaluate the performance of our new model, we select two practical web media summarization problems: 1) we build a new data set consisting of around 500 users, 3000 albums, and 1 million images, and achieve effective assisted albuming based on our model; and 2) by formulating the video summarization problem as a dictionary selection issue, we employ our model to extract keyframes from a video sequence in a more flexible way. Generally, our model outperforms the state-of-the-art methods in both tasks.
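As an illustration of the selection idea only (not the authors' l2,0 formulation or their gradient-based speed-up), the sketch below runs a plain forward greedy selection of dictionary atoms from a training matrix.

```python
# Minimal sketch of forward greedy dictionary selection: choose k columns of the
# training matrix X that best reconstruct X in a least-squares sense.
import numpy as np

def greedy_dictionary_selection(X, k):
    """X: (d, n) data matrix whose columns are candidate atoms; returns chosen indices."""
    n = X.shape[1]
    selected = []
    for _ in range(k):
        best_idx, best_err = None, np.inf
        for j in range(n):
            if j in selected:
                continue
            D = X[:, selected + [j]]                  # candidate dictionary
            coef, *_ = np.linalg.lstsq(D, X, rcond=None)
            err = np.linalg.norm(X - D @ coef)        # reconstruction error
            if err < best_err:
                best_idx, best_err = j, err
        selected.append(best_idx)
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 60))
print(greedy_dictionary_selection(X, k=5))
```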
Revisiting Gaussian Process Regression Modeling for Localization in Wireless Sensor Networks
Richter, Philipp; Toledano-Ayala, Manuel
2015-01-01
Signal strength-based positioning in wireless sensor networks is a key technology for seamless, ubiquitous localization, especially in areas where Global Navigation Satellite System (GNSS) signals propagate poorly. To enable wireless local area network (WLAN) location fingerprinting in larger areas while maintaining accuracy, methods to reduce the effort of radio map creation must be consolidated and automated. Gaussian process regression has been applied to overcome this issue, with promising results, but the fit of the model was never thoroughly assessed. Instead, most studies trained a readily available model, relying on a zero mean and the squared exponential covariance function, without further scrutiny. This paper studies Gaussian process regression model selection for WLAN fingerprinting in indoor and outdoor environments. We train several models for indoor, outdoor, and combined areas; we evaluate them quantitatively and compare them by means of adequate model measures, hence assessing the fit of these models directly. To illuminate the quality of the model fit, the residuals of the proposed model are investigated as well. Comparative experiments on positioning performance confirm and conclude the model selection. In this way, we show that the standard model is not the most appropriate, discuss alternatives and present our best candidate. PMID:26370996
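For readers unfamiliar with the "standard" model the paper revisits, the following sketch fits a Gaussian process with a squared exponential (RBF) covariance plus a noise term to a synthetic received-signal-strength radio map using scikit-learn; the data, access-point geometry, and hyperparameters are placeholders, not the models compared in the paper.

```python
# Minimal sketch of Gaussian process regression for an RSS radio map.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

rng = np.random.default_rng(1)
# Training locations (x, y) in metres and observed RSS (dBm) from one access point.
locations = rng.uniform(0, 50, size=(200, 2))
ap = np.array([25.0, 25.0])
dist = np.linalg.norm(locations - ap, axis=1)
rss = -40.0 - 20.0 * np.log10(dist + 1.0) + rng.normal(0, 2.0, size=dist.shape)

# Squared-exponential (RBF) covariance with a noise term: the "standard" choice;
# alternative mean and covariance functions could be swapped in for comparison.
kernel = ConstantKernel(1.0) * RBF(length_scale=10.0) + WhiteKernel(noise_level=4.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(locations, rss)

query = np.array([[10.0, 30.0]])
mean, std = gp.predict(query, return_std=True)
print(f"predicted RSS at {query[0]}: {mean[0]:.1f} dBm (+/- {std[0]:.1f})")
```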
Measurement of Outcomes in Vision-Related Rehabilitation.
ERIC Educational Resources Information Center
Head, Daniel
1998-01-01
Comments on an earlier article by Lorraine Lidoff on health insurance coverage of vision-related rehabilitation services. Urges a standard model of services involving selection of measurable outcomes that reflect treatment processes, selection of the most appropriate time to measure outcomes, and selection of the best method for collecting outcome…
Size reduction techniques for vital compliant VHDL simulation models
Rich, Marvin J.; Misra, Ashutosh
2006-08-01
A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of that instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to an instance of any logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard
2017-08-01
This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.
ERIC Educational Resources Information Center
Young, I. Phillip; De La Torre, Guadalupe Xavier
2006-01-01
Research addressing the attraction and selection of individuals for administrator positions is encapsulated in a structural model that depicts different phases of the employee procurement process. Within the present study, attention is devoted to the prescreening stage of the selection process, and screening decisions of superintendents are…
ERIC Educational Resources Information Center
Coker, Cindy E.
2015-01-01
The purpose of this exploratory phenomenological narrative qualitative study was to investigate the influence of Facebook on first-generation college students' selection of a college framed within Hossler and Gallagher's (1987) college process model. The three questions which guided this research explored the influence of the social media website…
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
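A rough sketch of the partitioned-model idea follows (illustrative only; the patent does not prescribe particular algorithms): cluster the training data into operating modes, fit one simple submodel per mode, and route each new observation to the submodel for its detected mode.

```python
# Illustrative sketch of mode-partitioned surveillance, assuming scikit-learn.
# KMeans stands in for the unspecified partitioning step and a per-mode linear
# model stands in for the unspecified submodels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Training signals: two operating modes with different input/output relationships.
x = rng.uniform(0, 10, size=(400, 1))
mode = (x[:, 0] > 5).astype(int)
y = np.where(mode == 0, 2.0 * x[:, 0], 30.0 - x[:, 0]) + rng.normal(0, 0.2, 400)

# Step 1: partition the unpartitioned training set into operating modes.
partitioner = KMeans(n_clusters=2, n_init=10, random_state=0).fit(x)
labels = partitioner.labels_

# Step 2: train one process submodel per operating mode.
submodels = {m: LinearRegression().fit(x[labels == m], y[labels == m]) for m in set(labels)}

# Remaining steps: for a new observation, detect its mode, select the submodel,
# and output the estimated signal value for surveillance or control.
x_new = np.array([[7.3]])
m_new = int(partitioner.predict(x_new)[0])
print("estimated value:", float(submodels[m_new].predict(x_new)[0]))
```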
Dispersal-Based Microbial Community Assembly Decreases Biogeochemical Function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Emily B.; Stegen, James C.
Ecological mechanisms influence relationships among microbial communities, which in turn impact biogeochemistry. In particular, microbial communities are assembled by deterministic (e.g., selection) and stochastic (e.g., dispersal) processes, and the relative balance of these two process types is hypothesized to alter the influence of microbial communities over biogeochemical function. We used an ecological simulation model to evaluate this hypothesis, defining biogeochemical function generically to represent any biogeochemical reaction of interest. We assembled receiving communities under different levels of dispersal from a source community that was assembled purely by selection. The dispersal scenarios ranged from no dispersal (i.e., selection-only) to dispersal rates high enough to overwhelm selection (i.e., homogenizing dispersal). We used an aggregate measure of community fitness to infer a given community’s biogeochemical function relative to other communities. We also used ecological null models to further link the relative influence of deterministic assembly to function. We found that increasing rates of dispersal decrease biogeochemical function by increasing the proportion of maladapted taxa in a local community. Niche breadth was also a key determinant of biogeochemical function, suggesting a tradeoff between the function of generalist and specialist species. Finally, we show that microbial assembly processes exert greater influence over biogeochemical function when there is variation in the relative contributions of dispersal and selection among communities. Taken together, our results highlight the influence of spatial processes on biogeochemical function and indicate the need to account for such effects in models that aim to predict biogeochemical function under future environmental scenarios.
Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.
Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit
2015-09-09
Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.
One approach to predictive modeling of biological contamination of recreational waters and drinking water sources involves applying process-based models that consider microbial sources, hydrodynamic transport, and microbial fate. Fecal indicator bacteria such as enterococci have ...
Initial Crisis Reaction and Poliheuristic Theory
ERIC Educational Resources Information Center
DeRouen, Karl, Jr.; Sprecher, Christopher
2004-01-01
Poliheuristic (PH) theory models foreign policy decisions using a two-stage process. The first step eliminates alternatives on the basis of a simplifying heuristic. The second step involves a selection from among the remaining alternatives and can employ a more rational and compensatory means of processing information. The PH model posits that…
A Cognitive Model of Depressive Onset.
ERIC Educational Resources Information Center
Ganellen, Ronald; Blaney, Paul H.
A model drawn from recently expanding research literature is presented to clarify the process involved in the development of clinical depression. A body of literature is reviewed that deals with information processing, specifically memory, which relates to the selective recall of negative experiences clinically seen in depressives. A second body…
The Ability to Process Abstract Information.
1983-09-01
Recoverable contents headings: Responses Associated with Stress; Filter Theories (A. Broadbent's filter model; B. Treisman's attenuation model). Recoverable text fragments: a model has been proposed by Schneider and Shiffrin (1977) and Shiffrin and Schneider (1977); unlike Broadbent's filter models, Schneider and Shiffrin [...]; the filter allows for processing to take place only on the input "selected" (this filter model is shown in Figure 2A); according to this theory, any information [truncated].
The Structure of Processing Resource Demands in Monitoring Automatic Systems.
1981-01-01
Attempts at modelling the human failure detection process have continually focused on normative predictions of optimal operator behavior (Smallwood [...]). From Broadbent's filter model (Broadbent, 1957), to Treisman's attenuation model (Treisman, 1964), to Norman's late selection model (Norman, 1968), the concept [truncated].
Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils
Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng
2017-01-01
This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural network (ANN), and radial basis function- and linear function-based support vector machines (RBF- and LF-SVM), were employed for establishing diagnosis models. The model accuracies were evaluated and compared by using overall accuracies (OAs). The statistical significance of the difference between models was evaluated by using McNemar’s test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) had no statistical difference in diagnosis accuracies (Z < 1.96, p < 0.05). These results indicated that it was feasible to diagnose soil arsenic contamination using reflectance spectroscopy. The appropriate combination of multivariate methods was important to improve diagnosis accuracies. PMID:28471412
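The following sketch illustrates the general pipeline described (spectral pre-processing, feature extraction, classification) using SciPy and scikit-learn; the synthetic spectra, the use of PCA in place of RELIEF, and all settings are assumptions rather than the study's configuration.

```python
# Minimal sketch: Savitzky-Golay smoothing, PCA features, random-forest classifier.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 200
spectra = rng.normal(0, 1, size=(n_samples, n_bands)).cumsum(axis=1)  # smooth-ish synthetic curves
labels = rng.integers(0, 2, size=n_samples)  # contaminated vs. not (synthetic labels)

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
features = PCA(n_components=10).fit_transform(smoothed)

x_tr, x_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_tr, y_tr)
print("overall accuracy:", accuracy_score(y_te, clf.predict(x_te)))
```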
He, Zhangyi; Beaumont, Mark; Yu, Feng
2017-01-01
We explore the effect of different mechanisms of natural selection on the evolution of populations for one- and two-locus systems. We compare the effect of viability and fecundity selection in the context of the Wright-Fisher model with selection under the assumption of multiplicative fitness. We show that these two modes of natural selection correspond to different orderings of the processes of population regulation and natural selection in the Wright-Fisher model. We find that under the Wright-Fisher model these two different orderings can affect the distribution of trajectories of haplotype frequencies evolving with genetic recombination. However, the difference in the distribution of trajectories is only appreciable when the population is in significant linkage disequilibrium. We find that as linkage disequilibrium decays the trajectories for the two different models rapidly become indistinguishable. We discuss the significance of these findings in terms of biological examples of viability and fecundity selection, and speculate that the effect may be significant when factors such as gene migration maintain a degree of linkage disequilibrium. PMID:28500051
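As a concrete illustration of the kind of process being compared, the sketch below simulates a one-locus Wright-Fisher model in which viability selection acts before binomial sampling; the parameter values are arbitrary, and the paper's two-locus model with recombination is not reproduced.

```python
# Minimal one-locus Wright-Fisher sketch with viability selection and fitness 1+s vs 1.
import numpy as np

def wright_fisher(N=1000, p0=0.1, s=0.05, generations=200, seed=0):
    """Return the allele-frequency trajectory: selection, then sampling of 2N copies."""
    rng = np.random.default_rng(seed)
    p = p0
    traj = [p]
    for _ in range(generations):
        # viability selection changes the frequency before reproduction...
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # ...then binomial sampling of 2N gene copies regulates population size.
        p = rng.binomial(2 * N, p_sel) / (2 * N)
        traj.append(p)
    return np.array(traj)

print(wright_fisher()[-5:])
```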
Nielsen, Simon; Wilms, L Inge
2014-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, R.W.; Wilson, M.L.; Dockery, H.A.
1992-07-01
This report describes an assessment of the long-term performance of a repository system that contains deeply buried highly radioactive waste; the system is assumed to be located at the potential site at Yucca Mountain, Nevada. The study includes an identification of features, events, and processes that might affect the potential repository, a construction of scenarios based on this identification, a selection of models describing these scenarios (including abstraction of appropriate models from detailed models), a selection of probability distributions for the parameters in the models, a stochastic calculation of radionuclide releases for the scenarios, and a derivation of complementary cumulative distribution functions (CCDFs) for the releases. Releases and CCDFs are calculated for four categories of scenarios: aqueous flow (modeling primarily the existing conditions at the site, with allowances for climate change), gaseous flow, basaltic igneous activity, and human intrusion. The study shows that models of complex processes can be abstracted into more simplified representations that preserve the understanding of the processes and produce results consistent with those of more complex models.
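As a generic illustration of the final step, a complementary cumulative distribution function can be estimated from stochastic release samples as in the sketch below; the lognormal samples are placeholders, not results from the assessment.

```python
# Minimal sketch: empirical CCDF of normalized release from Monte Carlo samples.
import numpy as np

rng = np.random.default_rng(0)
releases = rng.lognormal(mean=-2.0, sigma=1.0, size=10_000)  # placeholder samples

def ccdf(samples, thresholds):
    """P(release > threshold) estimated from the sample."""
    return [np.mean(samples > t) for t in thresholds]

thresholds = [0.01, 0.1, 1.0, 10.0]
for t, p in zip(thresholds, ccdf(releases, thresholds)):
    print(f"P(release > {t:g}) ~= {p:.4f}")
```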
Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process
NASA Astrophysics Data System (ADS)
Yan, Wei; Chang, Yuwen
2016-12-01
Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion. The underlying processes follow jump-diffusion dynamics (a Wiener process and a Poisson process). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
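The sketch below simulates a single jump-diffusion path (Wiener plus compound Poisson), the kind of dynamics assumed for the risky asset and exchange rate; all parameters are illustrative, and the mean-variance optimization itself is not reproduced.

```python
# Minimal sketch of simulating a jump-diffusion (Wiener + compound Poisson) path.
import numpy as np

def jump_diffusion_path(s0=1.0, mu=0.05, sigma=0.2, lam=0.5, jump_mu=-0.02,
                        jump_sigma=0.05, T=1.0, steps=252, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    s = np.empty(steps + 1)
    s[0] = s0
    for t in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt))                # Wiener increment
        n_jumps = rng.poisson(lam * dt)                  # Poisson jump count in dt
        jump = rng.normal(jump_mu, jump_sigma, n_jumps).sum() if n_jumps else 0.0
        s[t + 1] = s[t] * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dw + jump)
    return s

print(jump_diffusion_path()[-5:])
```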
A Neurobehavioral Model of Flexible Spatial Language Behaviors
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from two independent empirical studies: a spatial term rating task and a study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
1993-02-10
A key requirement for the new technology is to have sufficient control of processing to build useful devices whose behavior is describable by an appropriate electromagnetic model. The remainder of the record is table-of-contents residue; recoverable headings include Waveguide Modulators, Integrated Optical Device and Circuit Modeling, and the report categories A. Integrated Optical Devices and Technology, B. Integrated Optical Device and Circuit Modeling, and C. Cryogenic Etching for Low [truncated].
ERIC Educational Resources Information Center
Stallings, Jane
The purpose of the Follow Through Classroom Observation Evaluation was to assess the implementation of seven Follow Through sponsor models included in the study and to examine the relationships between classroom instructional processes and child outcomes. The seven programs selected for study include two behavioristic models, an open school model…
Selection, calibration, and validation of models of tumor growth.
Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C
2016-11-01
This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory animals while demonstrating successful implementations of OPAL.
NASA Astrophysics Data System (ADS)
Ranatunga, T.
2016-12-01
Modeling of fate and transport of fecal bacteria in a watershed is generally a process-based approach that considers releases from manure, point sources, and septic systems. Overland transport with water and sediments, infiltration into soils, transport in the vadose zone and groundwater, die-off and growth processes, and in-stream transport are considered as the other major processes in bacteria simulation. This presentation will discuss a simulation of fecal indicator bacteria (E. coli) source loading and in-stream conditions of a non-tidal watershed (Cedar Bayou Watershed) in South Central Texas using two models: the Spatially Explicit Load Enrichment Calculation Tool (SELECT) and the Soil and Water Assessment Tool (SWAT). Furthermore, it will discuss a probable approach to bacteria source load reduction in order to meet the water quality standards in the streams. The selected watershed is listed by the Texas Commission on Environmental Quality (TCEQ) as having levels of fecal indicator bacteria that pose a risk for contact recreation and wading. The SELECT modeling approach was used in estimating the bacteria source loading from land categories. The major bacteria sources considered were failing septic systems, discharges from wastewater treatment facilities, excreta from livestock (cattle, horses, sheep and goats), excreta from wildlife (feral hogs and deer), pet waste (mainly from dogs), and runoff from urban surfaces. The estimated source loads were input to the SWAT model in order to simulate the transport through the land and in-stream conditions. The calibrated SWAT model was then used to estimate the in-stream indicator bacteria concentrations for future years based on H-GAC's regional land use, population and household projections (up to 2040). Based on the in-stream reductions required to meet the water quality standards, the corresponding required source load reductions were estimated.
An integrated fuzzy approach for strategic alliance partner selection in third-party logistics.
Erkayman, Burak; Gundogar, Emin; Yilmaz, Aysegul
2012-01-01
Outsourcing some of the logistic activities is a useful strategy for companies in recent years. This makes it possible for firms to concentrate on their main issues and processes and presents facility to improve logistics performance, to reduce costs, and to improve quality. Therefore provider selection and evaluation in third-party logistics become important activities for companies. Making a strategic decision like this is significantly hard and crucial. In this study we proposed a fuzzy multicriteria decision making (MCDM) approach to effectively select the most appropriate provider. First we identify the provider selection criteria and build the hierarchical structure of decision model. After building the hierarchical structure we determined the selection criteria weights by using fuzzy analytical hierarchy process (AHP) technique. Then we applied fuzzy technique for order preference by similarity to ideal solution (TOPSIS) to obtain final rankings for providers. And finally an illustrative example is also given to demonstrate the effectiveness of the proposed model.
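For illustration, the sketch below applies crisp TOPSIS to rank three hypothetical providers, with fixed criterion weights standing in for the fuzzy AHP weighting step; all criteria, scores, and weights are invented and are not the paper's data.

```python
# Minimal sketch of (crisp) TOPSIS ranking of candidate 3PL providers.
import numpy as np

# rows = providers, columns = criteria (e.g. cost, quality, delivery, flexibility)
scores = np.array([
    [7.0, 8.0, 6.0, 7.0],
    [8.0, 6.0, 7.0, 6.0],
    [6.0, 7.0, 8.0, 8.0],
])
weights = np.array([0.35, 0.25, 0.25, 0.15])
benefit = np.array([False, True, True, True])  # cost is a "smaller is better" criterion

norm = scores / np.linalg.norm(scores, axis=0)          # vector normalisation
weighted = norm * weights
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

d_pos = np.linalg.norm(weighted - ideal, axis=1)
d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)                     # higher is better

for i, c in sorted(enumerate(closeness, start=1), key=lambda t: -t[1]):
    print(f"provider {i}: closeness = {c:.3f}")
```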
Pattern-oriented modelling: a ‘multi-scope’ for predictive systems ecology
Grimm, Volker; Railsback, Steven F.
2012-01-01
Modern ecology recognizes that modelling systems across scales and at multiple levels—especially to link population and ecosystem dynamics to individual adaptive behaviour—is essential for making the science predictive. ‘Pattern-oriented modelling’ (POM) is a strategy for doing just this. POM is the multi-criteria design, selection and calibration of models of complex systems. POM starts with identifying a set of patterns observed at multiple scales and levels that characterize a system with respect to the particular problem being modelled; a model from which the patterns emerge should contain the right mechanisms to address the problem. These patterns are then used to (i) determine what scales, entities, variables and processes the model needs, (ii) test and select submodels to represent key low-level processes such as adaptive behaviour, and (iii) find useful parameter values during calibration. Patterns are already often used in these ways, but a mini-review of applications of POM confirms that making the selection and use of patterns more explicit and rigorous can facilitate the development of models with the right level of complexity to understand ecological systems and predict their response to novel conditions. PMID:22144392
Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.
2015-08-19
Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. As a result, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.
Exploring Several Methods of Groundwater Model Selection
NASA Astrophysics Data System (ADS)
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted by using the following four approaches: (1) rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculate model probability using the GLUE method, (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in the model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
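As a small illustration of approach (3), the sketch below ranks calibrated models by AIC, AICc, and BIC under a Gaussian residual assumption; the residual sums of squares and parameter counts are made up, and neither KIC nor the fuzzy MCDM step is reproduced.

```python
# Minimal sketch of ranking calibrated models with information criteria.
import numpy as np

def criteria(rss, k, n):
    """rss: residual sum of squares, k: number of parameters, n: observations."""
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic

n_obs = 50
models = {"model A": (12.4, 6), "model B": (9.8, 10), "model C": (9.5, 15)}  # illustrative
for name, (rss, k) in models.items():
    aic, aicc, bic = criteria(rss, k, n_obs)
    print(f"{name}: AIC={aic:.1f}  AICc={aicc:.1f}  BIC={bic:.1f}")
```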
Sierra-de-Grado, Rosario; Pando, Valentín; Martínez-Zurimendi, Pablo; Peñalvo, Alejandro; Báscones, Esther; Moulia, Bruno
2008-06-01
Stem straightness is an important selection trait in Pinus pinaster Ait. breeding programs. Despite the stability of stem straightness rankings in provenance trials, the efficiency of breeding programs based on a quantitative index of stem straightness remains low. An alternative approach is to analyze biomechanical processes that underlie stem form. The rationale for this selection method is that genetic differences in the biomechanical processes that maintain stem straightness in young plants will continue to control stem form throughout the life of the tree. We analyzed the components contributing most to genetic differences among provenances in stem straightening processes by kinetic analysis and with a biomechanical model defining the interactions between the variables involved (Fournier's model). This framework was tested on three P. pinaster provenances differing in adult stem straightness and growth. One-year-old plants were tilted at 45 degrees, and individual stem positions and sizes were recorded weekly for 5 months. We measured the radial extension of reaction wood and the anatomical features of wood cells in serial stem cross sections. The integral effect of reaction wood on stem leaning was computed with Fournier's model. Responses driven by both primary and secondary growth were involved in the stem straightening process, but secondary-growth-driven responses accounted for most differences among provenances. Plants from the straight-stemmed provenance showed a greater capacity for stem straightening than plants from the sinuous provenances mainly because of (1) more efficient reaction wood (higher maturation strains) and (2) more pronounced secondary-growth-driven autotropic decurving. These two process-based traits are thus good candidates for early selection of stem straightness, but additional tests on a greater number of genotypes over a longer period are required.
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years including the Localized Region-based Active Contour Model (LRACM). There are many popular LRACM, but each of them presents strong and weak points. In this paper, the automatic selection of LRACM based on image content and its application on brain tumor segmentation is presented. Thereby, a framework to select one of three LRACM, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V) and Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that may process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy performance than the three LRACM separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
Extraction of benzene and cyclohexane using [BMIM][N(CN)2] and their equilibrium modeling
NASA Astrophysics Data System (ADS)
Ismail, Marhaina; Bustam, M. Azmi; Man, Zakaria
2017-12-01
The separation of aromatic compounds from aliphatic mixtures is one of the essential industrial processes for achieving an economical and green process. In order to determine the separation efficiency of an ionic liquid (IL) as the solvent, the ternary liquid-liquid extraction (LLE) diagram of 1-butyl-3-methylimidazolium dicyanamide [BMIM][N(CN)2] with benzene and cyclohexane was studied at T = 298.15 K and atmospheric pressure. The solute distribution coefficient and solvent selectivity derived from the equilibrium data were used to evaluate whether the selected ionic liquid can be considered a potential solvent for the separation of benzene from cyclohexane. The experimental tie-line data were correlated using the non-random two-liquid (NRTL) model and the Margules model. It was found that the solute distribution coefficient is 0.4430-0.0776 and the selectivity of [BMIM][N(CN)2] for benzene is 53.6-13.9. The ternary diagram showed that the selected IL can perform the separation of benzene and cyclohexane, as it has both extractive capacity and selectivity. Therefore, [BMIM][N(CN)2] can be considered a potential extracting solvent for the LLE of benzene and cyclohexane.
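The distribution coefficient and selectivity follow directly from the tie-line compositions; the sketch below shows the calculation with placeholder mole fractions rather than the measured data of the study.

```python
# Minimal sketch: distribution coefficient and selectivity from one tie line.
# Mole fractions below are illustrative placeholders, not experimental values.
x_benzene_extract, x_benzene_raffinate = 0.10, 0.25          # benzene in IL / hydrocarbon phase
x_cyclohexane_extract, x_cyclohexane_raffinate = 0.01, 0.70  # cyclohexane in IL / hydrocarbon phase

beta_benzene = x_benzene_extract / x_benzene_raffinate        # solute distribution coefficient
beta_cyclohexane = x_cyclohexane_extract / x_cyclohexane_raffinate
selectivity = beta_benzene / beta_cyclohexane                 # selectivity of the IL for benzene

print(f"distribution coefficient = {beta_benzene:.3f}, selectivity = {selectivity:.1f}")
```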
A model of two-way selection system for human behavior.
Zhou, Bin; Qin, Shujia; Han, Xiao-Pu; He, Zhe; Xie, Jia-Rong; Wang, Bing-Hong
2014-01-01
Two-way selection is a common phenomenon in nature and society. It appears in processes such as mate choice between men and women, contract formation between job hunters and recruiters, and trading between buyers and sellers. In this paper, we propose a model of a two-way selection system and present its analytical solution for the expected total number of successful matches, together with the regular pattern that the matching rate tends toward inverse proportionality to either the ratio between the two sides or the ratio of the state total to the size of the smaller group. The proposed model is verified against empirical data from matchmaking fairs. The results indicate that the model predicts this typical real-world two-way selection behavior well, within a bounded error, and is thus helpful for understanding the dynamic mechanism of real-world two-way selection systems.
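A simple simulation of mutual choice conveys the flavour of a two-way selection system; the greedy matching rule below is an assumption for illustration, not the model analysed in the paper.

```python
# Minimal simulation sketch of two-way (mutual-choice) matching: each member of
# group A and group B independently picks a set of acceptable partners, and a
# match forms when the choice is mutual.
import numpy as np

def simulate_matching(n_a=100, n_b=60, picks=5, seed=0):
    rng = np.random.default_rng(seed)
    a_choices = [set(rng.choice(n_b, size=min(picks, n_b), replace=False)) for _ in range(n_a)]
    b_choices = [set(rng.choice(n_a, size=min(picks, n_a), replace=False)) for _ in range(n_b)]
    matched_b = set()
    matches = 0
    for i in range(n_a):
        for j in a_choices[i]:
            if j not in matched_b and i in b_choices[j]:
                matched_b.add(j)   # greedy one-to-one matching of mutual choices
                matches += 1
                break
    return matches

print("successful matches:", simulate_matching())
```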
Stephanie A. Snyder; Keith D. Stockmann; Gaylord E. Morris
2012-01-01
The US Forest Service used contracted helicopter services as part of its wildfire suppression strategy. An optimization decision-modeling system was developed to assist in the contract selection process. Three contract award selection criteria were considered: cost per pound of delivered water, total contract cost, and quality ratings of the aircraft and vendors....
ERIC Educational Resources Information Center
McManus, John
A study compared two models (economic and journalistic) of news selection in an attempt to explain what becomes news. The news gathering and news decisionmaking processes of three western United States network-affiliated television stations, one each in a small, medium, and large market, were observed during 12 "typical" days.…
ERIC Educational Resources Information Center
Williams, Dana E.
2012-01-01
The purpose of this qualitative phenomenological study was to explore factors for selecting a business model for scaling online enrollment by institutions of higher education. The goal was to explore the lived experiences of academic industry experts involved in the selection process. The research question for this study was: What were the lived…
An Actuarial Model for Selecting Participants for a Special Medical Education Program.
ERIC Educational Resources Information Center
Walker-Bartnick, Leslie; And Others
An actuarial model applied to the selection process of a special medical school program at the University of Maryland School of Medicine was tested. The 77 students in the study sample were admitted to the university's Fifth Pathway Program, which is designed for U.S. citizens who completed their medical school training, except for internship and…
SENCA: A Multilayered Codon Model to Study the Origins and Dynamics of Codon Usage
Pouyet, Fanny; Bailly-Bechet, Marc; Mouchiroud, Dominique; Guéguen, Laurent
2016-01-01
Gene sequences are the target of evolution operating at different levels, including the nucleotide, codon, and amino acid levels. Disentangling the impact of those different levels on gene sequences requires developing a probabilistic model with three layers. Here we present SENCA (site evolution of nucleotides, codons, and amino acids), a codon substitution model that separately describes 1) nucleotide processes which apply on all sites of a sequence such as the mutational bias, 2) preferences between synonymous codons, and 3) preferences among amino acids. We argue that most synonymous substitutions are not neutral and that SENCA provides more accurate estimates of selection compared with more classical codon sequence models. We study the forces that drive the genomic content evolution, intraspecifically in the core genome of 21 prokaryotes and interspecifically for five Enterobacteria. We retrieve the existence of a universal mutational bias toward AT, and that taking into account selection on synonymous codon usage has consequences on the measurement of selection on nonsynonymous substitutions. We also confirm that codon usage bias is mostly driven by selection on preferred codons. We propose new summary statistics to measure the relative importance of the different evolutionary processes acting on sequences. PMID:27401173
NASA Astrophysics Data System (ADS)
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameter, model and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods are often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for heat conduction and diffusion process involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. 
We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide verification strategies to assess the accuracy of those techniques, which we illustrate in the context of the HIV model. Finally, we examine active subspace methods as an alternative to parameter subset selection techniques. The objective of active subspace methods is to determine the subspace of inputs that most strongly affect the model response, and to reduce the dimension of the input space. The major difference between active subspace methods and parameter selection techniques is that parameter selection identifies influential parameters whereas subspace selection identifies a linear combination of parameters that impacts the model responses significantly. We employ active subspace methods discussed in [22] for the HIV model and present a verification that the active subspace successfully reduces the input dimensions.
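As a minimal illustration of the Bayesian calibration step discussed above, the sketch below runs plain random-walk Metropolis on a toy model; DRAM and DREAM add adaptive, delayed-rejection, and differential-evolution proposals that are not reproduced here, and the quadratic model with synthetic data is a placeholder for the HIV and heat models of the thesis.

```python
# Minimal sketch of random-walk Metropolis for parameter calibration (flat prior assumed).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
true_theta = np.array([2.0, -1.0])
data = true_theta[0] * x + true_theta[1] * x**2 + rng.normal(0, 0.05, x.size)

def log_post(theta, sigma=0.05):
    pred = theta[0] * x + theta[1] * x**2
    return -0.5 * np.sum((data - pred) ** 2) / sigma**2

def metropolis(n_iter=5000, step=0.05):
    theta = np.zeros(2)
    lp = log_post(theta)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = theta + rng.normal(0, step, size=2)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis()
print("posterior means:", chain[2500:].mean(axis=0))  # discard burn-in
```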
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-01-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS for identification of solid state fermentation degree. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, from the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method can identify solid state fermentation degree more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.
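For readers unfamiliar with the modelling step, the following hedged sketch shows how a PLS-DA classifier can be restricted to a subset of wavelength columns. The CARS/SCARS selection step itself is not reproduced; the variable selected_idx, the data shapes, and the component count are hypothetical placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def plsda_on_selected(X_train, y_train, X_test, selected_idx, n_components=5):
    """Fit PLS-DA using only the wavelength columns listed in selected_idx.

    The class labels are one-hot coded so that ordinary PLS regression acts as
    a discriminant model; the predicted class is the argmax of the fitted response.
    """
    classes = np.unique(y_train)
    Y = (y_train[:, None] == classes[None, :]).astype(float)   # one-hot coding
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train[:, selected_idx], Y)
    scores = pls.predict(X_test[:, selected_idx])
    return classes[np.argmax(scores, axis=1)]

# Hypothetical use: spectra with 1557 wavelengths, with indices produced by a
# variable-selection step such as CARS standing in for selected_idx.
rng = np.random.default_rng(1)
X_train, X_test = rng.normal(size=(60, 1557)), rng.normal(size=(20, 1557))
y_train = rng.integers(0, 3, size=60)
selected_idx = rng.choice(1557, size=58, replace=False)
print(plsda_on_selected(X_train, y_train, X_test, selected_idx))
```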
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between the average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance by which the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.
Wang, T; Zhao, G; Tang, H Y; Jiang, Z D
2015-01-01
Cell survival upon cryopreservation is affected by the cooling rate. However, it is difficult to model the heat transfer process or to predict the cooling curve of a cryoprotective agent (CPA) solution due to the uncertainty of its convective heat transfer coefficient (h). The objective was to measure h and to better understand the heat transfer process of cryovials filled with CPA solution being plunged into liquid nitrogen. The temperatures at three locations of the CPA solution in a cryovial were measured. Different h values were selected after the cooling process was modeled as natural convection heat transfer, film boiling, and nucleate boiling, respectively, and the temperatures at the selected points were simulated based on the selected h values. h was determined when the simulated temperature best fitted the experimental temperature. When the experimental results were best fitted, the natural convection heat transfer model gave h(1) = 120 W/(m(2)·K), while the film boiling and nucleate boiling regimes gave h(f) = 5 W/(m(2)·K) followed by h(n) = 245 W/(m(2)·K). These values were verified by the differential cooling rates at the three locations of a cryovial. The heat transfer process during cooling in liquid nitrogen is better modeled as film boiling followed by nucleate boiling.
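A minimal sketch of the general fitting idea described above: simulate the cooling curve for candidate h values and keep the one that best matches the measurement. It assumes a simple lumped-capacitance model with a single temperature, whereas the study resolves three locations and distinct boiling regimes; all property values below are invented.

```python
import numpy as np

def simulate_cooling(h, t, T0, T_inf=-196.0, m=0.0012, c=3500.0, A=3.0e-4):
    """Lumped-capacitance cooling curve: m*c*dT/dt = -h*A*(T - T_inf)."""
    T = np.empty_like(t)
    T[0] = T0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T[i] = T[i - 1] - h * A * (T[i - 1] - T_inf) * dt / (m * c)
    return T

def fit_h(t, T_measured, h_grid):
    """Pick the h whose simulated curve best fits the measurement (least squares)."""
    errors = [np.sum((simulate_cooling(h, t, T_measured[0]) - T_measured) ** 2)
              for h in h_grid]
    return h_grid[int(np.argmin(errors))]

# Hypothetical use with synthetic "measurements" generated at h = 120 W/(m^2 K)
t = np.linspace(0.0, 120.0, 241)
T_meas = simulate_cooling(120.0, t, T0=20.0) + np.random.default_rng(2).normal(0, 0.5, t.size)
print(fit_h(t, T_meas, h_grid=np.arange(5.0, 305.0, 5.0)))
```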
A decision modeling for phasor measurement unit location selection in smart grid systems
NASA Astrophysics Data System (ADS)
Lee, Seung Yup
As a key technology for enhancing the smart grid system, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents of a wide-area electric power grid. While its application offers various benefits, one of the critical issues in utilizing PMUs is the optimal site selection of units. The main aim of this research is to develop a decision support system, which can be used in resource allocation tasks for smart grid system analysis. In an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework, which considers the operational circumstances of components, is proposed in connection with a deterministic approach utilizing integer programming. With the results obtained from the optimal PMU placement problem, the advantages and potential that the harmonized modeling process possesses are assessed and discussed.
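A small hedged sketch of the deterministic integer-programming side of such a placement problem: place the fewest PMUs so that every bus is observed by a PMU at itself or at a neighbouring bus. The 7-bus adjacency and the use of the PuLP library are illustrative assumptions, not the paper's model.

```python
import pulp

# Adjacency of a small hypothetical 7-bus network (a PMU observes its own bus and neighbours)
adj = {1: [2], 2: [1, 3, 6, 7], 3: [2, 4, 6], 4: [3, 5], 5: [4, 6], 6: [2, 3, 5], 7: [2]}

prob = pulp.LpProblem("pmu_placement", pulp.LpMinimize)
x = {i: pulp.LpVariable(f"x_{i}", cat="Binary") for i in adj}   # 1 if a PMU sits at bus i

prob += pulp.lpSum(x.values())                                  # minimise the number of PMUs
for i, neighbours in adj.items():                               # every bus must be observable
    prob += x[i] + pulp.lpSum(x[j] for j in neighbours) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("PMU buses:", [i for i in adj if x[i].value() == 1])
```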
A new perspective on the perceptual selectivity of attention under load.
Giesbrecht, Barry; Sy, Jocelyn; Bundesen, Claus; Kyllingsbaek, Søren
2014-05-01
The human attention system helps us cope with a complex environment by supporting the selective processing of information relevant to our current goals. Understanding the perceptual, cognitive, and neural mechanisms that mediate selective attention is a core issue in cognitive neuroscience. One prominent model of selective attention, known as load theory, offers an account of how task demands determine when information is selected and an account of the efficiency of the selection process. However, load theory has several critical weaknesses that suggest that it is time for a new perspective. Here we review the strengths and weaknesses of load theory and offer an alternative biologically plausible computational account that is based on the neural theory of visual attention. We argue that this new perspective provides a detailed computational account of how bottom-up and top-down information is integrated to provide efficient attentional selection and allocation of perceptual processing resources. © 2014 New York Academy of Sciences.
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
ISRU System Model Tool: From Excavation to Oxygen Production
NASA Technical Reports Server (NTRS)
Santiago-Maldonado, Edgardo; Linne, Diane L.
2007-01-01
In the late 80's, conceptual designs for an in situ oxygen production plant were documented in a study by Eagle Engineering [1]. In the "Summary of Findings" of this study, it is clearly pointed out that: "reported process mass and power estimates lack a consistent basis to allow comparison." The study goes on to say: "A study to produce a set of process mass, power, and volume requirements on a consistent basis is recommended." Today, approximately twenty years later, as humans plan to return to the moon and venture beyond, the need for flexible up-to-date models of the oxygen extraction production process has become even more clear. Multiple processes for the production of oxygen from lunar regolith are being investigated by NASA, academia, and industry. Three processes that have shown technical merit are molten regolith electrolysis, hydrogen reduction, and carbothermal reduction. These processes have been selected by NASA as the basis for the development of the ISRU System Model Tool (ISMT). In working to develop up-to-date system models for these processes NASA hopes to accomplish the following: (1) help in the evaluation process to select the most cost-effective and efficient process for further prototype development, (2) identify key parameters, (3) optimize the excavation and oxygen production processes, and (4) provide estimates on energy and power requirements, mass and volume of the system, oxygen production rate, mass of regolith required, mass of consumables, and other important parameters. Also, as confidence and high fidelity is achieved with each component's model, new techniques and processes can be introduced and analyzed at a fraction of the cost of traditional hardware development and test approaches. A first generation ISRU System Model Tool has been used to provide inputs to the Lunar Architecture Team studies.
Modelling and simulation techniques for membrane biology.
Burrage, Kevin; Hancock, John; Leier, André; Nicolau, Dan V
2007-07-01
One of the most important aspects of Computational Cell Biology is the understanding of the complicated dynamical processes that take place on plasma membranes. These processes are often so complicated that purely temporal models cannot always adequately capture the dynamics. On the other hand, spatial models can have large computational overheads. In this article, we review some of these issues with respect to chemistry, membrane microdomains and anomalous diffusion and discuss how to select appropriate modelling and simulation paradigms based on some or all the following aspects: discrete, continuous, stochastic, delayed and complex spatial processes.
Schiek, Richard [Albuquerque, NM
2006-06-20
A method of generating two-dimensional masks from a three-dimensional model comprises providing a three-dimensional model representing a micro-electro-mechanical structure for manufacture and a description of process mask requirements, reducing the three-dimensional model to a topological description of unique cross sections, and selecting candidate masks from the unique cross sections and the cross section topology. The method further can comprise reconciling the candidate masks based on the process mask requirements description to produce two-dimensional process masks.
Airport Facility Queuing Model Validation
DOT National Transportation Integrated Search
1977-05-01
Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...
Frequency-dependent selection predicts patterns of radiations and biodiversity.
Melián, Carlos J; Alonso, David; Vázquez, Diego P; Regetz, James; Allesina, Stefano
2010-08-26
Most empirical studies support a decline in speciation rates through time, although evidence for constant speciation rates also exists. Declining rates have been explained by invoking pre-existing niches, whereas constant rates have been attributed to non-adaptive processes such as sexual selection and mutation. Trends in speciation rate and the processes underlying it remain unclear, representing a critical information gap in understanding patterns of global diversity. Here we show that the temporal trend in the speciation rate can also be explained by frequency-dependent selection. We construct a frequency-dependent and DNA sequence-based model of speciation. We compare our model to empirical diversity patterns observed for cichlid fish and Darwin's finches, two classic systems for which speciation rates and richness data exist. Negative frequency-dependent selection predicts well the declining speciation rate found in cichlid fish and explains their species richness. For groups like Darwin's finches, in which speciation rates are constant and diversity is lower, the speciation rate is better explained by a model without frequency-dependent selection. Our analysis shows that differences in diversity may be driven by incipient species abundance with frequency-dependent selection. Our results demonstrate that genetic-distance-based speciation and frequency-dependent selection are sufficient to explain the high diversity observed in natural systems and, importantly, predict decay through time in speciation rate in the absence of pre-existing niches.
Balcarras, Matthew; Ardid, Salva; Kaping, Daniel; Everling, Stefan; Womelsdorf, Thilo
2016-02-01
Attention includes processes that evaluate stimuli relevance, select the most relevant stimulus against less relevant stimuli, and bias choice behavior toward the selected information. It is not clear how these processes interact. Here, we captured these processes in a reinforcement learning framework applied to a feature-based attention task that required macaques to learn and update the value of stimulus features while ignoring nonrelevant sensory features, locations, and action plans. We found that value-based reinforcement learning mechanisms could account for feature-based attentional selection and choice behavior but required a value-independent stickiness selection process to explain selection errors while at asymptotic behavior. By comparing different reinforcement learning schemes, we found that trial-by-trial selections were best predicted by a model that only represents expected values for the task-relevant feature dimension, with nonrelevant stimulus features and action plans having only a marginal influence on covert selections. These findings show that attentional control subprocesses can be described by (1) the reinforcement learning of feature values within a restricted feature space that excludes irrelevant feature dimensions, (2) a stochastic selection process on feature-specific value representations, and (3) value-independent stickiness toward previous feature selections akin to perseveration in the motor domain. We speculate that these three mechanisms are implemented by distinct but interacting brain circuits and that the proposed formal account of feature-based stimulus selection will be important to understand how attentional subprocesses are implemented in primate brain networks.
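The following toy sketch illustrates the kind of model family the authors compare: value learning restricted to one feature dimension, a stochastic softmax selection step, and a value-independent stickiness bonus for the previously chosen feature. The parameter values and the two-feature task are invented for illustration and do not reproduce the study's fitted model.

```python
import numpy as np

def simulate_feature_rl(rewards, alpha=0.2, beta=5.0, kappa=1.0, seed=0):
    """Feature-value learning with softmax choice and choice stickiness.

    rewards: (T, 2) array of reward probabilities for features 0 and 1.
    Returns the sequence of chosen features.
    """
    rng = np.random.default_rng(seed)
    v = np.zeros(2)            # expected value per feature (e.g. two colours)
    prev = -1                  # previously selected feature
    choices = []
    for p in rewards:
        sticky = np.array([kappa if f == prev else 0.0 for f in (0, 1)])
        logits = beta * v + sticky                 # value term plus stickiness bonus
        prob = np.exp(logits - logits.max())
        prob /= prob.sum()
        choice = int(rng.choice(2, p=prob))        # stochastic (softmax) selection
        r = float(rng.random() < p[choice])
        v[choice] += alpha * (r - v[choice])       # delta-rule value update
        prev = choice
        choices.append(choice)
    return choices

# Hypothetical block design: feature 0 is rewarded 80% of the time, feature 1 only 20%.
print(simulate_feature_rl(np.tile([0.8, 0.2], (100, 1)))[-10:])
```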
Diffusion models of the flanker task: Discrete versus gradual attentional selection
White, Corey N.; Ratcliff, Roger; Starns, Jeffrey S.
2011-01-01
The present study tested diffusion models of processing in the flanker task, in which participants identify a target that is flanked by items that indicate the same (congruent) or opposite response (incongruent). Single- and dual-process flanker models were implemented in a diffusion-model framework and tested against data from experiments that manipulated response bias, speed/accuracy tradeoffs, attentional focus, and stimulus configuration. There was strong mimicry among the models, and each captured the main trends in the data for the standard conditions. However, when more complex conditions were used, a single-process spotlight model captured qualitative and quantitative patterns that the dual-process models could not. Since the single-process model provided the best balance of fit quality and parsimony, the results indicate that processing in the simple versions of the flanker task is better described by gradual rather than discrete narrowing of attention. PMID:21964663
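As a rough illustration of a single-process, gradually narrowing account, the sketch below lets the momentary drift rate be a mixture of target and flanker evidence whose flanker share decays as an attentional spotlight shrinks over time. The spotlight weighting, parameter values, and stopping rule are simplified assumptions, not the fitted model from the study.

```python
import numpy as np

def flanker_ddm_trial(congruent, a=0.1, z=0.05, drift=0.3, width0=1.8,
                      shrink=0.8, s=0.1, dt=0.001, rng=None):
    """Single-trial simulation of a shrinking-spotlight diffusion process."""
    rng = rng or np.random.default_rng()
    flanker_drift = drift if congruent else -drift
    x, t, width = z, 0.0, width0
    while 0.0 < x < a:
        w_target = min(1.0, 1.0 / width)           # crude share of attention on the target
        v = w_target * drift + (1.0 - w_target) * flanker_drift
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        width = max(0.1, width - shrink * dt)      # spotlight narrows over time
        t += dt
    return t, x >= a                               # response time and correctness

rng = np.random.default_rng(3)
rts, acc = zip(*[flanker_ddm_trial(congruent=False, rng=rng) for _ in range(200)])
print(round(float(np.mean(rts)), 3), float(np.mean(acc)))
```

Early in an incongruent trial the effective drift is weak because the flankers still contribute, and it strengthens as the spotlight narrows, which is the qualitative signature the abstract attributes to gradual attentional selection.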
ERIC Educational Resources Information Center
Jung, Jae Yup
2013-01-01
This study developed and tested a new model of the cognitive processes associated with occupational/career indecision for gifted adolescents. A survey instrument with rigorous psychometric properties, developed from a number of existing instruments, was administered to a sample of 687 adolescents attending three academically selective high schools…
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose: The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods: This study explores a Heckman selection model of the crash rate and severity simultaneously at different levels and a two-step procedure is used to investigate the crash rate and severity levels. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury/kill or serious injury (KSI), respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. Results: The results of the proposed two-step Heckman selection model illustrate the necessity of different crash rates for different crash severity levels. Conclusions: A comparison with the existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance at signalized intersections. PMID:28732050
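A compact sketch of the generic two-step Heckman procedure described above: a probit first stage for the selection indicator, followed by an OLS second stage augmented with the inverse Mills ratio. The synthetic covariates and the statsmodels/scipy implementation are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(X_sel, selected, X_out, y_out):
    """Two-step Heckman correction.

    Step 1: probit model of the selection indicator (e.g. whether a crash of a
    given severity occurred). Step 2: OLS of the outcome (e.g. crash rate) on
    the selected observations, with the inverse Mills ratio from step 1 added
    to correct for selection bias.
    """
    Xs = sm.add_constant(X_sel)
    probit = sm.Probit(selected, Xs).fit(disp=False)
    xb = Xs @ probit.params                              # linear index from step 1
    mills = norm.pdf(xb) / norm.cdf(xb)                  # inverse Mills ratio
    keep = selected.astype(bool)
    X2 = sm.add_constant(np.column_stack([X_out[keep], mills[keep]]))
    return probit, sm.OLS(y_out[keep], X2).fit()

# Hypothetical synthetic data standing in for intersection-level covariates
rng = np.random.default_rng(4)
X = rng.normal(size=(555, 3))
selected = (X @ [0.8, -0.5, 0.2] + rng.normal(size=555) > 0).astype(int)
y = 1.0 + X @ [0.4, 0.1, -0.3] + rng.normal(size=555)
probit_fit, ols_fit = heckman_two_step(X, selected, X, y)
print(ols_fit.params)
```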
NASA Astrophysics Data System (ADS)
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; Gibbs, Paul J.; Gibbs, John W.; Karma, Alain
2015-08-01
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. We focus on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
The GEMS Model of Volunteer Administration.
ERIC Educational Resources Information Center
Culp, Ken, III; Deppe, Catherine A.; Castillo, Jaime X.; Wells, Betty J.
1998-01-01
Describes GEMS, a spiral model that profiles volunteer administration. Components include Generate, Educate, Mobilize, and Sustain, four sets of processes that span volunteer recruitment and selection to retention or disengagement. (SK)
Circular analysis in complex stochastic systems
Valleriani, Angelo
2015-01-01
Ruling out observations can lead to wrong models. This danger occurs unwillingly when one selects observations, experiments, simulations or time-series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max
2017-10-25
Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.
McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.
2013-01-01
Classic approaches to word learning emphasize the problem of referential ambiguity: in any naming situation the referent of a novel word must be selected from many possible objects, properties, actions, etc. To solve this problem, researchers have posited numerous constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative model in which referent selection is an online process that is independent of long-term learning. This two timescale approach creates significant power in the developing system. We illustrate this with a dynamic associative model in which referent selection is simulated as dynamic competition between competing referents, and learning is simulated using associative (Hebbian) learning. This model can account for a range of findings including the delay in expressive vocabulary relative to receptive vocabulary, learning under high degrees of referential ambiguity using cross-situational statistics, accelerating (vocabulary explosion) and decelerating (power-law) learning rates, fast-mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between individual differences in speed of processing and learning. Five theoretical points are illustrated. 1) Word learning does not require specialized processes – general association learning buttressed by dynamic competition can account for much of the literature. 2) The processes of recognizing familiar words are not different than those that support novel words (e.g., fast-mapping). 3) Online competition may allow the network (or child) to leverage information available in the task to augment performance or behavior despite what might be relatively slow learning or poor representations. 4) Even associative learning is more complex than previously thought – a major contributor to performance is the pruning of incorrect associations between words and referents. 5) Finally, the model illustrates that learning and referent selection/word recognition, though logically distinct, can be deeply and subtly related as phenomena like speed of processing and mutual exclusivity may derive in part from the way learning shapes the system. As a whole, this suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development and processing in children. PMID:23088341
Criaud, Marion; Longcamp, Marieke; Anton, Jean-Luc; Nazarian, Bruno; Roth, Muriel; Sescousse, Guillaume; Strafella, Antonio P; Ballanger, Bénédicte; Boulinguez, Philippe
2017-08-30
The neural mechanisms underlying response inhibition and related disorders are unclear and controversial for several reasons. First, it is a major challenge to assess the psychological bases of behaviour, and ultimately brain-behaviour relationships, of a function which is precisely intended to suppress overt measurable behaviours. Second, response inhibition is difficult to disentangle from other parallel processes involved in more general aspects of cognitive control. Consequently, different psychological and anatomo-functional models coexist, which often appear in conflict with each other even though they are not necessarily mutually exclusive. The standard model of response inhibition in go/no-go tasks assumes that inhibitory processes are reactively and selectively triggered by the stimulus that participants must refrain from reacting to. Recent alternative models suggest that action restraint could instead rely on reactive but non-selective mechanisms (all automatic responses are automatically inhibited in uncertain contexts) or on proactive and non-selective mechanisms (a gating function by which reaction to any stimulus is prevented in anticipation of stimulation when the situation is unpredictable). Here, we assessed the physiological plausibility of these different models by testing their respective predictions regarding event-related BOLD modulations (forward inference using fMRI). We set up a single fMRI design which allowed for us to record simultaneously the different possible forms of inhibition while limiting confounds between response inhibition and parallel cognitive processes. We found BOLD dynamics consistent with non-selective models. These results provide new theoretical and methodological lines of inquiry for the study of basic functions involved in behavioural control and related disorders. Copyright © 2017 Elsevier B.V. All rights reserved.
MIMO model of an interacting series process for Robust MPC via System Identification.
Wibowo, Tri Chandra S; Saad, Nordin
2010-07-01
This paper discusses empirical modeling using the system identification technique with a focus on an interacting series process. The study is carried out experimentally using a gaseous pilot plant as the process, in which the dynamics of such a plant exhibit the typical dynamics of an interacting series process. Three practical approaches are investigated and their performances are evaluated. The models developed are also examined in a real-time implementation of linear model predictive control. The selected model is able to reproduce the main dynamic characteristics of the plant in open loop and produces zero steady-state errors in the closed-loop control system. Several issues concerning the identification process and the construction of a MIMO state space model for an interacting series process are deliberated. 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Vivekanandan, T; Sriman Narayana Iyengar, N Ch
2017-11-01
Enormous data growth in multiple domains has posed a great challenge for data processing and analysis techniques. In particular, the traditional record maintenance strategy has been replaced in the healthcare system. It is vital to develop a model that is able to handle the huge amount of e-healthcare data efficiently. In this paper, the challenging tasks of selecting critical features from the enormous set of available features and diagnosing heart disease are carried out. Feature selection is one of the most widely used pre-processing steps in classification problems. A modified differential evolution (DE) algorithm is used to perform feature selection for cardiovascular disease and optimization of selected features. Of the 10 available strategies for the traditional DE algorithm, the seventh strategy, which is represented by DE/rand/2/exp, is considered for comparative study. The performance analysis of the developed modified DE strategy is given in this paper. With the selected critical features, prediction of heart disease is carried out using fuzzy AHP and a feed-forward neural network. Various performance measures of integrating the modified differential evolution algorithm with fuzzy AHP and a feed-forward neural network in the prediction of heart disease are evaluated in this paper. The accuracy of the proposed hybrid model is 83%, which is higher than that of some other existing models. In addition, the prediction time of the proposed hybrid model is also evaluated and has shown promising results. Copyright © 2017 Elsevier Ltd. All rights reserved.
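For concreteness, here is a hedged sketch of one generation of the classical DE/rand/2/exp strategy mentioned above (two difference vectors, exponential crossover, greedy selection), wrapped around a toy feature-selection objective. The paper's modified DE, the fuzzy AHP step, and the neural-network classifier are not reproduced; the population size, dimension, and fitness function are invented.

```python
import numpy as np

def de_rand_2_exp_step(pop, fitness_fn, F=0.5, CR=0.9, rng=None):
    """One generation of DE/rand/2/exp: mutation uses two difference vectors from
    randomly chosen members, crossover copies a contiguous (exponential) run of
    mutant genes, and greedy selection keeps the better vector (minimisation)."""
    rng = rng or np.random.default_rng()
    NP, D = pop.shape
    new_pop = pop.copy()
    for i in range(NP):
        r = rng.choice([j for j in range(NP) if j != i], size=5, replace=False)
        mutant = pop[r[0]] + F * (pop[r[1]] - pop[r[2]]) + F * (pop[r[3]] - pop[r[4]])
        trial = pop[i].copy()
        j, L = int(rng.integers(D)), 0            # exponential crossover
        while True:
            trial[j] = mutant[j]
            j, L = (j + 1) % D, L + 1
            if rng.random() >= CR or L >= D:
                break
        if fitness_fn(trial) <= fitness_fn(pop[i]):
            new_pop[i] = trial
    return new_pop

# Hypothetical use: genes above 0.5 mark a feature as selected; in practice the
# fitness could be a classifier's validation error on the selected features.
def toy_fitness(vec):
    return abs((vec > 0.5).sum() - 5)             # toy objective: prefer about 5 features

rng = np.random.default_rng(5)
pop = rng.random((20, 13))                        # 13 candidate features, e.g. clinical attributes
for _ in range(30):
    pop = de_rand_2_exp_step(pop, toy_fitness, rng=rng)
print((pop[0] > 0.5).astype(int))
```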
NASA Astrophysics Data System (ADS)
Anna, I. D.; Cahyadi, I.; Yakin, A.
2018-01-01
Selection of a marketing strategy is a prominent competitive advantage for small and medium enterprise business development. The selection process is a multiple-criteria decision-making problem, which includes the evaluation of various attributes or criteria in a process of strategy formulation. The objective of this paper is to develop a model for the selection of a marketing strategy in the Batik Madura industry. The current study proposes an integrated approach based on the analytic network process (ANP) and the technique for order preference by similarity to ideal solution (TOPSIS) to determine the best strategy for Batik Madura marketing problems. Based on the results of a group decision-making technique, this study selected fourteen criteria, namely consistency, cost, trend following, customer loyalty, business volume, uniqueness, manpower, customer numbers, promotion, branding, business network, outlet location, credibility and innovation, as Batik Madura marketing strategy evaluation criteria. A survey questionnaire developed from a literature review was distributed to a sample frame of Batik Madura SMEs in Pamekasan. In the decision procedure step, expert evaluators were asked to establish the decision matrix by comparing the marketing strategy alternatives under each of the individual criteria. Then, considerations obtained from the ANP and TOPSIS methods were applied to build the specific criteria constraints and the range of the launch strategy in the model. The model in this study demonstrates that, under the current business situation, the Straight-focus marketing strategy is the best marketing strategy for Batik Madura SMEs in Pamekasan.
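A minimal sketch of the TOPSIS ranking step, assuming criterion weights have already been obtained (for example from the ANP stage). The alternatives, scores, weights, and benefit/cost designations below are invented for illustration and are not the study's data.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with TOPSIS.

    decision_matrix: (alternatives x criteria) scores.
    weights: criterion weights (e.g. from an ANP/AHP step), summing to 1.
    benefit: boolean per criterion, True if larger values are better.
    """
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)   # vector normalisation
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))            # ideal solution
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))             # anti-ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(closeness)[::-1], closeness

# Hypothetical example: 3 candidate strategies scored on 4 of the criteria
scores = np.array([[7.0, 5.0, 200.0, 3.0],
                   [6.0, 7.0, 150.0, 4.0],
                   [8.0, 4.0, 300.0, 2.0]])
weights = np.array([0.4, 0.2, 0.3, 0.1])
benefit = np.array([True, True, False, True])      # the third criterion is a cost
ranking, closeness = topsis(scores, weights, benefit)
print(ranking, np.round(closeness, 3))
```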
Alternate Models of Needs Assessment: Selecting the Right One for Your Organization.
ERIC Educational Resources Information Center
Leigh, Doug; Watkins, Ryan; Platt, William A.; Kaufman, Roger
2000-01-01
Defines needs assessment and compares different models in terms of levels (mega, macro, micro) and process and input. Recommends assessment of strengths and weakness of a model before using it in human resource development. (SK)
Evolution with Stochastic Fitness and Stochastic Migration
Rice, Sean H.; Papadopoulos, Anthony
2009-01-01
Background: Migration between local populations plays an important role in evolution - influencing local adaptation, speciation, extinction, and the maintenance of genetic variation. Like other evolutionary mechanisms, migration is a stochastic process, involving both random and deterministic elements. Many models of evolution have incorporated migration, but these have all been based on simplifying assumptions, such as low migration rate, weak selection, or large population size. We thus have no truly general and exact mathematical description of evolution that incorporates migration. Methodology/Principal Findings: We derive an exact equation for directional evolution, essentially a stochastic Price equation with migration, that encompasses all processes, both deterministic and stochastic, contributing to directional change in an open population. Using this result, we show that increasing the variance in migration rates reduces the impact of migration relative to selection. This means that models that treat migration as a single parameter tend to be biassed - overestimating the relative impact of immigration. We further show that selection and migration interact in complex ways, one result being that a strategy for which fitness is negatively correlated with migration rates (high fitness when migration is low) will tend to increase in frequency, even if it has lower mean fitness than do other strategies. Finally, we derive an equation for the effective migration rate, which allows some of the complex stochastic processes that we identify to be incorporated into models with a single migration parameter. Conclusions/Significance: As has previously been shown with selection, the role of migration in evolution is determined by the entire distributions of immigration and emigration rates, not just by the mean values. The interactions of stochastic migration with stochastic selection produce evolutionary processes that are invisible to deterministic evolutionary theory. PMID:19816580
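For orientation, the deterministic starting point that such results generalize is the standard Price equation; the stochastic, migration-extended form derived in the paper adds terms for random fitness and migration and is not reproduced here.

\[
\Delta \bar{z} \;=\; \frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}} \;+\; \frac{\operatorname{E}(w_i \, \Delta z_i)}{\bar{w}},
\]

where \(z_i\) is the trait value of type \(i\), \(w_i\) its fitness, and bars denote population means.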
NASA Technical Reports Server (NTRS)
Lahoti, G. D.; Akgerman, N.; Altan, T.
1978-01-01
Mild steel (AISI 1018) was selected as the model cold rolling material, and Ti-6Al-4V and Inconel 718 were selected as typical hot rolling and cold rolling alloys, respectively. The flow stress and workability of these alloys were characterized, and the friction factor at the roll/workpiece interface was determined at their respective working conditions by conducting ring tests. Computer-aided mathematical models for predicting metal flow and stresses, and for simulating the shape rolling process, were developed. These models utilized the upper bound and the slab methods of analysis, and were capable of predicting the lateral spread, roll separating force, roll torque, and local stresses, strains and strain rates. This computer-aided design system was also capable of simulating the actual rolling process, and thereby designing the roll pass schedule in rolling of an airfoil or a similar shape.
Auditory and visual cortex of primates: a comparison of two sensory systems
Rauschecker, Josef P.
2014-01-01
A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177
Coding of visual object features and feature conjunctions in the human brain.
Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M
2008-01-01
Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.
Towards Semantic Modelling of Business Processes for Networked Enterprises
NASA Astrophysics Data System (ADS)
Furdík, Karol; Mach, Marián; Sabol, Tomáš
The paper presents an approach to the semantic modelling and annotation of business processes and information resources, as it was designed within the FP7 ICT EU project SPIKE to support creation and maintenance of short-term business alliances and networked enterprises. A methodology for the development of the resource ontology, as a shareable knowledge model for semantic description of business processes, is proposed. Systematically collected user requirements, conceptual models implied by the selected implementation platform as well as available ontology resources and standards are employed in the ontology creation. The process of semantic annotation is described and illustrated using an example taken from a real application case.
NASA Technical Reports Server (NTRS)
Bradshaw, James F.; Sandefur, Paul G., Jr.; Young, Clarence P., Jr.
1991-01-01
A comprehensive study of the braze alloy selection process and strength characterization with application to wind tunnel models is presented. The applications for this study include the installation of stainless steel pressure tubing in model airfoil sections made of 18 Ni 200 grade maraging steel and the joining of wing structural components by brazing. Acceptable braze alloys for these applications are identified along with process, thermal braze cycle data, and thermal management procedures. Shear specimens are used to evaluate comparative shear strength properties for the various alloys at both room and cryogenic (-300 F) temperatures and include the effects of electroless nickel plating. Nickel plating was found to significantly enhance both the wettability and strength properties for the various braze alloys studied. The data are provided for use in selecting braze alloys for use with 18 Ni grade 200 steel in the design of wind tunnel models to be tested in an ambient or cryogenic environment.
Crowdsourcing Based 3d Modeling
NASA Astrophysics Data System (ADS)
Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.
2016-06-01
Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.
NASA Astrophysics Data System (ADS)
Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu
2018-06-01
A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method and has better accuracy with respect to the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power, and increase with increasing laser power at constant scan speed as well. The simulation and experimental results reveal that linear energy density is not always reliable when used as a design parameter in SLM.
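Since the abstract questions the reliability of linear energy density as a design parameter, the tiny sketch below simply makes the definition concrete: E = P/v, so quite different power and speed pairs share the same E even though their melt-pool behaviour may differ. The numerical values and units are illustrative, not taken from the paper.

```python
def linear_energy_density(power_w, scan_speed_mm_s):
    """Linear energy density E = P / v, in J/mm, a common SLM design parameter."""
    return power_w / scan_speed_mm_s

# Hypothetical sweep: each pair gives the same E = 0.5 J/mm despite very different P and v,
# which is one reason E alone may not fully determine melt-pool behaviour.
for P, v in [(100, 200), (200, 400), (300, 600)]:
    print(P, "W at", v, "mm/s ->", linear_energy_density(P, v), "J/mm")
```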
Multi-Topic Tracking Model for dynamic social network
NASA Astrophysics Data System (ADS)
Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun
2016-07-01
The topic tracking problem has attracted much attention in the last decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on dynamic bayesian network, namely Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes influence phenomenon, selection phenomenon, document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, Gibbs Random Field is defined to model the influence of historical status of users in the network and the interdependency between them in order to consider the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interests to topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered to ensure the continuity of the topic itself in topic evolution model. Expectation Maximization (EM) algorithm is utilized to estimate parameters in the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and Dynamic Topic Model (DTM) in generalization performance, topic interpretability performance, topic content evolution and topic popularity evolution performance.
Belief-desire reasoning as a process of selection.
Leslie, Alan M; German, Tim P; Polizzi, Pamela
2005-02-01
Human learning may depend upon domain specialized mechanisms. A plausible example is rapid, early learning about the thoughts and feelings of other people. A major achievement in this domain, at about age four in the typically developing child, is the ability to solve problems in which the child attributes false beliefs to other people and predicts their actions. The main focus of theorizing has been why 3-year-olds fail, and only recently have there been any models of how success is achieved in false-belief tasks. Leslie and Polizzi (Inhibitory processing in the false-belief task: Two conjectures. Developmental Science, 1, 247-254, 1998) proposed two competing models of success, which are the focus of the current paper. The models assume that belief-desire reasoning is a process which selects a content for an agent's belief and an action for the agent's desire. In false belief tasks, the theory of mind mechanism (ToMM) provides plausible candidate belief contents, among which will be a 'true-belief.' A second process reviews these candidates and by default will select the true-belief content for attribution. To succeed in a false-belief task, the default content must be inhibited so that attention shifts to another candidate belief. In traditional false-belief tasks, the protagonist's desire is to approach an object. Here we make use of tasks in which the protagonist has a desire to avoid an object, about which she has a false-belief. Children find such tasks much more difficult than traditional tasks. Our models explain the additional difficulty by assuming that predicting action from an avoidance desire also requires an inhibition. The two processing models differ in the way that belief and desire inhibitory processes combine to achieve successful action prediction. In six experiments we obtain evidence favoring one model, in which parallel inhibitory processes cancel out, over the other model, in which serial inhibitions force attention to a previously inhibited location. These results are discussed in terms of a set of simple proposals for the modus operandi of a domain specific learning mechanism. The learning mechanism is in part modular--the ToMM--and in part penetrable--the Selection Processor (SP). We show how ToMM-SP can account both for competence and for successful and unsuccessful performance on a wide range of belief-desire tasks across the preschool period. Together, ToMM and SP attend to and learn about mental states.
Compatibility of Common Instructional Models with the DACUM Process
ERIC Educational Resources Information Center
Wyrostek, Warren; Downey, Steven
2017-01-01
Practitioners use an expansive array of instructional design models. Although many of these models acknowledge the need for analyzing occupational roles, they do not define steps for conducting these analyses. This article reviews prominent models and provides prescriptive guidance for selecting appropriate models given a project's (a) Product…
Shields, Walker
2006-12-01
The author uses a dream specimen as interpreted during psychoanalysis to illustrate Modell's hypothesis that Edelman's theory of neuronal group selection (TNGS) may provide a valuable neurobiological model for Freud's dynamic unconscious, imaginative processes in the mind, the retranscription of memory in psychoanalysis, and intersubjective processes in the analytic relationship. He draws parallels between the interpretation of the dream material with keen attention to affect-laden meanings in the evolving analytic relationship in the domain of psychoanalysis and the principles of Edelman's TNGS in the domain of neurobiology. The author notes how this correlation may underscore the importance of dream interpretation in psychoanalysis. He also suggests areas for further investigation in both realms based on study of their interplay.
NASA Astrophysics Data System (ADS)
Nazri, Engku Muhammad; Yusof, Nur Ai'Syah; Ahmad, Norazura; Shariffuddin, Mohd Dino Khairri; Khan, Shazida Jan Mohd
2017-11-01
Prioritizing and making decisions on which student activities should be selected and conducted to fulfill the aspiration of a university as translated in its strategic plan must be executed with transparency and accountability. It is becoming even more crucial, particularly for universities in Malaysia, with the recent budget cut imposed by the Malaysian government. In this paper, we illustrate how a 0-1 integer programming (0-1 IP) model was implemented to select which activities, among the forty activities proposed by the student body of Universiti Utara Malaysia (UUM), should be implemented for the 2017/2018 academic year. Two different models were constructed. The first model was developed to determine the minimum total budget that should be given to the student body by the UUM management to conduct all the activities that can fulfill the minimum targeted number of activities as stated in its strategic plan. On the other hand, the second model was developed to determine which activities to select based on the total budget already allocated beforehand by the UUM management towards fulfilling the requirements set in its strategic plan. The selection of activities for the second model was also based on the preference of the members of the student body, whereby the preference value for each activity was determined using the Compromised-Analytical Hierarchy Process. The outputs from both models were compared and discussed. The technique used in this study will be useful and suitable for implementation by organizations with key performance indicator-oriented programs and limited budget allocations.
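A hedged sketch of the second kind of model described above: given a fixed budget, choose activities to maximise total preference value while meeting a minimum number of activities. The activity names, costs, preference values, budget, and the use of the PuLP library are invented placeholders, not UUM's data or the authors' exact formulation.

```python
import pulp

# Hypothetical data: cost and preference value of each proposed activity
activities = [f"A{i}" for i in range(1, 11)]
cost = dict(zip(activities, [5, 8, 3, 10, 7, 2, 6, 9, 4, 5]))    # e.g. thousands of RM
score = dict(zip(activities, [6, 9, 4, 10, 7, 3, 8, 9, 5, 6]))   # preference values
budget = 30

prob = pulp.LpProblem("activity_selection", pulp.LpMaximize)
x = {a: pulp.LpVariable(a, cat="Binary") for a in activities}    # 1 if the activity is run

prob += pulp.lpSum(score[a] * x[a] for a in activities)          # maximise total preference
prob += pulp.lpSum(cost[a] * x[a] for a in activities) <= budget # stay within the allocated budget
prob += pulp.lpSum(x.values()) >= 4                              # minimum targeted number of activities

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected:", [a for a in activities if x[a].value() == 1])
```

The first kind of model is the mirror image: minimise the total cost subject to the same minimum-activity requirement, which amounts to swapping the objective and the budget constraint.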
NASA Astrophysics Data System (ADS)
Martinez, Guillermo F.; Gupta, Hoshin V.
2011-12-01
Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
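To make the "information criteria constrained by hydrologic consistency" idea concrete, the sketch below screens candidate structures with a simple bias check (a stand-in for a consistency test) before ranking the survivors by AIC. The consistency test, Gaussian likelihood form, and synthetic data are simplifying assumptions, not the study's full procedure, which also applies a flow-space transformation.

```python
import numpy as np

def aic_bic(obs, sim, n_params):
    """Gaussian-error information criteria from a model's simulated streamflow."""
    n = len(obs)
    sse = np.sum((obs - sim) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sse / n) + 1)   # MLE variance = SSE/n
    return 2 * n_params - 2 * log_lik, n_params * np.log(n) - 2 * log_lik

def select_structure(obs, candidates, tol=0.1):
    """Keep only candidates whose mean flow matches the observations within tol
    (a stand-in consistency test), then pick the one with the lowest AIC."""
    consistent = []
    for name, sim, k in candidates:
        if abs(sim.mean() - obs.mean()) / obs.mean() <= tol:
            consistent.append((aic_bic(obs, sim, k)[0], name))
    return min(consistent)[1] if consistent else None

# Hypothetical daily flows and two candidate structures of different complexity
rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 1.0, size=365)
cands = [("two-store", obs + rng.normal(0, 0.30, 365), 4),
         ("three-store", obs + rng.normal(0, 0.25, 365), 6)]
print(select_structure(obs, cands))
```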
Mechanisms of Reference Frame Selection in Spatial Term Use: Computational and Empirical Studies
ERIC Educational Resources Information Center
Schultheis, Holger; Carlson, Laura A.
2017-01-01
Previous studies have shown that multiple reference frames are available and compete for selection during the use of spatial terms such as "above." However, the mechanisms that underlie the selection process are poorly understood. In the current paper we present two experiments and a comparison of three computational models of selection…
Modeling selective pressures on phytoplankton in the global ocean.
Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W
2010-03-10
Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and selective pressures that may be difficult or impossible to study by other means. More generally, and perhaps more importantly, this study introduces an approach for testing hypotheses about the processes that underlie genetic variation among marine microbes, embedded in the dynamic physical, chemical, and biological forces that generate and shape this diversity.
Modeling of block copolymer dry etching for directed self-assembly lithography
NASA Astrophysics Data System (ADS)
Belete, Zelalem; Baer, Eberhard; Erdmann, Andreas
2018-03-01
Directed self-assembly (DSA) of block copolymers (BCP) is a promising alternative technology to overcome the limits of patterning for the semiconductor industry. DSA exploits the self-assembling property of BCPs for nano-scale manufacturing and to repair defects in patterns created during photolithography. After self-assembly of BCPs, to transfer the created pattern to the underlying substrate, selective etching of PMMA (poly (methyl methacrylate)) with respect to PS (polystyrene) is required. However, the etch process to transfer the self-assembled "fingerprint" DSA patterns to the underlying layer is still a challenge. Combined experimental and modelling studies increase understanding of plasma interaction with BCP materials during the etch process and support the development of selective processes that form well-defined patterns. In this paper, a simple model based on a generic surface model has been developed and an investigation to understand the etch behavior of PS-b-PMMA for Ar and Ar/O2 plasma chemistries has been conducted. The implemented model is calibrated for etch rates and etch profiles against literature data to extract parameters and conduct simulations. In order to understand the effect of the plasma on the block copolymers, the etch model was first calibrated for polystyrene (PS) and poly (methyl methacrylate) (PMMA) homopolymers. After calibration of the model with the homopolymer etch rates, a full Monte-Carlo simulation was conducted and the simulation results are compared with critical-dimension (CD) and selectivity measurements of the etch profile. In addition, etch simulations for a lamellae pattern have been demonstrated using the implemented model.
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
Effects of Genetic Drift and Gene Flow on the Selective Maintenance of Genetic Variation
Star, Bastiaan; Spencer, Hamish G.
2013-01-01
Explanations for the genetic variation ubiquitous in natural populations are often classified by the population–genetic processes they emphasize: natural selection or mutation and genetic drift. Here we investigate models that incorporate all three processes in a spatially structured population, using what we call a construction approach, simulating finite populations under selection that are bombarded with a steady stream of novel mutations. As expected, the amount of genetic variation was reduced compared to previous models that ignored the stochastic effects of drift, especially for smaller populations and when spatial structure was most profound. By contrast, however, for higher levels of gene flow and larger population sizes, the amount of genetic variation found after many generations was greater than that in simulations without drift. This increased amount of genetic variation is due to the introduction of slightly deleterious alleles by genetic drift, and this process is more efficient when the migration load is higher. The incorporation of genetic drift also selects for fitness sets that exhibit allele-frequency equilibria with larger domains of attraction: they are “more stable.” Moreover, the finiteness of populations strongly influences levels of local adaptation, selection strength, and the proportion of allele-frequency vectors that can be distinguished from the neutral expectation. PMID:23457235
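The construction approach described above can be illustrated with a much-reduced sketch: a single-locus, two-allele haploid Wright-Fisher simulation in Python that combines selection, symmetric mutation and binomial drift. This is not the authors' spatially structured, multi-allele model; the population size, selection coefficient and mutation rate below are placeholder values.

    import numpy as np

    def wright_fisher(N=1000, s=0.01, mu=1e-4, p0=0.05, generations=2000, seed=0):
        """Track the frequency of an allele A with selective advantage s."""
        rng = np.random.default_rng(seed)
        p, traj = p0, [p0]
        for _ in range(generations):
            # deterministic step: selection, then symmetric mutation
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            p_mut = p_sel * (1 - mu) + (1 - p_sel) * mu
            # stochastic step: binomial sampling of N offspring (drift)
            p = rng.binomial(N, p_mut) / N
            traj.append(p)
            if p in (0.0, 1.0):
                break
        return np.array(traj)

    freqs = wright_fisher()
    print(f"final frequency after {len(freqs) - 1} generations: {freqs[-1]:.3f}")

Spatial structure and migration load, central to the study above, would require one such population per deme plus a migration step between generations.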
Bully Victimization: Selection and Influence Within Adolescent Friendship Networks and Cliques.
Lodder, Gerine M A; Scholte, Ron H J; Cillessen, Antonius H N; Giletta, Matteo
2016-01-01
Adolescents tend to form friendships with similar peers and, in turn, their friends further influence adolescents' behaviors and attitudes. Emerging work has shown that these selection and influence processes also might extend to bully victimization. However, no prior work has examined selection and influence effects involved in bully victimization within cliques, despite theoretical accounts emphasizing the importance of cliques in this regard. This study examined selection and influence processes in adolescence regarding bully victimization both at the level of the entire friendship network and at the level of cliques. We used a two-wave design (5-month interval). Participants were 543 adolescents (50.1% male, Mage = 15.8) in secondary education. Stochastic actor-based models indicated that at the level of the larger friendship network, adolescents tended to select friends with levels of bully victimization similar to their own. In addition, adolescent friends influenced each other in terms of bully victimization over time. Actor-Partner Interdependence models showed that similarities in bully victimization between clique members were not due to selection of clique members. For boys, average clique bully victimization predicted individual bully victimization over time (influence), but not vice versa. No influence was found for girls, indicating that different mechanisms may underlie friend influence on bully victimization for girls and boys. The differences in results at the level of the larger friendship network versus the clique emphasize the importance of taking the type of friendship ties into account in research on selection and influence processes involved in bully victimization.
NASA Astrophysics Data System (ADS)
Maslowski, W.
2017-12-01
The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at process scale and to improve prediction of its change at a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, equally important objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research is in direct support of US environmental assessment and prediction needs, including those of the U.S. Navy, Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models together with available observations in the region to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations, high model resolution and sensitivity of simulated sea ice states to scale-dependent model parameterizations controlling ice dynamics, thermodynamics and coupling with the atmosphere and ocean.
Using Dispersed Modes During Model Correlation
NASA Technical Reports Server (NTRS)
Stewart, Eric C.; Hathcock, Megan L.
2017-01-01
The model correlation process for the modal characteristics of a launch vehicle is well established. After a test, parameters within the nominal model are adjusted to reflect structural dynamics revealed during testing. However, a full model correlation process for a complex structure can take months of man-hours and many computational resources. If the analyst only has weeks, or even days, of time in which to correlate the nominal model to the experimental results, then the traditional correlation process is not suitable. This paper describes using model dispersions to assist the model correlation process and decrease the overall cost of the process. The process creates thousands of model dispersions from the nominal model prior to the test and then compares each of them to the test data. Using mode shape and frequency error metrics, one dispersion is selected as the best match to the test data. This dispersion is further improved by using a commercial model correlation software. In the three examples shown in this paper, this dispersion based model correlation process performs well when compared to models correlated using traditional techniques and saves time in the post-test analysis.
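A minimal sketch of the selection step is shown below: each pre-computed dispersion is scored against the test modes using a frequency error and the Modal Assurance Criterion (MAC), and the lowest-scoring dispersion is kept. The data layout, mode pairing and equal weighting are assumptions for illustration, not the authors' exact metrics.

    import numpy as np

    def mac(phi_a, phi_t):
        """Modal Assurance Criterion between an analysis and a test mode shape."""
        return np.abs(phi_a @ phi_t) ** 2 / ((phi_a @ phi_a) * (phi_t @ phi_t))

    def score_dispersion(freqs_model, shapes_model, freqs_test, shapes_test,
                         w_freq=1.0, w_mac=1.0):
        """Lower score = better match; modes are assumed already paired."""
        freq_err = np.abs(freqs_model - freqs_test) / freqs_test
        mac_err = np.array([1.0 - mac(shapes_model[:, i], shapes_test[:, i])
                            for i in range(shapes_test.shape[1])])
        return w_freq * freq_err.mean() + w_mac * mac_err.mean()

    # best = min(range(len(dispersions)),
    #            key=lambda k: score_dispersion(*dispersions[k], f_test, phi_test))

The selected dispersion would then be handed to the commercial correlation software for the final refinement described above.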
Song, Mingkai; Cui, Linlin; Kuang, Han; Zhou, Jingwei; Yang, Pengpeng; Zhuang, Wei; Chen, Yong; Liu, Dong; Zhu, Chenjie; Chen, Xiaochun; Ying, Hanjie; Wu, Jinglan
2018-08-10
An intermittent simulated moving bed (3F-ISMB) operation scheme, an extension of the 3W-ISMB to the non-linear adsorption region, has been introduced for the separation of a glucose, lactic acid and acetic acid ternary mixture. This work focuses on exploring the feasibility of the proposed process theoretically and experimentally. Firstly, the real 3F-ISMB model, coupled with the transport dispersive model (TDM) and the Modified-Langmuir isotherm, was established to build up the separation parameter plane. Subsequently, three operating conditions were selected from the plane to run the 3F-ISMB unit. The experimental results were used to verify the model. Afterwards, the influences of the various flow rates on the separation performance were investigated systematically by means of the validated 3F-ISMB model. The intermittently retained component, lactic acid, was finally obtained with a purity of 98.5%, a recovery of 95.5% and an average concentration of 38 g/L. The proposed 3F-ISMB process can efficiently separate a mixture with low selectivity into three fractions. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
Bergman, Michael; Zhuang, Ziqing; Brochu, Elizabeth; Palmiero, Andrew
2016-01-01
National Institute for Occupational Safety and Health (NIOSH)-approved N95 filtering-facepiece respirators (FFR) are currently stockpiled by the U.S. Centers for Disease Control and Prevention (CDC) for emergency deployment to healthcare facilities in the event of a widespread emergency such as an influenza pandemic. This study assessed the fit of N95 FFRs purchased for the CDC Strategic National Stockpile. The study addresses the question of whether the fit achieved by specific respirator sizes relates to facial size categories as defined by two NIOSH fit test panels. Fit test data were analyzed from 229 test subjects who performed a nine-donning fit test on seven N95 FFR models using a quantitative fit test protocol. An initial respirator model selection process was used to determine if the subject could achieve an adequate fit on a particular model; subjects then tested the adequately fitting model for the nine-donning fit test. Only data for models which provided an adequate initial fit (through the model selection process) for a subject were analyzed for this study. For the nine-donning fit test, six of the seven respirator models accommodated the fit of subjects (as indicated by geometric mean fit factor > 100) for not only the intended NIOSH bivariate and PCA panel sizes corresponding to the respirator size, but also for other panel sizes which were tested for each model. The model which showed poor performance may not be accurately represented because only two subjects passed the initial selection criteria to use this model. Findings are supportive of the current selection of facial dimensions for the new NIOSH panels. The various FFR models selected for the CDC Strategic National Stockpile provide a range of sizing options to fit a variety of facial sizes. PMID:26877587
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dumitrescu, Eugene; Humble, Travis S.
The accurate and reliable characterization of quantum dynamical processes underlies efforts to validate quantum technologies, where discrimination between competing models of observed behaviors informs efforts to fabricate and operate qubit devices. We present a protocol for quantum channel discrimination that leverages advances in direct characterization of quantum dynamics (DCQD) codes. We demonstrate that DCQD codes enable selective process tomography to improve discrimination between entangling and correlated quantum dynamics. Numerical simulations show that selective process tomography requires only a few measurement configurations to achieve a low false alarm rate and that the DCQD encoding improves the resilience of the protocol to hidden sources of noise. Lastly, our results show that selective process tomography with DCQD codes is useful for efficiently distinguishing sources of correlated crosstalk from uncorrelated noise in current and future experimental platforms.
Long, Chengjiang; Hua, Gang; Kapoor, Ashish
2015-01-01
We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892
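A stripped-down version of the entropy-based sample selection is sketched below with scikit-learn's Gaussian process classifier; it omits the paper's two-level labeler flip-noise models and the active selection of labelers, so it is an illustration of the idea rather than the authors' algorithm.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier

    def next_query(X_labeled, y_labeled, X_pool):
        """Return the index of the pool sample with the highest predictive entropy."""
        gpc = GaussianProcessClassifier().fit(X_labeled, y_labeled)
        proba = gpc.predict_proba(X_pool)
        entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
        return int(np.argmax(entropy))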
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Implementation of the nursing process in a health area: models and assessment structures used
Huitzi-Egilegor, Joseba Xabier; Elorza-Puyadena, Maria Isabel; Urkia-Etxabe, Jose Maria; Asurabarrena-Iraola, Carmen
2014-01-01
OBJECTIVE: to analyze what nursing models and nursing assessment structures have been used in the implementation of the nursing process at the public and private centers in the health area Gipuzkoa (Basque Country). METHOD: a retrospective study was undertaken, based on the analysis of the nursing records used at the 158 centers studied. RESULTS: the Henderson model, Carpenito's bifocal structure, Gordon's assessment structure and the Resident Assessment Instrument Nursing Home 2.0 have been used as nursing models and assessment structures to implement the nursing process. At some centers, the selected model or assessment structure has varied over time. CONCLUSION: Henderson's model has been the most used to implement the nursing process. Furthermore, the trend is observed to complement or replace Henderson's model by nursing assessment structures. PMID:25493672
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
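The backwards elimination idea can be sketched roughly as follows: repeatedly drop the least important fraction of predictors and keep the variable subset with the best out-of-bag score. The drop fraction, tree count and stopping size are illustrative choices, not the study's settings, and a classifier is used here because the response is good/poor condition.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def rf_backward_elimination(X, y, names, drop_frac=0.2, min_vars=5):
        keep = list(range(X.shape[1]))
        best_oob, best_keep = -np.inf, list(keep)
        while len(keep) > min_vars:
            rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                        random_state=0).fit(X[:, keep], y)
            if rf.oob_score_ > best_oob:
                best_oob, best_keep = rf.oob_score_, list(keep)
            order = np.argsort(rf.feature_importances_)      # least important first
            n_drop = max(1, int(drop_frac * len(keep)))
            keep = [keep[i] for i in sorted(order[n_drop:])]
        return [names[i] for i in best_keep], best_oob

Note that, as the abstract stresses, an honest accuracy estimate for the reduced model requires cross-validation folds held out from this elimination loop.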
Tourret, Damien; Clarke, Amy J.; Imhoff, Seth D.; ...
2015-05-27
We present a three-dimensional extension of the multiscale dendritic needle network (DNN) model. This approach enables quantitative simulations of the unsteady dynamics of complex hierarchical networks in spatially extended dendritic arrays. We apply the model to directional solidification of Al-9.8 wt.%Si alloy and directly compare the model predictions with measurements from experiments with in situ x-ray imaging. The focus is on the dynamical selection of primary spacings over a range of growth velocities, and the influence of sample geometry on the selection of spacings. Simulation results show good agreement with experiments. The computationally efficient DNN model opens new avenues for investigating the dynamics of large dendritic arrays at scales relevant to solidification experiments and processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crapps, Justin M.; Clarke, Kester D.; Katz, Joel D.
2012-06-06
We use experimentation and finite element modeling to study a Hot Isostatic Press (HIP) manufacturing process for U-10Mo Monolithic Fuel Plates. Finite element simulations are used to identify the material properties affecting the process and improve the process geometry. Accounting for the high temperature material properties and plasticity is important to obtain qualitative agreement between model and experimental results. The model allows us to improve the process geometry and provide guidance on selection of material and finish conditions for the process strongbacks. We conclude that the HIP can must be fully filled to provide uniform normal stress across the bonding interface.
Aris-Brosou, Stéphane; Bielawski, Joseph P
2006-08-15
A popular approach to examine the roles of mutation and selection in the evolution of genomes has been to consider the relationship between codon bias and synonymous rates of molecular evolution. A significant relationship between these two quantities is taken to indicate the action of weak selection on substitutions among synonymous codons. The neutral theory predicts that the rate of evolution is inversely related to the level of functional constraint. Therefore, selection against the use of non-preferred codons among those coding for the same amino acid should result in lower rates of synonymous substitution as compared with sites not subject to such selection pressures. However, reliably measuring the extent of such a relationship is problematic, as estimates of synonymous rates are sensitive to our assumptions about the process of molecular evolution. Previous studies showed the importance of accounting for unequal codon frequencies, in particular when synonymous codon usage is highly biased. Yet, unequal codon frequencies can be modeled in different ways, making different assumptions about the mutation process. Here we conduct a simulation study to evaluate two different ways of modeling uneven codon frequencies and show that both model parameterizations can have a dramatic impact on rate estimates and affect biological conclusions about genome evolution. We reanalyze three large data sets to demonstrate the relevance of our results to empirical data analysis.
Analytical network process based optimum cluster head selection in wireless sensor network.
Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad
2017-01-01
Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable sensors for health monitoring and a plethora of other applications. A WSN is equipped with hundreds to thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of the available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge and split technique was proposed to construct the network topology. Constructing the topology through our proposed technique, in this paper we use an analytical network process (ANP) model for cluster head selection in WSNs. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is treated as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSN. In addition, sensitivity analysis is carried out to check the stability of the alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy efficient clustering protocols in terms of optimum CH selection and minimizing the CH reselection process, which results in extending the overall network lifetime. The analysis also shows that the ANP method used for CH selection provides a better understanding of the dependencies among the different components involved in the evaluation process.
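As a simplified stand-in for the ANP ranking, the sketch below scores candidate cluster heads with a weighted sum over the five criteria listed above. Real ANP derives the priorities from pairwise-comparison supermatrices with interdependencies; the weights and the benefit/cost direction of each criterion here are assumptions for illustration only.

    import numpy as np

    CRITERIA = ["DistNode", "REL", "DistCent", "TCH", "MN"]
    WEIGHTS = np.array([0.25, 0.35, 0.20, 0.10, 0.10])     # assumed priorities
    BENEFIT = np.array([False, True, False, False, True])  # True = larger is better

    def rank_cluster_heads(node_ids, criteria_matrix):
        """criteria_matrix: one row per candidate node, columns in CRITERIA order."""
        x = np.asarray(criteria_matrix, dtype=float)
        norm = (x - x.min(axis=0)) / (np.ptp(x, axis=0) + 1e-12)  # min-max scale
        norm[:, ~BENEFIT] = 1.0 - norm[:, ~BENEFIT]               # flip cost criteria
        scores = norm @ WEIGHTS
        return sorted(zip(node_ids, scores), key=lambda t: -t[1])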
NASA Astrophysics Data System (ADS)
Hidayat, Taufiq; Shishin, Denis; Decterov, Sergei A.; Hayes, Peter C.; Jak, Evgueni
2017-01-01
Uncertainty in the metal price and competition between producers mean that the daily operation of a smelter needs to target high recovery of valuable elements at low operating cost. Options for the improvement of plant operation can be examined and decision making can be informed based on accurate information from laboratory experimentation coupled with predictions using advanced thermodynamic models. Integrated high-temperature experimental and thermodynamic modelling research on phase equilibria and thermodynamics of copper-containing systems has been undertaken at the Pyrometallurgy Innovation Centre (PYROSEARCH). The experimental phase equilibria studies involve high-temperature equilibration, rapid quenching and direct measurement of phase compositions using electron probe X-ray microanalysis (EPMA). The thermodynamic modelling deals with the development of an accurate thermodynamic database built through critical evaluation of experimental data, selection of solution models, and optimization of model parameters. The database covers the Al-Ca-Cu-Fe-Mg-O-S-Si chemical system. The gas, slag, matte, liquid and solid metal phases, spinel solid solution, as well as numerous solid oxide and sulphide phases are included. The database works within the FactSage software environment. Examples of phase equilibria data and thermodynamic models of selected systems, as well as possible implementations of the research outcomes in selected copper-making processes, are presented.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
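A rough sketch of the two decomposition steps is given below: controlled variables are clustered with affinity propagation on their correlation matrix, and candidate inputs for one resulting subsystem are shortlisted from the first canonical component of a CCA against that subsystem's outputs. The variable layout and the top-k cut-off are assumptions, and the online block-wise model updating is not shown.

    import numpy as np
    from sklearn.cluster import AffinityPropagation
    from sklearn.cross_decomposition import CCA

    def cluster_outputs(Y):
        """Y: samples x controlled variables; cluster the variables by correlation."""
        similarity = np.corrcoef(Y.T)
        return AffinityPropagation(affinity="precomputed",
                                   random_state=0).fit_predict(similarity)

    def select_inputs(X, Y_cluster, top_k=5):
        """Rank candidate inputs X for one subsystem's outputs Y_cluster."""
        cca = CCA(n_components=1).fit(X, Y_cluster)
        loading = np.abs(cca.x_loadings_[:, 0])
        return np.argsort(loading)[::-1][:top_k]   # indices of the selected inputs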
Iseki, Ryuta
2004-12-01
This article reviewed research on the construction of situation models during reading. To position the variety of research within the overall process appropriately, a unitary framework was devised in terms of three theories of on-line processing: the resonance process, the event-indexing model, and constructionist theory. The resonance process was treated as a basic activation mechanism in the framework. The event-indexing model was regarded as a screening system that selects and encodes activated information into situation models along situational dimensions. Constructionist theory was considered to have a supervisory role based on coherence and explanation. From the viewpoint of this unitary framework, some problems concerning each theory were examined and possible interpretations were given. Finally, it was pointed out that there are few theoretical arguments on associative processing at the global level and on the encoding of text- and inference-information into long-term memory.
A universal deep learning approach for modeling the flow of patients under different severities.
Jiang, Shancheng; Chin, Kwai-Sang; Tsui, Kwok L
2018-02-01
The Accident and Emergency Department (A&ED) is the frontline for providing emergency care in hospitals. Unfortunately, A&ED resources have failed to keep up with continuously increasing demand in recent years, which leads to overcrowding in the A&ED. Knowing the fluctuation of patient arrival volume in advance is an important prerequisite for relieving this pressure. Based on this motivation, the objective of this study is to explore an integrated framework with high accuracy for predicting A&ED patient flow under different triage levels, by combining a novel feature selection process with deep neural networks. Administrative data are collected from an actual A&ED and categorized into five groups based on different triage levels. A genetic algorithm (GA)-based feature selection algorithm is improved and implemented as a pre-processing step for this time-series prediction problem, in order to explore key features affecting patient flow. In our improved GA, a fitness-based crossover is proposed to maintain the joint information of multiple features during the iterative process, instead of the traditional point-based crossover. A deep neural network (DNN) is employed as the prediction model to exploit its universal adaptability and high flexibility. In the model-training process, the learning algorithm is configured around a parallel stochastic gradient descent algorithm. Two effective regularization strategies are integrated in one DNN framework to avoid overfitting. All introduced hyper-parameters are optimized efficiently by grid-search in one pass. As for feature selection, our improved GA-based feature selection algorithm outperformed a typical GA and four state-of-the-art feature selection algorithms (mRMR, SAFS, VIFR, and CFR). As for the prediction accuracy of the proposed integrated framework, compared with other frequently used statistical models (GLM, seasonal-ARIMA, ARIMAX, and ANN) and modern machine learning models (SVM-RBF, SVM-linear, RF, and R-LASSO), the proposed integrated "DNN-I-GA" framework achieves higher prediction accuracy on both MAPE and RMSE metrics in pairwise comparisons. The contribution of our study is two-fold. Theoretically, the traditional GA-based feature selection process is improved to have fewer hyper-parameters and higher efficiency, and the joint information of multiple features is maintained by the fitness-based crossover operator. The universal property of the DNN is further enhanced by merging different regularization strategies. Practically, features selected by our improved GA can be used to uncover an underlying relationship between patient flows and input features. Predictive values are significant indicators of patients' demand and can be used by A&ED managers for resource planning and allocation. The high accuracy achieved by the present framework in different cases enhances the reliability of downstream decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
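The fitness-based crossover can be pictured with the condensed GA sketch below, which evolves binary feature masks and draws each child gene from the fitter parent with higher probability. The evaluate() callback (for example, validation accuracy of a model trained on the masked features) and all GA settings are assumptions, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness_crossover(pa, pb, fa, fb):
        """Take each gene from parent a with probability proportional to its fitness."""
        take_a = rng.random(pa.size) < fa / (fa + fb + 1e-12)
        return np.where(take_a, pa, pb)

    def ga_select(n_features, evaluate, pop_size=30, gens=40, p_mut=0.02):
        pop = rng.integers(0, 2, size=(pop_size, n_features))
        for _ in range(gens):
            fit = np.array([evaluate(mask) for mask in pop])    # higher = better
            top = np.argsort(fit)[::-1][:pop_size // 2]
            parents, pfit = pop[top], fit[top]
            children = []
            while len(children) < pop_size:
                i, j = rng.choice(len(parents), 2, replace=False)
                child = fitness_crossover(parents[i], parents[j], pfit[i], pfit[j])
                child ^= (rng.random(n_features) < p_mut)       # bit-flip mutation
                children.append(child)
            pop = np.array(children)
        fit = np.array([evaluate(mask) for mask in pop])
        return pop[int(np.argmax(fit))]                         # best feature mask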
Multi-locus analysis of genomic time series data from experimental evolution.
Terhorst, Jonathan; Schlötterer, Christian; Song, Yun S
2015-04-01
Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect, locate and estimate the fitness of a selected allele from among several linked sites. We study how this power changes for different values of selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Then, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
Fixation Probability in a Haploid-Diploid Population
Bessho, Kazuhiro; Otto, Sarah P.
2017-01-01
Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright–Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. PMID:27866168
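For comparison with the haploid-diploid results above, the classical single-phase benchmark is Kimura's diffusion approximation for the fixation probability of an allele with selection coefficient s and initial frequency p in a diploid Wright-Fisher population of effective size Ne; it is sketched numerically below and is not the paper's haploid-diploid expression.

    import numpy as np

    def kimura_fixation_prob(p, s, Ne):
        """u(p) = (1 - exp(-4*Ne*s*p)) / (1 - exp(-4*Ne*s)); u(p) = p when s = 0."""
        if abs(s) < 1e-12:
            return p
        return (1.0 - np.exp(-4.0 * Ne * s * p)) / (1.0 - np.exp(-4.0 * Ne * s))

    # A new beneficial mutant (p = 1/(2*Ne)) under weak selection fixes with
    # probability close to 2s:
    print(kimura_fixation_prob(1 / (2 * 1000), 0.01, 1000))   # about 0.0198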
Ortíz, Miguel A; Felizzola, Heriberto A; Nieto Isaza, Santiago
2015-01-01
The project selection process is a crucial step for healthcare organizations at the moment of implementing six sigma programs in both administrative and caring processes. However, six sigma project selection is often defined as a decision making process with interaction and feedback between criteria, so it is necessary to explore different methods to help healthcare companies determine the six sigma projects that provide the maximum benefits. This paper describes the application of both ANP (Analytic Network Process) and DEMATEL (Decision Making Trial and Evaluation Laboratory)-ANP in a public medical centre to establish the most suitable six sigma project; finally, these methods were compared to evaluate their performance in the decision making process. ANP and DEMATEL-ANP were used to evaluate 6 six sigma project alternatives under an evaluation model composed of 3 strategies, 4 criteria and 15 sub-criteria. Judgement matrixes were completed by the six sigma team, whose participants worked in different departments of the medical centre. The improvement of the care opportunity in obstetric outpatients was selected as the most suitable six sigma project, with a score of 0.117 as its contribution to the organization's goals. DEMATEL-ANP performed better in the decision making process since it reduced the error probability due to interactions and feedback. ANP and DEMATEL-ANP effectively supported six sigma project selection processes, helping to create a complete framework that guarantees the prioritization of projects that provide maximum benefits to healthcare organizations. As DEMATEL-ANP performed better, it should be used by practitioners involved in decisions related to the implementation of six sigma programs in the healthcare sector, accompanied by the adequate identification of the evaluation criteria that support the decision making model. Thus, this comparative study contributes to choosing more effective approaches in this field. Suggestions for further work are also proposed so that these methods can be applied more adequately in six sigma project selection processes in healthcare.
QaaS (quality as a service) model for web services using big data technologies
NASA Astrophysics Data System (ADS)
Ahmad, Faisal; Sarkar, Anirban
2017-10-01
Quality of service (QoS) determines the service usability and utility, and both of these influence the service selection process. The QoS varies from one service provider to another. Each web service has its own methodology for evaluating QoS. The lack of a transparent QoS evaluation model makes service selection challenging. Moreover, most QoS evaluation processes do not consider historical data, which not only helps in obtaining a more accurate QoS but also supports future prediction, recommendation and knowledge discovery. QoS-driven service selection demands a model where QoS can be provided as a service to end users. This paper proposes a layered QaaS (quality as a service) model, along the same lines as PaaS and software as a service, where users can provide QoS attributes as inputs and the model returns services satisfying the users' QoS expectations. This paper covers all the key aspects in this context, such as the selection of data sources, their transformation, and the evaluation, classification and storage of QoS. The paper uses server logs as the source for evaluating QoS values, a common methodology for their evaluation, and big data technologies for their transformation and analysis. This paper also establishes that Spark outperforms Pig with respect to the evaluation of QoS from logs.
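As a toy illustration of deriving QoS from server logs (the paper performs the analogous aggregation at scale with big data tooling), the sketch below computes per-service average response time and availability from a simple CSV access log. The log schema (service, status, latency_ms) is an assumption.

    import csv
    from collections import defaultdict

    def qos_from_log(path):
        stats = defaultdict(lambda: {"n": 0, "ok": 0, "latency": 0.0})
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):       # columns: service,status,latency_ms
                s = stats[row["service"]]
                s["n"] += 1
                s["ok"] += int(row["status"]) < 500
                s["latency"] += float(row["latency_ms"])
        return {svc: {"avg_latency_ms": v["latency"] / v["n"],
                      "availability": v["ok"] / v["n"]}
                for svc, v in stats.items() if v["n"]}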
Crew Interface Analysis: Selected Articles on Space Human Factors Research, 1987 - 1991
1993-07-01
recognitions to that distractor) suggest that the perceptual type of the graph has a strong representation in memory. We found that both training with... processing strategy. If my goal were to compare the value of variables or (possibly) to compare a trend, I would select a perceptual strategy. If... be needed to determine specific processing models for different questions using the perceptual strategy. In addition, predictions about the memory
Kim, Hui Taek; Ahn, Tae Young; Jang, Jae Hoon; Kim, Kang Hee; Lee, Sung Jae; Jung, Duk Young
2017-03-01
Three-dimensional (3D) computed tomography imaging is now being used to generate 3D models for planning orthopaedic surgery, but the process remains time consuming and expensive. For chronic radial head dislocation, we have designed a graphic overlay approach that employs selected 3D computer images and widely available software to simplify the process of osteotomy site selection. We studied 5 patients (2 traumatic and 3 congenital) with unilateral radial head dislocation. These patients were treated with surgery based on traditional radiographs, but they also had full sets of 3D CT imaging done both before and after their surgery: these 3D CT images form the basis for this study. From the 3D CT images, 3 sets of 3D-printed bone models were generated for each patient: 2 copies of the preoperative condition, and 1 copy of the postoperative condition. One set of the preoperative models was then actually osteotomized and fixed in the manner suggested by our graphic technique. Arcs of rotation of the 3 sets of 3D-printed bone models were then compared. Arcs of rotation of the 3 groups of bone models were significantly different, with the models osteotomized according to our graphic technique having the widest arcs. For chronic radial head dislocation, our graphic overlay approach simplifies the selection of the osteotomy site(s). Three-dimensional-printed bone models suggest that this approach could improve range of motion of the forearm in actual surgical practice. Level IV, therapeutic study.
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
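A minimal sketch of the active-learning loop is given below: fit a Gaussian process to the experiments run so far and pick the candidate configuration with the largest predictive uncertainty. The kernel choice and the candidate grid are assumptions; the full framework also folds in the cost-aware reduced-order-model objective.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def pick_next_experiment(X_run, runtimes, X_candidates):
        """Return the index of the most informative candidate configuration."""
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_run, runtimes)
        _, std = gp.predict(X_candidates, return_std=True)
        return int(np.argmax(std))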
Simulation of generation of new ideas for new product development and IT services
NASA Astrophysics Data System (ADS)
Nasiopoulos, Dimitrios K.; Sakas, Damianos P.; Vlachos, D. S.; Mavrogianni, Amanda
2015-02-01
This paper describes a dynamic model of the New Product Development (NPD) process. The model emerged from best practices observed in our research conducted across a range of situations. The model helps to determine and place an IT company's NPD activities within the frame of the overall NPD process[1]. It has been found to be a useful tool for organizing data on an IT company's NPD activities without enforcing an excessively restrictive research methodology on the NPD model. The framework, which underpins the model, will help to promote research into the methods undertaken within an IT company's NPD process, thus promoting understanding and improvement of the simulation process[2]. IT companies tested many techniques with several different practices designed to improve the validity and efficacy of their NPD process[3]. Supported by the model, this research examines how widely accepted the stated tactics are and what impact these best tactics have on NPD performance. The main assumption of this study is that simulation of the generation of new ideas[4] will lead to greater NPD effectiveness and more successful products in IT companies. With the model implementation, practices concerning the implementation strategies of NPD (product selection, objectives, leadership, marketing strategy and customer satisfaction) are all more widely accepted than best practices related to controlling the application of NPD (process control, measurements, results). In linking simulation with impact, our results state that product success depends on developing strong products and ensuring organizational emphasis through proper project selection. Project activities strengthen both product and project success. IT product and service success also depends on monitoring the NPD procedure through project management and ensuring team consistency with group rewards. Sharing experiences between projects can positively influence the NPD process.
NASA Technical Reports Server (NTRS)
Kranbuehl, D.; Kingsley, P.; Hart, S.; Loos, A.; Hasko, G.; Dexter, B.
1992-01-01
In-situ frequency dependent electromagnetic sensors (FDEMS) and the Loos resin transfer model have been used to select and control the processing properties of an epoxy resin during liquid pressure RTM impregnation and cure. Once correlated with viscosity and degree of cure, the FDEMS sensor monitors, and the RTM processing model predicts, the reaction advancement of the resin, its viscosity, and the impregnation of the fabric. This provides a direct means for predicting, monitoring, and controlling the liquid RTM process in-situ in the mold throughout the fabrication process, including the effects of time, temperature, vacuum and pressure. Most importantly, the FDEMS sensor-model system has been developed to make intelligent decisions, thereby automating the liquid RTM process and removing the need for operator direction.
USING MM5 VERSION 2 WITH CMAQ AND MODELS-3, A USER'S GUIDE AND TUTORIAL
Meteorological data are important in many of the processes simulated in the Community Multi-Scale Air Quality (CMAQ) model and the Models-3 framework. The first meteorology model that has been selected and evaluated with CMAQ is the Fifth-Generation Pennsylvania State University/National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5).
NASA Astrophysics Data System (ADS)
Chuan, Ngam Min; Thiruchelvam, Sivadass; Nasharuddin Mustapha, Kamal; Che Muda, Zakaria; Mat Husin, Norhayati; Yong, Lee Choon; Ghazali, Azrul; Ezanee Rusli, Mohd; Itam, Zarina Binti; Beddu, Salmia; Liyana Mohd Kamal, Nur
2016-03-01
This paper aims to examine the current state of the procurement system in Malaysia, specifically in the construction industry, with respect to supplier selection. It proposes a comprehensive study of the supplier selection metrics for infrastructure building, weights the importance of each metric assigned, and examines the relationship between the metrics among initiators, decision makers, buyers and users. With a metrics hierarchy of criteria importance, a supplier selection process can be defined, repeated and audited with fewer complications or difficulties. This will help the field of procurement to improve, as this research can develop and redefine policies and procedures that have been set in supplier selection. Developing this systematic process will enable optimization of supplier selection, thus increasing the value for every stakeholder as the process of selection is greatly simplified. With a newly redefined policy and procedure, it not only increases the company’s effectiveness and profit, but also enables the company to reach greater heights in the advancement of procurement in Malaysia.
Nishii, Takashi; Genkawa, Takuma; Watari, Masahiro; Ozaki, Yukihiro
2012-01-01
A new selection procedure of an informative near-infrared (NIR) region for regression model building is proposed that uses an online NIR/mid-infrared (mid-IR) dual-region spectrometer in conjunction with two-dimensional (2D) NIR/mid-IR heterospectral correlation spectroscopy. In this procedure, both NIR and mid-IR spectra of a liquid sample are acquired sequentially during a reaction process using the NIR/mid-IR dual-region spectrometer; the 2D NIR/mid-IR heterospectral correlation spectrum is subsequently calculated from the obtained spectral data set. From the calculated 2D spectrum, a NIR region is selected that includes bands of high positive correlation intensity with mid-IR bands assigned to the analyte, and used for the construction of a regression model. To evaluate the performance of this procedure, a partial least-squares (PLS) regression model of the ethanol concentration in a fermentation process was constructed. During fermentation, NIR/mid-IR spectra in the 10000 - 1200 cm(-1) region were acquired every 3 min, and a 2D NIR/mid-IR heterospectral correlation spectrum was calculated to investigate the correlation intensity between the NIR and mid-IR bands. NIR regions that include bands at 4343, 4416, 5778, 5904, and 5955 cm(-1), which result from the combinations and overtones of the C-H group of ethanol, were selected for use in the PLS regression models, by taking the correlation intensity of a mid-IR band at 2985 cm(-1) arising from the CH(3) asymmetric stretching vibration mode of ethanol as a reference. The predicted results indicate that the ethanol concentrations calculated from the PLS regression models fit well to those obtained by high-performance liquid chromatography. Thus, it can be concluded that the selection procedure using the NIR/mid-IR dual-region spectrometer combined with 2D NIR/mid-IR heterospectral correlation spectroscopy is a powerful method for the construction of a reliable regression model.
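The synchronous part of such a heterospectral correlation map can be computed, in Noda's generalized 2D correlation formalism, by mean-centring both spectral series over the process time points and correlating every NIR wavenumber with every mid-IR wavenumber; a small sketch follows, with the array shapes assumed.

    import numpy as np

    def synchronous_2d(nir, mir):
        """nir: (m_times, n_nir) array; mir: (m_times, n_mir) array,
        acquired at the same m time points during the process."""
        nir_dyn = nir - nir.mean(axis=0)
        mir_dyn = mir - mir.mean(axis=0)
        m = nir.shape[0]
        return nir_dyn.T @ mir_dyn / (m - 1)    # shape (n_nir, n_mir)

High positive intensity at a (NIR band, reference mid-IR band) coordinate then flags NIR regions that co-vary with the analyte band and are candidates for the regression region, as described above.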
Development Of Simulation Model For Fluid Catalytic Cracking
NASA Astrophysics Data System (ADS)
Ghosh, Sobhan
2010-10-01
Fluid Catalytic Cracking (FCC) is the most widely used secondary conversion process in the refining industry, producing gasoline, olefins, and middle distillate from heavier petroleum fractions. There are more than 500 units in the world with a total processing capacity of about 17 to 20% of the crude capacity. FCC catalyst is the most heavily consumed catalyst in the process industry. On the one hand, FCC is quite flexible with respect to its ability to process a wide variety of crudes with a flexible product yield pattern; on the other hand, the interdependence of the major operating parameters makes the process extremely complex. An operating unit is self-balancing, and some fluctuations in the independent parameters are automatically adjusted by changing the temperatures and flow rates at different sections. However, a good simulation model is very useful to the refiner to get the best out of the process, in terms of selecting the best catalyst and coping with day-to-day changes in feed quality and in the demands for the different products from the FCC unit. In addition, a good model is of great help in designing the process units and peripherals. A simple empirical model is often adequate to monitor day-to-day operations, but it is of no use in handling other problems such as catalyst selection or design/modification of the plant. For this, a rigorous kinetics-based model is required. Considering the complexity of the process, with a large number of chemical species undergoing many parallel and consecutive reactions, it is virtually impossible to develop a simulation model based on the full set of kinetic parameters. The most common approach is to settle for a semi-empirical model. We shall take up the key issues in developing an FCC model and the contribution of such models to the optimum operation of the plant.
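One common semi-empirical starting point of the kind discussed above is a lumped kinetic scheme; the sketch below integrates a Weekman-type three-lump model (gas oil cracking second-order to gasoline and to coke+gas, gasoline cracking first-order to coke+gas, with exponential catalyst decay). The rate and decay constants are placeholder values, not fitted plant data.

    import numpy as np
    from scipy.integrate import solve_ivp

    K1, K2, K3, ALPHA = 0.6, 0.1, 0.15, 0.05     # assumed rate/decay constants

    def three_lump(t, y):
        gasoil, gasoline, cokegas = y
        phi = np.exp(-ALPHA * t)                 # catalyst deactivation
        dgasoil = -(K1 + K3) * gasoil ** 2 * phi
        dgasoline = (K1 * gasoil ** 2 - K2 * gasoline) * phi
        dcokegas = (K3 * gasoil ** 2 + K2 * gasoline) * phi
        return [dgasoil, dgasoline, dcokegas]

    sol = solve_ivp(three_lump, (0.0, 10.0), [1.0, 0.0, 0.0])
    print("lump fractions at the end of the run:", sol.y[:, -1])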
Data preprocessing methods of FT-NIR spectral data for the classification of cooking oil
NASA Astrophysics Data System (ADS)
Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli
2014-12-01
This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometrics modelling. Hence, this work is dedicated to investigating the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods have an impact on exploratory analysis and classification via Principal Component Analysis (PCA) plots. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra datasets in absorbance mode in the range of 4000-14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method. The number of samples in each class was kept equal to 2/3 of the class with the minimum number of samples. A t-statistic was then employed as the variable selection method to select which variables are significant for the classification models. Data pre-processing was evaluated using the modified silhouette width (mSW), PCA and the Percentage Correctly Classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance. The effects of the pre-processing methods, i.e. row scaling, column standardisation and single scaling with Standard Normal Variate, are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
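A minimal pre-processing sketch in the spirit of this abstract, assuming synthetic spectra: a Savitzky-Golay derivative followed by Standard Normal Variate (SNV) row scaling, with the result fed to PCA for exploratory analysis. The window length and polynomial order are arbitrary choices, not the study's settings.

```python
# Savitzky-Golay derivative + SNV row scaling + PCA on synthetic spectra.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
spectra = rng.normal(1.0, 0.05, (40, 500))            # 40 oils x 500 wavenumbers (stand-in)

deriv = savgol_filter(spectra, window_length=15, polyorder=2, deriv=1, axis=1)
snv = (deriv - deriv.mean(axis=1, keepdims=True)) / deriv.std(axis=1, keepdims=True)

scores = PCA(n_components=2).fit_transform(snv)       # input for an exploratory PCA plot
print(scores.shape)
```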
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2014-01-01
Network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium in 2 × 2 prisoner's dilemma games. Previous studies have shown that cooperation can be enhanced by using a skewed, rather than a random, selection of partners for either strategy adaptation or the gaming process. Here we show that combining both processes for selecting a gaming partner and an adaptation partner further enhances cooperation, provided that an appropriate selection rule and parameters are adopted. We also show that this combined model significantly enhances cooperation by reducing the degree of activity in the underlying network; we measure the degree of activity with a quantity called effective degree. More precisely, during the initial evolutionary stage, in which the global cooperation fraction declines because initially allocated cooperators become defectors, the model shows that weak cooperative clusters perish and only a few strong cooperative clusters survive. This finding is the most important key to attaining significant network reciprocity.
Selective interference with image retention and generation: evidence for the workspace model.
van der Meulen, Marian; Logie, Robert H; Della Sala, Sergio
2009-08-01
We address three types of model of the relationship between working memory (WM) and long-term memory (LTM): (a) the gateway model, in which WM acts as a gateway between perceptual input and LTM; (b) the unitary model, in which WM is seen as the currently activated areas of LTM; and (c) the workspace model, in which perceptual input activates LTM, and WM acts as a separate workspace for processing and temporary retention of these activated traces. Predictions of these models were tested, focusing on visuospatial working memory and using dual-task methodology to combine two main tasks (visual short-term retention and image generation) with two interference tasks (irrelevant pictures and spatial tapping). The pictures selectively disrupted performance on the generation task, whereas the tapping selectively interfered with the retention task. Results are consistent with the predictions of the workspace model.
Modeling Selection and Extinction Mechanisms of Biological Systems
NASA Astrophysics Data System (ADS)
Amirjanov, Adil
In this paper, the behavior of a genetic algorithm is modeled to enhance its applicability as a modeling tool for biological systems. A new description model for the selection mechanism is introduced which operates on a portion of the individuals of the population. The extinction and recolonization mechanism is modeled, and solving the dynamics analytically shows that genetic drift in a population with extinction/recolonization is doubled. A mathematical analysis of the interaction between the selection and extinction/recolonization processes is carried out to assess the dynamics of motion of the macroscopic statistical properties of the population. Computer simulations confirm that the theoretical predictions of the described models are good approximations. A mathematical model of GA dynamics describing anti-predator vigilance in an animal group was also examined against a known analytical solution of the problem, and showed good agreement in finding the evolutionarily stable strategies.
Impact of selected troposphere models on Precise Point Positioning convergence
NASA Astrophysics Data System (ADS)
Kalita, Jakub; Rzepecka, Zofia
2016-04-01
The Precise Point Positioning (PPP) absolute method is currently being intensively investigated in order to reach fast convergence times. Among the various sources that influence the convergence of PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay have been developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to allow comparison of the impact of the applied nominal values. The worst case initiates the zenith wet delay with a zero value (ZERO-WET). The impact of all possible models for tropospheric nominal values should fall between the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during the first hour of processing. Finally, the results have been compared against results obtained during calm tropospheric conditions.
Female mating preferences determine system-level evolution in a gene network model.
Fierst, Janna L
2013-06-01
Environmental patterns of directional, stabilizing and fluctuating selection can influence the evolution of system-level properties like evolvability and mutational robustness. Intersexual selection produces strong phenotypic selection and these dynamics may also affect the response to mutation and the potential for future adaptation. In order to assess the influence of mating preferences on these evolutionary properties, I modeled a male trait and female preference determined by separate gene regulatory networks. I studied three sexual selection scenarios: sexual conflict, a Gaussian model of the Fisher process described in Lande (in Proc Natl Acad Sci 78(6):3721-3725, 1981) and a good genes model in which the male trait signalled his mutational condition. I measured the effects these mating preferences had on the potential for traits and preferences to evolve towards new states, and the mutational robustness of both the phenotype and the individual's overall viability. All types of sexual selection increased male phenotypic robustness relative to a randomly mating population. The Fisher model also reduced male evolvability and mutational robustness for viability. Under good genes sexual selection, males evolved an increased mutational robustness for viability. Females choosing their mates is a scenario that is sufficient to create selective forces that impact genetic evolution and shape the evolutionary response to mutation and environmental selection. These dynamics will inevitably develop in any population where sexual selection is operating, and affect the potential for future adaptation.
Selective laser sintering: A qualitative and objective approach
NASA Astrophysics Data System (ADS)
Kumar, Sanjay
2003-10-01
This article presents an overview of selective laser sintering (SLS) work as reported in various journals and proceedings. Selective laser sintering was first done mainly on polymers and nylon to create prototypes for audio-visual help and fit-to-form tests. Gradually it was expanded to include metals and alloys to manufacture functional prototypes and develop rapid tooling. The growth gained momentum with the entry of commercial entities such as DTM Corporation and EOS GmbH Electro Optical Systems. Computational modeling has been used to understand the SLS process, optimize the process parameters, and enhance the efficiency of the sintering machine.
The importance of selection in the evolution of blindness in cavefish.
Cartwright, Reed A; Schwartz, Rachel S; Merry, Alexandra L; Howell, Megan M
2017-02-07
Blindness has evolved repeatedly in cave-dwelling organisms, and many hypotheses have been proposed to explain this observation, including both accumulation of neutral loss-of-function mutations and adaptation to darkness. Investigating the loss of sight in cave dwellers presents an opportunity to understand the operation of fundamental evolutionary processes, including drift, selection, mutation, and migration. Here we model the evolution of blindness in caves. This model captures the interaction of three forces: (1) selection favoring alleles causing blindness, (2) immigration of sightedness alleles from a surface population, and (3) mutations creating blindness alleles. We investigated the dynamics of this model and determined selection-strength thresholds that result in blindness evolving in caves despite immigration of sightedness alleles from the surface. We estimate that the selection coefficient for blindness would need to be at least 0.005 (and maybe as high as 0.5) for blindness to evolve in the model cave-organism, Astyanax mexicanus. Our results indicate that strong selection is required for the evolution of blindness in cave-dwelling organisms, which is consistent with recent work suggesting a high metabolic cost of eye development.
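A toy deterministic recursion for the three forces named in the abstract (selection for a blindness allele, immigration of sightedness alleles from the surface, and recurrent mutation to blindness) might look as follows; the parameter values and the genic-selection form are illustrative assumptions, not the paper's estimates.

```python
# Per-generation update of the blindness-allele frequency q under selection,
# immigration and mutation (all parameters are illustrative placeholders).
def next_freq(q, s=0.01, m=0.005, mu=1e-6, q_surface=0.0):
    q_sel = q * (1 + s) / (1 + s * q)          # genic selection favouring blindness
    q_mig = (1 - m) * q_sel + m * q_surface    # immigration of sighted alleles from the surface
    return q_mig + mu * (1 - q_mig)            # recurrent mutation to blindness alleles

q = 1e-3
for gen in range(20000):
    q = next_freq(q)
print(f"blindness-allele frequency after 20,000 generations: {q:.3f}")
```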
Boosted structured additive regression for Escherichia coli fed-batch fermentation modeling.
Melcher, Michael; Scharl, Theresa; Luchner, Markus; Striedner, Gerald; Leisch, Friedrich
2017-02-01
The quality of biopharmaceuticals and patients' safety are of highest priority, and there are tremendous efforts to replace empirical production process designs by knowledge-based approaches. The main challenge in this context is that real-time access to process variables related to product quality and quantity is severely limited. To date, comprehensive on- and offline monitoring platforms are used to generate process data sets that allow for the development of mechanistic and/or data-driven models for real-time prediction of these important quantities. The ultimate goal is to implement model-based feedback control loops that facilitate online control of product quality. In this contribution, we explore structured additive regression (STAR) models in combination with boosting as a variable selection tool for modeling the cell dry mass, product concentration, and optical density on the basis of online available process variables and two-dimensional fluorescence spectroscopic data. STAR models are powerful extensions of linear models allowing for inclusion of smooth effects or interactions between predictors. Boosting constructs the final model in a stepwise manner and provides a variable importance measure via predictor selection frequencies. Our results show that the cell dry mass can be modeled with a relative error of about ±3%, the optical density with ±6%, the soluble protein with ±16%, and the insoluble product with an accuracy of ±12%. Biotechnol. Bioeng. 2017;114: 321-334. © 2016 Wiley Periodicals, Inc.
Mathematical Model Of Variable-Polarity Plasma Arc Welding
NASA Technical Reports Server (NTRS)
Hung, R. J.
1996-01-01
A mathematical model of the variable-polarity plasma arc (VPPA) welding process was developed for use in predicting the characteristics of welds, and thus serves as a guide for the selection of process parameters. Parameters include the welding electric currents in, and durations of, the straight and reverse polarities; the rates of flow of the plasma and shielding gases; and the sizes and relative positions of the welding electrode, welding orifice, and workpiece.
Howard Evan Canfield; Vicente L. Lopes
2000-01-01
A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...
A Multi-Area Stochastic Model for a Covert Visual Search Task.
Schwemmer, Michael A; Feng, Samuel F; Holmes, Philip J; Gottlieb, Jacqueline; Cohen, Jonathan D
2015-01-01
Decisions typically comprise several elements. For example, attention must be directed towards specific objects, their identities recognized, and a choice made among alternatives. Pairs of competing accumulators and drift-diffusion processes provide good models of evidence integration in two-alternative perceptual choices, but more complex tasks requiring the coordination of attention and decision making involve multistage processing and multiple brain areas. Here we consider a task in which a target is located among distractors and its identity reported by lever release. The data comprise reaction times, accuracies, and single unit recordings from two monkeys' lateral intraparietal area (LIP) neurons. LIP firing rates distinguish between targets and distractors, exhibit stimulus set size effects, and show response-hemifield congruence effects. These data motivate our model, which uses coupled sets of leaky competing accumulators to represent processes hypothesized to occur in feature-selective areas and limb motor and pre-motor areas, together with the visual selection process occurring in LIP. Model simulations capture the electrophysiological and behavioral data, and fitted parameters suggest that different connection weights between LIP and the other cortical areas may account for the observed behavioral differences between the animals.
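The following is a minimal sketch of a single pool of leaky competing accumulators (LCA) of the kind used as a building block in such multi-area models: each accumulator is driven by its input, leaks, and is inhibited by the others, and the first to cross a threshold determines the choice and reaction time. All parameters and inputs are illustrative assumptions.

```python
# One trial of a leaky competing accumulator race (illustrative parameters).
import numpy as np

def lca_trial(inputs, leak=0.2, inhibition=0.3, noise=0.1,
              threshold=1.0, dt=0.01, t_max=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(len(inputs))
    for step in range(int(t_max / dt)):
        mutual = inhibition * (x.sum() - x)            # inhibition from the other units
        dx = (inputs - leak * x - mutual) * dt \
             + noise * np.sqrt(dt) * rng.standard_normal(len(x))
        x = np.maximum(x + dx, 0.0)                    # firing rates stay non-negative
        if x.max() >= threshold:
            return int(np.argmax(x)), (step + 1) * dt  # choice index, reaction time
    return None, t_max                                 # no decision reached

choice, rt = lca_trial(np.array([0.9, 0.5, 0.5]))      # target vs. two distractors
print("choice:", choice, "RT (s):", rt)
```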
NASA Astrophysics Data System (ADS)
Pons, M.; Bernard, C.; Rouch, H.; Madar, R.
1995-10-01
The purpose of this article is to present the modelling routes for the chemical vapour deposition process with a special emphasis on mass transport models with near local thermochemical equilibrium imposed in the gas-phase and at the deposition surface. The theoretical problems arising from the linking of the two selected approaches, thermodynamics and mass transport, are shown and a solution procedure is proposed. As an illustration, selected results of thermodynamic and mass transport analysis and of the coupled approach showed that, for the deposition of Si(1-x)Ge(x) solid solution at 1300 K (system Si-Ge-Cl-H-Ar), the thermodynamic heterogeneous stability of the reactive gases and the thermal diffusion led to the germanium depletion of the deposit.
Cavallo, Jaime A.; Roma, Andres A.; Jasielec, Mateusz S.; Ousley, Jenny; Creamer, Jennifer; Pichert, Matthew D.; Baalman, Sara; Frisella, Margaret M.; Matthews, Brent D.
2014-01-01
Background The purpose of this study was to evaluate the associations between patient characteristics or surgical site classifications and the histologic remodeling scores of synthetic meshes biopsied from their abdominal wall repair sites in the first attempt to generate a multivariable risk prediction model of non-constructive remodeling. Methods Biopsies of the synthetic meshes were obtained from the abdominal wall repair sites of 51 patients during a subsequent abdominal re-exploration. Biopsies were stained with hematoxylin and eosin, and evaluated according to a semi-quantitative scoring system for remodeling characteristics (cell infiltration, cell types, extracellular matrix deposition, inflammation, fibrous encapsulation, and neovascularization) and a mean composite score (CR). Biopsies were also stained with Sirius Red and Fast Green, and analyzed to determine the collagen I:III ratio. Based on univariate analyses between subject clinical characteristics or surgical site classification and the histologic remodeling scores, cohort variables were selected for multivariable regression models using a threshold p value of ≤0.200. Results The model selection process for the extracellular matrix score yielded two variables: subject age at time of mesh implantation, and mesh classification (c-statistic = 0.842). For CR score, the model selection process yielded two variables: subject age at time of mesh implantation and mesh classification (r2 = 0.464). The model selection process for the collagen III area yielded a model with two variables: subject body mass index at time of mesh explantation and pack-year history (r2 = 0.244). Conclusion Host characteristics and surgical site assessments may predict degree of remodeling for synthetic meshes used to reinforce abdominal wall repair sites. These preliminary results constitute the first steps in generating a risk prediction model that predicts the patients and clinical circumstances for which non-constructive remodeling of an abdominal wall repair site with synthetic mesh reinforcement is most likely to occur. PMID:24442681
Li, Wen-bing; Yao, Lin-tao; Liu, Mu-hua; Huang, Lin; Yao, Ming-yin; Chen, Tian-bing; He, Xiu-wen; Yang, Ping; Hu, Hui-qin; Nie, Jiang-hui
2015-05-01
Cu in navel orange was detected rapidly by laser-induced breakdown spectroscopy (LIBS) combined with partial least squares (PLS) quantitative analysis, and the effect of different spectral data pretreatment methods on the detection accuracy of the model was then explored. Spectral data for the 52 Gannan navel orange samples were pretreated by different data smoothing, mean centering and standard normal variate transforms. The 319-338 nm wavelength section containing characteristic spectral lines of Cu was then selected to build PLS models, and the main evaluation indexes of the models, such as the regression coefficient (r), root mean square error of cross validation (RMSECV) and root mean square error of prediction (RMSEP), were compared and analyzed. The three indicators of the PLS model after 13-point smoothing and mean centering reached 0.9928, 3.43 and 3.4, respectively, and the average relative error of the prediction model is only 5.55%; in short, the calibration and prediction quality of this model was the best. The results show that, by selecting the appropriate data pre-processing method, the prediction accuracy of PLS quantitative models of fruits and vegetables detected by LIBS can be improved effectively, providing a new method for fast and accurate detection of fruits and vegetables by LIBS.
SENSITIVE PARAMETER EVALUATION FOR A VADOSE ZONE FATE AND TRANSPORT MODEL
This report presents information pertaining to quantitative evaluation of the potential impact of selected parameters on the output of vadose zone transport and fate models used to describe the behavior of hazardous chemicals in soil. The Vadose Zone Interactive Processes (VIP) model...
Aggression and Moral Development: Integrating Social Information Processing and Moral Domain Models
ERIC Educational Resources Information Center
Arsenio, William F.; Lemerise, Elizabeth A.
2004-01-01
Social information processing and moral domain theories have developed in relative isolation from each other despite their common focus on intentional harm and victimization, and mutual emphasis on social cognitive processes in explaining aggressive, morally relevant behaviors. This article presents a selective summary of these literatures with…
NASA Astrophysics Data System (ADS)
Bozhalkina, Yana
2017-12-01
A mathematical model of loan portfolio structure change in the form of a Markov chain is explored. This model considers in one scheme the processes of customer attraction, customer selection based on credit score, and loan repayment. The model describes the dynamics of the loan portfolio structure and volume, which allows medium-term forecasts of profitability and risk to be made. Within the model, corrective actions by bank management aimed at increasing lending volumes or reducing risk are formalized.
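A hedged sketch of portfolio-structure dynamics as a Markov chain is shown below; the states and the transition matrix are invented for illustration and are not taken from the paper.

```python
# Loan-portfolio structure evolved as a Markov chain (invented states and probabilities).
import numpy as np

states = ["scored", "current", "overdue", "repaid", "default"]
P = np.array([
    [0.40, 0.60, 0.00, 0.00, 0.00],   # applicants scored; 60% accepted into the book
    [0.00, 0.85, 0.05, 0.10, 0.00],   # performing loans: some go overdue, some repay
    [0.00, 0.30, 0.50, 0.05, 0.15],   # overdue loans: cured, still overdue, repaid, default
    [0.00, 0.00, 0.00, 1.00, 0.00],   # repaid (absorbing)
    [0.00, 0.00, 0.00, 0.00, 1.00],   # default (absorbing)
])
assert np.allclose(P.sum(axis=1), 1.0)

v = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # portfolio starts as newly attracted applicants
for month in range(24):                   # medium-term (two-year) forecast horizon
    v = v @ P
print(dict(zip(states, np.round(v, 3))))
```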
Designing Multi-target Compound Libraries with Gaussian Process Models.
Bieler, Michael; Reutlinger, Michael; Rodrigues, Tiago; Schneider, Petra; Kriegl, Jan M; Schneider, Gisbert
2016-05-01
We present the application of machine learning models to selecting G protein-coupled receptor (GPCR)-focused compound libraries. The library design process was realized by ant colony optimization. A proprietary Boehringer-Ingelheim reference set consisting of 3519 compounds tested in dose-response assays at 11 GPCR targets served as training data for machine learning and activity prediction. We compared the usability of the proprietary data with a public data set from ChEMBL. Gaussian process models were trained to prioritize compounds from a virtual combinatorial library. We obtained meaningful models for three of the targets (5-HT2c, MCH, A1), which were experimentally confirmed for 12 of 15 selected and synthesized or purchased compounds. Overall, the models trained on the public data predicted the observed assay results more accurately. The results of this study motivate the use of Gaussian process regression on public data for virtual screening and target-focused compound library design. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA. This is an open access article under the terms of the Creative Commons Attribution Non-Commercial NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made.
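A minimal sketch of the modelling step, assuming random stand-in descriptors rather than the proprietary Boehringer-Ingelheim data: train a Gaussian process regressor on assay values and rank virtual-library candidates by predicted activity.

```python
# Gaussian process regression used to prioritize compounds from a virtual library.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X_train = rng.random((200, 16))                  # molecular descriptors (stand-in data)
y_train = rng.random(200) * 3 + 5                # pIC50-like activities (stand-in data)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_train, y_train)

X_library = rng.random((5000, 16))               # virtual combinatorial library
mean, std = gpr.predict(X_library, return_std=True)
top = np.argsort(mean)[::-1][:15]                # prioritize 15 compounds for testing
print(top, mean[top].round(2))
```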
NASA Astrophysics Data System (ADS)
Tanimoto, Jun
2014-05-01
In 2 × 2 prisoner’s dilemma games, network reciprocity is one mechanism for adding social viscosity, which leads to cooperative equilibrium. Here we show that combining the process for selecting a gaming partner with the process for selecting an adaptation partner significantly enhances cooperation, even though such selection processes require additional costs to collect further information concerning which neighbor should be chosen. Based on elaborate investigations of the dynamics generated by our model, we find that high levels of cooperation result from two kinds of behavior: cooperators tend to interact with cooperators to prevent being exploited by defectors and defectors tend to choose cooperators to exploit despite the possibility that some defectors convert to cooperators.
Numerical simulation of complex part manufactured by selective laser melting process
NASA Astrophysics Data System (ADS)
Van Belle, Laurent
2017-10-01
The Selective Laser Melting (SLM) process, belonging to the family of Additive Manufacturing (AM) technologies, enables parts to be built layer by layer from metallic powder and a CAD model. The physical phenomena that occur in the process raise the same issues as conventional welding: thermal gradients generate significant residual stresses and distortions in the parts. Moreover, large and complex parts to be manufactured accentuate the undesirable effects. Therefore, it is essential to offer manufacturers a better understanding of the process and to ensure reliable production of parts with high added value. This paper focuses on the simulation of manufacturing a turbine by the SLM process in order to calculate residual stresses and distortions. Numerical results will be presented.
SEIPS-based process modeling in primary care.
Wooldridge, Abigail R; Carayon, Pascale; Hundt, Ann Schoofs; Hoonakker, Peter L T
2017-04-01
Process mapping, often used as part of the human factors and systems engineering approach to improve care delivery and outcomes, should be expanded to represent the complex, interconnected sociotechnical aspects of health care. Here, we propose a new sociotechnical process modeling method to describe and evaluate processes, using the SEIPS model as the conceptual framework. The method produces a process map and supplementary table, which identify work system barriers and facilitators. In this paper, we present a case study applying this method to three primary care processes. We used purposeful sampling to select staff (care managers, providers, nurses, administrators and patient access representatives) from two clinics to observe and interview. We show the proposed method can be used to understand and analyze healthcare processes systematically and identify specific areas of improvement. Future work is needed to assess usability and usefulness of the SEIPS-based process modeling method and further refine it. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method
Chaharsooghi, S. K.; Ashrafi, Mehdi
2014-01-01
Supplier selection plays an important role in the supply chain management and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in researches. In recent years sustainability has received more attention in the supply chain management literature with triple bottom line (TBL) describing the sustainability in supply chain management with social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on extended model of TBL approach in supply chain by presenting fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers and Neofuzzy TOPSIS is proposed for finding the best solution of supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability in supplier selection problem. The importance of using complimentary aspects of sustainability and Neofuzzy TOPSIS concept in sustainable supplier selection process is shown with sensitivity analysis. PMID:27379267
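For orientation, a compact crisp TOPSIS sketch is given below (the paper uses a Neofuzzy variant; only the core ranking logic is shown). The decision matrix, weights, and criterion directions are illustrative assumptions.

```python
# Crisp TOPSIS ranking of suppliers (invented decision matrix and weights).
import numpy as np

X = np.array([[3.0, 7.0, 5.0, 4.0],     # suppliers x criteria (cost, quality, social, environmental)
              [4.5, 6.0, 6.5, 5.0],
              [2.5, 8.0, 4.0, 6.0]])
weights = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([False, True, True, True])   # cost is to be minimized

V = weights * X / np.linalg.norm(X, axis=0)     # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)             # relative closeness to the ideal solution
print("supplier ranking (best first):", np.argsort(closeness)[::-1])
```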
NASA Astrophysics Data System (ADS)
Huang, Jin
Acid-gas removal is of great importance in many environmental or energy-related processes. Compared to current commercial technologies, membrane-based CO2 and H2S capture has the advantages of low energy consumption, low weight and space requirements, simplicity of installation/operation, and high process flexibility. However, the large-scale application of membrane separation technology is limited by relatively low transport properties. In this study, CO2 (H2S)-selective polymeric membranes with high permeability and high selectivity have been studied based on the facilitated transport mechanism. The membrane showed a facilitation effect for both CO2 and H2S. A CO2 permeability of above 2000 Barrers, a CO2/H2 selectivity of greater than 40, and a CO2/N2 selectivity of greater than 200 at 100-150°C were observed. As a result of a higher reaction rate and a smaller diffusing compound, the H2S permeability and H2S/H2 selectivity were about three times higher than the corresponding properties for CO2. The novel CO2-selective membrane has been applied to capture CO2 from flue gas and natural gas. In the CO2 capture experiments from a gas mixture with N2 and H2, a permeate CO2 dry concentration of greater than 98% was obtained by using steam as the sweep gas. In CO2/CH4 separation, decent CO2 transport properties were obtained with a feed pressure up to 500 psia. With the thin-film composite membrane structure, a significant increase in the CO2 flux was achieved with the decrease of the selective layer thickness. With the continuous removal of CO2, a CO2-selective water-gas-shift (WGS) membrane reactor is a promising approach to enhance CO conversion and increase the purity of H2 at process pressure and relatively low temperature. The simultaneous reaction and transport process in the countercurrent WGS membrane reactor was simulated by using a one-dimensional non-isothermal model. The modeling results show that a CO concentration of less than 10 ppm and a H2 recovery of greater than 97% are achievable from reforming syngases. In an experimental study, the reversible WGS reaction was shifted forward by removing CO2 so that the CO concentration was significantly decreased to less than 10 ppm. The modeling results agreed well with the experimental data.
The Role of Attention in Information Processing Implications for the Design of Displays
1989-12-01
processing system. Psychological Review, 214-255. Neisser, U. (1967). Cognitive Psychology. New York, NY: Appleton-Century-Crofts. Neisser, U. (1969) ... in the visual display is now an important part of a number of attention models. A related model suggested by Neisser (1967) is that successful ... to filter attenuation theory have been proposed by Neisser (1967, 1969). According to Neisser's theory, selective attention is an active process of ...
Acoustic Model Testing Chronology
NASA Technical Reports Server (NTRS)
Nesman, Tom
2017-01-01
Scale models have been used for decades to replicate liftoff environments, and in particular acoustics, for launch vehicles. It is assumed, and analyses support, that the key characteristics of noise generation, propagation, and measurement can be scaled. Over time, significant insight was gained not just towards understanding the effects of thruster details, pad geometry, and sound mitigation but also into the physical processes involved. An overview of a selected set of scale model tests is compiled here to illustrate the variety of configurations that have been tested and the fundamental knowledge gained. The selected scale model tests are presented chronologically.
Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.
Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A
2017-04-15
Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system applied in the monitoring of the fermentation process of cider produced in the Basque Country (Spain). The main parameters that were monitored included alcoholic proof, l-lactic acid content, glucose + fructose content and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens Uncertainty test, interval Partial Least Squares Regression (iPLS) and a Genetic Algorithm (GA). This procedure arises from the need to improve the prediction ability of the calibration models for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Simonton, Dean Keith
2010-06-01
Campbell (1960) proposed that creative thought should be conceived as a blind-variation and selective-retention process (BVSR). This article reviews the developments that have taken place in the half century that has elapsed since his proposal, with special focus on the use of combinatorial models as formal representations of the general theory. After defining the key concepts of blind variants, creative thought, and disciplinary context, the combinatorial models are specified in terms of individual domain samples, variable field size, ideational combination, and disciplinary communication. Empirical implications are then derived with respect to individual, domain, and field systems. These abstract combinatorial models are next provided substantive reinforcement with respect to findings concerning the cognitive processes, personality traits, developmental factors, and social contexts that contribute to creativity. The review concludes with some suggestions regarding future efforts to explicate creativity according to BVSR theory.
NASA Astrophysics Data System (ADS)
Kaldunski, Pawel; Kukielka, Leon; Patyk, Radoslaw; Kulakowska, Agnieszka; Bohdal, Lukasz; Chodor, Jaroslaw; Kukielka, Krzysztof
2018-05-01
In this paper, the numerical analysis and computer simulation of the deep drawing process are presented. An incremental model of the process in the updated Lagrangian formulation, taking into account geometrical and physical nonlinearity, has been evaluated by variational and finite element methods. Frederic Barlat's model, which takes into consideration the anisotropy of materials in three principal and six tangent directions, has been used. The application developed in the Ansys/Ls-Dyna program allows complex step-by-step analysis and prediction of the shape, dimensions, and stress and strain state of the drawpiece. The paper presents the influence of selected anisotropy parameters in Barlat's model on the drawpiece shape, including height, sheet thickness and maximum drawing force. The important factors determining the proper formation of the drawpiece, and the ways of determining them, are described.
Relaxation processes in a low-order three-dimensional magnetohydrodynamics model
NASA Technical Reports Server (NTRS)
Stribling, Troy; Matthaeus, William H.
1991-01-01
The time asymptotic behavior of a Galerkin model of 3D magnetohydrodynamics (MHD) has been interpreted using the selective decay and dynamic alignment relaxation theories. A large number of simulations have been performed that scan a parameter space defined by the rugged ideal invariants, including energy, cross helicity, and magnetic helicity. It is concluded that the time asymptotic state can be interpreted as a relaxation to minimum energy. A simple decay model, based on absolute equilibrium theory, is found to predict a mapping of initial onto time asymptotic states, and to accurately describe the long-time behavior of the runs when magnetic helicity is present. Attention is also given to two processes, operating on time scales shorter than selective decay and dynamic alignment, in which the ratio of kinetic to magnetic energy relaxes to values O(1). The faster of the two processes takes states initially dominant in magnetic energy to a state of near-equipartition between kinetic and magnetic energy through power-law growth of kinetic energy. The other process takes states initially dominant in kinetic energy to the near-equipartitioned state through exponential growth of magnetic energy.
Wójcicki, Tomasz; Nowicki, Michał
2016-01-01
The article presents a selected area of research and development concerning methods of material analysis based on automatic image recognition of the investigated metallographic sections. The objectives of the analyses of materials for gas nitriding technology are described. The methods of preparation of nitrided layers, the steps of the process, and the construction and operation of devices for gas nitriding are given. We discuss the possibility of using digital image processing methods in the analysis of the materials, as well as their essential task groups: improving the quality of the images, segmentation, morphological transformations and image recognition. The developed model for analyzing nitrided layer formation, covering image processing and analysis techniques as well as selected methods of artificial intelligence, is presented. The model is divided into stages, which are formalized in order to better reproduce their actions. A validation of the presented method is performed. The advantages and limitations of the developed solution, as well as the possibilities of its practical use, are listed. PMID:28773389
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will generate lower quality control (QC) accuracy, with an error of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
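A rough sketch of two of the decision points described above (the variance F-test for weighting and the partial F-test for linear versus quadratic order) is given below; it is not the authors' R script, and the replicate data, calibration points, and alpha level are illustrative assumptions.

```python
# Weighting decision via variance F-test, then model-order decision via partial F-test.
import numpy as np
from scipy import stats

lloq = np.array([0.98, 1.05, 1.02, 0.95, 1.01])      # low-level replicate responses (assumed)
uloq = np.array([98.0, 103.5, 96.2, 101.8, 99.0])    # high-level replicate responses (assumed)

# 1) Is weighting needed? Compare ULOQ vs. LLOQ variances with an F-test.
F = uloq.var(ddof=1) / lloq.var(ddof=1)
p_weight = 1 - stats.f.cdf(F, len(uloq) - 1, len(lloq) - 1)
needs_weighting = p_weight < 0.05

# 2) Linear vs. quadratic: partial F-test on weighted least-squares fits.
x = np.array([1, 2, 5, 10, 25, 50, 100.0])           # calibration concentrations (assumed)
y = np.array([1.1, 2.0, 5.3, 9.8, 26.1, 48.7, 102.4])
w = 1 / x if needs_weighting else np.ones_like(x)    # (choice of 1/x vs 1/x^2 made elsewhere)

def wssr(deg):
    coeffs = np.polyfit(x, y, deg, w=np.sqrt(w))     # polyfit multiplies residuals by w, so pass sqrt
    return np.sum(w * (y - np.polyval(coeffs, x)) ** 2)

ss_lin, ss_quad = wssr(1), wssr(2)
F_order = (ss_lin - ss_quad) / (ss_quad / (len(x) - 3))
p_order = 1 - stats.f.cdf(F_order, 1, len(x) - 3)
print(f"weighting needed: {needs_weighting}, quadratic justified: {p_order < 0.05}")
```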
Evolution of resource cycling in ecosystems and individuals.
Crombach, Anton; Hogeweg, Paulien
2009-06-01
Resource cycling is a defining process in the maintenance of the biosphere. Microbial communities, ranging from simple to highly diverse, play a crucial role in this process. Yet the evolutionary adaptation and speciation of micro-organisms have rarely been studied in the context of resource cycling. In this study, our basic questions are how does a community evolve its resource usage and how are resource cycles partitioned? We design a computational model in which a population of individuals evolves to take up nutrients and excrete waste. The waste of one individual is another's resource. Given a fixed amount of resources, this leads to resource cycles. We find that the shortest cycle dominates the ecological dynamics, and over evolutionary time its length is minimized. Initially a single lineage processes a long cycle of resources; later, crossfeeding lineages arise. The evolutionary dynamics that follow are determined by the strength of indirect selection for resource cycling. We study indirect selection by changing the spatial setting and the strength of direct selection. If individuals are fixed at lattice sites or direct selection is low, indirect selection results in lineages that structure their local environment, leading to 'smart' individuals and stable patterns of resource dynamics. The individuals are good at cycling resources themselves and do this with a short cycle. On the other hand, if individuals randomly change position each time step, or direct selection is high, individuals are more prone to crossfeeding: an ecosystem-based solution with turbulent resource dynamics, and individuals that are less capable of cycling resources themselves. In a baseline model of ecosystem evolution we demonstrate different eco-evolutionary trajectories of resource cycling. By varying the strength of indirect selection through the spatial setting and direct selection, the integration of information by the evolutionary process leads to qualitatively different results, from individual smartness to cooperative community structures.
Burger, Joanna
2007-11-01
World War II and the Cold War have left the United States, and other nations, with massive cleanup and remediation tasks for radioactive and other legacy hazardous wastes. While some sites can be cleaned up to acceptable residential risk levels, others will continue to hold hazardous wastes, which must be contained and monitored to protect human health and the environment. While media (soil, sediment, groundwater) monitoring is the usual norm at many radiological waste sites, for some situations (both biological and societal), biomonitoring may provide the necessary information to assure greater peace of mind for local and regional residents, and to protect ecologically valuable buffer lands or waters. In most cases, indicators are selected using scientific expertise and a literature review, but not all selected indicators will seem relevant to stakeholders. In this paper, I provide a model for the inclusion of stakeholders in the development of bioindicators for assessing radionuclide levels of biota in the marine environment around Amchitka Island, in the Aleutian Chain of Alaska. Amchitka was the site of three underground nuclear tests from 1965 to 1971. The process was stakeholder-initiated, stakeholder-driven, and included stakeholders during each phase. Phases included conceptualization, initial selection of biota and radionuclides, refinement of biota and radionuclide target lists, collection of biota, selection of biota and radionuclides for analysis, and selection of biota, tissues, and radionuclides for bioindicators. The process produced site-specific information on biota availability and on radionuclide levels that led to selection of site-appropriate bioindicators. I suggest that the lengthy, iterative, stakeholder-driven process described in this paper results in selection of bioindicators that are accepted by biologists, public health personnel, public-policy makers, resource agencies, regulatory agencies, subsistence hunters/fishers, and a wide range of other stakeholders. The process is applicable to other sites with ecologically important buffer lands or waters, or where contamination issues are contentious.
Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing
2017-04-15
The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests, adapting it to include the age structure, specific ecological processes and management measures of Moso bamboo forest. A field selective cutting experiment was done in nine plots with three cutting intensities (high-intensity, moderate-intensity and low-intensity) during 2010-2013, and the biomass of these plots was measured for model validation. Then four selective cutting scenarios were simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of the different plots, with a range of relative error from 9.83% to 15.74%. The results of the different selective cutting scenarios suggested that the optimal selective cutting measure is to cut 30% of the culms of age 6, 80% of the culms of age 7, and all culms thereafter (above age 8) in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bayesian model selection validates a biokinetic model for zirconium processing in humans
2012-01-01
Background In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, the prediction of radioactive zirconium retention in various organs, as well as retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although such models are easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels, we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of the numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions In contrast to the standard model, the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
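A hedged toy version of the Bayes-factor machinery follows, using a conjugate normal model so the power posteriors can be sampled exactly rather than by MCMC; the data, prior, and temperature grid are synthetic stand-ins for the biokinetic application.

```python
# Thermodynamic integration over power posteriors for a toy normal model,
# compared against a fixed-mean null model (all inputs are synthetic).
import numpy as np

rng = np.random.default_rng(3)
data, sigma, tau = rng.normal(0.4, 1.0, 50), 1.0, 1.0   # observations, known sd, prior sd
n, s = len(data), data.sum()

def loglik(theta):
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - ((data - theta) ** 2).sum() / (2 * sigma**2)

# Power posterior p_t(theta) ~ L(theta)^t * N(0, tau^2) is normal, so it can be sampled exactly.
temps = np.linspace(0.0, 1.0, 21)
expected_ll = []
for t in temps:
    prec = t * n / sigma**2 + 1 / tau**2
    mean = (t * s / sigma**2) / prec
    theta_samples = rng.normal(mean, 1 / np.sqrt(prec), 2000)
    expected_ll.append(np.mean([loglik(th) for th in theta_samples]))

log_evidence_m1 = np.trapz(expected_ll, temps)          # thermodynamic integral
log_evidence_m0 = loglik(0.0)                           # null model: theta fixed at 0
print("log Bayes factor (M1 vs M0):", log_evidence_m1 - log_evidence_m0)
```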
An experimental study of factors affecting the selective inhibition of sintering process
NASA Astrophysics Data System (ADS)
Asiabanpour, Bahram
Selective Inhibition of Sintering (SIS) is a new rapid prototyping method that builds parts layer by layer. SIS works by joining powder particles through sintering in the part's body, and by inhibiting sintering in selected powder areas. The objective of this research has been to improve the new SIS process, which was invented at USC. The process improvement is based on statistical design of experiments. To conduct the needed experiments, a working machine and related path-generator software were required. The machine and its control software were made available prior to this research; the path-generator algorithms and software had to be created. This program should obtain model geometry data from a CAD file and generate an appropriate path file for the printer nozzle, and should also generate a simulation file for path-file inspection using virtual prototyping. The activities related to the path generator constitute the first part of this research, which resulted in an efficient path generator. In addition, to reach an acceptable level of accuracy, strength, and surface quality in the fabricated parts, all effective factors in the SIS process should be identified and controlled. Simultaneous analytical and experimental studies were conducted to identify the effective factors and to control the SIS process. Also, it was known that polystyrene was the most appropriate polymer powder and saturated potassium iodide was the most effective inhibitor among the available candidate materials. In addition, statistical tools were applied to improve the desirable properties of parts fabricated by the SIS process. An investigation of part strength was conducted using Response Surface Methodology (RSM), and a region of acceptable operating conditions for part strength was found. Then, through analysis of the experimental results, the impact of the factors on the final part surface quality and dimensional accuracy was modeled. After developing a desirability function model, process operating conditions for maximum desirability were identified. Finally, the desirability model was validated.
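As a small illustration of the desirability-function step mentioned above, the sketch below combines a larger-is-better desirability for strength with a smaller-is-better desirability for dimensional error through a geometric mean; the limits, targets, and response values are invented placeholders.

```python
# Derringer-style desirability functions combined by a geometric mean (illustrative values).
import numpy as np

def d_larger_is_better(y, low, target, weight=1.0):
    return np.clip((y - low) / (target - low), 0, 1) ** weight

def d_smaller_is_better(y, target, high, weight=1.0):
    return np.clip((high - y) / (high - target), 0, 1) ** weight

strength_mpa, dim_error_mm = 14.2, 0.18        # responses at one operating condition (assumed)
d1 = d_larger_is_better(strength_mpa, low=5.0, target=20.0)
d2 = d_smaller_is_better(dim_error_mm, target=0.05, high=0.50)
overall = (d1 * d2) ** 0.5                     # overall desirability of this condition
print(round(float(overall), 3))
```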
A method for tailoring the information content of a software process model
NASA Technical Reports Server (NTRS)
Perkins, Sharon; Arend, Mark B.
1990-01-01
The framework is defined for a general method for selecting a necessary and sufficient subset of a general software life cycle's information products to support a new software development process. Procedures for characterizing problem domains in general and mapping them to a tailored set of life cycle processes and products are presented. An overview of the method is shown using the following steps: (1) During the problem concept definition phase, perform standardized interviews and dialogs between developer and user, and between user and customer; (2) Generate a quality needs profile of the software to be developed, based on information gathered in step 1; (3) Translate the quality needs profile into a profile of quality criteria that must be met by the software to satisfy the quality needs; (4) Map the quality criteria to a set of accepted processes and products for achieving each criterion; (5) Select the information products which match or support the accepted processes and products of step 4; and (6) Select the design methodology which produces the information products selected in step 5.
Toward a Multicultural Model of the Stress Process.
ERIC Educational Resources Information Center
Slavin, Lesley A.; And Others
1991-01-01
Attempts to expand Lazarus and Folkman's stress model to include culture-relevant dimensions. Discusses cultural factors that influence each component of the stress model, including types and frequency of events experienced, appraisals of stressfulness of events, appraisals of available coping resources, selection of coping strategies, and…
NASA Astrophysics Data System (ADS)
Mohammed, Habiba Ibrahim; Majid, Zulkepli; Yusof, Norhakim Bin; Bello Yamusa, Yamusa
2018-03-01
Landfilling remains the most common systematic technique of solid waste disposal in most developed and developing countries. Finding a suitable site for a landfill is a very challenging task. The landfill site selection process aims to identify suitable areas that will protect the environment and public health from pollution and hazards. Therefore, various environmental, physical, socio-economic, and geological criteria must be considered before siting any landfill. This makes the site selection process rigorous and tedious because it involves processing a large amount of spatial data, rules and regulations from different agencies, and also policies from decision makers. This allows the incorporation of conflicting objectives and decision-maker preferences into spatial decision models. This paper particularly analyzes the multi-criteria evaluation (MCE) method of landfill site selection for solid waste management by means of literature reviews and surveys. The study will help decision makers and waste management authorities to choose the most effective method when considering landfill site selection.
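A bare-bones sketch of a weighted-sum multi-criteria evaluation of the kind surveyed here, assuming invented criterion layers, weights, and an exclusion constraint:

```python
# Weighted-overlay suitability mapping with an exclusion mask (invented layers and weights).
import numpy as np

rng = np.random.default_rng(5)
dist_water = rng.random((50, 50))          # standardized 0-1 criterion layers (stand-ins)
dist_roads = rng.random((50, 50))
slope      = rng.random((50, 50))
protected  = rng.random((50, 50)) < 0.1    # constraint: cells excluded from consideration

weights = {"water": 0.5, "roads": 0.2, "slope": 0.3}
suitability = (weights["water"] * dist_water +          # farther from water is better
               weights["roads"] * (1 - dist_roads) +    # closer to roads is better
               weights["slope"] * (1 - slope))          # flatter is better
suitability[protected] = np.nan                         # apply exclusion constraints
best = np.unravel_index(np.nanargmax(suitability), suitability.shape)
print("most suitable cell:", best)
```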
Arnoldt, Hinrich; Strogatz, Steven H; Timme, Marc
2015-01-01
It has been hypothesized that in the era just before the last universal common ancestor emerged, life on earth was fundamentally collective. Ancient life forms shared their genetic material freely through massive horizontal gene transfer (HGT). At a certain point, however, life made a transition to the modern era of individuality and vertical descent. Here we present a minimal model for stochastic processes potentially contributing to this hypothesized "Darwinian transition." The model suggests that HGT-dominated dynamics may have been intermittently interrupted by selection-driven processes during which genotypes became fitter and decreased their inclination toward HGT. Stochastic switching in the population dynamics with three-point (hypernetwork) interactions may have destabilized the HGT-dominated collective state and essentially contributed to the emergence of vertical descent and the first well-defined species in early evolution. A systematic nonlinear analysis of the stochastic model dynamics covering key features of evolutionary processes (such as selection, mutation, drift and HGT) supports this view. Our findings thus suggest a viable direction out of early collective evolution, potentially enabling the start of individuality and vertical Darwinian evolution.
Adaptive convex combination approach for the identification of improper quaternion processes.
Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P
2014-01-01
Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
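A minimal sketch of an adaptive convex combination of a strictly linear and a widely linear LMS filter, simplified to the complex domain (the paper works in the quaternion domain with QLMS and WL-QLMS). The synthetic system, step sizes and signal model below are illustrative assumptions, not the authors' implementation; the point is only that the mixing parameter drifting toward the widely linear branch signals an improper process.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 4
x = rng.standard_normal(N) + 1j * 0.2 * rng.standard_normal(N)   # improper input
h_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)
g_true = rng.standard_normal(L) + 1j * rng.standard_normal(L)     # widely linear part

h_sl = np.zeros(L, complex)                         # strictly linear filter weights
h_wl = np.zeros(L, complex)
g_wl = np.zeros(L, complex)                         # widely linear filter weights
a, mu, mu_a = 0.0, 0.01, 0.1

for n in range(L, N):
    xn = x[n - L:n][::-1]
    d = h_true.conj() @ xn + g_true.conj() @ xn.conj() + 0.01 * rng.standard_normal()
    y_sl = h_sl.conj() @ xn
    y_wl = h_wl.conj() @ xn + g_wl.conj() @ xn.conj()
    lam = 1.0 / (1.0 + np.exp(-a))                  # convex mixing parameter in (0, 1)
    y = lam * y_wl + (1 - lam) * y_sl
    e, e_sl, e_wl = d - y, d - y_sl, d - y_wl
    h_sl += mu * np.conj(e_sl) * xn                 # complex LMS update
    h_wl += mu * np.conj(e_wl) * xn                 # widely linear LMS updates
    g_wl += mu * np.conj(e_wl) * xn.conj()
    # The mixing parameter drifts toward the branch with the smaller error
    a = np.clip(a + mu_a * np.real(np.conj(e) * (y_wl - y_sl)) * lam * (1 - lam), -4, 4)

print("final mixing parameter (near 1 suggests an improper, widely linear process):",
      round(1.0 / (1.0 + np.exp(-a)), 3))
```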
The tangled bank of amino acids
Pollock, David D.
2016-01-01
Abstract The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. PMID:27028523
Integrated control system for electron beam processes
NASA Astrophysics Data System (ADS)
Koleva, L.; Koleva, E.; Batchkova, I.; Mladenov, G.
2018-03-01
The ISO/IEC 62264 standard is widely used for integration of the business systems of a manufacturer with the corresponding manufacturing control systems based on hierarchical equipment models, functional data and manufacturing operations activity models. In order to achieve the integration of control systems, formal object communication models must be developed, together with manufacturing operations activity models, which coordinate the integration between different levels of control. In this article, the development of integrated control system for electron beam welding process is presented as part of a fully integrated control system of an electron beam plant, including also other additional processes: surface modification, electron beam evaporation, selective melting and electron beam diagnostics.
Accurate abundance determinations in S stars
NASA Astrophysics Data System (ADS)
Neyskens, P.; Van Eck, S.; Plez, B.; Goriely, S.; Siess, L.; Jorissen, A.
2011-12-01
S-type stars are thought to be the first objects, during their evolution on the asymptotic giant branch (AGB), to experience s-process nucleosynthesis and third dredge-ups, and therefore to exhibit s-process signatures in their atmospheres. Until present, the modeling of these processes is subject to large uncertainties. Precise abundance determinations in S stars are of extreme importance for constraining e.g., the depth and the formation of the 13C pocket. In this paper a large grid of MARCS model atmospheres for S stars is used to derive precise abundances of key s-process elements and iron. A first estimation of the atmospheric parameters is obtained using a set of well-chosen photometric and spectroscopic indices for selecting the best model atmosphere of each S star. Abundances are derived from spectral line synthesis, using the selected model atmosphere. Special interest is paid to technetium, an element without stable isotopes. Its detection in stars is considered as the best possible signature that the star effectively populates the thermally-pulsing AGB (TP-AGB) phase of evolution. The derived Tc/Zr abundances are compared, as a function of the derived [Zr/Fe] overabundances, with AGB stellar model predictions. The computed [Zr/Fe] overabundances are in good agreement with the AGB stellar evolution model predictions, while the Tc/Zr abundances are slightly over-predicted. This discrepancy can help to set stronger constraints on nucleosynthesis and mixing mechanisms in AGB stars.
Nonparametric Bayesian models for a spatial covariance.
Reich, Brian J; Fuentes, Montserrat
2012-01-01
A crucial step in the analysis of spatial data is to estimate the spatial correlation function that determines the relationship between a spatial process at two locations. The standard approach to selecting the appropriate correlation function is to use prior knowledge or exploratory analysis, such as a variogram analysis, to select the correct parametric correlation function. Rather than selecting a particular parametric correlation function, we treat the covariance function as an unknown function to be estimated from the data. We propose a flexible prior for the correlation function to provide robustness to the choice of correlation function. We specify the prior for the correlation function using spectral methods and the Dirichlet process prior, which is a common prior for an unknown distribution function. Our model does not require Gaussian data or spatial locations on a regular grid. The approach is demonstrated using a simulation study as well as an analysis of California air pollution data.
Distribution and avoidance of debris on epoxy resin during UV ns-laser scanning processes
NASA Astrophysics Data System (ADS)
Veltrup, Markus; Lukasczyk, Thomas; Ihde, Jörg; Mayer, Bernd
2018-05-01
In this paper the distribution of debris generated by a nanosecond UV laser (248 nm) on epoxy resin and the prevention of the corresponding re-deposition effects by parameter selection for a ns-laser scanning process were investigated. In order to understand the mechanisms behind the debris generation, in-situ particle measurements were performed during laser treatment. These measurements enabled the determination of the ablation threshold of the epoxy resin as well as the particle density and size distribution in relation to the applied laser parameters. The experiments showed that it is possible to reduce debris on the surface with an adapted selection of pulse overlap with respect to laser fluence. A theoretical model for the parameter selection was developed and tested. Based on this model, the correct choice of laser parameters with reduced laser fluence resulted in a surface without any re-deposited micro-particles.
Dynamic interactions between visual working memory and saccade target selection
Schneegans, Sebastian; Spencer, John P.; Schöner, Gregor; Hwang, Seongmin; Hollingworth, Andrew
2014-01-01
Recent psychophysical experiments have shown that working memory for visual surface features interacts with saccadic motor planning, even in tasks where the saccade target is unambiguously specified by spatial cues. Specifically, a match between a memorized color and the color of either the designated target or a distractor stimulus influences saccade target selection, saccade amplitudes, and latencies in a systematic fashion. To elucidate these effects, we present a dynamic neural field model in combination with new experimental data. The model captures the neural processes underlying visual perception, working memory, and saccade planning relevant to the psychophysical experiment. It consists of a low-level visual sensory representation that interacts with two separate pathways: a spatial pathway implementing spatial attention and saccade generation, and a surface feature pathway implementing color working memory and feature attention. Due to bidirectional coupling between visual working memory and feature attention in the model, the working memory content can indirectly exert an effect on perceptual processing in the low-level sensory representation. This in turn biases saccadic movement planning in the spatial pathway, allowing the model to quantitatively reproduce the observed interaction effects. The continuous coupling between representations in the model also implies that modulation should be bidirectional, and model simulations provide specific predictions for complementary effects of saccade target selection on visual working memory. These predictions were empirically confirmed in a new experiment: Memory for a sample color was biased toward the color of a task-irrelevant saccade target object, demonstrating the bidirectional coupling between visual working memory and perceptual processing. PMID:25228628
Optimization and Simulation of SLM Process for High Density H13 Tool Steel Parts
NASA Astrophysics Data System (ADS)
Laakso, Petri; Riipinen, Tuomas; Laukkanen, Anssi; Andersson, Tom; Jokinen, Antero; Revuelta, Alejandro; Ruusuvuori, Kimmo
This paper demonstrates the successful printing and optimization of processing parameters of high-strength H13 tool steel by Selective Laser Melting (SLM). D-Optimal Design of Experiments (DOE) approach is used for parameter optimization of laser power, scanning speed and hatch width. With 50 test samples (1×1×1cm) we establish parameter windows for these three parameters in relation to part density. The calculated numerical model is found to be in good agreement with the density data obtained from the samples using image analysis. A thermomechanical finite element simulation model is constructed of the SLM process and validated by comparing the calculated densities retrieved from the model with the experimentally determined densities. With the simulation tool one can explore the effect of different parameters on density before making any printed samples. Establishing a parameter window provides the user with freedom for parameter selection such as choosing parameters that result in fastest print speed.
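To make the idea of a fitted parameter window concrete, the sketch below fits a quadratic response surface of relative density against laser power, scanning speed and hatch width on synthetic data and then queries it at one candidate setting. This is an illustrative stand-in, not the authors' D-optimal DOE or finite element model; every number in it is a made-up assumption.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 50                                      # 50 test samples, as in the study setup
power = rng.uniform(150, 300, n)            # laser power, W (assumed range)
speed = rng.uniform(400, 1200, n)           # scanning speed, mm/s (assumed range)
hatch = rng.uniform(0.08, 0.14, n)          # hatch width, mm (assumed range)
X = np.column_stack([power, speed, hatch])

# Synthetic relative-density response with an interior optimum plus noise
density = (99.5 - 1e-4 * (power - 250) ** 2 - 5e-6 * (speed - 800) ** 2
           - 300 * (hatch - 0.11) ** 2 + 0.05 * rng.standard_normal(n))

poly = PolynomialFeatures(degree=2)         # quadratic response surface in 3 factors
model = LinearRegression().fit(poly.fit_transform(X), density)
query = poly.transform([[250.0, 800.0, 0.11]])
print("predicted density at 250 W, 800 mm/s, 0.11 mm:", round(model.predict(query)[0], 2))
```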
Agile Implementation: A Blueprint for Implementing Evidence-Based Healthcare Solutions.
Boustani, Malaz; Alder, Catherine A; Solid, Craig A
2018-03-07
Objective: to describe the essential components of an Agile Implementation (AI) process, which rapidly and effectively implements evidence-based healthcare solutions, and to present a case study demonstrating its utility. Design: case demonstration study. Setting: integrated, safety-net healthcare delivery system in Indianapolis. Participants: interdisciplinary team of clinicians and administrators. Measurements: reduction in dementia symptoms and caregiver burden; inpatient and outpatient care expenditures. Results: implementation scientists were able to implement a collaborative care model for dementia care and sustain it for more than 9 years. The model was implemented and sustained by using the elements of the AI process: proactive surveillance and confirmation of clinical opportunities, selection of the right evidence-based healthcare solution, localization (i.e., tailoring to the local environment) of the selected solution, development of an evaluation plan and performance feedback loop, development of a minimally standardized operation manual, and updating such manual annually. Conclusion: the AI process provides an effective model to implement and sustain evidence-based healthcare solutions. © 2018, Copyright the Authors Journal compilation © 2018, The American Geriatrics Society.
A model for field toxicity tests
Kaiser, Mark S.; Finger, Susan E.
1996-01-01
Toxicity tests conducted under field conditions present an interesting challenge for statistical modelling. In contrast to laboratory tests, the concentrations of potential toxicants are not held constant over the test. In addition, the number and identity of toxicants that belong in a model as explanatory factors are not known and must be determined through a model selection process. We present one model to deal with these needs. This model takes the record of mortalities to form a multinomial distribution in which parameters are modelled as products of conditional daily survival probabilities. These conditional probabilities are in turn modelled as logistic functions of the explanatory factors. The model incorporates lagged values of the explanatory factors to deal with changes in the pattern of mortalities over time. The issue of model selection and assessment is approached through the use of generalized information criteria and power divergence goodness-of-fit tests. These model selection criteria are applied in a cross-validation scheme designed to assess the ability of a model to both fit data used in estimation and predict data deleted from the estimation data set. The example presented demonstrates the need for inclusion of lagged values of the explanatory factors and suggests that penalized likelihood criteria may not provide adequate protection against overparameterized models in model selection.
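The structure described above lends itself to a compact sketch: daily conditional survival probabilities are logistic in the current and one-day-lagged toxicant concentration, the mortality record is multinomial in the resulting cell probabilities, and an information criterion scores alternative lag structures. The data and parameter values below are synthetic assumptions, not the paper's example.

```python
import numpy as np
from scipy.optimize import minimize

days = 7
conc = np.array([0.1, 0.5, 2.0, 1.5, 0.8, 0.3, 0.1])      # daily concentration (assumed)
deaths = np.array([0, 1, 6, 5, 2, 1, 0]); n0 = 30          # observed mortalities (assumed)
lag = np.concatenate(([0.0], conc[:-1]))                   # one-day lagged concentration

def neg_loglik(beta):
    b0, b1, b2 = beta
    surv = 1.0 / (1.0 + np.exp(-(b0 + b1 * conc + b2 * lag)))   # P(survive day t | alive)
    alive_before = np.concatenate(([1.0], np.cumprod(surv)[:-1]))
    p_die = alive_before * (1.0 - surv)                    # multinomial cell probabilities
    p_last = np.prod(surv)                                 # survive the whole test
    counts = np.concatenate((deaths, [n0 - deaths.sum()]))
    probs = np.concatenate((p_die, [p_last]))
    return -np.sum(counts * np.log(np.clip(probs, 1e-12, 1.0)))

fit = minimize(neg_loglik, x0=np.array([2.0, -0.5, -0.5]), method="Nelder-Mead")
aic = 2 * len(fit.x) + 2 * fit.fun     # information criterion for comparing lag structures
print("estimates:", fit.x, "AIC:", round(aic, 2))
```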
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aiming at the identification of the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address such a problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Phillips, Sharon A.
2013-01-01
Selecting appropriate performance improvement interventions is a critical component of a comprehensive model of performance improvement. Intervention selection is an interconnected process involving analysis of an organization's environment, definition of the performance problem, and identification of a performance gap and identification of causal…
Toward a Political-Organizational Model of Gatekeeping: The Case of Elite Colleges.
ERIC Educational Resources Information Center
Karen, David
1990-01-01
Develops a gatekeeping theory by stepping inside the black box of Harvard University's admissions process. Stresses how political and organizational contexts influence selection (gatekeeping). Analyzes how student merit and social class-based factors mutually determine selection. Links an understanding of the organizational field with process…
Perceptual-Motor and Cognitive Performance Task-Battery for Pilot Selection
1981-01-01
processing. Basic researchers in cognitive psychology have become discouraged with the inability of numerous models to consider and account for individual...attention in the use of cues in verbal problem-solving. Journal of Personality, 197?, 40, 226-241. Mensh, I. N. Pilot selection by psychological methods
Issues for Consideration by Mathematics Educators: Selected Papers.
ERIC Educational Resources Information Center
Denmark, Tom, Ed.
This set of papers, selected from presentations at the Fourth and Fifth Annual Conferences of the Research Council for Diagnostic and Prescriptive Mathematics, is of primary interest to mathematics educators. In the first paper, Romberg describes a model for diagnosing mathematical learning difficulties which extends the diagnostic process beyond…
NASA Astrophysics Data System (ADS)
Swan, B.; Laverdiere, M.; Yang, L.
2017-12-01
In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects the model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of the training process and sample creation.
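A minimal sketch of the tile-grouping step described above, using scikit-learn's affinity propagation on assumed per-tile spectral summaries; the exemplar of each cluster stands in for a "highly representative member" from which new training samples would be drawn. The feature construction and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(42)
n_tiles, n_bands = 200, 4
# Assumed per-tile summary: mean reflectance per band plus a commission-error rate
band_means = rng.random((n_tiles, n_bands))
error_rate = rng.random((n_tiles, 1))
features = np.hstack([band_means, error_rate])

ap = AffinityPropagation(damping=0.9, random_state=0).fit(features)
exemplars = ap.cluster_centers_indices_          # indices of representative tiles
print(f"{len(exemplars)} exemplar tiles selected out of {n_tiles}:", exemplars)
```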
The role of the electrolyte in the selective dissolution of metal alloys
NASA Astrophysics Data System (ADS)
Policastro, Steven A.
Dealloying plays an important role in several corrosion processes, including pitting corrosion through the formation of local cathodes from the selective dissolution of intermetallic particles and stress-corrosion cracking in which it is responsible for injecting cracks from the surface into the undealloyed bulk material. Additionally, directed dealloying in the laboratory to form nanoporous structures has been the subject of much recent study because of the unique structural properties that the porous layer provides. In order to better understand the physical reasons for dealloying as well as understand the parameters that influence the evolution of the microstructure, several models have been proposed. Current theoretical descriptions of dealloying have been very successful in explaining some features of selective dissolution but additional behaviors can be included into the model to improve understanding of the dealloying process. In the present work, the effects of electrolyte component interactions, temperature, alloy cohesive energies, and applied potential on the development of nanoporosity via the selective dissolution of the less-noble component from binary and ternary alloys are considered. Both a kinetic Monte-Carlo (KMC) model of the behavior of the metal atoms and the electrolyte ions at the metal-solution interface and a phase-yield model of ligament coarsening are developed. By adding these additional parameters into the KMC model, a rich set of behaviors is observed in the simulation results. From the simulation results, it is suggested that selectively dissolving a binary alloy in a very aggressive electrolyte that targeted the LN atoms could provide a porous microstructure that retained a higher concentration of the LN atoms in its ligaments and thus retain more of the mechanical properties of the bulk alloy. In addition, by adding even a small fraction of a third, noble component to form a ternary alloy the dissolution kinetics of the least noble component can be dramatically altered, providing a means of controlling dealloying depth. Some molecular dynamics calculations are used to justify the assumptions of metal atom motion in the KMC model. A recently developed parameter-space exploration technique, COERCE, is employed to optimize the process of obtaining meaningful parameter values from the KMC simulation.
Lord, Dominique; Washington, Simon P; Ivan, John N
2005-01-01
There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
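The simulation argument can be illustrated in a few lines: sites generate crashes as Bernoulli trials with small, unequal probabilities (Poisson trials), and with low exposure the observed share of zero-count sites exceeds what a single Poisson fit implies, with no "perfectly safe" state in the data-generating process. The parameter choices below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_trials = 1000, 50                        # low exposure: few trials per site
p = np.minimum(rng.gamma(shape=0.3, scale=0.07, size=n_sites), 1.0)  # unequal, small
counts = np.array([rng.binomial(n_trials, pi) for pi in p])

lam = counts.mean()
print("observed share of zero-crash sites:", (counts == 0).mean())
print("share implied by a single Poisson fit:", round(np.exp(-lam), 3))
```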
Mood disorders: neurocognitive models.
Malhi, Gin S; Byrow, Yulisha; Fritz, Kristina; Das, Pritha; Baune, Bernhard T; Porter, Richard J; Outhred, Tim
2015-12-01
In recent years, a number of neurocognitive models stemming from psychiatry and psychology schools of thought have conceptualized the pathophysiology of mood disorders in terms of dysfunctional neural mechanisms that underpin and drive neurocognitive processes. Though these models have been useful for advancing our theoretical understanding and facilitating important lines of research, translation of these models and their application within the clinical arena have been limited, partly because of a lack of integration and synthesis. Cognitive neuroscience provides a novel perspective for understanding and modeling mood disorders. This selective review of influential neurocognitive models develops an integrative approach that can serve as a template for future research and the development of a clinically meaningful framework for investigating, diagnosing, and treating mood disorders. A selective literature search was conducted using PubMed and PsycINFO to identify prominent neurobiological and neurocognitive models of mood disorders. Most models identify similar neural networks and brain regions and neuropsychological processes in the neurocognition of mood; however, they differ in terms of specific functions attached to neural processes and how these interact. Furthermore, cognitive biases, reward processing and motivation, rumination, and mood stability, which play significant roles in the manner in which attention, appraisal, and response processes are deployed in mood disorders, are not sufficiently integrated. The inclusion of interactions between these additional components enhances our understanding of the etiology and pathophysiology of mood disorders. Through integration of key cognitive functions and understanding of how these interface with neural functioning within neurocognitive models of mood disorders, a framework for research can be created for translation to diagnosis and treatment of mood disorders. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods, in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance through sacrificing part of model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes. The direct involvement of matrix inversions is thereby relieved. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach where the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
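For orientation, here is how least angle regression is typically used as a term-selection tool, via scikit-learn's lars_path rather than the recursive algorithm proposed in the paper; the candidate model terms and data are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))                 # 10 candidate model terms
y = 3.0 * X[:, 2] - 2.0 * X[:, 5] + 0.1 * rng.standard_normal(100)

# 'active' records the order in which terms enter the model along the LAR path
alphas, active, coefs = lars_path(X, y, method="lar")
print("terms in order of entry:", active)          # the informative terms enter first
```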
Hybrid feature selection for supporting lightweight intrusion detection systems
NASA Astrophysics Data System (ADS)
Song, Jianglong; Zhao, Wentao; Liu, Qiang; Wang, Xin
2017-08-01
Redundant and irrelevant features not only cause high resource consumption but also degrade the performance of Intrusion Detection Systems (IDS), especially when coping with big data. These features slow down the process of training and testing in network traffic classification. Therefore, a hybrid feature selection approach combining wrapper and filter selection is designed in this paper to build a lightweight intrusion detection system. Two main phases are involved in this method. The first phase conducts a preliminary search for an optimal subset of features, in which chi-square feature selection is utilized. The selected set of features from the previous phase is further refined in the second phase in a wrapper manner, in which a Random Forest (RF) is used to guide the selection process and retain an optimized set of features. After that, we build an RF-based detection model and make a fair comparison with other approaches. The experimental results on NSL-KDD datasets show that our approach results in higher detection accuracy as well as faster training and testing processes.
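A hedged sketch of the two-phase idea using generic scikit-learn components rather than the authors' exact procedure: a chi-square filter produces a preliminary subset, and a Random-Forest-guided wrapper (recursive feature elimination here) refines it. The dataset, subset sizes and estimator settings are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)
X = MinMaxScaler().fit_transform(X)              # chi2 requires non-negative inputs

# Phase 1: filter -- keep the 20 features with the highest chi-square scores
filt = SelectKBest(chi2, k=20).fit(X, y)
X_filtered = filt.transform(X)

# Phase 2: wrapper -- let a Random Forest guide elimination down to 10 features
rf = RandomForestClassifier(n_estimators=100, random_state=0)
wrapper = RFE(rf, n_features_to_select=10).fit(X_filtered, y)
print("features kept after both phases:",
      filt.get_support(indices=True)[wrapper.get_support()])
```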
Inference on the Strength of Balancing Selection for Epistatically Interacting Loci
Buzbas, Erkan Ozge; Joyce, Paul; Rosenberg, Noah A.
2011-01-01
Existing inference methods for estimating the strength of balancing selection in multi-locus genotypes rely on the assumption that there are no epistatic interactions between loci. Complex systems in which balancing selection is prevalent, such as sets of human immune system genes, are known to contain components that interact epistatically. Therefore, current methods may not produce reliable inference on the strength of selection at these loci. In this paper, we address this problem by presenting statistical methods that can account for epistatic interactions in making inference about balancing selection. A theoretical result due to Fearnhead (2006) is used to build a multi-locus Wright-Fisher model of balancing selection, allowing for epistatic interactions among loci. Antagonistic and synergistic types of interactions are examined. The joint posterior distribution of the selection and mutation parameters is sampled by Markov chain Monte Carlo methods, and the plausibility of models is assessed via Bayes factors. As a component of the inference process, an algorithm to generate multi-locus allele frequencies under balancing selection models with epistasis is also presented. Recent evidence on interactions among a set of human immune system genes is introduced as a motivating biological system for the epistatic model, and data on these genes are used to demonstrate the methods. PMID:21277883
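As a toy illustration of simulating allele frequencies under balancing selection (the single-locus, heterozygote-advantage case only; the paper's multi-locus epistatic machinery and MCMC sampler are not reproduced here), with all parameter values assumed:

```python
import numpy as np

rng = np.random.default_rng(11)
N, s, gens = 500, 0.05, 2000           # population size, selection strength (assumed)

x = 0.05                               # starting frequency of allele A
for _ in range(gens):
    # Heterozygote advantage: fitnesses 1-s, 1, 1-s for AA, Aa, aa
    w_bar = (1 - s) * x**2 + 2 * x * (1 - x) + (1 - s) * (1 - x) ** 2
    x_sel = ((1 - s) * x**2 + x * (1 - x)) / w_bar   # deterministic selection step
    x = rng.binomial(2 * N, x_sel) / (2 * N)          # binomial drift step

print("allele frequency after balancing selection:", x)   # expected to hover near 0.5
```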
Human Systems Integration Design Environment (HSIDE)
2012-04-09
quality of the resulting HSI products. Subject terms: HSI, Manning Estimation and Validation, Risk Assessment, IPOE, PLM, BPMN, Workflow... business process model in Business Process Modeling Notation (BPMN) or the actual workflow template associated with the specific functional area, again... as filtered by the user settings in the high-level interface. Figure 3 shows the initial screen which allows the user to select either the BPMN or
A Darwinian approach to the origin of life cycles with group properties.
Rashidi, Armin; Shelton, Deborah E; Michod, Richard E
2015-06-01
A selective explanation for the evolution of multicellular organisms from unicellular ones requires knowledge of both selective pressures and factors affecting the response to selection. Understanding the response to selection is particularly challenging in the case of evolutionary transitions in individuality, because these transitions involve a shift in the very units of selection. We develop a conceptual framework in which three fundamental processes (growth, division, and splitting) are the scaffold for unicellular and multicellular life cycles alike. We (i) enumerate the possible ways in which these processes can be linked to create more complex life cycles, (ii) introduce three genes based on growth, division and splitting that, acting in concert, determine the architecture of the life cycles, and finally, (iii) study the evolution of the simplest five life cycles using a heuristic model of coupled ordinary differential equations in which mutations are allowed in the three genes. We demonstrate how changes in the regulation of three fundamental aspects of colonial form (cell size, colony size, and colony cell number) could lead unicellular life cycles to evolve into primitive multicellular life cycles with group properties. One interesting prediction of the model is that selection generally favors cycles with group level properties when intermediate body size is associated with lowest mortality. That is, a universal requirement for the evolution of group cycles in the model is that the size-mortality curve be U-shaped. Furthermore, growth must decelerate with size. Copyright © 2015 Elsevier Inc. All rights reserved.
An Approximate Markov Model for the Wright-Fisher Diffusion and Its Application to Time Series Data.
Ferrer-Admetlla, Anna; Leuenberger, Christoph; Jensen, Jeffrey D; Wegmann, Daniel
2016-06-01
The joint and accurate inference of selection and demography from genetic data is considered a particularly challenging question in population genetics, since both processes may lead to very similar patterns of genetic diversity. However, additional information for disentangling these effects may be obtained by observing changes in allele frequencies over multiple time points. Such data are common in experimental evolution studies, as well as in the comparison of ancient and contemporary samples. Leveraging this information, however, has been computationally challenging, particularly when considering multilocus data sets. To overcome these issues, we introduce a novel, discrete approximation for diffusion processes, termed mean transition time approximation, which preserves the long-term behavior of the underlying continuous diffusion process. We then derive this approximation for the particular case of inferring selection and demography from time series data under the classic Wright-Fisher model and demonstrate that our approximation is well suited to describe allele trajectories through time, even when only a few states are used. We then develop a Bayesian inference approach to jointly infer the population size and locus-specific selection coefficients with high accuracy and further extend this model to also infer the rates of sequencing errors and mutations. We finally apply our approach to recent experimental data on the evolution of drug resistance in influenza virus, identifying likely targets of selection and finding evidence for much larger viral population sizes than previously reported. Copyright © 2016 by the Genetics Society of America.
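The kind of input such a method expects can be sketched with a minimal Wright-Fisher simulation with selection, sampled at a handful of time points; the population size, selection coefficient and sampling times below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, s, gens = 1000, 0.02, 200           # haploid population size, selection coefficient
sample_times = [0, 50, 100, 150, 200]  # generations at which frequencies are observed

x = 0.1                                # initial frequency of the selected allele
trajectory = {0: x}
for t in range(1, gens + 1):
    x_sel = x * (1 + s) / (x * (1 + s) + (1 - x))   # deterministic selection step
    x = rng.binomial(N, x_sel) / N                  # binomial drift step
    if t in sample_times:
        trajectory[t] = x

print("sampled allele frequencies:", trajectory)
```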
Multi-kilowatt modularized spacecraft power processing system development
NASA Technical Reports Server (NTRS)
Andrews, R. E.; Hayden, J. H.; Hedges, R. T.; Rehmann, D. W.
1975-01-01
A review of existing information pertaining to spacecraft power processing systems and equipment was accomplished with a view towards applicability to the modularization of multi-kilowatt power processors. Power requirements for future spacecraft were determined from the NASA mission model-shuttle systems payload data study which provided the limits for modular power equipment capabilities. Three power processing systems were compared to evaluation criteria to select the system best suited for modularity. The shunt regulated direct energy transfer system was selected by this analysis for a conceptual design effort which produced equipment specifications, schematics, envelope drawings, and power module configurations.
Disciplined rubidium oscillator with GPS selective availability
NASA Technical Reports Server (NTRS)
Dewey, Wayne P.
1993-01-01
A U.S. Department of Defense decision for continuous implementation of GPS Selective Availability (S/A) has made it necessary to modify rubidium oscillator disciplining methods. One such method for reducing the effects of S/A on the oscillator disciplining process was developed which achieves results approaching pre-S/A GPS. The Satellite Hopping algorithm used in minimizing the effects of S/A on the oscillator disciplining process is described, and the results of using this process are compared with those obtained prior to the implementation of S/A. Test results are from a TrueTime rubidium-based Model GPS-DC timing receiver.
Inferring phenomenological models of Markov processes from data
NASA Astrophysics Data System (ADS)
Rivera, Catalina; Nemenman, Ilya
Microscopically accurate modeling of stochastic dynamics of biochemical networks is hard due to the extremely high dimensionality of the state space of such networks. Here we propose an algorithm for inference of phenomenological, coarse-grained models of Markov processes describing the network dynamics directly from data, without the intermediate step of microscopically accurate modeling. The approach relies on the linear nature of the Chemical Master Equation and uses Bayesian Model Selection for identification of parsimonious models that fit the data. When applied to synthetic data from the Kinetic Proofreading process (KPR), a common mechanism used by cells for increasing specificity of molecular assembly, the algorithm successfully uncovers the known coarse-grained description of the process. This phenomenological description has been noticed previously, but here it is derived in an automated manner by the algorithm. James S. McDonnell Foundation Grant No. 220020321.
NASA Astrophysics Data System (ADS)
Kapitan, Loginn
This research created a new model which provides an integrated approach to planning the effective selection and employment of airborne sensor systems in response to accidental or intentional chemical vapor releases. The approach taken was to use systems engineering and decision analysis methods to construct a model architecture which produced a modular structure for integrating both new and existing components into a logical procedure to assess the application of airborne sensor systems to address chemical vapor hazards. The resulting integrated process model includes an internal aggregation model which allowed differentiation among alternative airborne sensor systems. Both models were developed and validated by experts and demonstrated using appropriate hazardous chemical release scenarios. The resultant prototype integrated process model or system fills a current gap in capability allowing improved planning, training and exercise for HAZMAT teams and first responders when considering the selection and employment of airborne sensor systems. Through the research process, insights into the current response structure and how current airborne capability may be most effectively used were generated. Furthermore, the resultant prototype system is tailorable for local, state, and federal application, and can potentially be modified to help evaluate investments in new airborne sensor technology and systems. Better planning, training and preparedness exercising holds the prospect for the effective application of airborne assets for improved response to large scale chemical release incidents. Improved response will result in fewer casualties and lives lost, reduced economic impact, and increased protection of critical infrastructure when faced with accidental and intentional terrorist release of hazardous industrial chemicals. With the prospect of more airborne sensor systems becoming available, this prototype system integrates existing and new tools into an effective process for the selection and employment of airborne sensors to better plan, train and exercise ahead of potential chemical release events.
Rasti, Behnam; Namazi, Mohsen; Karimi-Jafari, M H; Ghasemi, Jahan B
2017-04-01
Due to its physiological and clinical roles, carbonic anhydrase (CA) is one of the most interesting case studies. There are different classes of CA inhibitors, including sulfonamides, polyamines, coumarins and dithiocarbamates (DTCs). However, many of them hardly act as selective inhibitors against a specific isoform. Therefore, finding highly selective inhibitors for different isoforms of CA is still an ongoing project. Proteochemometrics modeling (PCM) is able to model the bioactivity of multiple compounds against different isoforms of a protein. Therefore, it would be extremely applicable when investigating the selectivity of different ligands towards different receptors. Given these facts, we applied PCM to investigate the interaction space and structural properties that lead to the selective inhibition of CA isoforms by some dithiocarbamates. Our models have provided interesting structural information that can be considered to design compounds capable of inhibiting different isoforms of CA in an improved selective manner. Validity and predictivity of the models were confirmed by both internal and external validation methods, while the Y-scrambling approach was applied to assess the robustness of the models. To prove the reliability and the applicability of our findings, we showed how ligand-receptor selectivity can be affected by removing any of these critical findings from the modeling process. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Extraterrestrial materials processing and construction. [space industrialization
NASA Technical Reports Server (NTRS)
Criswell, D. R.; Waldron, R. D.; Mckenzie, J. D.
1980-01-01
Three different chemical processing schemes were identified for separating lunar soils into the major oxides and elements. Feedstock production for space industry; an HF acid leach process; electrorefining processes for lunar free metal and metal derived from chemical processing of lunar soils; production and use of silanes and spectrally selective materials; glass, ceramics, and electrochemistry workshops; and an econometric model of bootstrapping space industry are discussed.
Abbes, Aymen Ben; Gavault, Emmanuelle; Ripoll, Thierry
2014-01-01
We conducted a series of experiments to explore how the spatial configuration of objects influences the selection and the processing of these objects in a visual short-term memory task. We designed a new experiment in which participants had to memorize 4 targets presented among 4 distractors. Targets were cued during the presentation of distractor objects. Their locations varied according to 4 spatial configurations. From the first to the last configuration, the distance between targets' locations was progressively increased. The results revealed a high capacity to select and memorize targets embedded among distractors even when targets were extremely distant from each other. This capacity is discussed in relation to the unitary conception of attention, models of split attention, and the competitive interaction model. Finally, we propose that the spatial dispersion of objects has different effects on attentional allocation and processing stages. Thus, when targets are extremely distant from each other, attentional allocation becomes more difficult while processing becomes easier. This finding implies that these two aspects of attention need to be more clearly distinguished in future research.
Cheng, Weiwei; Sun, Da-Wen; Pu, Hongbin; Wei, Qingyi
2017-04-15
The feasibility of hyperspectral imaging (HSI) (400-1000 nm) for tracing the chemical spoilage extent of the raw meat used for two kinds of processed meats was investigated. Calibration models established separately for salted and cooked meats using full wavebands showed good results, with determination coefficients in prediction (R²P) of 0.887 and 0.832, respectively. For simplifying the calibration models, two variable selection methods were used and compared. The results showed that genetic algorithm-partial least squares (GA-PLS) with as many continuous wavebands selected as possible always had better performance. The potential of HSI to develop one multispectral system for simultaneously tracing the chemical spoilage extent of the two kinds of processed meats was also studied. A good result with an R²P of 0.854 was obtained using GA-PLS as the dimension reduction method, which was thus used to visualize total volatile base nitrogen (TVB-N) contents corresponding to each pixel of the image. Copyright © 2016 Elsevier Ltd. All rights reserved.
Software for Partly Automated Recognition of Targets
NASA Technical Reports Server (NTRS)
Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark; Selinsky, T.
2002-01-01
The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.
NASA Astrophysics Data System (ADS)
Belokurov, V. P.; Belokurov, S. V.; Korablev, R. A.; Shtepa, A. A.
2018-05-01
The article deals with decision making for transport tasks using search iterations in the management of motor transport processes. A method for selecting the best option for specific situations in the management of complex multi-criteria transport processes is suggested.
A Decision Model for Evaluating Potential Change in Instructional Programs.
ERIC Educational Resources Information Center
Amor, J. P.; Dyer, J. S.
A statistical model designed to assist elementary school principals in the process of selecting educational areas that should receive additional emphasis is presented. For each educational area, the model produces an index number which represents the expected "value" per dollar spent on an instructional program appropriate for strengthening that…
Model Educational Specifications for Technology in Schools.
ERIC Educational Resources Information Center
Maryland State Dept. of Education, College Park. Office of Administration and Finance.
This description of the Model Edspec, which can be used by itself or in conjunction with the "Format Guide of Educational Specifications," serves as a comprehensive planning tool for the selection and application of technology. The model is designed to assist schools in implementing the facilities development process, thereby making…
Exploring Autophagy in Drosophila
Juhász, Gábor
2017-01-01
Autophagy is a catabolic process in eukaryotic cells promoting bulk or selective degradation of cellular components within lysosomes. In recent decades, several model systems were utilized to dissect the molecular machinery of autophagy and to identify the impact of this cellular “self-eating” process on various physiological and pathological processes. Here we briefly discuss the advantages and limitations of using the fruit fly Drosophila melanogaster, a popular model in cell and developmental biology, to apprehend the main pathway of autophagy in a complete animal. PMID:28704946
Lower- and higher-level models of right hemisphere language. A selective survey.
Gainotti, Guido
2016-01-01
The models advanced to explain right hemisphere (RH) language function can be divided into two main types. According to the older (lower-level) models, RH language reflects the ontogenesis of conceptual and semantic-lexical development; the more recent models, on the other hand, suggest that the RH plays an important role in the use of higher-level language functions, such as metaphors, to convey complex, abstract concepts. The hypothesis that the RH may be preferentially involved in processing the semantic-lexical components of language was advanced by Zaidel in split-brain patients, and his model was confirmed by neuropsychological investigations showing that right brain-damaged patients exhibit selective semantic-lexical disorders. The possible links between lower and higher levels of RH language are discussed, as is the hypothesis that the RH may have privileged access to the figurative aspects of novel metaphorical expressions, whereas conventionalization of metaphorical meaning could be a bilaterally-mediated process involving abstract semantic-lexical codes.
Mate-sampling costs and sexy sons.
Kokko, H; Booksmythe, I; Jennions, M D
2015-01-01
Costly female mating preferences for purely Fisherian male traits (i.e. sexual ornaments that are genetically uncorrelated with inherent viability) are not expected to persist at equilibrium. The indirect benefit of producing 'sexy sons' (Fisher process) disappears: in some models, the male trait becomes fixed; in others, a range of male trait values persist, but a larger trait confers no net fitness advantage because it lowers survival. Insufficient indirect selection to counter the direct cost of producing fewer offspring means that preferences are lost. The only well-cited exception assumes biased mutation on male traits. The above findings generally assume constant direct selection against female preferences (i.e. fixed costs). We show that if mate-sampling costs are instead derived based on an explicit account of how females acquire mates, an initially costly mating preference can coevolve with a male trait so that both persist in the presence or absence of biased mutation. Our models predict that empirically detecting selection at equilibrium will be difficult, even if selection was responsible for the location of the current equilibrium. In general, it appears useful to integrate mate sampling theory with models of genetic consequences of mating preferences: being explicit about the process by which individuals select mates can alter equilibria. © 2014 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
Gonzalo Cogno, Soledad; Mato, Germán
2015-01-01
Orientation selectivity is ubiquitous in the primary visual cortex (V1) of mammals. In cats and monkeys, V1 displays spatially ordered maps of orientation preference. Instead, in mice, squirrels, and rats, orientation selective neurons in V1 are not spatially organized, giving rise to a seemingly random pattern usually referred to as a salt-and-pepper layout. The fact that such different organizations can sharpen orientation tuning leads to question the structural role of the intracortical connections; specifically the influence of plasticity and the generation of functional connectivity. In this work, we analyze the effect of plasticity processes on orientation selectivity for both scenarios. We study a computational model of layer 2/3 and a reduced one-dimensional model of orientation selective neurons, both in the balanced state. We analyze two plasticity mechanisms. The first one involves spike-timing dependent plasticity (STDP), while the second one considers the reconnection of the interactions according to the preferred orientations of the neurons. We find that under certain conditions STDP can indeed improve selectivity but it works in a somehow unexpected way, that is, effectively decreasing the modulated part of the intracortical connectivity as compared to the non-modulated part of it. For the reconnection mechanism we find that increasing functional connectivity leads, in fact, to a decrease in orientation selectivity if the network is in a stable balanced state. Both counterintuitive results are a consequence of the dynamics of the balanced state. We also find that selectivity can increase due to a reconnection process if the resulting connections give rise to an unstable balanced state. We compare these findings with recent experimental results. PMID:26347615
Scalable gastroscopic video summarization via similar-inhibition dictionary selection.
Wang, Shuai; Cong, Yang; Cao, Jun; Yang, Yunsheng; Tang, Yandong; Zhao, Huaici; Yu, Haibin
2016-01-01
This paper aims at developing an automated gastroscopic video summarization algorithm to assist clinicians to more effectively go through the abnormal contents of the video. To select the most representative frames from the original video sequence, we formulate the problem of gastroscopic video summarization as a dictionary selection issue. Different from the traditional dictionary selection methods, which take into account only the number and reconstruction ability of selected key frames, our model introduces the similar-inhibition constraint to reinforce the diversity of selected key frames. We calculate the attention cost by merging both gaze and content change into a prior cue to help select the frames with more high-level semantic information. Moreover, we adopt an image quality evaluation process to eliminate the interference of the poor quality images and a segmentation process to reduce the computational complexity. For experiments, we build a new gastroscopic video dataset captured from 30 volunteers with more than 400k images and compare our method with the state-of-the-arts using the content consistency, index consistency and content-index consistency with the ground truth. Compared with all competitors, our method obtains the best results in 23 of 30 videos evaluated based on content consistency, 24 of 30 videos evaluated based on index consistency and all videos evaluated based on content-index consistency. For gastroscopic video summarization, we propose an automated annotation method via similar-inhibition dictionary selection. Our model can achieve better performance compared with other state-of-the-art models and supplies more suitable key frames for diagnosis. The developed algorithm can be automatically adapted to various real applications, such as the training of young clinicians, computer-aided diagnosis or medical report generation. Copyright © 2015 Elsevier B.V. All rights reserved.
Development of an automated energy audit protocol for office buildings
NASA Astrophysics Data System (ADS)
Deb, Chirag
This study aims to enhance the building energy audit process and to reduce the time and cost required to conduct a full physical audit. For this, a total of 5 Energy Service Companies in Singapore collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques are adopted to analyse these reports. These techniques comprise cluster analysis and the development of prediction models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in EUI, a robust iterative process for selecting the appropriate variables is developed. The results show that the 4 variables of GFA, non-air-conditioning energy consumption, average chiller plant efficiency and installed capacity of chillers should be taken for clustering. This analysis is extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm is developed to select the input variables for the two energy saving prediction models. The results show that the ANN prediction model can predict the energy saving potential of a given building with an accuracy of +/-14.8%.
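An exhaustive variable-selection loop of the sort described can be sketched as follows: every subset of candidate building variables is scored by cross-validated R² and the best subset is retained. The variable names and synthetic data are assumptions for illustration, not the study's dataset.

```python
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
names = ["GFA", "non_AC_energy", "chiller_eff", "chiller_capacity", "age", "occupancy"]
X = rng.standard_normal((62, len(names)))               # 62 audited buildings (toy data)
y = 2 * X[:, 0] - X[:, 2] + 0.5 * X[:, 3] + 0.3 * rng.standard_normal(62)

best_score, best_subset = -np.inf, None
for k in range(1, len(names) + 1):
    for subset in itertools.combinations(range(len(names)), k):
        score = cross_val_score(LinearRegression(), X[:, subset], y, cv=5,
                                scoring="r2").mean()
        if score > best_score:
            best_score, best_subset = score, subset

print("best subset:", [names[i] for i in best_subset], "CV R²:", round(best_score, 3))
```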
Cong, Fengyu; Puoliväli, Tuomas; Alluri, Vinoo; Sipola, Tuomo; Burunat, Iballa; Toiviainen, Petri; Nandi, Asoke K; Brattico, Elvira; Ristaniemi, Tapani
2014-02-15
Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block and event-related designs, due to its outstanding advantages. For fMRI data during free-listening experiences, only a few exploratory studies applied ICA. For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was used to further pre-process the fMRI data to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, common components across the majority of participants were found by diffusion map and spectral clustering. The spatial maps extracted by the new ICA approach that were common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices. Meanwhile, they were found to be associated with the musical features. Compared with the conventional ICA approach, more participants were found to have the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for the individual ICA. Pre-processing the fMRI data with a reasonable band-pass digital filter can greatly benefit the subsequent model order selection and ICA of fMRI data from naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools to find common ICA spatial maps. Copyright © 2013 Elsevier B.V. All rights reserved.
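A rough sketch of the pre-processing and decomposition chain on toy data: a band-pass filter applied along the time axis (a Butterworth filter stands in for the FFT-based filter used in the study), a crude model-order estimate from the PCA spectrum, then ICA. The cut-off frequencies, the 90% variance rule and the synthetic data are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_timepoints, n_voxels, tr = 256, 500, 2.0
t = np.arange(n_timepoints) * tr
true_sources = np.stack([np.sin(2 * np.pi * 0.05 * t),
                         np.sign(np.sin(2 * np.pi * 0.03 * t)),
                         np.sin(2 * np.pi * 0.02 * t) ** 3], axis=1)
mixing = rng.standard_normal((3, n_voxels))
data = true_sources @ mixing + 0.2 * rng.standard_normal((n_timepoints, n_voxels))

# Band-pass filter (0.01-0.1 Hz) applied to each voxel time course
b, a = butter(4, [0.01, 0.1], btype="bandpass", fs=1.0 / tr)
filtered = filtfilt(b, a, data, axis=0)

# Crude model-order selection: number of components explaining 90% of the variance
pca = PCA().fit(filtered)
n_sources = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.9)) + 1

components = FastICA(n_components=n_sources, random_state=0).fit_transform(filtered)
print("estimated model order:", n_sources, "component matrix shape:", components.shape)
```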
An object-based visual attention model for robotic applications.
Yu, Yuanlong; Mann, George K I; Gosine, Raymond G
2010-10-01
By extending the integrated competition hypothesis, this paper presents an object-based visual attention model that selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
On selecting evidence to test hypotheses: A theory of selection tasks.
Ragni, Marco; Kola, Ilir; Johnson-Laird, Philip N
2018-05-21
How individuals choose evidence to test hypotheses is a long-standing puzzle. According to an algorithmic theory that we present, it is based on dual processes: individuals' intuitions depending on mental models of the hypothesis yield selections of evidence matching instances of the hypothesis, but their deliberations yield selections of potential counterexamples to the hypothesis. The results of 228 experiments using Wason's selection task corroborated the theory's predictions. Participants made dependent choices of items of evidence: the selections in 99 experiments were significantly more redundant (using Shannon's measure) than those of 10,000 simulations of each experiment based on independent selections. Participants tended to select evidence corresponding to instances of hypotheses, or to its counterexamples, or to both. Given certain contents, instructions, or framings of the task, they were more likely to select potential counterexamples to the hypothesis. When participants received feedback about their selections in the "repeated" selection task, they switched from selections of instances of the hypothesis to selection of potential counterexamples. These results eliminated most of the 15 alternative theories of selecting evidence. In a meta-analysis, the model theory yielded a better fit of the results of 228 experiments than the one remaining theory based on reasoning rather than meaning. We discuss the implications of the model theory for hypothesis testing and for a well-known paradox of confirmation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
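A condensed sketch of the backward-elimination idea for random forests, using scikit-learn; the synthetic data, elimination fraction and stopping point are placeholders rather than the StreamCat settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# synthetic stand-in for the stream-condition data: many weak predictors, few informative ones
X, y = make_classification(n_samples=1000, n_features=100, n_informative=10, random_state=0)
features = np.arange(X.shape[1])

results = []
while len(features) >= 10:
    rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0, n_jobs=-1)
    rf.fit(X[:, features], y)
    results.append((len(features), rf.oob_score_))
    # drop the least important 20% of the remaining predictors
    order = np.argsort(rf.feature_importances_)
    features = features[order[int(0.2 * len(features)):]]

# note: re-using OOB accuracy inside the elimination loop is optimistically biased, as the
# abstract cautions; validation folds external to the selection avoid this
for n_vars, oob in results:
    print(f"{n_vars:3d} predictors  OOB accuracy = {oob:.3f}")
```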
NASA Astrophysics Data System (ADS)
Olson, R.; Evans, J. P.; Fan, Y.
2015-12-01
NARCliM (NSW/ACT Regional Climate Modelling Project) is a regional climate project for Australia and the surrounding region. It dynamically downscales four General Circulation Models (GCMs) using three Regional Climate Models (RCMs) to provide climate projections for the CORDEX-AustralAsia region at 50 km resolution, and for south-east Australia at 10 km resolution. The project differs from previous work in the level of sophistication of model selection. Specifically, the selection process for GCMs included (i) conducting a literature review to evaluate model performance, (ii) analysing model independence, and (iii) selecting models that span the future temperature and precipitation change space. RCMs for downscaling the GCMs were chosen based on their performance for several precipitation events over south-east Australia, and on model independence. Bayesian Model Averaging (BMA) provides a statistically consistent framework for weighting the models based on their likelihood given the available observations. These weights are used to provide probability distribution functions (pdfs) for model projections. We develop a BMA framework for constructing probabilistic climate projections for spatially averaged variables from the NARCliM project. The first step in the procedure is smoothing model output in order to exclude the influence of internal climate variability. Our statistical model for model-observation residuals is a homoskedastic iid process. Comparison of RCM output with Australian Water Availability Project (AWAP) observations is used to determine model weights through Monte Carlo integration. Posterior pdfs of statistical parameters of the model-data residuals are obtained using Markov Chain Monte Carlo. The uncertainty in the properties of the model-data residuals is fully accounted for when constructing the projections. We present preliminary results of the BMA analysis of yearly maximum temperature for New South Wales state planning regions for the period 2060-2079.
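A toy sketch of the Bayesian-model-averaging weighting step: each model's weight is proportional to its likelihood given the observations, here computed under the homoskedastic iid Gaussian residual assumption mentioned above. The model outputs, observations and residual scale are invented; in the actual analysis the residual parameters would be inferred by MCMC.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
obs = rng.normal(25.0, 1.0, size=20)                                      # observed yearly maxima (toy)
model_runs = obs + rng.normal(0.0, [[0.5], [1.5], [3.0]], size=(3, 20))   # three downscaled model outputs

def log_likelihood(residuals, sigma):
    # homoskedastic iid Gaussian model for model-observation residuals
    return norm.logpdf(residuals, scale=sigma).sum()

sigma = 1.0                                        # assumed residual scale; MCMC would infer it
logL = np.array([log_likelihood(run - obs, sigma) for run in model_runs])
weights = np.exp(logL - logL.max())
weights /= weights.sum()                           # BMA weights, equal prior model probabilities assumed
print(np.round(weights, 3))

# weighted (BMA) projection combining the three model runs
bma_projection = weights @ model_runs
```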
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
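A compact genetic-algorithm sketch for choosing inputs: candidate input subsets are encoded as bit strings and scored by the validation error of a small regressor standing in for the neural-network approximator. Population size, crossover/mutation rates, the penalty term and the surrogate model are all illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 30))                            # candidate input parameters
y = X[:, 3] - 2 * X[:, 7] + 0.5 * X[:, 12] + rng.normal(0, 0.1, 400)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return -np.inf
    model = LinearRegression().fit(Xtr[:, mask], ytr)     # surrogate for the NN approximator
    mse = np.mean((model.predict(Xva[:, mask]) - yva) ** 2)
    return -mse - 0.01 * mask.sum()                       # penalise long input lists

pop = rng.random((40, X.shape[1])) < 0.2                  # initial population of bit masks
for generation in range(30):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]               # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        child ^= rng.random(X.shape[1]) < 0.02            # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected inputs:", np.flatnonzero(best))
```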
NASA Astrophysics Data System (ADS)
Iqbal, M.; Islam, A.; Hossain, A.; Mustaque, S.
2016-12-01
Multi-Criteria Decision Making (MCDM) is an advanced analytical method for deriving an appropriate result or decision in a multiple-criteria environment, and current research uses MCDM techniques as a systematic analytical process for reaching a logical decision among conflicting criteria. In addition, geospatial approaches (e.g. remote sensing and GIS) are another advanced technical means of collecting, processing and analysing various spatial data. GIS and remote sensing combined with MCDM techniques can form an effective platform for solving complex decision-making problems, and this combination has been used effectively for site selection in urban solid waste management. The Weighted Linear Combination (WLC) method is the most popular MCDM technique, and the Analytical Hierarchy Process (AHP) is another popular and consistent decision-making technique used worldwide. Consequently, the main objective of this study is to develop an AHP model as an MCDM technique, combined with a Geographic Information System (GIS), to select a suitable landfill site for urban solid waste management; AHP is used here as the MCDM tool to select the most suitable landfill location. To protect the urban environment in a sustainable way, municipal waste requires an appropriate landfill site chosen with the environmental, geological, social and technical aspects of the region in mind. An MCDM model is generated from five criterion classes related to environmental, geological, social and technical factors using the AHP method, and the resulting weights are input into GIS to produce the final suitability map for urban solid waste management. The final result shows that 12.2% of the total study area, corresponding to 22.89 km2, is suitable. The study area is Keraniganj sub-district of Dhaka district in Bangladesh, a densely populated area that currently has an unmanaged waste management system and, in particular, lacks suitable landfill sites for waste dumping.
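A small sketch of the AHP weighting step that typically underlies such GIS suitability models: a pairwise comparison matrix over the five criterion classes is reduced to priority weights via its principal eigenvector, with Saaty's consistency ratio as a sanity check. The comparison values below are invented, not the study's judgments.

```python
import numpy as np

# pairwise comparisons for five criterion classes (illustrative values only)
A = np.array([
    [1,   3,   5,   4,   7],
    [1/3, 1,   3,   2,   5],
    [1/5, 1/3, 1,   1/2, 2],
    [1/4, 1/2, 2,   1,   3],
    [1/7, 1/5, 1/2, 1/3, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # criterion weights for the GIS overlay

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = 1.12                                     # Saaty's random index for n = 5
print("weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```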
The long-term evolution of multilocus traits under frequency-dependent disruptive selection.
van Doorn, G Sander; Dieckmann, Ulf
2006-11-01
Frequency-dependent disruptive selection is widely recognized as an important source of genetic variation. Its evolutionary consequences have been extensively studied using phenotypic evolutionary models, based on quantitative genetics, game theory, or adaptive dynamics. However, the genetic assumptions underlying these approaches are highly idealized and, even worse, predict different consequences of frequency-dependent disruptive selection. Population genetic models, by contrast, enable genotypic evolutionary models, but traditionally assume constant fitness values. Only a minority of these models thus addresses frequency-dependent selection, and only a few of these do so in a multilocus context. An inherent limitation of these remaining studies is that they only investigate the short-term maintenance of genetic variation. Consequently, the long-term evolution of multilocus characters under frequency-dependent disruptive selection remains poorly understood. We aim to bridge this gap between phenotypic and genotypic models by studying a multilocus version of Levene's soft-selection model. Individual-based simulations and deterministic approximations based on adaptive dynamics theory provide insights into the underlying evolutionary dynamics. Our analysis uncovers a general pattern of polymorphism formation and collapse, likely to apply to a wide variety of genetic systems: after convergence to a fitness minimum and the subsequent establishment of genetic polymorphism at multiple loci, genetic variation becomes increasingly concentrated on a few loci, until eventually only a single polymorphic locus remains. This evolutionary process combines features observed in quantitative genetics and adaptive dynamics models, and it can be explained as a consequence of changes in the selection regime that are inherent to frequency-dependent disruptive selection. Our findings demonstrate that the potential of frequency-dependent disruptive selection to maintain polygenic variation is considerably smaller than previously expected.
Decision Support Model for Selection Technologies in Processing of Palm Oil Industrial Liquid Waste
NASA Astrophysics Data System (ADS)
Ishak, Aulia; Ali, Amir Yazid bin
2017-12-01
The palm oil industry continues to grow from year to year. The industry processes its raw material into crude palm oil (CPO) and palm kernel oil (PKO). Together, these two products account for about 30% of the raw material; the remaining 70% becomes palm oil waste, and the amount of waste will increase in line with the development of the palm oil industry. If this waste is not handled properly and effectively, it will contribute significantly to environmental damage, and industrial activities, from raw material handling to finished products, will disrupt the lives of people around the factory. Many alternative technologies are available for processing such waste, but a recurring problem is the difficulty of implementing the most appropriate technology. The purpose of this research is to develop a database of waste processing technologies, to identify qualitative and quantitative criteria for selecting among them, and to develop a Decision Support System (DSS) that can help make this decision. The method used to achieve this objective is to develop questionnaires to identify waste processing technologies and to populate an appropriate technology database. Data analysis is performed with the Analytic Hierarchy Process (AHP), and the model is built using MySQL software, which can be used as a tool in the evaluation and selection of palm oil mill waste processing technology.
NASA Astrophysics Data System (ADS)
Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad
2014-10-01
Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied and evaluated for predicting customers' credit status. The algorithm is improved, with the aim of adapting it to the credit scoring domain, by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of our new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.
Patient's decision making in selecting a hospital for elective orthopaedic surgery.
Moser, Albine; Korstjens, Irene; van der Weijden, Trudy; Tange, Huibert
2010-12-01
The admission to a hospital for elective surgery, like arthroplasty, can be planned ahead. The elective nature of arthroplasty and the growing public encouragement to select a hospital critically raise the issue of how patients actually make such decisions. The aim of this paper is to describe the decision-making process of selecting a hospital as experienced by people who underwent elective joint arthroplasty and to understand what factors influenced the decision-making process. Qualitative descriptive study with 18 participants who had a hip or knee replacement within the last 5 years. Data were gathered from eight individual interviews and four focus group interviews and analysed by content analysis. Three categories that influenced the selection of a hospital were revealed: information sources, criteria in decision making and decision-making styles within the GP-patient relationship. Various contextual aspects influenced the decision-making process. Most participants gave higher priority to the selection of a medical specialist than to the selection of a hospital. Selecting a hospital for arthroplasty is extremely complex. The decision-making process is highly individualized because patients have to consider and assimilate a diversity of aspects relevant to their specific situation. Our findings support the model of shared decision making, which indicates that general practitioners should be attuned to the distinct needs of each patient at various moments during the decision making, taking into account personal, medical and contextual factors. © 2010 Blackwell Publishing Ltd.
Everyone knows what is interesting: Salient locations which should be fixated
Masciocchi, Christopher Michael; Mihalas, Stefan; Parkhurst, Derrick; Niebur, Ernst
2010-01-01
Most natural scenes are too complex to be perceived instantaneously in their entirety. Observers therefore have to select parts of them and process these parts sequentially. We study how this selection and prioritization process is performed by humans at two different levels. One is the overt attention mechanism of saccadic eye movements in a free-viewing paradigm. The second is a conscious decision process in which we asked observers which points in a scene they considered the most interesting. We find in a very large participant population (more than one thousand) that observers largely agree on which points they consider interesting. Their selections are also correlated with the eye movement pattern of different subjects. Both are correlated with predictions of a purely bottom–up saliency map model. Thus, bottom–up saliency influences cognitive processes as far removed from the sensory periphery as in the conscious choice of what an observer considers interesting. PMID:20053088
Meirelles, S L C; Mokry, F B; Espasandín, A C; Dias, M A D; Baena, M M; de A Regitano, L C
2016-06-10
Genetic parameters and correlations for backfat thickness (BFT), rib eye area (REA), and body weight (BW) were estimated for Canchim beef cattle raised in natural pastures of Brazil. Data from 1648 animals were analyzed using a multi-trait (BFT, REA, and BW) animal model under a Bayesian approach. This model included the effects of contemporary group, age, and individual heterozygosity as covariates, as well as direct additive genetic and random residual effects. Heritabilities estimated for BFT (0.16), REA (0.50), and BW (0.44) indicated their potential for genetic improvement and response to selection processes. Furthermore, genetic correlations between BW and the remaining traits were high (P > 0.50), suggesting that selection for BW could improve REA and BFT. On the other hand, the genetic correlation between BFT and REA was low (P = 0.39 ± 0.17) and included considerable variation, suggesting that these traits can be jointly included as selection criteria without influencing each other. We found that REA and BFT, as measured by ultrasound, responded to the selection processes. Therefore, selection for yearling weight results in changes in REA and BFT.
Proposed standards for peer-reviewed publication of computer code
USDA-ARS?s Scientific Manuscript database
Computer simulation models are mathematical abstractions of physical systems. In the area of natural resources and agriculture, these physical systems encompass selected interacting processes in plants, soils, animals, or watersheds. These models are scientific products and have become important i...
Fixation Probability in a Haploid-Diploid Population.
Bessho, Kazuhiro; Otto, Sarah P
2017-01-01
Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright-Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. Copyright © 2017 by the Genetics Society of America.
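For orientation, the two kinds of approximation mentioned above can be compared numerically with the classical results they generalize. The sketch below uses Kimura's diffusion formula for a fully diploid Wright-Fisher population and the familiar branching-process approximation of roughly 2s; it is not the paper's haploid-diploid extension, and the parameter values are arbitrary.

```python
import numpy as np

def fixation_prob_diffusion(N, s, p):
    """Kimura's diffusion approximation for the fixation probability of a selected
    allele at initial frequency p in a diploid Wright-Fisher population (classical
    result, not the haploid-diploid extension developed in the paper)."""
    return (1 - np.exp(-4 * N * s * p)) / (1 - np.exp(-4 * N * s))

N, s = 10_000, 0.01
p0 = 1 / (2 * N)                       # a single new mutant copy
print("diffusion approximation:", fixation_prob_diffusion(N, s, p0))
print("branching-process approximation ~ 2s:", 2 * s)
```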
Natural selection in chemical evolution.
Fernando, Chrisantha; Rowe, Jonathan
2007-07-07
We propose that chemical evolution can take place by natural selection if a geophysical process is capable of heterotrophic formation of liposomes that grow at some base rate, divide by external agitation, and are subject to stochastic chemical avalanches, in the absence of nucleotides or any monomers capable of modular heredity. We model this process using a simple hill-climbing algorithm, and an artificial chemistry that is unique in exhibiting conservation of mass and energy in an open thermodynamic system. Selection at the liposome level results in the stabilization of rarely occurring molecular autocatalysts that either catalyse or are consumed in reactions that confer liposome level fitness; typically they contribute in parallel to an increasingly conserved intermediary metabolism. Loss of competing autocatalysts can sometimes be adaptive. Steady-state energy flux by the individual increases due to the energetic demands of growth, but also of memory, i.e. maintaining variations in the chemical network. Self-organizing principles such as those proposed by Kauffman, Fontana, and Morowitz have been hypothesized as an ordering principle in chemical evolution, rather than chemical evolution by natural selection. We reject those notions as either logically flawed or at best insufficient in the absence of natural selection. Finally, a finite population model without elitism shows the practical evolutionary constraints for achieving chemical evolution by natural selection in the lab.
TreePOD: Sensitivity-Aware Selection of Pareto-Optimal Decision Trees.
Muhlbacher, Thomas; Linhardt, Lorenz; Moller, Torsten; Piringer, Harald
2018-01-01
Balancing accuracy gains with other objectives such as interpretability is a key challenge when building decision trees. However, this process is difficult to automate because it involves know-how about the domain as well as the purpose of the model. This paper presents TreePOD, a new approach for sensitivity-aware model selection along trade-offs. TreePOD is based on exploring a large set of candidate trees generated by sampling the parameters of tree construction algorithms. Based on this set, visualizations of quantitative and qualitative tree aspects provide a comprehensive overview of possible tree characteristics. Along trade-offs between two objectives, TreePOD provides efficient selection guidance by focusing on Pareto-optimal tree candidates. TreePOD also conveys the sensitivities of tree characteristics to variations of selected parameters by extending the tree generation process with a full-factorial sampling. We demonstrate how TreePOD supports a variety of tasks involved in decision tree selection and describe its integration in a holistic workflow for building and selecting decision trees. For evaluation, we illustrate a case study for predicting critical power grid states, and we report qualitative feedback from domain experts in the energy sector. This feedback suggests that TreePOD enables users with and without statistical background to identify suitable decision trees confidently and efficiently.
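A brief sketch of the candidate-generation and Pareto-filtering idea: decision trees are sampled over a parameter grid, each scored by validation accuracy and leaf count (a simple stand-in for interpretability), and only non-dominated candidates are kept. The dataset, parameter ranges and objectives are placeholders, not TreePOD's actual pipeline.

```python
import numpy as np
from itertools import product
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

# sample candidate trees over a grid of construction parameters
candidates = []
for depth, min_leaf in product(range(2, 9), [1, 5, 10, 20, 50]):
    tree = DecisionTreeClassifier(max_depth=depth, min_samples_leaf=min_leaf,
                                  random_state=0).fit(Xtr, ytr)
    candidates.append((tree.score(Xva, yva), tree.get_n_leaves(), depth, min_leaf))

def pareto(cands):
    # keep candidates not dominated on (higher accuracy, fewer leaves)
    return sorted(c for c in cands
                  if not any(o[0] >= c[0] and o[1] <= c[1] and o != c for o in cands))

for acc, leaves, depth, min_leaf in pareto(candidates):
    print(f"acc={acc:.3f}  leaves={leaves:3d}  depth={depth}  min_leaf={min_leaf}")
```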
Validation of Western North America Models based on finite-frequency and ray theory imaging methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larmat, Carene; Maceira, Monica; Porritt, Robert W.
2015-02-02
We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, where the data selection, processing and reference models are the same.
Tigers on trails: occupancy modeling for cluster sampling.
Hines, J E; Nichols, J D; Royle, J A; MacKenzie, D I; Gopalaswamy, A M; Kumar, N Samba; Karanth, K U
2010-07-01
Occupancy modeling focuses on inference about the distribution of organisms over space, using temporal or spatial replication to allow inference about the detection process. Inference based on spatial replication strictly requires that replicates be selected randomly and with replacement, but the importance of these design requirements is not well understood. This paper focuses on an increasingly popular sampling design based on spatial replicates that are not selected randomly and that are expected to exhibit Markovian dependence. We develop two new occupancy models for data collected under this sort of design, one based on an underlying Markov model for spatial dependence and the other based on a trap response model with Markovian detections. We then simulated data under the model for Markovian spatial dependence and fit the data to standard occupancy models and to the two new models. Bias of occupancy estimates was substantial for the standard models, smaller for the new trap response model, and negligible for the new spatial process model. We also fit these models to data from a large-scale tiger occupancy survey recently conducted in Karnataka State, southwestern India. In addition to providing evidence of a positive relationship between tiger occupancy and habitat, model selection statistics and estimates strongly supported the use of the model with Markovian spatial dependence. This new model provides another tool for the decomposition of the detection process, which is sometimes needed for proper estimation and which may also permit interesting biological inferences. In addition to designs employing spatial replication, we note the likely existence of temporal Markovian dependence in many designs using temporal replication. The models developed here will be useful either directly, or with minor extensions, for these designs as well. We believe that these new models represent important additions to the suite of modeling tools now available for occupancy estimation in conservation monitoring. More generally, this work represents a contribution to the topic of cluster sampling for situations in which there is a need for specific modeling (e.g., reflecting dependence) for the distribution of the variable(s) of interest among subunits.
Targeted versus statistical approaches to selecting parameters for modelling sediment provenance
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick
2017-04-01
One effective field-based approach to modelling sediment provenance is the source fingerprinting technique. Arguably, one of the most important steps for this approach is selecting the appropriate suite of parameters or fingerprints used to model source contributions. Accordingly, approaches to selecting parameters for sediment source fingerprinting will be reviewed. Thereafter, opportunities and limitations of these approaches and some future research directions will be presented. For properties to be effective tracers of sediment, they must discriminate between sources whilst behaving conservatively. Conservative behavior is characterized by constancy in sediment properties, where the properties of sediment sources remain constant, or at the very least, any variation in these properties should occur in a predictable and measurable way. Therefore, properties selected for sediment source fingerprinting should remain constant through sediment detachment, transportation and deposition processes, or vary in a predictable and measurable way. One approach to selecting conservative properties for sediment source fingerprinting is to identify targeted tracers, such as caesium-137, that provide specific source information (e.g. surface versus subsurface origins). A second approach is to use statistical tests to select an optimal suite of conservative properties capable of modelling sediment provenance. In general, statistical approaches use a combination of discrimination statistics (e.g. the Kruskal-Wallis H-test, Mann-Whitney U-test) and parameter selection statistics (e.g. Discriminant Function Analysis or Principal Component Analysis). The challenge is that modelling sediment provenance is often not straightforward and there is increasing debate in the literature surrounding the most appropriate approach to selecting elements for modelling. Moving forward, it would be beneficial if researchers test their results with multiple modelling approaches, artificial mixtures, and multiple lines of evidence to provide secondary support to their initial modelling results. Indeed, element selection can greatly impact modelling results and having multiple lines of evidence will help provide confidence when modelling sediment provenance.
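A short sketch of the statistical screening step mentioned above: each candidate tracer is tested with a Kruskal-Wallis H-test for discrimination among source groups, and only tracers that pass would move on to DFA/PCA-style selection. The source-group data are simulated, not field measurements.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(4)
# simulated tracer concentrations for three sediment source groups (30 samples x 3 tracers each)
sources = {
    "surface":    rng.normal([10, 5, 200], [1, 1, 20], size=(30, 3)),
    "subsurface": rng.normal([10, 8, 150], [1, 1, 20], size=(30, 3)),
    "channel":    rng.normal([10, 7, 180], [1, 1, 20], size=(30, 3)),
}
tracers = ["tracer_A", "tracer_B", "tracer_C"]

for j, name in enumerate(tracers):
    groups = [samples[:, j] for samples in sources.values()]
    h, p = kruskal(*groups)
    verdict = "discriminates" if p < 0.05 else "does not discriminate"
    print(f"{name}: H = {h:.1f}, p = {p:.3g} -> {verdict} between sources")
```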
Three probes for diagnosing photochemical dynamics are presented and applied to specialized ambient surface-level observations and to a numerical photochemical model to better understand rates of production and other process information in the atmosphere and in the model. Howeve...
Using the Modification Index and Standardized Expected Parameter Change for Model Modification
ERIC Educational Resources Information Center
Whittaker, Tiffany A.
2012-01-01
Model modification is oftentimes conducted after discovering a badly fitting structural equation model. During the modification process, the modification index (MI) and the standardized expected parameter change (SEPC) are 2 statistics that may be used to aid in the selection of parameters to add to a model to improve the fit. The purpose of this…
Selective laser melting of Inconel super alloy-a review
NASA Astrophysics Data System (ADS)
Karia, M. C.; Popat, M. A.; Sangani, K. B.
2017-07-01
Additive manufacturing is a relatively young technology that uses the principle of layer-by-layer addition of material in solid, liquid or powder form to develop a component or product. The quality of additively manufactured parts is one of the challenges to be addressed, and researchers are continuously working at various levels of additive manufacturing technologies. One of the significant powder bed processes for metals is Selective Laser Melting (SLM). Laser-based processes are attracting increasing attention from researchers and industry, and the potential of this technique is yet to be fully explored. Due to its very high strength and creep resistance, Inconel is an extensively used nickel-based superalloy for manufacturing components for the aerospace, automobile and nuclear industries. Due to its low aluminum and titanium content, it also exhibits good fabricability. The alloy is therefore ideally suited to selective laser melting for manufacturing intricate components with high strength requirements. The selection of a suitable manufacturing process for a specific component depends on geometrical complexity, production quantity, cost and required strength. Numerous researchers are working on various aspects such as metallurgical and microstructural investigations, mechanical properties, geometrical accuracy, the effects of process parameters and their optimization, and mathematical modelling. The present paper presents a comprehensive overview of the selective laser melting process for the Inconel group of alloys.
Intercohort density dependence drives brown trout habitat selection
NASA Astrophysics Data System (ADS)
Ayllón, Daniel; Nicola, Graciela G.; Parra, Irene; Elvira, Benigno; Almodóvar, Ana
2013-01-01
Habitat selection can be viewed as an emergent property of the quality and availability of habitat, but also of the number of individuals and the way they compete for its use. Consequently, habitat selection can change across years due to fluctuating resources or to changes in population numbers. However, habitat selection predictive models often do not account for ecological dynamics, especially density-dependent processes. In stage-structured populations, the strength of density-dependent interactions between individuals of different age classes can exert a profound influence on population trajectories and evolutionary processes. In this study, we aimed to assess the effects of fluctuating densities of both older and younger competing life stages on the habitat selection patterns (described as univariate and multivariate resource selection functions) of young-of-the-year, juvenile and adult brown trout Salmo trutta. We observed that all age classes were selective in habitat choice but changed their selection patterns across years consistently with variations in the densities of older, but not of younger, age classes. Trout of a given age increased selectivity for positions highly selected by older individuals when the density of those older individuals decreased, but this pattern did not hold when the density of younger age classes varied. This suggests that younger individuals are dominated by older ones but can expand their range of selected habitats when the density of competitors decreases, while older trout do not seem to consider the density of younger individuals when distributing themselves, even though the younger individuals can negatively affect their final performance. Since these results may entail critical implications for conservation and management practices based on habitat selection models, further research should involve a wider range of river typologies and/or longer time frames to fully understand the patterns of and the mechanisms underlying the operation of density dependence on brown trout habitat selection.
Meta-analysis using Dirichlet process.
Muthukumarana, Saman; Tiwari, Ram C
2016-02-01
This article develops a Bayesian approach for meta-analysis using the Dirichlet process. The key aspect of the Dirichlet process in meta-analysis is the ability to assess evidence of statistical heterogeneity, or variation in the underlying effects across studies, while relaxing the distributional assumptions. We assume that the study effects are generated from a Dirichlet process. Under a Dirichlet process model, the study effect parameters have support on a discrete space and enable borrowing of information across studies while facilitating clustering among studies. We illustrate the proposed method by applying it to a dataset on the Program for International Student Assessment on 30 countries. Results from the data analysis, simulation studies, and the log pseudo-marginal likelihood model selection procedure indicate that the Dirichlet process model performs better than conventional alternative methods. © The Author(s) 2012.
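A minimal sketch of how study effects can be drawn from a Dirichlet process via the truncated stick-breaking construction, illustrating the discrete support and clustering behaviour the abstract refers to; the concentration parameter, base distribution and truncation level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n_studies, truncation = 1.0, 30, 50

# truncated stick-breaking construction of a draw G ~ DP(alpha, N(0, 1))
betas = rng.beta(1.0, alpha, size=truncation)
sticks = betas * np.cumprod(np.concatenate([[1.0], 1 - betas[:-1]]))
atoms = rng.normal(0.0, 1.0, size=truncation)        # atoms drawn from the base distribution

# study effects sampled from the discrete measure G: ties between studies induce clustering
effects = rng.choice(atoms, size=n_studies, p=sticks / sticks.sum())
print("distinct effect values across 30 studies:", np.unique(effects).size)
```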
Jørgensen, Søren; Dau, Torsten
2011-09-01
A model for predicting the intelligibility of processed noisy speech is proposed. The speech-based envelope power spectrum model has a structure similar to that of the model of Ewert and Dau [(2000). J. Acoust. Soc. Am. 108, 1181-1196], developed to account for modulation detection and masking data. The model estimates the speech-to-noise envelope power ratio, SNR(env), at the output of a modulation filterbank and relates this metric to speech intelligibility using the concept of an ideal observer. Predictions were compared to data on the intelligibility of speech presented in stationary speech-shaped noise. The model was further tested in conditions with noisy speech subjected to reverberation and spectral subtraction. Good agreement between predictions and data was found in all cases. For spectral subtraction, an analysis of the model's internal representation of the stimuli revealed that the predicted decrease of intelligibility was caused by the estimated noise envelope power exceeding that of the speech. The classical concept of the speech transmission index fails in this condition. The results strongly suggest that the signal-to-noise ratio at the output of a modulation frequency selective process provides a key measure of speech intelligibility. © 2011 Acoustical Society of America.
Children's selective trust decisions: rational competence and limiting performance factors.
Hermes, Jonas; Behne, Tanya; Bich, Anna Elisa; Thielert, Christa; Rakoczy, Hannes
2018-03-01
Recent research has amply documented that even preschoolers learn selectively from others, preferring, for example, reliable over unreliable and competent over incompetent models. It remains unclear, however, what the cognitive foundations of such selective learning are, in particular, whether it builds on rational inferences or on less sophisticated processes. The current study, therefore, was designed to test directly the possibility that children are in principle capable of selective learning based on rational inference, yet revert to simpler strategies such as global impression formation under certain circumstances. Preschoolers (N = 75) were shown pairs of models that either differed in their degree of competence within one domain (strong vs. weak or knowledgeable vs. ignorant) or were both highly competent, but in different domains (e.g., strong vs. knowledgeable model). In the test trials, children chose between the models for strength- or knowledge-related tasks. The results suggest that, in fact, children are capable of rational inference-based selective trust: when both models were highly competent, children preferred the model with the competence most predictive and relevant for a given task. However, when choosing between two models that differed in competence on one dimension, children reverted to halo-style wide generalizations and preferred the competent models for both relevant and irrelevant tasks. These findings suggest that the rational strategies for selective learning, that children master in principle, can get masked by various performance factors. © 2017 John Wiley & Sons Ltd.
Rival approaches to mathematical modelling in immunology
NASA Astrophysics Data System (ADS)
Andrew, Sarah M.; Baker, Christopher T. H.; Bocharov, Gennady A.
2007-08-01
In order to formulate quantitatively correct mathematical models of the immune system, one requires an understanding of immune processes and familiarity with a range of mathematical techniques. Selection of an appropriate model requires a number of decisions to be made, including a choice of the modelling objectives, strategies and techniques and the types of model considered as candidate models. The authors adopt a multidisciplinary perspective.
Selection and study performance: comparing three admission processes within one medical school.
Schripsema, Nienke R; van Trigt, Anke M; Borleffs, Jan C C; Cohen-Schotanus, Janke
2014-12-01
This study was conducted to: (i) analyse whether students admitted to one medical school based on top pre-university grades, a voluntary multifaceted selection process, or lottery, respectively, differed in study performance; (ii) examine whether students who were accepted in the multifaceted selection process outperformed their rejected peers, and (iii) analyse whether participation in the multifaceted selection procedure was related to performance. We examined knowledge test and professionalism scores, study progress and dropout in three cohorts of medical students admitted to the University of Groningen, the Netherlands in 2009, 2010 and 2011 (n = 1055). We divided the lottery-admitted group into, respectively, students who had not participated and students who had been rejected in the multifaceted selection process. We used ancova modelling, logistic regression and Bonferroni post hoc multiple-comparison tests and controlled for gender and cohort. The top pre-university grade group achieved higher knowledge test scores and more Year 1 course credits than all other groups (p < 0.05). This group received the highest possible professionalism score more often than the lottery-admitted group that had not participated in the multifaceted selection process (p < 0.05). The group of students accepted in the multifaceted selection process obtained higher written test scores than the lottery-admitted group that had not participated (p < 0.05) and achieved the highest possible professionalism score more often than both lottery-admitted groups. The lottery-admitted group that had not participated in the multifaceted selection process earned fewer Year 1 and 2 course credits than all other groups (p < 0.05). Dropout rates differed among the groups (p < 0.05), but correction for multiple comparisons rendered all pairwise differences non-significant. A top pre-university grade point average was the best predictor of performance. For so-called non-academic performance, the multifaceted selection process was efficient in identifying applicants with suitable skills. Participation in the multifaceted selection procedure seems to be predictive of higher performance. Further research is needed to assess whether our results are generalisable to other medical schools. © 2014 John Wiley & Sons Ltd.
Process-Improvement Cost Model for the Emergency Department.
Dyas, Sheila R; Greenfield, Eric; Messimer, Sherri; Thotakura, Swati; Gholston, Sampson; Doughty, Tracy; Hays, Mary; Ivey, Richard; Spalding, Joseph; Phillips, Robin
2015-01-01
The objective of this report is to present a simplified, activity-based costing approach for hospital emergency departments (EDs) to use with Lean Six Sigma cost-benefit analyses. The cost model complexity is reduced by removing diagnostic and condition-specific costs, thereby revealing the underlying process activities' cost inefficiencies. Examples are provided for evaluating the cost savings from reducing discharge delays and the cost impact of keeping patients in the ED (boarding) after the decision to admit has been made. The process-improvement cost model provides a needed tool in selecting, prioritizing, and validating Lean process-improvement projects in the ED and other areas of patient care that involve multiple dissimilar diagnoses.
Research on manufacturing service behavior modeling based on block chain theory
NASA Astrophysics Data System (ADS)
Zhao, Gang; Zhang, Guangli; Liu, Ming; Yu, Shuqin; Liu, Yali; Zhang, Xu
2018-04-01
According to the attribute characteristics of the processing craft, manufacturing service behavior is divided into service, basic, process and resource attributes, and an attribute information model of the manufacturing service is established. The manufacturing service behavior information is divided into public and private domains. Additionally, blockchain technology is introduced, and an information model of manufacturing services based on blockchain principles is established, which solves the problem of sharing and securing processing-behavior information and ensures that data are not tampered with. Based on key-pair verification, a selective publishing mechanism for manufacturing information is established, achieving traceability of product data and guaranteeing processing quality.
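A toy sketch of the tamper-evidence idea behind such a blockchain-based information model: each processing record embeds the hash of the previous record, so any later modification breaks the chain. Real deployments would add digital signatures (the key-pair verification mentioned above) and a consensus mechanism, both omitted here; the record fields are invented.

```python
import hashlib
import json

def record_hash(record):
    # deterministic hash of a processing record (keys sorted for stability)
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev_hash": prev}
    block["hash"] = record_hash({"payload": payload, "prev_hash": prev})
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        expected = record_hash({"payload": block["payload"], "prev_hash": block["prev_hash"]})
        if block["hash"] != expected:
            return False                                  # record was altered after the fact
        if i and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                                  # chain linkage is broken
    return True

chain = []
append(chain, {"step": "rough milling", "machine": "M-01"})
append(chain, {"step": "heat treatment", "furnace": "F-07"})
print(verify(chain))                        # True
chain[0]["payload"]["machine"] = "M-99"     # tamper with an earlier record
print(verify(chain))                        # False
```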
Visual analytics in cheminformatics: user-supervised descriptor selection for QSAR methods.
Martínez, María Jimena; Ponzoni, Ignacio; Díaz, Mónica F; Vazquez, Gustavo E; Soto, Axel J
2015-01-01
The design of QSAR/QSPR models is a challenging problem, in which the selection of the most relevant descriptors constitutes a key step of the process. Several feature selection methods that address this step concentrate on statistical associations among descriptors and target properties, whereas chemical knowledge is left out of the analysis. For this reason, the interpretability and generality of the QSAR/QSPR models obtained by these feature selection methods are drastically affected. Therefore, an approach for integrating domain experts' knowledge into the selection process is needed to increase confidence in the final set of descriptors. In this paper we propose a software tool, named Visual and Interactive DEscriptor ANalysis (VIDEAN), that combines statistical methods with interactive visualizations for choosing a set of descriptors for predicting a target property. Domain expertise can be added to the feature selection process by means of an interactive visual exploration of data, aided by statistical tools and metrics based on information theory. Coordinated visual representations are presented for capturing different relationships and interactions among descriptors, target properties and candidate subsets of descriptors. The competencies of the proposed software were assessed through different scenarios. These scenarios reveal how an expert can use this tool to choose one subset of descriptors from a group of candidate subsets, or how to modify existing descriptor subsets and even incorporate new descriptors according to his or her own knowledge of the target property. The reported experiences showed the suitability of our software for selecting sets of descriptors with low cardinality, high interpretability, low redundancy and high statistical performance in a visual exploratory way. Therefore, it is possible to conclude that the resulting tool allows the integration of a chemist's expertise into the descriptor selection process with low cognitive effort, in contrast to the alternative of an ad hoc manual analysis of the selected descriptors. Graphical abstract: VIDEAN allows the visual analysis of candidate subsets of descriptors for QSAR/QSPR. In the two panels on the top, users can interactively explore numerical correlations as well as co-occurrences in the candidate subsets through two interactive graphs.
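A small sketch of the kind of information-theoretic screening that can sit alongside the visual analysis: descriptors are ranked by mutual information with the target property and the top candidates are checked for pairwise redundancy. The data, descriptor count and thresholds are placeholders, not VIDEAN's internals.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(6)
n_mol, n_desc = 300, 12
descriptors = rng.normal(size=(n_mol, n_desc))                 # candidate molecular descriptors
target = descriptors[:, 0] + 0.5 * descriptors[:, 3] + rng.normal(0, 0.2, n_mol)

mi = mutual_info_regression(descriptors, target, random_state=0)
ranking = np.argsort(mi)[::-1]
print("descriptors ranked by MI with the property:", ranking)

# flag redundant pairs among the top-ranked descriptors via pairwise correlation
top = ranking[:5]
corr = np.corrcoef(descriptors[:, top], rowvar=False)
redundant = [(int(top[i]), int(top[j]))
             for i in range(len(top)) for j in range(i + 1, len(top))
             if abs(corr[i, j]) > 0.8]
print("highly correlated (redundant) pairs:", redundant)
```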
Interactive model evaluation tool based on IPython notebook
NASA Astrophysics Data System (ADS)
Balemans, Sophie; Van Hoey, Stijn; Nopens, Ingmar; Seuntjes, Piet
2015-04-01
In hydrological modelling, some form of parameter optimization is usually performed. This can be the selection of a single best parameter set, a split into behavioural and non-behavioural parameter sets based on a selected threshold, or a posterior parameter distribution derived with a formal Bayesian approach. The selection of the criterion to measure the goodness of fit (likelihood or any objective function) is an essential step in all of these methodologies and will affect the final selected parameter subset. Moreover, the discriminative power of the objective function also depends on the time period used. In practice, the optimization process is an iterative procedure. As such, in the course of the modelling process, an increasing number of simulations is performed. However, the information carried by these simulation outputs is not always fully exploited. In this respect, we developed and present an interactive environment that enables the user to intuitively evaluate the model performance. The aim is to explore the parameter space graphically and to visualize the impact of the selected objective function on model behaviour. First, a set of model simulation results is loaded along with the corresponding parameter sets and a data set of the same variable as the model outcome (mostly discharge). The ranges of the loaded parameter sets define the parameter space. The user selects the two parameters to be visualised. Furthermore, an objective function and a time period of interest need to be selected. Based on this information, a two-dimensional parameter response surface is created, which shows a scatter plot of the parameter combinations and assigns a color scale corresponding to the goodness of fit of each parameter combination. Finally, a slider is available to change the color mapping of the points: the slider provides a threshold to exclude non-behavioural parameter sets, and the color scale is only applied to the remaining parameter sets. As such, by interactively changing the settings and interpreting the graph, the user gains insight into the model's structural behaviour. Moreover, a more deliberate choice of objective function and periods of high information content can be identified. The environment is written in an IPython notebook and uses the interactive functions provided by the IPython community. As such, the power of the IPython notebook as a development environment for scientific computing is illustrated (Shen, 2014).
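A stripped-down sketch of the two-dimensional parameter response surface described above: sampled parameter combinations are scattered in the plane of two selected parameters and coloured by the objective function, with a behavioural threshold masking poor sets. In the notebook this would be wrapped in ipywidgets sliders; the simulation results and objective function here are synthetic stand-ins.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n_sim = 2000
params = rng.uniform(0.0, 1.0, size=(n_sim, 4))           # sampled parameter sets
# stand-in objective function score per simulation (a real run would compute it from discharge)
objective = 1 - ((params[:, 0] - 0.6) ** 2 + (params[:, 2] - 0.3) ** 2) \
            - 0.05 * rng.normal(size=n_sim)

p_x, p_y = 0, 2                                           # the two parameters to visualise
threshold = 0.7                                           # behavioural threshold (slider in the tool)
behavioural = objective >= threshold

fig, ax = plt.subplots()
ax.scatter(params[~behavioural, p_x], params[~behavioural, p_y],
           c="lightgrey", s=8, label="non-behavioural")
sc = ax.scatter(params[behavioural, p_x], params[behavioural, p_y],
                c=objective[behavioural], s=12, label="behavioural")
fig.colorbar(sc, label="objective function")
ax.set_xlabel(f"parameter {p_x}")
ax.set_ylabel(f"parameter {p_y}")
ax.legend()
plt.show()
```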
NASA Astrophysics Data System (ADS)
Jahangoshai Rezaee, Mustafa; Yousefi, Samuel; Hayati, Jamileh
2017-06-01
Supplier selection and allocation of optimal order quantities are two of the most important processes in closed-loop supply chains (CLSC) and reverse logistics (RL). Providing high-quality raw material is considered a basic requirement for a manufacturer to produce popular products and achieve a larger market share. On the other hand, in a competitive environment, suppliers have to offer customers incentives such as discounts and enhance the quality of their products to compete with other manufacturers. Therefore, in this study, a model is presented for CLSC optimization, efficient supplier selection, and order allocation considering a quantity discount policy. It is modeled using multi-objective programming based on an integrated simultaneous data envelopment analysis-Nash bargaining game. Objective functions maximizing profit and efficiency and minimizing defect and delivery delay rates are taken into account. Besides supplier selection, the suggested model selects refurbishing sites and determines the number of products and parts in each sector of the network. The suggested model is solved using the global criteria method. Furthermore, a numerical example based on related studies is examined to validate the model.
NASA Astrophysics Data System (ADS)
Lanuru, Mahatma; Mashoreng, S.; Amri, K.
2018-03-01
The success of seagrass transplantation depends largely on site selection and suitable transplantation methods. The main objective of this study is to develop and use a site-selection model to identify the suitability of sites for seagrass (Enhalus acoroides) transplantation. Model development was based on the physical and biological characteristics of the transplantation site. The site-selection process is divided into 3 phases: Phase I identifies potential seagrass habitat using available knowledge and removes unsuitable sites before the transplantation test is performed. Phase II involves field assessment and a transplantation test of the best-scoring areas identified in Phase I. Phase III is the final calculation of the TSI (Transplant Suitability Index), based on results from Phases I and II. The model was used to identify the suitability of sites for seagrass transplantation on the west coast of South Sulawesi (3 sites at Labakkang Coast, 3 sites at Awerange Bay, and 3 sites at Lale-Lae Island). Of the 9 sites, two were predicted by the site-selection model to be the most suitable for seagrass transplantation: Site II at Labakkang Coast and Site III at Lale-Lae Island.
Cox process representation and inference for stochastic reaction-diffusion processes
NASA Astrophysics Data System (ADS)
Schnoerr, David; Grima, Ramon; Sanguinetti, Guido
2016-05-01
Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
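A small sketch of simulating an inhomogeneous Poisson process by thinning, a basic building block of the Cox-process view: a Cox process arises when the intensity function is itself random, as in the randomised intensity below. The intensity form and parameters are arbitrary, and this is far simpler than the spatio-temporal inference framework the paper develops.

```python
import numpy as np

rng = np.random.default_rng(8)

def thin_poisson(intensity, t_max, lam_max):
    """Simulate an inhomogeneous Poisson process on [0, t_max] by thinning.

    intensity : callable giving lambda(t); must satisfy lambda(t) <= lam_max.
    """
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)        # candidate event from a homogeneous process
        if t > t_max:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:  # accept with probability lambda(t) / lam_max
            events.append(t)

# a Cox process: the intensity itself is random (random amplitude and phase)
amp, phase = rng.uniform(5, 15), rng.uniform(0, 2 * np.pi)
intensity = lambda t: amp * (1 + np.sin(2 * np.pi * t / 10 + phase)) / 2
events = thin_poisson(intensity, t_max=100.0, lam_max=amp)
print(f"{events.size} events simulated")
```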
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
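For reference, the information criteria used for model selection take the generic form sketched below; the log-likelihood values are invented numbers standing in for maximized likelihoods of two-, three- and four-state models fitted to the same photon trajectory, and the parameter counts assume a standard MMPP parameterisation.

```python
import numpy as np

def aic(logL, k):
    return 2 * k - 2 * logL

def bic(logL, k, n):
    return k * np.log(n) - 2 * logL

n_photons = 2_000                                  # number of photons in the trajectory
# invented maximized log-likelihoods and parameter counts (m intensities + m(m-1) switching rates)
models = {"2-state": (-10512.3, 4), "3-state": (-10466.8, 9), "4-state": (-10464.1, 16)}

for name, (logL, k) in models.items():
    print(f"{name}: AIC = {aic(logL, k):.1f}, BIC = {bic(logL, k, n_photons):.1f}")
# the model with the smallest criterion value is selected; BIC penalises extra states more heavily
```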
He, Xingrong; Yang, Yongqiang; Wu, Weihui; Wang, Di; Ding, Huanwen; Huang, Weihong
2010-06-01
In order to simplify distal femoral comminuted fracture surgery and improve the accuracy with which the fragments are reset, a surgical orienting model was designed from computed tomography scan data and the three-dimensional reconstructed image. Using the DiMetal-280 selective laser melting rapid prototyping system, the orienting model was fabricated in 316L stainless steel, with processing parameters optimized through an orthogonal experiment. Direct manufacturing of the orienting model by selective laser melting showed clear advantages over the conventional approach in speed, profile precision and dimensional accuracy. The model was applied in an actual thighbone replacement operation and worked well. The successful development of the model provides a new method for the automatic manufacture of customized surgical models, laying a foundation for more clinical applications in the future.
ERIC Educational Resources Information Center
Moser, Gene W.
Reported is one of a series of investigations of the Project on an Information Memory Model. This study was done to test an information memory model for identifying the unit of information structure involved in task cognitions by humans. Four groups of 30 randomly selected subjects (ages 7, 9, 11 and 15 years) performed a sorting task of 14…
Ball, B Hunter; Aschenbrenner, Andrew J
2017-06-09
Event-based prospective memory (PM) refers to relying on environmental cues to trigger retrieval of a deferred action plan from long-term memory. Considerable research has demonstrated PM declines with increased age. Despite efforts to better characterize the attentional processes that underlie these decrements, the majority of research has relied on measures of central tendency to inform theoretical accounts of PM that may not entirely capture the underlying dynamics involved in allocating attention to intention-relevant information. The purpose of the current study was to examine the utility of the diffusion model to better understand the cognitive processes underlying age-related differences in PM. Results showed that emphasizing the importance of the PM intention increased cue detection selectively for older adults. Standard cost analyses revealed that PM importance increased mean response times and accuracy, but not differentially for young and older adults. Consistent with this finding, diffusion model analyses demonstrated that PM importance increased response caution as evidenced by increased boundary separation. However, the selective benefit in cue detection for older adults may reflect peripheral target-checking processes as indicated by changes in nondecision time. These findings highlight the use of modeling techniques to better characterize the processes underlying the relations among aging, attention, and PM.
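A brief sketch of the drift-diffusion process underlying such analyses: evidence accumulates noisily between two boundaries, boundary separation governs response caution, and non-decision time is added afterwards. The parameter values are arbitrary illustrations, not estimates from the study, and real fitting would use a dedicated estimation routine rather than simulation.

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_ddm(drift, boundary, ndt, n_trials=1000, dt=0.001, noise=1.0):
    """Simulate choices and RTs from a simple two-boundary diffusion model."""
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = boundary / 2.0, 0.0                 # start midway between the boundaries
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + ndt)                        # add non-decision time
        choices.append(x >= boundary)              # upper boundary = correct response
    return np.array(rts), np.array(choices)

# wider boundary separation ~ more cautious responding (e.g. under importance emphasis)
for a in (0.8, 1.4):
    rts, correct = simulate_ddm(drift=1.0, boundary=a, ndt=0.3)
    print(f"boundary={a}: accuracy={correct.mean():.2f}, mean RT={rts.mean():.2f}s")
```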
Modernizing Selection and Promotion Procedures in the State Employment Security Service Agency.
ERIC Educational Resources Information Center
Derryck, Dennis A.; Leyes, Richard
The purpose of this feasibility study was to discover the types of selection and promotion models, strategies, and processes that must be employed if current State Employment Security Service Agency selection practices are to be made more directly relevant to the various populations currently being served. Specifically, the study sought to…
Propensity Score Estimation with Data Mining Techniques: Alternatives to Logistic Regression
ERIC Educational Resources Information Center
Keller, Bryan S. B.; Kim, Jee-Seon; Steiner, Peter M.
2013-01-01
Propensity score analysis (PSA) is a methodological technique which may correct for selection bias in a quasi-experiment by modeling the selection process using observed covariates. Because logistic regression is well understood by researchers in a variety of fields and easy to implement in a number of popular software packages, it has…
Research on Correlation between Vehicle Cycle and Engine Cycle in Heavy-duty commercial vehicle
NASA Astrophysics Data System (ADS)
Lin, Chen; Zhong, Wang; Shuai, Liu
2017-12-01
In order to study the correlation between vehicle cycle and engine cycle in heavy commercial vehicles, a conversion model from vehicle cycle to engine cycle was constructed based on vehicle powertrain theory and the shift strategy, and verified on a diesel truck. The results show that the model reproduces engine operation with good rationality and reliability. During high-speed acceleration, differences in the model's gear selection lead to deviations from the actual behavior. Compared with the drum (chassis dynamometer) test, the engine speed distribution obtained from the model is shifted to the right, corresponding to the selection of a lower gear. Gear selection therefore has a strong influence on the model.
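A minimal kinematic sketch of the kind of vehicle-cycle-to-engine-cycle conversion described above: vehicle speed is mapped to engine speed through the gearbox and final-drive ratios, with a simple rule standing in for the shift strategy. The wheel radius, gear ratios, and shift threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

# Illustrative heavy-truck parameters (assumptions, not values from the paper)
WHEEL_RADIUS = 0.5                                # dynamic wheel radius [m]
FINAL_DRIVE = 3.7                                 # rear-axle ratio
GEAR_RATIOS = [7.0, 4.5, 3.0, 2.1, 1.5, 1.0]      # gearbox ratios, 1st..6th
MIN_ENGINE_RPM = 1100.0                           # simple shift strategy: stay above this speed

def engine_speed(v_kmh, gear):
    """Engine speed [rpm] for a given vehicle speed and gear (pure kinematics)."""
    v = v_kmh / 3.6                               # m/s
    wheel_rpm = v / (2 * np.pi * WHEEL_RADIUS) * 60
    return wheel_rpm * FINAL_DRIVE * GEAR_RATIOS[gear]

def select_gear(v_kmh):
    """Pick the highest gear that keeps the engine above the lower shift limit."""
    for gear in reversed(range(len(GEAR_RATIOS))):
        rpm = engine_speed(v_kmh, gear)
        if rpm >= MIN_ENGINE_RPM:
            return gear + 1, rpm                  # report the gear as 1-based
    return 1, engine_speed(v_kmh, 0)

# Convert a fragment of a vehicle cycle (speeds in km/h) into an engine-speed trace
vehicle_cycle = [10, 20, 35, 50, 65, 80]
for v in vehicle_cycle:
    gear, rpm = select_gear(v)
    print(f"{v:3d} km/h -> gear {gear}, {rpm:6.0f} rpm")
```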
Process-driven selection of information systems for healthcare
NASA Astrophysics Data System (ADS)
Mills, Stephen F.; Yeh, Raymond T.; Giroir, Brett P.; Tanik, Murat M.
1995-05-01
Integration of networking and data management technologies such as PACS, RIS and HIS into a healthcare enterprise in a clinically acceptable manner is a difficult problem. Data within such a facility are generally managed via a combination of manual hardcopy systems and proprietary, special-purpose data processing systems. Process modeling techniques have been successfully applied to engineering and manufacturing enterprises, but have not generally been applied to service-based enterprises such as healthcare facilities. The use of process modeling techniques can provide guidance for the placement, configuration and usage of PACS and other informatics technologies within the healthcare enterprise, and thus improve the quality of healthcare. Initial process modeling activities conducted within the Pediatric ICU at Children's Medical Center in Dallas, Texas are described. The ongoing development of a full enterprise- level model for the Pediatric ICU is also described.
Estimating and mapping ecological processes influencing microbial community assembly
Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.
2015-01-01
Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725
NASA Astrophysics Data System (ADS)
Bobojć, Andrzej; Drożyner, Andrzej; Rzepecka, Zofia
2017-04-01
The work includes the comparison of performance of selected geopotential models in the dynamic orbit estimation of the satellite of the Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission. This was realized by fitting estimated orbital arcs to the official centimeter-accuracy GOCE kinematic orbit which is provided by the European Space Agency. The Cartesian coordinates of kinematic orbit were treated as observations in the orbit estimation. The initial satellite state vector components were corrected in an iterative process with respect to the J2000.0 inertial reference frame using the given geopotential model, the models describing the remaining gravitational perturbations and the solar radiation pressure. Taking the obtained solutions into account, the RMS values of orbital residuals were computed. These residuals result from the difference between the determined orbit and the reference one - the GOCE kinematic orbit. The performance of selected gravity models was also determined using various orbital arc lengths. Additionally, the RMS fit values were obtained for some gravity models truncated at given degree and order of spherical harmonic coefficients. The advantage of using the kinematic orbit is its independence from any a priori dynamical models. For the research such GOCE-independent gravity models as HUST-Grace2016s, ITU_GRACE16, ITSG-Grace2014s, ITSG-Grace2014k, GGM05S, Tongji-GRACE01, ULUX_CHAMP2013S, ITG-GRACE2010S, EIGEN-51C, EIGEN5S, EGM2008 and EGM96 were adopted.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
Veneri, Giacomo; Federico, Antonio; Rufa, Alessandra
2014-01-01
Attention allows us to selectively process the vast amount of information with which we are confronted, prioritizing some aspects of information and ignoring others by focusing on a certain location or aspect of the visual scene. Selective attention is guided by two cognitive mechanisms: saliency of the image (bottom up) and endogenous mechanisms (top down). These two mechanisms interact to direct attention and plan eye movements; then, the movement profile is sent to the motor system, which must constantly update the command needed to produce the desired eye movement. A new approach is described here to study how the eye motor control could influence this selection mechanism in clinical behavior: two groups of patients (SCA2 and late onset cerebellar ataxia LOCA) with well-known problems of motor control were studied; patients performed a cognitively demanding task; the results were compared to a stochastic model based on Monte Carlo simulations and a group of healthy subjects. The analytical procedure evaluated some energy functions for understanding the process. The implemented model suggested that patients performed an optimal visual search, reducing intrinsic noise sources. Our findings theorize a strict correlation between the "optimal motor system" and the "optimal stimulus encoders."
Tumor morphology and phenotypic evolution driven by selective pressure from the microenvironment.
Anderson, Alexander R A; Weaver, Alissa M; Cummings, Peter T; Quaranta, Vito
2006-12-01
Emergence of invasive behavior in cancer is life-threatening, yet ill-defined due to its multifactorial nature. We present a multiscale mathematical model of cancer invasion, which considers cellular and microenvironmental factors simultaneously and interactively. Unexpectedly, the model simulations predict that harsh tumor microenvironment conditions (e.g., hypoxia, heterogenous extracellular matrix) exert a dramatic selective force on the tumor, which grows as an invasive mass with fingering margins, dominated by a few clones with aggressive traits. In contrast, mild microenvironment conditions (e.g., normoxia, homogeneous matrix) allow clones with similar aggressive traits to coexist with less aggressive phenotypes in a heterogeneous tumor mass with smooth, noninvasive margins. Thus, the genetic make-up of a cancer cell may realize its invasive potential through a clonal evolution process driven by definable microenvironmental selective forces. Our mathematical model provides a theoretical/experimental framework to quantitatively characterize this selective pressure for invasion and test ways to eliminate it.
Jala, Ram Chandra Reddy; Xu, Xuebing; Guo, Zheng
2013-12-01
Development of an advanced process/production technology for healthful fats constitutes a major interest of the plant oil refining industry. In this work, a strategy to produce trans fatty acid (TFA)-free (or low-TFA) products from partially hydrogenated soybean oil by lipase-catalysed selective hydrolysis was proposed, in which a physically founded mathematical model delineating the multiple responses of the reaction as a function of the selectivity factor was defined for the first time. The practicability of this strategy was assessed with commercial trans-selective Candida antarctica lipase A (CAL-A) as a model biocatalyst, based on a parameter study and fitting to the model. CAL-A was found to have a selectivity factor of 4.26 and to remove at most 73.3% of total TFAs at a 46.5% degree of hydrolysis. Copyright © 2013 Elsevier Ltd. All rights reserved.
Multi-Criteria selection of technology for processing ore raw materials
NASA Astrophysics Data System (ADS)
Gorbatova, E. A.; Emelianenko, E. A.; Zaretckii, M. V.
2017-10-01
The development of Computer-Aided Process Planning (CAPP) for the Ore Beneficiation process is considered. The set of parameters to define the quality of the Ore Beneficiation process is identified. The ontological model of CAPP for the Ore Beneficiation process is described. The hybrid choice method of the most appropriate variant of the Ore Beneficiation process based on the Logical Conclusion Rules and the Fuzzy Multi-Criteria Decision Making (MCDM) approach is proposed.
Wickman, Jonas; Diehl, Sebastian; Blasius, Bernd; Klausmeier, Christopher A; Ryabov, Alexey B; Brännström, Åke
2017-04-01
Spatial structure can decisively influence the way evolutionary processes unfold. To date, several methods have been used to study evolution in spatial systems, including population genetics, quantitative genetics, moment-closure approximations, and individual-based models. Here we extend the study of spatial evolutionary dynamics to eco-evolutionary models based on reaction-diffusion equations and adaptive dynamics. Specifically, we derive expressions for the strength of directional and stabilizing/disruptive selection that apply both in continuous space and to metacommunities with symmetrical dispersal between patches. For directional selection on a quantitative trait, this yields a way to integrate local directional selection across space and determine whether the trait value will increase or decrease. The robustness of this prediction is validated against quantitative genetics. For stabilizing/disruptive selection, we show that spatial heterogeneity always contributes to disruptive selection and hence always promotes evolutionary branching. The expression for directional selection is numerically very efficient and hence lends itself to simulation studies of evolutionary community assembly. We illustrate the application and utility of the expressions for this purpose with two examples of the evolution of resource utilization. Finally, we outline the domain of applicability of reaction-diffusion equations as a modeling framework and discuss their limitations.
Study on the intelligent decision making of soccer robot side-wall behavior
NASA Astrophysics Data System (ADS)
Zhang, Xiaochuan; Shao, Guifang; Tan, Zhi; Li, Zushu
2007-12-01
The side-wall is a static obstacle in robot soccer; making reasonable use of it can improve a soccer robot's competitive ability. As a kind of artificial life, a soccer robot's side-wall processing strategy is influenced by many factors, such as the game state, field region, and attacking/defending situation, each with a different degree of influence, so side-wall behavior selection is an intelligent selection process. From a human-simulation point of view and based on the idea of side-wall processing priority [1], this paper builds a priority function for side-wall processing, constructs an action-prediction model for the side-wall obstacle, puts forward a side-wall processing strategy, and forms a side-wall behavior selection mechanism. A comparative experiment with and without the strategy shows that it improves the soccer robot's capability; the strategy is feasible and effective, and is a positive step for further research on robot soccer.
Two different mechanisms support selective attention at different phases of training.
Itthipuripat, Sirawaj; Cha, Kexin; Byers, Anna; Serences, John T
2017-06-01
Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes.
Two different mechanisms support selective attention at different phases of training
Cha, Kexin; Byers, Anna; Serences, John T.
2017-01-01
Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes. PMID:28654635
NASA Astrophysics Data System (ADS)
Özel, Tuğrul; Arısoy, Yiğit M.; Criales, Luis E.
Computational modelling of Laser Powder Bed Fusion (L-PBF) processes such as Selective Laser Melting (SLM) can reveal information that is hard to obtain or unobtainable by in-situ experimental measurements. A 3D thermal field that is not visible to the thermal camera can be obtained by solving the 3D heat transfer problem. Furthermore, microstructural modelling can be used to predict the quality and mechanical properties of the product. In this paper, a nonlinear 3D Finite Element Method-based computational code is developed to simulate the SLM process with different process parameters such as laser power and scan velocity. The code is further improved by utilizing an in-situ thermal camera recording to predict spattering, which is in turn included as a stochastic heat loss. Thermal gradients extracted from the simulations are then applied to predict growth directions in the resulting microstructure.
Acquisition Management for Systems-of-Systems: Exploratory Model Development and Experimentation
2009-04-22
outputs of the Requirements Development and Logical Analysis processes into alternative design solutions and selects a final design solution. Decision...Analysis Provides the basis for evaluating and selecting alternatives when decisions need to be made. Implementation Yields the lowest-level system... [Figure: dependency matrices illustrating a) an example SoS and b) the model structure for the example SoS.]
SAFARI, an On-Line Text-Processing System User's Manual.
ERIC Educational Resources Information Center
Chapin, P.G.; And Others.
This report describes for the potential user a set of procedures for processing textual materials on-line. In this preliminary model an information analyst can scan through messages, reports, and other documents on a display scope and select relevant facts, which are processed linguistically and then stored in the computer in the form of logical…
Optimal control of raw timber production processes
Ivan Kolenka
1978-01-01
This paper demonstrates the possibility of optimal planning and control of timber harvesting activ-ities with mathematical optimization models. The separate phases of timber harvesting are represented by coordinated models which can be used to select the optimal decision for the execution of any given phase. The models form a system whose components are connected and...
Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions.
Forstmann, B U; Ratcliff, R; Wagenmakers, E-J
2016-01-01
Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model--the diffusion decision model--is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
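A minimal simulation sketch of the diffusion decision model discussed above: noisy evidence accumulates from a neutral starting point until it reaches one of two boundaries, and a non-decision time is added to the crossing time. The parameter values are arbitrary and chosen only to show how drift rate, boundary separation, and non-decision time map onto simulated choices and response times.

```python
import numpy as np

def simulate_ddm(drift, boundary, ndt, n_trials=2000, dt=0.001, noise=1.0, rng=None):
    """Simulate the diffusion decision model.

    drift    : mean rate of evidence accumulation (quality of information processing)
    boundary : separation between the two decision thresholds (response caution)
    ndt      : non-decision time in seconds (encoding and motor processes)
    Returns (choices, reaction_times); choice 1 = upper boundary, 0 = lower boundary.
    """
    rng = rng or np.random.default_rng(0)
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0                      # start unbiased, midway between the boundaries
        while abs(x) < boundary / 2:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x > 0)
        rts[i] = t + ndt
    return choices, rts

choices, rts = simulate_ddm(drift=1.0, boundary=1.5, ndt=0.3)
print("accuracy:", choices.mean(), "mean RT [s]:", round(rts.mean(), 3))
```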
A game-based decision support methodology for competitive systems design
NASA Astrophysics Data System (ADS)
Briceno, Simon Ignacio
This dissertation describes the development of a game-based methodology that facilitates the exploration and selection of research and development (R&D) projects under uncertain competitive scenarios. The proposed method provides an approach that analyzes competitor positioning and formulates response strategies to forecast the impact of technical design choices on a project's market performance. A critical decision in the conceptual design phase of propulsion systems is the selection of the best architecture, centerline, core size, and technology portfolio. This selection can be challenging when considering evolving requirements from both the airframe manufacturing company and the airlines in the market. Furthermore, the exceedingly high cost of core architecture development and its associated risk makes this strategic architecture decision the most important one for an engine company. Traditional conceptual design processes emphasize performance and affordability as their main objectives. These areas alone however, do not provide decision-makers with enough information as to how successful their engine will be in a competitive market. A key objective of this research is to examine how firm characteristics such as their relative differences in completing R&D projects, differences in the degree of substitutability between different project types, and first/second-mover advantages affect their product development strategies. Several quantitative methods are investigated that analyze business and engineering strategies concurrently. In particular, formulations based on the well-established mathematical field of game theory are introduced to obtain insights into the project selection problem. The use of game theory is explored in this research as a method to assist the selection process of R&D projects in the presence of imperfect market information. The proposed methodology focuses on two influential factors: the schedule uncertainty of project completion times and the uncertainty associated with competitive reactions. A normal-form matrix is created to enumerate players, their moves and payoffs, and to formulate a process by which an optimal decision can be achieved. The non-cooperative model is tested using the concept of a Nash equilibrium to identify potential strategies that are robust to uncertain market fluctuations (e.g: uncertainty in airline demand, airframe requirements and competitor positioning). A first/second-mover advantage parameter is used as a scenario dial to adjust market rewards and firms' payoffs. The methodology is applied to a commercial aircraft engine selection study where engine firms must select an optimal engine project for development. An engine modeling and simulation framework is developed to generate a broad engine project portfolio. The creation of a customer value model enables designers to incorporate airline operation characteristics into the engine modeling and simulation process to improve the accuracy of engine/customer matching. Summary. Several key findings are made that provide recommendations on project selection strategies for firms uncertain as to when they will enter the market. The proposed study demonstrates that within a technical design environment, a rational and analytical means of modeling project development strategies is beneficial in high market risk situations.
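To make the normal-form game formulation concrete, the sketch below enumerates the cells of a two-firm payoff matrix and reports the pure-strategy Nash equilibria, i.e. the project pairs from which neither firm can profitably deviate. The payoff numbers and the three hypothetical R&D projects are invented for illustration and are not taken from the dissertation.

```python
import numpy as np
from itertools import product

def pure_nash(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria of a two-player normal-form game.

    payoff_a[i, j], payoff_b[i, j]: payoffs to firms A and B when A selects project i
    and B selects project j.
    """
    equilibria = []
    for i, j in product(range(payoff_a.shape[0]), range(payoff_a.shape[1])):
        a_best = payoff_a[i, j] >= payoff_a[:, j].max()   # A cannot gain by deviating
        b_best = payoff_b[i, j] >= payoff_b[i, :].max()   # B cannot gain by deviating
        if a_best and b_best:
            equilibria.append((i, j))
    return equilibria

# Hypothetical payoffs (arbitrary units) for two engine firms each choosing among
# three R&D projects; rows = firm A's choice, columns = firm B's choice.
payoff_a = np.array([[3, 5, 2], [6, 6, 4], [4, 3, 3]])
payoff_b = np.array([[3, 1, 4], [2, 5, 2], [1, 4, 3]])
print("pure-strategy Nash equilibria:", pure_nash(payoff_a, payoff_b))
```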
Determination of suitable drying curve model for bread moisture loss during baking
NASA Astrophysics Data System (ADS)
Soleimani Pour-Damanab, A. R.; Jafary, A.; Rafiee, S.
2013-03-01
This study presents mathematical modelling of bread moisture loss or drying during baking in a conventional bread baking process. In order to estimate and select the appropriate moisture loss curve equation, 11 different models, semi-theoretical and empirical, were applied to the experimental data and compared according to their correlation coefficients, chi-squared test and root mean square error which were predicted by nonlinear regression analysis. Consequently, of all the drying models, a Page model was selected as the best one, according to the correlation coefficients, chi-squared test, and root mean square error values and its simplicity. Mean absolute estimation error of the proposed model by linear regression analysis for natural and forced convection modes was 2.43, 4.74%, respectively.
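A minimal sketch of fitting the selected Page model, MR = exp(-k t^n), to a moisture-ratio curve by nonlinear least squares and reporting the goodness-of-fit measures mentioned above (correlation coefficient and root mean square error). The baking times and moisture ratios are made-up illustrative data, not the experimental measurements from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t**n)

# Hypothetical baking times (min) and measured moisture ratios (illustrative data only)
t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)
mr = np.array([1.00, 0.93, 0.82, 0.70, 0.58, 0.47, 0.38])

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0), bounds=([0, 0], [np.inf, np.inf]))
pred = page_model(t, k, n)
rmse = np.sqrt(np.mean((mr - pred) ** 2))
r = np.corrcoef(mr, pred)[0, 1]
print(f"k = {k:.4f}, n = {n:.3f}, RMSE = {rmse:.4f}, r = {r:.4f}")
```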
NASA Technical Reports Server (NTRS)
Arduini, R. F.; Aherron, R. M.; Samms, R. W.
1984-01-01
A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.
Kuruoglu, Emel; Guldal, Dilek; Mevsim, Vildan; Gunvar, Tolga
2015-08-05
Choosing the most appropriate family physician (FP) for the individual plays a fundamental role in primary care. The aim of this study is to determine the criteria patients use in choosing their family doctors and the priority ranking of these criteria, using the multi-criteria decision-making method of the Analytic Hierarchy Process (AHP). The study was planned and conducted in two phases. In the first phase, factors affecting the patients' decisions were revealed through qualitative research. In the next phase, the priorities of the FP selection criteria were determined using the AHP model, with criteria compared in pairs. Ninety-six patients were asked to fill in information forms containing the comparison scores at Family Health Centres. According to the analysis of the focus group discussions, FP selection criteria fell into five groups: Individual Characteristics, Patient-Doctor Relationship, Professional Characteristics, the Setting, and Ethical Characteristics. For each of the 96 participants, comparison matrices were formed based on the scores on their information forms. Of these, the models of only 5 participants (5.2%) were consistent; in other words, only these participants produced consistent rankings, with consistency ratios (CR) smaller than 0.10. The comparison matrix of the new model, formed from the medians of the scores given by these 5 participants, was therefore consistent (CR = 0.06 < 0.10). According to the comparison results, with a value weight of 0.467, the most important criterion for choosing a family physician is his/her 'Professional Characteristics'. The criteria for choosing an FP were thus put in priority order using the AHP model, and they can be used as measures for selecting among alternative FPs in further research.
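A small sketch of the AHP computation underlying the study: priorities are taken from the principal eigenvector of a pairwise comparison matrix, and the consistency ratio (CR) is checked against the 0.10 threshold mentioned above. The example comparison matrix over the five criterion groups is hypothetical and does not reproduce the participants' median scores.

```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's random indices

def ahp_priorities(A):
    """Principal-eigenvector priorities and consistency ratio of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                      # priority weights
    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)    # consistency index
    cr = ci / RI[n]                      # consistency ratio; values below 0.10 are acceptable
    return w, cr

# Hypothetical pairwise comparisons of the five criterion groups on Saaty's 1-9 scale
# (order: individual, patient-doctor relationship, professional, setting, ethical)
A = np.array([
    [1,   1/2, 1/5, 1,   1/3],
    [2,   1,   1/3, 2,   1/2],
    [5,   3,   1,   4,   2  ],
    [1,   1/2, 1/4, 1,   1/3],
    [3,   2,   1/2, 3,   1  ],
], dtype=float)

weights, cr = ahp_priorities(A)
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```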
Simpkins, Sandra D.; Schaefer, David R.; Price, Chara D.; Vest, Andrea E.
2012-01-01
Bioecological theory suggests that adolescents’ health is a result of selection and socialization processes occurring between adolescents and their microsettings. This study examines the association between adolescents’ friends and health using a social network model and data from the National Longitudinal Study of Adolescent Health (N = 1,896, mean age = 15.97 years). Results indicated evidence of friend influence on BMI and physical activity. Friendships were more likely among adolescents who engaged in greater physical activity and who were similar to one another in BMI and physical activity. These effects emerged after controlling for alternative friend selection factors, such as endogenous social network processes and propinquity through courses and activities. Some selection effects were moderated by gender, popularity, and reciprocity. PMID:24222971
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
Microstructure and Magnetic Properties of Magnetic Material Fabricated by Selective Laser Melting
NASA Astrophysics Data System (ADS)
Jhong, Kai Jyun; Huang, Wei-Chin; Lee, Wen Hsi
Selective Laser Melting (SLM) is a powder-based additive manufacturing process capable of producing parts layer-by-layer from a 3D CAD model. The aim of this study is to adopt the selective laser melting technique for magnetic material fabrication [1]. For the SLM process to be practical in industrial use, highly specific mechanical properties of the final product must be achieved. The integrity of the manufactured components depends strongly on each single laser-melted track and every single layer, as well as on the strength of the connections between them. In this study, the effects of processing parameters, such as the spacing distance, on surface morphology are analyzed. Our hypothesis is that when a magnetic product is made by the selective laser melting technique instead of traditional techniques, the finished component will have more precise and effective properties. This study analyzed the magnitudes of the magnetic properties obtained with different SLM process parameters and produced a complete product to investigate its efficiency in comparison with products made by existing manufacturing processes.
Rojo, Marcial García; Rolón, Elvira; Calahorra, Luis; García, Felix Oscar; Sánchez, Rosario Paloma; Ruiz, Francisco; Ballester, Nieves; Armenteros, María; Rodríguez, Teresa; Espartero, Rafael Martín
2008-07-15
Process orientation is one of the essential elements of quality management systems, including those in use in healthcare. Business processes in hospitals are very complex and variable. BPMN (Business Process Modelling Notation) is a user-oriented language specifically designed for the modelling of business (organizational) processes. Previous experiences of the use of this notation in the processes modelling within the Pathology in Spain or another country are not known. We present our experience in the elaboration of the conceptual models of Pathology processes, as part of a global programmed surgical patient process, using BPMN. With the objective of analyzing the use of BPMN notation in real cases, a multidisciplinary work group was created, including software engineers from the Dep. of Technologies and Information Systems from the University of Castilla-La Mancha and health professionals and administrative staff from the Hospital General de Ciudad Real. The work in collaboration was carried out in six phases: informative meetings, intensive training, process selection, definition of the work method, process describing by hospital experts, and process modelling. The modelling of the processes of Anatomic Pathology is presented using BPMN. The presented subprocesses are those corresponding to the surgical pathology examination of the samples coming from operating theatre, including the planning and realization of frozen studies. The modelling of Anatomic Pathology subprocesses has allowed the creation of an understandable graphical model, where management and improvements are more easily implemented by health professionals.
Rojo, Marcial García; Rolón, Elvira; Calahorra, Luis; García, Felix Óscar; Sánchez, Rosario Paloma; Ruiz, Francisco; Ballester, Nieves; Armenteros, María; Rodríguez, Teresa; Espartero, Rafael Martín
2008-01-01
Background Process orientation is one of the essential elements of quality management systems, including those in use in healthcare. Business processes in hospitals are very complex and variable. BPMN (Business Process Modelling Notation) is a user-oriented language specifically designed for the modelling of business (organizational) processes. Previous experiences of the use of this notation in the processes modelling within the Pathology in Spain or another country are not known. We present our experience in the elaboration of the conceptual models of Pathology processes, as part of a global programmed surgical patient process, using BPMN. Methods With the objective of analyzing the use of BPMN notation in real cases, a multidisciplinary work group was created, including software engineers from the Dep. of Technologies and Information Systems from the University of Castilla-La Mancha and health professionals and administrative staff from the Hospital General de Ciudad Real. The work in collaboration was carried out in six phases: informative meetings, intensive training, process selection, definition of the work method, process describing by hospital experts, and process modelling. Results The modelling of the processes of Anatomic Pathology is presented using BPMN. The presented subprocesses are those corresponding to the surgical pathology examination of the samples coming from operating theatre, including the planning and realization of frozen studies. Conclusion The modelling of Anatomic Pathology subprocesses has allowed the creation of an understandable graphical model, where management and improvements are more easily implemented by health professionals. PMID:18673511
Modeling and system design for the LOFAR station digital processing
NASA Astrophysics Data System (ADS)
Alliot, Sylvain; van Veelen, Martijn
2004-09-01
In the context of the LOFAR preliminary design phase and in particular for the specification of the Station Digital Processing (SDP), a performance/cost model of the system was used. We present here the framework and the trajectory followed in this phase when going from requirements to specification. In the phased array antenna concepts for the next generation of radio telescopes (LOFAR, ATA, SKA) signal processing (multi-beaming and RFI mitigation) replaces the large antenna dishes. The embedded systems for these telescopes are major infrastructure cost items. Moreover, the flexibility and overall performance of the instrument depend greatly on them, therefore alternative solutions need to be investigated. In particular, the technology and the various data transport selections play a fundamental role in the optimization of the architecture. We proposed a formal method [1] of exploring these alternatives that has been followed during the SDP developments. Different scenarios were compared for the specification of the application (selection of the algorithms as well as detailed signal processing techniques) and in the specification of the system architecture (selection of high level topologies, platforms and components). It gave us inside knowledge on the possible trade-offs in the application and architecture domains. This was successful in providing firm basis for the design choices that are demanded by technical review committees.
Regulatory ozone modeling: status, directions, and research needs.
Georgopoulos, P G
1995-01-01
The Clean Air Act Amendments (CAAA) of 1990 have established selected comprehensive, three-dimensional, Photochemical Air Quality Simulation Models (PAQSMs) as the required regulatory tools for analyzing the urban and regional problem of high ambient ozone levels across the United States. These models are currently applied to study and establish strategies for meeting the National Ambient Air Quality Standard (NAAQS) for ozone in nonattainment areas; State Implementation Plans (SIPs) resulting from these efforts must be submitted to the U.S. Environmental Protection Agency (U.S. EPA) in November 1994. The following presentation provides an overview and discussion of the regulatory ozone modeling process and its implications. First, the PAQSM-based ozone attainment demonstration process is summarized in the framework of the 1994 SIPs. Then, following a brief overview of the representation of physical and chemical processes in PAQSMs, the essential attributes of standard modeling systems currently in regulatory use are presented in a nonmathematical, self-contained format, intended to provide a basic understanding of both model capabilities and limitations. The types of air quality, emission, and meteorological data needed for applying and evaluating PAQSMs are discussed, as well as the sources, availability, and limitations of existing databases. The issue of evaluating a model's performance in order to accept it as a tool for policy making is discussed, and various methodologies for implementing this objective are summarized. Selected interim results from diagnostic analyses, which are performed as a component of the regulatory ozone modeling process for the Philadelphia-New Jersey region, are also presented to provide some specific examples related to the general issues discussed in this work. Finally, research needs related to a) the evaluation and refinement of regulatory ozone modeling, b) the characterization of uncertainty in photochemical modeling, and c) the improvement of the model-based ozone-attainment demonstration process are presented to identify future directions in this area. PMID:7614934
Storytelling, behavior planning, and language evolution in context.
McBride, Glen
2014-01-01
An attempt is made to specify the structure of the hominin bands that began steps to language. Storytelling could evolve without need for language yet be strongly subject to natural selection and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism, operating in both directions, provides a model of creating and understanding behavior and language. Culture began with societies, then family evolution, family life in troops, but storytelling created a culture of experiences, a final step in the long process of achieving experienced adults by natural selection. Most language evolution occurred in conversations where evolving non-verbal feedback ensured mutual agreements on understanding. Natural language evolved in conversations with feedback providing understanding of changes.
Storytelling, behavior planning, and language evolution in context
McBride, Glen
2014-01-01
An attempt is made to specify the structure of the hominin bands that began steps to language. Storytelling could evolve without need for language yet be strongly subject to natural selection and could provide a major feedback process in evolving language. A storytelling model is examined, including its effects on the evolution of consciousness and the possible timing of language evolution. Behavior planning is presented as a model of language evolution from storytelling. The behavior programming mechanism, operating in both directions, provides a model of creating and understanding behavior and language. Culture began with societies, then family evolution, family life in troops, but storytelling created a culture of experiences, a final step in the long process of achieving experienced adults by natural selection. Most language evolution occurred in conversations where evolving non-verbal feedback ensured mutual agreements on understanding. Natural language evolved in conversations with feedback providing understanding of changes. PMID:25360123
Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.
Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana
2016-01-01
The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually achieved by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to complex selection criteria that include the quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging. Therefore, various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics by employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible by adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.
Instances selection algorithm by ensemble margin
NASA Astrophysics Data System (ADS)
Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine
2018-05-01
The main limitation of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without degrading performance. In this paper, a new instance selection approach, the Ensemble Margin Instance Selection (EMIS) algorithm, is proposed. This approach is based on the ensemble margin. To evaluate our approach, we have conducted several experiments on different real-world classification problems from the UCI Machine Learning repository. Pixel-based image segmentation is a field where the storage requirements and computational cost of the applied model become particularly high. To address these limitations, we conduct a study applying EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (WBC; nucleus and cytoplasm) in cytological images.
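A rough sketch of ensemble-margin-based instance selection in the spirit of EMIS: a random forest votes on each training instance, the margin is the normalized difference between the votes for the true class and for the strongest competing class, and only high-margin (confidently handled) instances are retained. The data set, the forest, and the keep-the-top-500 rule are illustrative assumptions; the paper's exact margin definition and selection rule may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def ensemble_margins(forest, X, y):
    """Supervised ensemble margin: (votes for the true class - max votes for any other class) / n_trees."""
    votes = np.stack([tree.predict(X) for tree in forest.estimators_])  # shape (n_trees, n_samples)
    n_trees = votes.shape[0]
    margins = np.empty(X.shape[0])
    for i in range(X.shape[0]):
        counts = np.bincount(votes[:, i].astype(int), minlength=forest.n_classes_)
        true_votes = counts[y[i]]
        best_other = np.delete(counts, y[i]).max()
        margins[i] = (true_votes - best_other) / n_trees
    return margins

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
margins = ensemble_margins(forest, X, y)

# Keep the instances the ensemble is most confident about (highest margins)
keep = np.argsort(margins)[-500:]
X_sel, y_sel = X[keep], y[keep]
print("selected", len(keep), "of", len(X), "instances")
```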
Self-paced model learning for robust visual tracking
NASA Astrophysics Data System (ADS)
Huang, Wenhui; Gu, Jason; Ma, Xin; Li, Yibin
2017-01-01
In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, which contains many details that could affect the tracking results. Self-paced learning (SPL) has recently been attracting considerable interest in the fields of machine learning and computer vision. SPL is inspired by the learning principle underlying the cognitive process of humans, whose learning process is generally from easier samples to more complex aspects of a task. We propose a tracking method that integrates the learning paradigm of SPL into visual tracking, so reliable samples can be automatically selected for model learning. In contrast to many existing model learning strategies in visual tracking, we discover the missing link between sample selection and model learning, which are combined into a single objective function in our approach. Sample weights and model parameters can be learned by minimizing this single objective function. Additionally, to solve the real-valued learning weight of samples, an error-tolerant self-paced function that considers the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set with 50 video sequences.
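A minimal sketch of the self-paced learning idea described above, using ordinary least squares as a stand-in for the tracker's appearance model: the algorithm alternates between selecting samples whose current loss is below an age parameter (the "easy" samples) and refitting the model on the selected samples only, gradually admitting harder samples as the age parameter grows. The hard 0/1 sample weights, the regression stand-in, and all constants are assumptions for illustration; the paper uses a real-valued, error-tolerant self-paced function.

```python
import numpy as np

def self_paced_regression(X, y, lam_init=1.0, growth=1.3, n_rounds=10, ridge=1e-3):
    """Alternate between (1) selecting easy samples whose loss is below the age
    parameter lam and (2) refitting the model on the selected samples only.
    Hard SPL weights: v_i = 1 if loss_i < lam else 0."""
    n, d = X.shape
    w = np.zeros(d)
    lam = lam_init
    for _ in range(n_rounds):
        losses = (X @ w - y) ** 2                     # per-sample squared loss
        v = (losses < lam).astype(float)              # step 1: pick the currently "easy" samples
        if v.sum() == 0:                              # nothing selected yet: take the easiest sample
            v[np.argmin(losses)] = 1.0
        W = np.diag(v)
        w = np.linalg.solve(X.T @ W @ X + ridge * np.eye(d), X.T @ W @ y)  # step 2: weighted refit
        lam *= growth                                 # let harder samples in as the model "matures"
    return w, v

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
y[:20] += 5.0                                         # a few corrupted (hard/outlier) samples
w, v = self_paced_regression(X, y)
print("weights:", np.round(w, 2), "selected:", int(v.sum()), "of", len(y))
```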
Nakajima, Toshiyuki
2017-12-01
Evolution by natural selection requires the following conditions: (1) a particular selective environment; (2) variation of traits in the population; (3) differential survival/reproduction among the types of organisms; and (4) heritable traits. However, the traditional (standard) model does not clearly explain how and why these conditions are generated or determined. What generates a selective environment? What generates new types? How does a certain type replace, or coexist with, others? In this paper, based on the holistic philosophy of Western and Eastern traditions, I focus on the ecosystem as a higher-level system and generator of conditions that induce the evolution of component populations; I also aim to identify the ecosystem processes that generate those conditions. In particular, I employ what I call the scientific principle of dependent-arising (SDA), which is tailored for scientific use and is based on Buddhism principle called "pratītya-samutpāda" in Sanskrit. The SDA principle asserts that there exists a higher-level system, or entity, which includes a focal process of a system as a part within it; this determines or generates the conditions required for the focal process to work in a particular way. I conclude that the ecosystem generates (1) selective environments for component species through ecosystem dynamics; (2) new genetic types through lateral gene transfer, hybridization, and symbiogenesis among the component species of the ecosystem; (3) mechanistic processes of replacement of an old type with a new one. The results of this study indicate that the ecological extension of the theoretical model of adaptive evolution is required for better understanding of adaptive evolution. Copyright © 2017 Elsevier Ltd. All rights reserved.
Multiphysics modeling of selective laser sintering/melting
NASA Astrophysics Data System (ADS)
Ganeriwala, Rishi Kumar
A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon cooling are calculated using the finite difference method. Different case studies are performed and general trends can be seen. This work concludes by discussing future extensions of this model and the need for a multi-scale approach to achieve comprehensive part-level models of the SLS/SLM process.
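A heavily simplified 2D sketch of the thermal part of such a model: an explicit finite-difference heat-conduction update with a moving Gaussian laser source deposited over a powder layer. All material and process parameters are illustrative assumptions, and the sketch omits the discrete-element powder packing, melting, latent heat, and the 3D and residual-stress aspects of the dissertation's model.

```python
import numpy as np

# Illustrative material and process parameters (assumptions, not from the dissertation)
nx, ny, dx = 200, 100, 1e-5                 # grid size and spacing [m]
k, rho, cp = 20.0, 7800.0, 500.0            # conductivity [W/m/K], density [kg/m^3], heat capacity [J/kg/K]
alpha = k / (rho * cp)                      # thermal diffusivity [m^2/s]
P, r_beam, absorptivity = 100.0, 5e-5, 0.6  # laser power [W], beam radius [m], absorptivity
scan_speed = 0.5                            # scan velocity [m/s]
layer = 5e-5                                # depth over which the absorbed flux is deposited [m]
dt = 0.2 * dx**2 / alpha                    # time step within the explicit stability limit

T = np.full((ny, nx), 300.0)                # initial temperature field [K]
xs = np.arange(nx) * dx
ys = np.arange(ny) * dx
n_steps = int(nx * dx / scan_speed / dt)

for step in range(n_steps):
    # Moving Gaussian surface source converted to a volumetric heating term
    x0 = scan_speed * step * dt
    rr = (xs[None, :] - x0) ** 2 + (ys[:, None] - ny * dx / 2) ** 2
    q = 2 * absorptivity * P / (np.pi * r_beam**2) * np.exp(-2 * rr / r_beam**2) / layer
    # Explicit finite-difference conduction update; boundary nodes stay at the 300 K initial value
    lap = (T[1:-1, 2:] + T[1:-1, :-2] + T[2:, 1:-1] + T[:-2, 1:-1] - 4 * T[1:-1, 1:-1]) / dx**2
    T[1:-1, 1:-1] += dt * (alpha * lap + q[1:-1, 1:-1] / (rho * cp))

print("peak temperature [K]:", round(float(T.max()), 1))
```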
Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis
Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas
2016-01-01
The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates to the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, that lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
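A toy sketch of the ARD idea behind the covariate selection: each covariate's weight gets its own precision hyperparameter, and irrelevant covariates end up with large precisions, driving their weights toward zero. Here a plain linear ARD regression from scikit-learn stands in for the full Bayesian mixed-effects shape model, and the covariate names and simulated data are purely hypothetical.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
n = 300
# Hypothetical covariates: age, disease-burden score, sex, IQ, plus two irrelevant ones
X = rng.normal(size=(n, 6))
true_coef = np.array([0.8, 1.5, 0.0, 0.0, 0.0, 0.0])   # only the first two actually matter
y = X @ true_coef + 0.3 * rng.normal(size=n)             # stand-in for a shape/volume measurement

ard = ARDRegression().fit(X, y)
names = ["age", "burden", "sex", "IQ", "noise1", "noise2"]
for name, w, lam in zip(names, ard.coef_, ard.lambda_):
    # A large precision lambda means the ARD prior has pruned the covariate (weight ~ 0)
    print(f"{name:7s} weight={w:+.3f} precision={lam:.1f}")
```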
NASA Astrophysics Data System (ADS)
Pound, Marc W.; Wolfire, Mark G.; Mundy, Lee G.; Teuben, Peter; Lord, Steve
2011-02-01
DIRT is a Java applet for modelling astrophysical processes in circumstellar dust shells around young and evolved stars. With DIRT, you can: select and display over 500,000 pre-run model spectral energy distributions (SEDs); find the best-fit model to your data set; account for beam size in model fitting; manipulate data and models with an interactive viewer; display gas and dust density and temperature profiles; and display model intensity profiles at various wavelengths.
ERIC Educational Resources Information Center
St. John, Edward P.; Loescher, Siri; Jacob, Stacy; Cekic, Osman; Kupersmith, Leigh; Musoba, Glenda Droogsma
A growing number of schools are exploring the prospect of applying for funding to implement a Comprehensive School Reform (CSR) model. But the process of selecting a CSR model can be complicated because it frequently involves self-study and a review of models to determine which models best meet the needs of the school. This study guide is intended…
Discrimination of dynamical system models for biological and chemical processes.
Lorenz, Sönke; Diederichs, Elmar; Telgmann, Regina; Schütte, Christof
2007-06-01
In technical chemistry, systems biology and biotechnology, the construction of predictive models has become an essential step in process design and product optimization. Accurate modelling of the reactions requires detailed knowledge about the processes involved. However, when concerned with the development of new products and production techniques, for example, this knowledge is often not available due to the lack of experimental data. Thus, when one has to work with a selection of proposed models, one of the main tasks of early development is to discriminate between these models. In this article, a new statistical approach to model discrimination is described that ranks models with respect to the probability with which they reproduce the given data. The article introduces the new approach, discusses its statistical background, presents numerical techniques for its implementation and illustrates the application with examples from biokinetics.
Statistical Development and Application of Cultural Consensus Theory
2012-03-31
Rapid Processing of Turner Designs Model 10-Au-005 Internally Logged Fluorescence Data
Continuous recording of dye fluorescence using field fluorometers at selected sampling sites facilitates acquisition of real-time dye tracing data. The Turner Designs Model 10-AU-005 field fluorometer allows for frequent fluorescence readings, data logging, and easy downloading t...
The tangled bank of amino acids.
Goldstein, Richard A; Pollock, David D
2016-07-01
The use of amino acid substitution matrices to model protein evolution has yielded important insights into both the evolutionary process and the properties of specific protein families. In order to make these models tractable, standard substitution matrices represent the average results of the evolutionary process rather than the underlying molecular biophysics and population genetics, treating proteins as a set of independently evolving sites rather than as an integrated biomolecular entity. With advances in computing and the increasing availability of sequence data, we now have an opportunity to move beyond current substitution matrices to more interpretable mechanistic models with greater fidelity to the evolutionary process of mutation and selection and the holistic nature of the selective constraints. As part of this endeavour, we consider how epistatic interactions induce spatial and temporal rate heterogeneity, and demonstrate how these generally ignored factors can reconcile standard substitution rate matrices and the underlying biology, allowing us to better understand the meaning of these substitution rates. Using computational simulations of protein evolution, we can demonstrate the importance of both spatial and temporal heterogeneity in modelling protein evolution. © 2016 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.
2016-01-01
We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
Using selection bias to explain the observed structure of Internet diffusions
Golub, Benjamin; Jackson, Matthew O.
2010-01-01
Recently, large datasets stored on the Internet have enabled the analysis of processes, such as large-scale diffusions of information, at new levels of detail. In a recent study, Liben-Nowell and Kleinberg [(2008) Proc Natl Acad Sci USA 105:4633–4638] observed that the flow of information on the Internet exhibits surprising patterns whereby a chain letter reaches its typical recipient through long paths of hundreds of intermediaries. We show that a basic Galton–Watson epidemic model combined with the selection bias of observing only large diffusions suffices to explain these patterns. Thus, selection biases of which data we observe can radically change the estimation of classical diffusion processes. PMID:20534439
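A minimal simulation of the mechanism described above, under the assumption of Poisson offspring: a subcritical Galton–Watson process is run many times, and conditioning on large diffusions noticeably inflates the typical chain depth.

```python
# Sketch: subcritical Galton-Watson diffusions; keeping only large ones biases depth upward.
import numpy as np

rng = np.random.default_rng(0)

def simulate_tree(mean_offspring=0.9, max_nodes=100000):
    """Return (total size, number of generations) of one Galton-Watson tree."""
    generation, size, depth = 1, 1, 0
    while generation and size < max_nodes:
        generation = rng.poisson(mean_offspring * generation)   # offspring of the whole generation
        size += generation
        depth += 1 if generation else 0
    return size, depth

trees = [simulate_tree() for _ in range(20000)]
large = [(s, d) for s, d in trees if s >= 100]                  # the selection bias
print("mean depth, all trees  :", round(np.mean([d for _, d in trees]), 1))
print("mean depth, large trees:", round(np.mean([d for _, d in large]), 1) if large else "none observed")
```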
Combining Feature Selection and Integration—A Neural Model for MT Motion Selectivity
Beck, Cornelia; Neumann, Heiko
2011-01-01
Background: The computation of pattern motion in visual area MT based on motion input from area V1 has been investigated in many experiments and models attempting to replicate the main mechanisms. Two different core conceptual approaches were developed to explain the findings. In integrationist models the key mechanism to achieve pattern selectivity is the nonlinear integration of V1 motion activity. In contrast, selectionist models focus on the motion computation at positions with 2D features. Methodology/Principal Findings: Recent experiments revealed that neither of the two concepts alone is sufficient to explain all experimental data and that most of the existing models cannot account for the complex behaviour found. MT pattern selectivity changes over time for stimuli like type II plaids from vector average to the direction computed with an intersection of constraint rule or by feature tracking. Also, the spatial arrangement of the stimulus within the receptive field of a MT cell plays a crucial role. We propose a recurrent neural model showing how feature integration and selection can be combined into one common architecture to explain these findings. The key features of the model are the computation of 1D and 2D motion in model area V1 subpopulations that are integrated in model MT cells using feedforward and feedback processing. Our results are also in line with findings concerning the solution of the aperture problem. Conclusions/Significance: We propose a new neural model for MT pattern computation and motion disambiguation that is based on a combination of feature selection and integration. The model can explain a range of recent neurophysiological findings including temporally dynamic behaviour. PMID:21814543
NASA Astrophysics Data System (ADS)
Dijkstra, Yoeri M.; Brouwer, Ronald L.; Schuttelaars, Henk M.; Schramkowski, George P.
2017-07-01
The iFlow modelling framework is a width-averaged model for the systematic analysis of the water motion and sediment transport processes in estuaries and tidal rivers. The distinctive solution method, a mathematical perturbation method, used in the model allows for identification of the effect of individual physical processes on the water motion and sediment transport and study of the sensitivity of these processes to model parameters. This distinction between processes provides a unique tool for interpreting and explaining hydrodynamic interactions and sediment trapping. iFlow also includes a large number of options to configure the model geometry and multiple choices of turbulence and salinity models. Additionally, the model contains auxiliary components, including one that facilitates easy and fast sensitivity studies. iFlow has a modular structure, which makes it easy to include, exclude or change individual model components, called modules. Depending on the required functionality for the application at hand, modules can be selected to construct anything from very simple quasi-linear models to rather complex models involving multiple non-linear interactions. This way, the model complexity can be adjusted to the application. Once the modules containing the required functionality are selected, the underlying model structure automatically ensures modules are called in the correct order. The model inserts iteration loops over groups of modules that are mutually dependent. iFlow also ensures a smooth coupling of modules using analytical and numerical solution methods. This way the model combines the speed and accuracy of analytical solutions with the versatility of numerical solution methods. In this paper we present the modular structure, solution method and two examples of the use of iFlow. In the examples we present two case studies, of the Yangtze and Scheldt rivers, demonstrating how iFlow facilitates the analysis of model results, the understanding of the underlying physics and the testing of parameter sensitivity. A comparison of the model results to measurements shows a good qualitative agreement. iFlow is written in Python and is available as open source code under the LGPL license.
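As an illustration of the automatic call-ordering described above (my sketch, not iFlow's actual code), modules that declare their input requirements can be ordered with a topological sort; mutually dependent modules would instead be grouped into an iteration loop.

```python
# Hypothetical module registry: each module lists the modules it depends on.
# Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

requires = {
    "geometry": set(),
    "turbulence": {"geometry"},
    "hydrodynamics": {"geometry", "turbulence"},
    "sediment": {"hydrodynamics"},
}

order = list(TopologicalSorter(requires).static_order())
print(order)    # a valid call order, e.g. ['geometry', 'turbulence', 'hydrodynamics', 'sediment']
```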
NASA Astrophysics Data System (ADS)
Köhler, Mandy; Haendel, Falk; Epting, Jannis; Binder, Martin; Müller, Matthias; Huggenberger, Peter; Liedl, Rudolf
2015-04-01
Increasing groundwater temperatures have been observed in many urban areas such as London (UK), Tokyo (Japan) and also in Basel (Switzerland). Elevated groundwater temperatures are a result of different direct and indirect thermal impacts. Groundwater heat pumps, building structures located within the groundwater and district heating pipes, among others, can be classified as direct impacts, whereas indirect impacts result from the change in climate in urban regions (i.e. reduced wind, diffuse heat sources). A better understanding of the thermal processes within the subsurface is urgently needed for decision makers as a basis for the selection of appropriate measures to reduce the ongoing increase of groundwater temperatures. However, often only limited temperature data is available that derives from measurements in conventional boreholes, which differ in construction and instrumental setup, resulting in measurements that are often biased and not comparable. For three locations in the City of Basel, models were implemented to study selected thermal processes and to investigate if heat-transport models can reproduce thermal measurements. Therefore, and to overcome the limitations of conventional borehole measurements, high-resolution depth-oriented temperature measurement systems have been introduced in the urban area of Basel. In total, seven devices were installed with up to 16 sensors which are located in the unsaturated and saturated zone (0.5 to 1 m separation distance). Measurements were performed over a period of 4 years (ongoing) and provide sufficient data to set up and calibrate high-resolution local numerical heat transport models which allow studying selected local thermal processes. In a first setup two- and three-dimensional models were created to evaluate the impact of the atmosphere boundary on groundwater temperatures (see EGU Poster EGU2013-9230: Modelling Strategies for the Thermal Management of Shallow Rural and Urban Groundwater bodies). For Basel, where the mean thickness of the unsaturated zone amounts to 19 m, it could be observed that atmospheric seasonal temperature variations are small compared to advective groundwater heat transport. At chosen locations: i) near the river Rhine to study river-groundwater interaction processes, ii) downstream of a thermal groundwater user who uses water for cooling and infiltrates water with elevated temperatures and iii) downstream of a building structure reaching into the groundwater saturated zone, models were further extended to study selected thermal processes in detail and to investigate if these models can reproduce thermal impacts in the vicinity of the temperature measurement devices. Calibration, based on the depth-oriented temperature measurements, was performed for the saturated and unsaturated zone, respectively. Model results show that, although depth-oriented measurements provide valuable insights into local thermal processes, the identification of the governing impacts is strongly dependent on an appropriate positioning of the measurement device. Numerical simulations based on existing flow- and heat transport models, considering the site-specific local hydraulic and thermal boundary conditions, allow optimizing the location of such systems before installation. Furthermore, the results of the local heat transport models can be transferred to regional scale models which are an important tool for thermal management in urban areas.
Rossoni, Daniela M; Assis, Ana Paula A; Giannini, Norberto P; Marroig, Gabriel
2017-09-11
The family Phyllostomidae, which evolved in the New World during the last 30 million years, represents one of the largest and most morphologically diverse mammal families. Due to its uniquely diverse functional morphology, the phyllostomid skull is presumed to have evolved under strong directional selection; however, quantitative estimation of the strength of selection in this extraordinary lineage has not been reported. Here, we used comparative quantitative genetics approaches to elucidate the processes that drove cranial evolution in phyllostomids. We also quantified the strength of selection and explored its association with dietary transitions and specialization along the phyllostomid phylogeny. Our results suggest that natural selection was the evolutionary process responsible for cranial diversification in phyllostomid bats. Remarkably, the strongest selection in the phyllostomid phylogeny was associated with dietary specialization and the origination of novel feeding habits, suggesting that the adaptive diversification of phyllostomid bats was triggered by ecological opportunities. These findings are consistent with Simpson's quantum evolutionary model of transitions between adaptive zones. The multivariate analyses used in this study provide a powerful tool for understanding the role of evolutionary processes in shaping phenotypic diversity in any group on both micro- and macroevolutionary scales.
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
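A much-simplified sketch of the CTDS fitting idea (the latent-path imputation and group lasso penalty are omitted): transition counts to candidate cells are modelled with a Poisson GLM whose offset is the log residence time; all covariates and data below are synthetic.

```python
# Simplified stand-in for the CTDS GLM fit, using statsmodels on synthetic transitions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
habitat = rng.normal(size=n)                      # hypothetical location-based covariates
slope = rng.normal(size=n)
residence = rng.exponential(1.0, n)               # exposure: time spent in the current cell

rate = np.exp(0.8 * habitat - 0.5 * slope)        # true movement rates
counts = rng.poisson(rate * residence)            # observed transitions

X = sm.add_constant(np.column_stack([habitat, slope]))
fit = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(residence)).fit()
print(fit.params)                                 # should recover roughly [0, 0.8, -0.5]
```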
An Annotated Bibliography on Tactical Map Display Symbology
1989-08-01
failure of attention to be focused on one element selectively in filtering tasks where only that one element was relevant to the discrimination. Failure of... The present study evaluates a class of models of human information processing made popular by Broadbent. A brief tachistoscopic display of one or two... 213-219. Two experiments were performed to test Neisser's two-stage model of recognition as applied to matching. Evidence of parallel processing was
Vatsavai, Ranga Raju; Graesser, Jordan B.; Bhaduri, Budhendra L.
2016-07-05
A programmable media includes a graphical processing unit in communication with a memory element. The graphical processing unit is configured to detect one or more settlement regions from a high resolution remote sensed image based on the execution of programming code. The graphical processing unit identifies one or more settlements through the execution of the programming code that executes a multi-instance learning algorithm that models portions of the high resolution remote sensed image. The identification is based on spectral bands transmitted by a satellite and on selected designations of the image patches.
Bao, Le; Gu, Hong; Dunn, Katherine A; Bielawski, Joseph P
2007-02-08
Models of codon evolution have proven useful for investigating the strength and direction of natural selection. In some cases, a priori biological knowledge has been used successfully to model heterogeneous evolutionary dynamics among codon sites. These are called fixed-effect models, and they require that all codon sites are assigned to one of several partitions which are permitted to have independent parameters for selection pressure, evolutionary rate, transition to transversion ratio or codon frequencies. For single gene analysis, partitions might be defined according to protein tertiary structure, and for multiple gene analysis partitions might be defined according to a gene's functional category. Given a set of related fixed-effect models, the task of selecting the model that best fits the data is not trivial. In this study, we implement a set of fixed-effect codon models which allow for different levels of heterogeneity among partitions in the substitution process. We describe strategies for selecting among these models by a backward elimination procedure, Akaike information criterion (AIC) or a corrected Akaike information criterion (AICc). We evaluate the performance of these model selection methods via a simulation study, and make several recommendations for real data analysis. Our simulation study indicates that the backward elimination procedure can provide a reliable method for model selection in this setting. We also demonstrate the utility of these models by application to a single-gene dataset partitioned according to tertiary structure (abalone sperm lysin), and a multi-gene dataset partitioned according to the functional category of the gene (flagellar-related proteins of Listeria). Fixed-effect models have advantages and disadvantages. Fixed-effect models are desirable when data partitions are known to exhibit significant heterogeneity or when a statistical test of such heterogeneity is desired. They have the disadvantage of requiring a priori knowledge for partitioning sites. We recommend: (i) selecting models by backward elimination rather than AIC or AICc, (ii) using a stringent cut-off, e.g., p = 0.0001, and (iii) conducting a sensitivity analysis of the results. With thoughtful application, fixed-effect codon models should provide a useful tool for large-scale multi-gene analyses.
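The sketch below illustrates the two selection strategies compared in the study, backward elimination by likelihood-ratio test with a stringent cut-off versus AIC, on hypothetical log-likelihoods from nested fixed-effect models; it is generic bookkeeping, not a codon-model fit.

```python
# Generic model-selection bookkeeping on hypothetical nested-model log-likelihoods.
from scipy.stats import chi2

# (name, log-likelihood, number of free parameters), ordered from most to least complex.
models = [("independent partitions", -10234.1, 24),
          ("shared selection pressure", -10239.8, 18),
          ("single partition", -10297.5, 12)]

def aic(loglik, k):
    return 2 * k - 2 * loglik

best_by_aic = min(models, key=lambda m: aic(m[1], m[2]))
print("AIC picks:", best_by_aic[0])

# Backward elimination with a stringent cut-off: simplify only while the LRT is non-significant.
alpha = 1e-4
current = models[0]
for simpler in models[1:]:
    lrt = 2 * (current[1] - simpler[1])
    df = current[2] - simpler[2]
    if chi2.sf(lrt, df) > alpha:
        current = simpler
    else:
        break
print("backward elimination keeps:", current[0])
```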
A three-layer model of natural image statistics.
Gutmann, Michael U; Hyvärinen, Aapo
2013-11-01
An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Brown, Andrew D; Marotta, Thomas R
2017-02-01
Incorrect imaging protocol selection can contribute to increased healthcare cost and waste. To help healthcare providers improve the quality and safety of medical imaging services, we developed and evaluated three natural language processing (NLP) models to determine whether NLP techniques could be employed to aid in clinical decision support for protocoling and prioritization of magnetic resonance imaging (MRI) brain examinations. To test the feasibility of using an NLP model to support clinical decision making for MRI brain examinations, we designed three different medical imaging prediction tasks, each with a unique outcome: selecting an examination protocol, evaluating the need for contrast administration, and determining priority. We created three models for each prediction task, each using a different classification algorithm (random forest, support vector machine, or k-nearest neighbor) to predict outcomes based on the narrative clinical indications and demographic data associated with 13,982 MRI brain examinations performed from January 1, 2013 to June 30, 2015. Test datasets were used to calculate the accuracy, sensitivity and specificity, predictive values, and the area under the curve. Our optimal results show an accuracy of 82.9%, 83.0%, and 88.2% for the protocol selection, contrast administration, and prioritization tasks, respectively, demonstrating that predictive algorithms can be used to aid in clinical decision support for examination protocoling. NLP models developed from the narrative clinical information provided by referring clinicians and demographic data are feasible methods to predict the protocol and priority of MRI brain examinations. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
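As a hedged sketch of one of the three tasks (protocol selection from free-text indications), the snippet below trains a TF-IDF plus random-forest pipeline on a handful of made-up examples; the study's data, labels and tuning are not reproduced.

```python
# Toy text-classification pipeline for protocol selection; indications and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

indications = ["new onset seizures, rule out mass",
               "chronic headache, no red flags",
               "known glioma, assess treatment response",
               "pituitary adenoma follow-up"]
protocols = ["seizure", "routine", "tumour", "pituitary"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(indications, protocols)
print(clf.predict(["follow-up of treated glioma"]))
```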
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szoka de Valladares, M.R.; Mack, S.
The DOE Hydrogen Program needs to develop criteria as part of a systematic evaluation process for proposal identification, evaluation and selection. The H Scan component of this process provides a framework in which a project proposer can fully describe their candidate technology system and its components. The H Scan complements traditional methods of capturing cost and technical information. It consists of a special set of survey forms designed to elicit information so expert reviewers can assess the proposal relative to DOE-specified selection criteria. The Analytic Hierarchy Process (AHP) component of the decision process assembles the management-defined evaluation and selection criteria into a coherent multi-level decision construct by which projects can be evaluated in pair-wise comparisons. The AHP model will reflect management's objectives and it will assist in the ranking of individual projects based on the extent to which each contributes to management's objectives. This paper contains a detailed description of the products and activities associated with the planning and evaluation process: the objectives or criteria, the H Scan, and the Analytic Hierarchy Process (AHP).
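To make the AHP step concrete, the sketch below derives criterion weights from a pairwise-comparison matrix via its principal eigenvector and computes a consistency ratio; the criteria and judgements are hypothetical, not DOE's.

```python
# AHP priority weights and consistency check for a hypothetical 3-criterion hierarchy.
import numpy as np

criteria = ["cost", "technical merit", "safety"]
A = np.array([[1.0, 3.0, 0.5],
              [1 / 3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])                 # A[i, j]: importance of criterion i relative to j

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)            # consistency index
cr = ci / 0.58                                  # Saaty's random index for n = 3
print(dict(zip(criteria, weights.round(3))), "consistency ratio =", round(cr, 3))
```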
NASA Astrophysics Data System (ADS)
Kurchatkin, I. V.; Gorshkalev, A. A.; Blagin, E. V.
2017-01-01
This article deals with methods developed for modelling the working processes in the combustion chamber of an internal combustion engine (ICE). The methods include preparation of a 3-D model of the combustion chamber, generation of the finite-element mesh, setting of boundary conditions and customization of the solution. The M-14 aircraft radial engine was selected for modelling. A cold blowdown cycle was simulated in the ANSYS IC Engine software. The data obtained were compared with the results of known calculation methods, and a method for improving the engine's induction port was suggested.
Hencky's model for elastomer forming process
NASA Astrophysics Data System (ADS)
Oleinikov, A. A.; Oleinikov, A. I.
2016-08-01
In the numerical simulation of elastomer forming processes, Hencky's isotropic hyperelastic material model can guarantee relatively accurate prediction of strains in the large-deformation range. It is shown that this material model extends Hooke's law from the region of infinitesimal strains to that of moderate strains. A new representation of the fourth-order elasticity tensor for Hencky's hyperelastic isotropic material is obtained; it possesses both minor symmetries and the major symmetry. The constitutive relations of the considered model are implemented in the MSC.Marc code. The polyurethane elastomer material constants are selected by calculating and fitting curves. Simulations of equipment for elastomer sheet forming are considered.
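A small sketch of the constitutive relation described above, under illustrative material constants (not the fitted polyurethane values): Hooke's law is applied to the logarithmic (Hencky) strain computed from a hypothetical deformation gradient.

```python
# Hencky strain and the corresponding linear-in-log-strain stress for one deformation state.
import numpy as np
from scipy.linalg import sqrtm, logm

E, nu = 10.0e6, 0.45                           # illustrative Young's modulus [Pa] and Poisson ratio
lam = E * nu / ((1 + nu) * (1 - 2 * nu))       # Lame constants
mu = E / (2 * (1 + nu))

F = np.array([[1.20, 0.05, 0.00],
              [0.00, 0.95, 0.00],
              [0.00, 0.00, 0.90]])             # hypothetical deformation gradient

V = np.real(sqrtm(F @ F.T))                    # left stretch tensor
hencky = np.real(logm(V))                      # logarithmic (Hencky) strain
stress = lam * np.trace(hencky) * np.eye(3) + 2 * mu * hencky
print(np.round(stress / 1e6, 3), "MPa")
```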
Impact of auditory selective attention on verbal short-term memory and vocabulary development.
Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial
2009-05-01
This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing of item and serial order information because recent studies have shown this distinction to be critical when exploring relations between STM and lexical development. Multiple regression and variance partitioning analyses highlighted two variables as determinants of vocabulary development: (a) a serial order processing variable shared by STM order recall and a selective attention task for sequence information and (b) an attentional variable shared by selective attention measures targeting item or sequence information. The current study highlights the need for integrative STM models, accounting for conjoined influences of attentional capacities and serial order processing capacities on STM performance and the establishment of the lexical language network.
ERIC Educational Resources Information Center
Blair, Mark R.; Watson, Marcus R.; Walshe, R. Calen; Maj, Fillip
2009-01-01
Humans have an extremely flexible ability to categorize regularities in their environment, in part because of attentional systems that allow them to focus on important perceptual information. In formal theories of categorization, attention is typically modeled with weights that selectively bias the processing of stimulus features. These theories…
An Examination of Factors Influencing Students Selection of Business Majors Using TRA Framework
ERIC Educational Resources Information Center
Kumar, Anil; Kumar, Poonam
2013-01-01
Making decisions regarding the selection of a business major is both very important and challenging for students. An understanding of this decision-making process can be valuable for students, parents, and university programs. The current study applies the Theory of Reasoned Action (TRA) consumer decision-making model to examine factors that…
Seismic depth imaging of sequence boundaries beneath the New Jersey shelf
NASA Astrophysics Data System (ADS)
Riedel, M.; Reiche, S.; Aßhoff, K.; Buske, S.
2018-06-01
Numerical modelling of fluid flow and transport processes relies on a well-constrained geological model, which is usually provided by seismic reflection surveys. In the New Jersey shelf area a large number of 2D seismic profiles provide an extensive database for constructing a reliable geological model. However, for the purpose of modelling groundwater flow, the seismic data need to be depth-converted, which is usually accomplished using complementary data from borehole logs. Due to the limited availability of such data in the New Jersey shelf, we propose a two-stage processing strategy with particular emphasis on reflection tomography and pre-stack depth imaging. We apply this workflow to a seismic section crossing the entire New Jersey shelf. Due to the tomography-based velocity modelling, the processing flow does not depend on the availability of borehole logging data. Nonetheless, we validate our results by comparing the migrated depths of selected geological horizons to borehole core data from the IODP expedition 313 drill sites, located at three positions along our seismic line. The comparison shows that in the top 450 m of the migrated section, most of the selected reflectors were positioned with an accuracy close to the seismic resolution limit (≈ 4 m) for that data. For deeper layers the accuracy still remains within one seismic wavelength for the majority of the tested horizons. These results demonstrate that the processed seismic data provide a reliable basis for constructing a hydrogeological model. Furthermore, the proposed workflow can be applied to other seismic profiles in the New Jersey shelf, which will lead to an even better constrained model.
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
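The proposed Swarm-based Chemical Reaction Optimization is not publicly packaged; as a stand-in that shows the parameter-estimation task itself, the sketch below fits a toy two-species model to noisy synthetic data with SciPy's population-based differential evolution.

```python
# Generic global-optimization fit of ODE parameters to noisy data (not the authors' algorithm).
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import differential_evolution

def toy_model(y, t, k1, k2):                   # hypothetical two-species kinetics
    x, z = y
    return [k1 - k2 * x * z, k2 * x * z - 0.5 * z]

t = np.linspace(0, 20, 50)
true = odeint(toy_model, [1.0, 0.5], t, args=(1.2, 0.8))
data = true + np.random.default_rng(0).normal(scale=0.05, size=true.shape)

def cost(params):
    sim = odeint(toy_model, [1.0, 0.5], t, args=tuple(params))
    return float(np.sum((sim - data) ** 2))

result = differential_evolution(cost, bounds=[(0.1, 5.0), (0.1, 5.0)], seed=1)
print("estimated parameters:", np.round(result.x, 3))   # should approach (1.2, 0.8)
```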
da Silveira, Christian L; Mazutti, Marcio A; Salau, Nina P G
2016-07-08
Process modeling can lead to advantages such as improved process control, reduced process costs and better product quality. This work proposes a solid-state fermentation distributed parameter model composed of seven differential equations with seventeen parameters to represent the process. Parameter estimation together with a parameter identifiability analysis (PIA) is performed to build an accurate model with optimum parameters. Statistical tests were made to verify the model accuracy with the estimated parameters under different assumptions. The results show that the model assuming substrate inhibition better represents the process. It was also shown that eight of the seventeen original model parameters were nonidentifiable, and better results were obtained when these parameters were removed from the estimation procedure. Therefore, PIA can be useful in the estimation procedure, since it may reduce the number of parameters that need to be estimated. Furthermore, PIA improved the model results, showing it to be an important step to take. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:905-917, 2016. © 2016 American Institute of Chemical Engineers.
NASA Technical Reports Server (NTRS)
Lahoti, G. D.; Akgerman, N.; Altan, T.
1978-01-01
Mild steel (AISI 1018) was selected as the model cold-rolling material, and Ti-6Al-4V and INCONEL 718 were selected as typical hot-rolling and cold-rolling alloys, respectively. The flow stress and workability of these alloys were characterized and the friction factor at the roll/workpiece interface was determined at their respective working conditions by conducting ring tests. Computer-aided mathematical models for predicting metal flow and stresses, and for simulating the shape-rolling process were developed. These models utilize the upper-bound and the slab methods of analysis, and are capable of predicting the lateral spread, roll-separating force, roll torque and local stresses, strains and strain rates. This computer-aided design (CAD) system is also capable of simulating the actual rolling process and thereby designing the roll-pass schedule in rolling of an airfoil or similar shape. The predictions from the CAD system were verified with respect to cold rolling of mild steel plates. The system is being applied to cold and hot isothermal rolling of an airfoil shape, and will be verified with respect to laboratory experiments under controlled conditions.
The processive kinetics of gene conversion in bacteria
Paulsson, Johan; El Karoui, Meriem; Lindell, Monica
2017-01-01
Summary Gene conversion, non‐reciprocal transfer from one homologous sequence to another, is a major force in evolutionary dynamics, promoting co‐evolution in gene families and maintaining similarities between repeated genes. However, the properties of the transfer – where it initiates, how far it proceeds and how the resulting conversion tracts are affected by mismatch repair – are not well understood. Here, we use the duplicate tuf genes in Salmonella as a quantitatively tractable model system for gene conversion. We selected for conversion in multiple different positions of tuf, and examined the resulting distributions of conversion tracts in mismatch repair‐deficient and mismatch repair‐proficient strains. A simple stochastic model accounting for the essential steps of conversion showed excellent agreement with the data for all selection points using the same value of the conversion processivity, which is the only kinetic parameter of the model. The analysis suggests that gene conversion effectively initiates uniformly at any position within a tuf gene, and proceeds with an effectively uniform conversion processivity in either direction limited by the bounds of the gene. PMID:28256783
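One reading of the tract model (my sketch, not the authors' code): conversion initiates uniformly within the gene and extends geometrically in both directions with a per-base continuation probability, truncated at the gene bounds; the gene length and processivity below are illustrative.

```python
# Stochastic sketch of uniform initiation plus geometric tract extension within gene bounds.
import numpy as np

rng = np.random.default_rng(0)
gene_length = 1200          # illustrative, roughly the size of a tuf gene in bp
p = 0.999                   # hypothetical per-base continuation (processivity) probability

def conversion_tract():
    start = rng.integers(0, gene_length)
    left = min(rng.geometric(1 - p) - 1, start)                       # upstream extension
    right = min(rng.geometric(1 - p) - 1, gene_length - 1 - start)    # downstream extension
    return start - left, start + right

lengths = [b - a + 1 for a, b in (conversion_tract() for _ in range(10000))]
print("mean tract length (bp):", round(np.mean(lengths)))
```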
Approach to in-process tool wear monitoring in drilling: Application of Kalman filter theory
NASA Astrophysics Data System (ADS)
He, Ning; Zhang, Youzhen; Pan, Liangxian
1993-05-01
The two parameters often used in adaptive control, tool wear and wear rate, are important factors affecting machinability. In this paper, modern cybernetics is used to address the in-process tool wear monitoring problem by applying Kalman filter theory to monitor drill wear quantitatively. Based on the experimental results, a dynamic model, a measuring model and a measurement conversion model suitable for the Kalman filter are established. It is proved that the monitoring system possesses complete observability but does not possess complete controllability. A discriminant for selecting the characteristic parameters is put forward, and the thrust force Fz is selected as the characteristic parameter for monitoring tool wear by this discriminant. An in-process Kalman filter drill wear monitoring system composed of a force sensor, microphotography and a microcomputer is established. The results obtained by the Kalman filter and by the common indirect measuring method are compared with the real drill wear measured with the aid of microphotography. The results show that the Kalman filter has high measurement precision and satisfies the real-time requirement.
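A minimal numerical sketch of the monitoring idea (illustrative, not the paper's identified models): the state holds wear and wear rate, and the thrust force Fz is assumed to depend linearly on wear, giving a scalar measurement for a standard Kalman filter.

```python
# Two-state Kalman filter tracking wear and wear rate from simulated thrust-force readings.
import numpy as np

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])          # wear grows with an approximately constant rate
H = np.array([[50.0, 0.0]])                    # hypothetical: 50 N of extra thrust per unit wear
Q = np.diag([1e-6, 1e-6])                      # process noise
R = np.array([[4.0]])                          # thrust-force measurement noise [N^2]

x = np.array([0.0, 0.01])                      # initial estimates of wear and wear rate
P = np.eye(2)
rng = np.random.default_rng(0)

for step in range(100):
    fz = 50.0 * (0.012 * step) + rng.normal(scale=2.0)   # simulated Fz reading
    x = A @ x                                  # predict
    P = A @ P @ A.T + Q
    y = fz - H @ x                             # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated wear:", round(x[0], 3), " wear rate:", round(x[1], 4))
```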
Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi
2007-01-01
The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to estimate the differences in variance and information content of various attributes, and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks have been used to create various semi-empirical models. Finally, a robust classification model was obtained that takes selected attributes of solid compounds as input and assigns them to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.
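As a generic stand-in for the data-driven input-variable ranking discussed above (the catalyst descriptors and performance classes are not reproduced), the sketch below scores synthetic descriptors by mutual information with a class label.

```python
# Feature relevance ranking by mutual information on synthetic descriptor data.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 300
descriptors = rng.normal(size=(n, 4))                                   # hypothetical descriptor vector
labels = (descriptors[:, 0] + 0.5 * descriptors[:, 2] > 0).astype(int)  # performance class

scores = mutual_info_classif(descriptors, labels, random_state=0)
for name, s in zip(["loading", "surface area", "dopant", "calcination T"], scores):
    print(f"{name:14s} mutual information = {s:.3f}")
```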
Methods, media, and systems for detecting attack on a digital processing device
Stolfo, Salvatore J.; Li, Wei-Jen; Keromytis, Angelos D.; Androulaki, Elli
2014-07-22
Methods, media, and systems for detecting attack are provided. In some embodiments, the methods include: comparing at least part of a document to a static detection model; determining whether attacking code is included in the document based on the comparison of the document to the static detection model; executing at least part of the document; determining whether attacking code is included in the document based on the execution of the at least part of the document; and if attacking code is determined to be included in the document based on at least one of the comparison of the document to the static detection model and the execution of the at least part of the document, reporting the presence of an attack. In some embodiments, the methods include: selecting a data segment in at least one portion of an electronic document; determining whether the arbitrarily selected data segment can be altered without causing the electronic document to result in an error when processed by a corresponding program; in response to determining that the arbitrarily selected data segment can be altered, arbitrarily altering the data segment in the at least one portion of the electronic document to produce an altered electronic document; and determining whether the corresponding program produces an error state when the altered electronic document is processed by the corresponding program.