Selecting the process variables for filament winding
NASA Technical Reports Server (NTRS)
Calius, E.; Springer, G. S.
1986-01-01
A model is described which can be used to determine the appropriate values of the process variables for filament winding cylinders. The process variables which can be selected by the model include the winding speed, fiber tension, initial resin degree of cure, and the temperatures applied during winding, curing, and post-curing. The effects of these process variables on the properties of the cylinder during and after manufacture are illustrated by a numerical example.
Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan
2017-09-01
In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aimed at identifying the faulty variables that contribute most to a detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike the traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection issues. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during the indicated years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.
Clustering Words to Match Conditions: An Algorithm for Stimuli Selection in Factorial Designs
ERIC Educational Resources Information Center
Guasch, Marc; Haro, Juan; Boada, Roger
2017-01-01
With the increasing refinement of language processing models and the new discoveries about which variables can modulate these processes, stimuli selection for experiments with a factorial design is becoming a tough task. Selecting sets of words that differ in one variable, while matching these same words into dozens of other confounding variables…
Galea, Joseph M.; Ruge, Diane; Buijink, Arthur; Bestmann, Sven; Rothwell, John C.
2013-01-01
Action selection describes the high-level process which selects between competing movements. In animals, behavioural variability is critical for the motor exploration required to select the action which optimizes reward and minimizes cost/punishment, and is guided by dopamine (DA). The aim of this study was to test in humans whether low-level movement parameters are affected by punishment and reward in ways similar to high-level action selection. Moreover, we addressed the proposed dependence of behavioural and neurophysiological variability on DA, and whether this may underpin the exploration of kinematic parameters. Participants performed an out-and-back index finger movement and were instructed that monetary reward and punishment were based on its maximal acceleration (MA). In fact, the feedback was not contingent on the participant’s behaviour but pre-determined. Blocks highly-biased towards punishment were associated with increased MA variability relative to blocks with either reward or without feedback. This increase in behavioural variability was positively correlated with neurophysiological variability, as measured by changes in cortico-spinal excitability with transcranial magnetic stimulation over the primary motor cortex. Following the administration of a DA-antagonist, the variability associated with punishment diminished and the correlation between behavioural and neurophysiological variability no longer existed. Similar changes in variability were not observed when participants executed a pre-determined MA, nor did DA influence resting neurophysiological variability. Thus, under conditions of punishment, DA-dependent processes influence the selection of low-level movement parameters. We propose that the enhanced behavioural variability reflects the exploration of kinematic parameters for less punishing, or conversely more rewarding, outcomes. PMID:23447607
Jiang, Hui; Zhang, Hang; Chen, Quansheng; Mei, Congli; Liu, Guohai
2015-01-01
The use of wavelength variable selection before partial least squares discriminant analysis (PLS-DA) for qualitative identification of solid state fermentation degree by the FT-NIR spectroscopy technique was investigated in this study. Two wavelength variable selection methods, competitive adaptive reweighted sampling (CARS) and stability competitive adaptive reweighted sampling (SCARS), were employed to select the important wavelengths. PLS-DA was applied to calibrate identification models using the wavelength variables selected by CARS and SCARS. Experimental results showed that the numbers of wavelength variables selected by CARS and SCARS were 58 and 47, respectively, out of the 1557 original wavelength variables. Compared with the results of full-spectrum PLS-DA, both wavelength variable selection methods enhanced the performance of the identification models. Meanwhile, compared with the CARS-PLS-DA model, the SCARS-PLS-DA model achieved better results, with an identification rate of 91.43% in the validation process. The overall results sufficiently demonstrate that a PLS-DA model constructed using wavelength variables selected by a proper wavelength variable selection method can identify solid state fermentation degree more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.
de Paula, Lauro C. M.; Soares, Anderson S.; de Lima, Telma W.; Delbem, Alexandre C. B.; Coelho, Clarimar J.; Filho, Arlindo R. G.
2014-01-01
Several variable selection algorithms in multivariate calibration can be accelerated using Graphics Processing Units (GPUs). Among these algorithms, the Firefly Algorithm (FA) is a recently proposed metaheuristic that may be used for variable selection. This paper presents a GPU-based FA (FA-MLR) with a multiobjective formulation for variable selection in multivariate calibration problems and compares it with some traditional sequential algorithms in the literature. The advantage of the proposed implementation is demonstrated in an example involving a relatively large number of variables. The results showed that the FA-MLR, in comparison with the traditional algorithms, is a more suitable choice and a relevant contribution to the variable selection problem. Additionally, the results also demonstrated that the FA-MLR executed on a GPU can be five times faster than its sequential implementation. PMID:25493625
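A loose, CPU-only sketch of a firefly-flavoured binary search for variable selection in multiple linear regression: each "firefly" is a binary mask over the predictors, brightness rewards fit and penalizes model size (the multiobjective flavour), and fireflies move by copying bits from the brightest mask plus random flips. The population sizes, rates, and penalty weight are made-up assumptions, not the paper's GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 12
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] - 3 * X[:, 4] + 0.1 * rng.normal(size=n)

def brightness(mask):
    """Multiobjective flavour: reward fit, penalize model size."""
    if not mask.any():
        return -np.inf
    beta, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    resid = y - X[:, mask] @ beta
    return -(resid @ resid) - 0.5 * mask.sum()

pop = rng.random((20, p)) < 0.5           # 20 binary "fireflies"
gbest, gfit = None, -np.inf
for _ in range(60):
    fits = np.array([brightness(m) for m in pop])
    if fits.max() > gfit:                 # track the brightest mask ever seen
        gfit, gbest = fits.max(), pop[fits.argmax()].copy()
    for i in range(len(pop)):
        copy_bits = rng.random(p) < 0.4   # move toward the brightest firefly
        pop[i, copy_bits] = gbest[copy_bits]
        flips = rng.random(p) < 0.05      # random exploration
        pop[i, flips] = ~pop[i, flips]
sel = np.where(gbest)[0]
print(sel)
```

The per-firefly brightness evaluations are independent, which is exactly why the algorithm parallelizes well on a GPU.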
Study of process variables associated with manufacturing hermetically-sealed nickel-cadmium cells
NASA Technical Reports Server (NTRS)
Miller, L.; Doan, D. J.; Carr, E. S.
1971-01-01
A program to determine and study the critical process variables associated with the manufacture of aerospace, hermetically-sealed, nickel-cadmium cells is described. The determination and study of the process variables associated with the positive and negative plaque impregnation/polarization process are emphasized. The experimental data resulting from the implementation of fractional factorial design experiments are analyzed by means of a linear multiple regression analysis technique. This analysis permits the selection of preferred levels for certain process variables to achieve desirable impregnated plaque characteristics.
Impact of auditory selective attention on verbal short-term memory and vocabulary development.
Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial
2009-05-01
This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing of item and serial order information because recent studies have shown this distinction to be critical when exploring relations between STM and lexical development. Multiple regression and variance partitioning analyses highlighted two variables as determinants of vocabulary development: (a) a serial order processing variable shared by STM order recall and a selective attention task for sequence information and (b) an attentional variable shared by selective attention measures targeting item or sequence information. The current study highlights the need for integrative STM models, accounting for conjoined influences of attentional capacities and serial order processing capacities on STM performance and the establishment of the lexical language network.
Estimating and mapping ecological processes influencing microbial community assembly
Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.; Konopka, Allan E.
2015-01-01
Ecological community assembly is governed by a combination of (i) selection resulting from among-taxa differences in performance; (ii) dispersal resulting from organismal movement; and (iii) ecological drift resulting from stochastic changes in population sizes. The relative importance and nature of these processes can vary across environments. Selection can be homogeneous or variable, and while dispersal is a rate, we conceptualize extreme dispersal rates as two categories; dispersal limitation results from limited exchange of organisms among communities, and homogenizing dispersal results from high levels of organism exchange. To estimate the influence and spatial variation of each process we extend a recently developed statistical framework, use a simulation model to evaluate the accuracy of the extended framework, and use the framework to examine subsurface microbial communities over two geologic formations. For each subsurface community we estimate the degree to which it is influenced by homogeneous selection, variable selection, dispersal limitation, and homogenizing dispersal. Our analyses revealed that the relative influences of these ecological processes vary substantially across communities even within a geologic formation. We further identify environmental and spatial features associated with each ecological process, which allowed mapping of spatial variation in ecological-process-influences. The resulting maps provide a new lens through which ecological systems can be understood; in the subsurface system investigated here they revealed that the influence of variable selection was associated with the rate at which redox conditions change with subsurface depth. PMID:25983725
Cider fermentation process monitoring by Vis-NIR sensor system and chemometrics.
Villar, Alberto; Vadillo, Julen; Santos, Jose I; Gorritxategi, Eneko; Mabe, Jon; Arnaiz, Aitor; Fernández, Luis A
2017-04-15
Optimization of a multivariate calibration process has been undertaken for a Visible-Near Infrared (400-1100 nm) sensor system, applied in the monitoring of the fermentation process of the cider produced in the Basque Country (Spain). The main parameters monitored included alcoholic proof, l-lactic acid content, glucose+fructose content and acetic acid content. The multivariate calibration was carried out using a combination of different variable selection techniques, and the most suitable pre-processing strategies were selected based on the spectral characteristics obtained by the sensor system. The variable selection techniques studied in this work include the Martens Uncertainty test, interval Partial Least Squares regression (iPLS) and the Genetic Algorithm (GA). This procedure arises from the need to improve the calibration models' prediction ability for cider monitoring. Copyright © 2016 Elsevier Ltd. All rights reserved.
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in data and also find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method presented here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
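A minimal sketch of the transformation-plus-best-subset idea: build a pool of transformed predictors and interactions, then score every small subset of the pool with a penalized OLS criterion. The candidate pool, the BIC scoring, and the synthetic data are illustrative assumptions, not the authors' estimator.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 150
x1, x2, x3 = rng.uniform(1, 3, (3, n))
y = 3 * np.log(x1) + x2**2 + 0.1 * rng.normal(size=n)

# candidate pool: original inputs plus simple transforms and one interaction
pool = {"x1": x1, "x2": x2, "x3": x3,
        "log(x1)": np.log(x1), "x2^2": x2**2, "x1*x2": x1 * x2}

def bic(cols):
    """BIC of an OLS fit on the chosen columns (plus intercept)."""
    A = np.column_stack([np.ones(n)] + [pool[c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return n * np.log(r @ r / n) + np.log(n) * (len(cols) + 1)

# exhaustive search over all subsets of size 1 to 3
best = min((c for k in (1, 2, 3)
            for c in itertools.combinations(pool, k)), key=bic)
print(best)
```

Because the transformed columns are themselves ordinary regressors, the search recovers non-linear structure while staying inside the classical least squares framework.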
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, which is based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for the detection of different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models built with the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing predictive performance comparable to GA and SPA.
Klingner, Thomas D; Boeniger, Mark F
2002-05-01
Wearing chemical-resistant gloves and clothing is the primary method used to prevent skin exposure to toxic chemicals in the workplace. The process for selecting gloves is usually based on manufacturers' laboratory-generated chemical permeation data. However, such data may not reflect conditions in the workplace where many variables are encountered (e.g., elevated temperature, flexing, pressure, and product variation between suppliers). Thus, the reliance on this selection process is questionable. Variables that may influence the performance of chemical-resistant gloves are identified and discussed. Passive dermal monitoring is recommended to evaluate glove performance under actual-use conditions and can bridge the gap between laboratory data and real-world performance.
NASA Astrophysics Data System (ADS)
Müller, Aline Lima Hermes; Picoloto, Rochele Sogari; Mello, Paola de Azevedo; Ferrão, Marco Flores; dos Santos, Maria de Fátima Pereira; Guimarães, Regina Célia Lourenço; Müller, Edson Irineu; Flores, Erico Marlon Moraes
2012-04-01
Total sulfur concentration was determined in atmospheric residue (AR) and vacuum residue (VR) samples obtained from the petroleum distillation process by Fourier transform infrared spectroscopy with attenuated total reflectance (FT-IR/ATR) in association with chemometric methods. The calibration and prediction sets consisted of 40 and 20 samples, respectively. Calibration models were developed using two variable selection methods: interval partial least squares (iPLS) and synergy interval partial least squares (siPLS). Different treatments and pre-processing steps were also evaluated for the development of models. The pre-treatment based on multiplicative scatter correction (MSC) and mean-centered data was selected for model construction. The use of siPLS as the variable selection method provided a model with root mean square error of prediction (RMSEP) values significantly better than those obtained by the PLS model using all variables. The best model was obtained using the siPLS algorithm with the spectrum divided into 20 intervals and combinations of 3 intervals (911-824, 823-736 and 737-650 cm-1). This model produced an RMSECV of 400 mg kg-1 S and an RMSEP of 420 mg kg-1 S, showing a correlation coefficient of 0.990.
Mental health courts and their selection processes: modeling variation for consistency.
Wolff, Nancy; Fabrikant, Nicole; Belenko, Steven
2011-10-01
Admission into mental health courts is based on a complicated and often variable decision-making process that involves multiple parties representing different expertise and interests. To the extent that eligibility criteria of mental health courts are more suggestive than deterministic, selection bias can be expected. Very little research has focused on the selection processes underpinning problem-solving courts even though such processes may dominate the performance of these interventions. This article describes a qualitative study designed to deconstruct the selection and admission processes of mental health courts. In this article, we describe a multi-stage, complex process for screening and admitting clients into mental health courts. The selection filtering model that is described has three eligibility screening stages: initial, assessment, and evaluation. The results of this study suggest that clients selected by mental health courts are shaped by the formal and informal selection criteria, as well as by the local treatment system.
Development of an automated energy audit protocol for office buildings
NASA Astrophysics Data System (ADS)
Deb, Chirag
This study aims to enhance the building energy audit process and bring about a reduction in the time and cost required to conduct a full physical audit. For this, a total of 5 Energy Service Companies in Singapore collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques are adopted to analyse these reports, comprising cluster analysis and the development of prediction models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in EUI, a robust iterative process for selecting the appropriate variables is developed. The results show that the 4 variables of GFA, non-air-conditioning energy consumption, average chiller plant efficiency and installed capacity of chillers should be used for clustering. This analysis is extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm is developed to select the input variables for the two energy saving prediction models. The results show that the ANN prediction model can predict the energy saving potential of a given building with an accuracy of +/-14.8%.
Reconfigurable pipelined processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saccardi, R.J.
1989-09-19
This patent describes a reconfigurable pipelined processor for processing data. It comprises: a plurality of memory devices for storing bits of data; a plurality of arithmetic units for performing arithmetic functions with the data; cross bar means for connecting the memory devices with the arithmetic units for transferring data therebetween; at least one counter connected with the cross bar means for providing a source of addresses to the memory devices; at least one variable tick delay device connected with each of the memory devices and arithmetic units; and means for providing control bits to the variable tick delay device for variably controlling the input and output operations thereof to selectively delay the memory devices and arithmetic units to align the data for processing in a selected sequence.
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. 
A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
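The backward elimination scheme the paper evaluates can be sketched as follows: repeatedly fit a random forest, record its out-of-bag accuracy, and drop the least important predictor. The synthetic data with two informative predictors is an illustrative assumption, not the StreamCat analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
n, p = 400, 20
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # condition driven by two predictors

keep = list(range(p))
history = []                              # (n_predictors, OOB accuracy) pairs
while len(keep) > 2:
    rf = RandomForestClassifier(n_estimators=200, oob_score=True,
                                random_state=0).fit(X[:, keep], y)
    history.append((len(keep), rf.oob_score_))
    keep.pop(int(np.argmin(rf.feature_importances_)))  # drop least important
print(keep, history[0][1], history[-1][1])
```

Comparing the first and last OOB accuracies in `history` shows the robustness the authors report: the forest with all 20 predictors scores close to the heavily reduced one, since the many low-importance variables contribute little either way.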
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm, where each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursive block-wise renewal of the samples. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
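A rough sketch of the decomposition steps: affinity propagation clusters the controlled variables on an absolute-correlation similarity matrix, then candidate inputs are screened by correlation with each cluster. The synthetic two-mode data is an assumption, and plain correlation stands in for the paper's canonical correlation analysis.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(6)
n = 300
u, v = rng.normal(size=(2, n))            # two latent process "modes"
noise = lambda: 0.1 * rng.normal(size=n)
# six controlled variables, three driven by each mode
Y = np.column_stack([u + noise(), u + noise(), u + noise(),
                     v + noise(), v + noise(), -v + noise()])

# cluster the controlled variables on absolute-correlation similarity
S = np.abs(np.corrcoef(Y.T))
labels = AffinityPropagation(affinity="precomputed",
                             random_state=0).fit(S).labels_
print(labels)

# screen candidate inputs for the first subsystem by correlation strength
inputs = np.column_stack([u, v, rng.normal(size=n)])
sub_out = Y[:, labels == labels[0]].mean(axis=1)
corr = np.abs([np.corrcoef(inputs[:, j], sub_out)[0, 1] for j in range(3)])
print(int(corr.argmax()))                 # input most tied to this subsystem
```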
Mathematical Model Of Variable-Polarity Plasma Arc Welding
NASA Technical Reports Server (NTRS)
Hung, R. J.
1996-01-01
A mathematical model of the variable-polarity plasma arc (VPPA) welding process was developed for use in predicting the characteristics of welds, and thus serves as a guide for the selection of process parameters. Parameters include the welding electric currents in, and durations of, the straight and reverse polarities; the rates of flow of the plasma and shielding gases; and the sizes and relative positions of the welding electrode, welding orifice, and workpiece.
Constructing Proxy Variables to Measure Adult Learners' Time Management Strategies in LMS
ERIC Educational Resources Information Center
Jo, Il-Hyun; Kim, Dongho; Yoon, Meehyun
2015-01-01
This study describes the process of constructing proxy variables from recorded log data within a Learning Management System (LMS), which represents adult learners' time management strategies in an online course. Based on previous research, three variables of total login time, login frequency, and regularity of login interval were selected as…
Mujalli, Randa Oqab; de Oña, Juan
2011-10-01
This study describes a method for reducing the number of variables frequently considered in modeling the severity of traffic accidents. The method's efficiency is assessed by constructing Bayesian networks (BN). It is based on a two-stage selection process. Several variable selection algorithms, commonly used in data mining, are applied in order to select subsets of variables. BNs are built using the selected subsets and their performance is compared with that of the original BN (with all the variables) using five indicators. The BNs that improve the indicators' values are further analyzed to identify the most significant variables (accident type, age, atmospheric factors, gender, lighting, number of injured, and occupant involved). A new BN is built using these variables, and the indicators show, in most of the cases, a statistically significant improvement with respect to the original BN. It is thus possible to reduce the number of variables used to model traffic accident injury severity through BNs without reducing the performance of the model. The study provides safety analysts with a methodology that minimizes the number of variables needed to efficiently determine the injury severity of traffic accidents without degrading model performance. Copyright © 2011 Elsevier Ltd. All rights reserved.
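The two-stage pattern, a data-mining selector proposing a subset and the model then being rebuilt on that subset and compared on a performance indicator, can be sketched with a generic classifier standing in for the paper's Bayesian networks; the data are synthetic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic "accident" data: 15 variables, only a few informative.
X, y = make_classification(n_samples=400, n_features=15, n_informative=4,
                           random_state=0)

# Baseline model on all variables (stand-in for the original BN).
full = cross_val_score(GaussianNB(), X, y, cv=5).mean()

# Stage 1: a data-mining selector proposes a small variable subset.
X_sub = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)
# Stage 2: rebuild on the subset and compare the performance indicator.
reduced = cross_val_score(GaussianNB(), X_sub, y, cv=5).mean()
print(round(full, 3), round(reduced, 3))
```

The point of the comparison is that the reduced model should perform on par with the full one, mirroring the study's finding that fewer variables need not hurt performance.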
van Manen, Janine; Kamphuis, Jan Henk; Visbach, Geny; Ziegler, Uli; Gerritsen, Ad; Van Rossum, Bert; Rijnierse, Piet; Timman, Reinier; Verheul, Roel
2008-11-01
Treatment selection in clinical practice is a poorly understood, often largely implicit decision process, perhaps especially for patients with personality disorders. This study, therefore, investigated how intake clinicians use information about patient characteristics to select psychotherapeutic treatment for patients with personality disorder. A structured interview with a forced-choice format was administered to 27 experienced intake clinicians working in five specialist mental health care institutes in the Netherlands. Substantial consensus was evident among intake clinicians. The results revealed that none of the presented patient characteristics were deemed relevant for the selection of the suitable treatment setting. The appropriate duration and intensity are selected using severity or personal strength variables. The theoretical orientation is selected using personal strength variables.
The Effects of Age, Years of Experience, and Type of Experience in the Teacher Selection Process
ERIC Educational Resources Information Center
Vail, David Scott
2010-01-01
Paper screening in the pre-selection process of hiring teachers has been the focus in an ongoing series of similar studies starting with Allison in 1981. There have been many independent variables, including, but not limited to, age, gender, ethnic background, years of experience, type of experience, and grade point average, introduced into the…
The Effects of Age, Years of Experience, and Type of Experience in the Teacher Selection Process
ERIC Educational Resources Information Center
Place, A. William; Vail, David S.
2013-01-01
Paper screening in the pre-selection process of hiring teachers has been an established line of research starting with Young and Allison (1982). Administrators were asked to rate hypothetical candidates based on the information provided by the researcher. The dependent variable in several of these studies (e.g. Young & Fox, 2002; Young & Schmidt,…
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-11
Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high-precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between the selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and it is applied to the quantitative prediction of alcohol concentrations in liquor using a NIR sensor. In the experiment, models combining the proposed EBSPA with three kinds of modeling methods are established to test its performance. In addition, the proposed EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method overcomes the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near infrared sensor data is clear, which can effectively reduce the number of variables and improve prediction accuracy.
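A minimal version of the underlying successive projections algorithm (which EBSPA extends with bootstrap ensembles and an evaluation index) can be written in a few lines; the data and selection size here are illustrative:

```python
import numpy as np

def spa(X, n_select, start=0):
    """Successive projections: pick n_select minimally collinear columns of X."""
    X = X.astype(float)
    selected = [start]
    P = X.copy()
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        # Project every column onto the orthogonal complement of the last pick.
        P = P - np.outer(v, v @ P) / (v @ v)
        P[:, selected] = 0.0                 # never re-select a variable
        selected.append(int(np.argmax(np.linalg.norm(P, axis=0))))
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 12))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=30)   # a nearly collinear pair
picks = spa(X, 4)
print(picks)
```

Because column 1 is almost collinear with the starting column 0, its residual after projection is tiny, so SPA avoids it; this is the collinearity-avoidance property the paper builds on.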
NASA Astrophysics Data System (ADS)
Juszczyk, Michał; Leśniak, Agnieszka; Zima, Krzysztof
2013-06-01
Conceptual cost estimation is important for construction projects. Either underestimation or overestimation of the cost of raising a building may lead to the failure of a project. In this paper the authors present an application of multicriteria comparative analysis (MCA) to select the factors influencing residential building construction cost. The aim of the analysis is to identify key factors useful for conceptual cost estimation at the early design stage. Key factors are investigated on the basis of elementary information about the function, form, and structure of the building, and the primary assumptions about the technological and organizational solutions applied in the construction process. These factors are treated as variables of a model whose aim is to make conceptual cost estimation fast and satisfactorily accurate. The analysis comprised three steps: preliminary research, choice of a set of potential variables, and reduction of this set to select the final set of variables. Multicriteria comparative analysis is applied to solve the problem. The analysis performed allowed the selection of a group of factors, defined well enough at the conceptual stage of the design process, to be used as the describing variables of the model.
De la Fuente, Jesus; Zapata, Lucía; Martínez-Vicente, Jose M.; Sander, Paul; Cardelle-Elawar, María
2014-01-01
The present investigation examines how personal self-regulation (presage variable) and regulatory teaching (process variable of teaching) relate to learning approaches, strategies for coping with stress, and self-regulated learning (process variables of learning) and, finally, how they relate to performance and satisfaction with the learning process (product variables). The objective was to clarify the associative and predictive relations between these variables, as contextualized in two different models that use the presage-process-product paradigm (the Biggs and DEDEPRO models). A total of 1101 university students participated in the study. The design was cross-sectional and retrospective with attributional (or selection) variables, using correlations and structural analysis. The results provide consistent and significant empirical evidence for the relationships hypothesized, incorporating variables that are part of and influence the teaching–learning process in Higher Education. Findings confirm the importance of interactive relationships within the teaching–learning process, where personal self-regulation is assumed to take place in connection with regulatory teaching. Variables that are involved in the relationships validated here reinforce the idea that both personal factors and teaching and learning factors should be taken into consideration when dealing with a formal teaching–learning context at university. PMID:25964764
ERIC Educational Resources Information Center
Yao, Engui
1998-01-01
Determines the relationships between ATM (Asynchronous Transfer Mode) adoption and four organizational variables: university size, type, finances, and information-processing maturity. Identifies the current status of ATM adoption in campus networking in the United States. Contains 33 references. (DDR)
Utilizing multiple state variables to improve the dynamic range of analog switching in a memristor
NASA Astrophysics Data System (ADS)
Jeong, YeonJoo; Kim, Sungho; Lu, Wei D.
2015-10-01
Memristors and memristive systems have been extensively studied for data storage and computing applications such as neuromorphic systems. To act as synapses in neuromorphic systems, the memristor needs to exhibit analog resistive switching (RS) behavior with incremental conductance change. In this study, we show that the dynamic range of the analog RS behavior can be significantly enhanced in a tantalum-oxide-based memristor. By controlling different state variables enabled by different physical effects during the RS process, the gradual filament expansion stage can be selectively enhanced without strongly affecting the abrupt filament length growth stage. Detailed physics-based modeling further verified the observed experimental effects and revealed the roles of oxygen vacancy drift and diffusion processes, and how the diffusion process can be selectively enhanced during the filament expansion stage. These findings lead to more desirable and reliable memristor behaviors for analog computing applications. Additionally, the ability to selectively control different internal physical processes demonstrated in the current study provides guidance for continued device optimization of memristor devices in general.
Variable Selection for Regression Models of Percentile Flows
NASA Astrophysics Data System (ADS)
Fouad, G.
2017-12-01
Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. 
Variables suffered from a high degree of multicollinearity, possibly illustrating the co-evolution of climatic and physiographic conditions. Given the ineffectiveness of many variables used here, future work should develop new variables that target specific processes associated with percentile flows.
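Two of the purely predictive selectors compared above (correlation analysis and random forests) can be sketched as simple filters feeding a regression model; the basin characteristics, sample sizes, and target below are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))                 # 20 "basin characteristics"
y = 2 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=300)   # "percentile flow"

def top_k_by_corr(X, y, k):
    """Correlation analysis: keep the k variables most correlated with y."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(r)[-k:]

def top_k_by_forest(X, y, k):
    """Random forests: keep the k variables with highest impurity importance."""
    imp = RandomForestRegressor(n_estimators=100,
                                random_state=0).fit(X, y).feature_importances_
    return np.argsort(imp)[-k:]

for name, cols in [("corr", top_k_by_corr(X, y, 3)),
                   ("forest", top_k_by_forest(X, y, 3))]:
    r2 = cross_val_score(LinearRegression(), X[:, cols], y, cv=5).mean()
    print(name, sorted(int(c) for c in cols), round(r2, 2))
```

Both filters recover the two truly informative variables here, echoing the study's observation that selectors based strictly on predictive power tend to perform similarly.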
Can Geostatistical Models Represent Nature's Variability? An Analysis Using Flume Experiments
NASA Astrophysics Data System (ADS)
Scheidt, C.; Fernandes, A. M.; Paola, C.; Caers, J.
2015-12-01
The lack of understanding of the Earth's geological and physical processes governing sediment deposition renders subsurface modeling subject to large uncertainty. Geostatistics is often used to model uncertainty because of its capability to stochastically generate spatially varying realizations of the subsurface. These methods can generate a range of realizations of a given pattern - but how representative are these of the full natural variability? And how can we identify the minimum set of images that represent this natural variability? Here we use this minimum set to define the geostatistical prior model: a set of training images that represent the range of patterns generated by autogenic variability in the sedimentary environment under study. The proper definition of the prior model is essential in capturing the variability of the depositional patterns. This work starts with a set of overhead images from an experimental basin that showed ongoing autogenic variability. We use the images to analyze the essential characteristics of this suite of patterns. In particular, our goal is to define a prior model (a minimal set of selected training images) such that geostatistical algorithms, when applied to this set, can reproduce the full measured variability. A necessary prerequisite is to define a measure of variability. In this study, we measure variability using a dissimilarity distance between the images. The distance indicates whether two snapshots contain similar depositional patterns. To reproduce the variability in the images, we apply an MPS algorithm to the set of selected snapshots of the sedimentary basin that serve as training images. The training images are chosen from among the initial set by using the distance measure to ensure that only dissimilar images are chosen. Preliminary investigations show that MPS can reproduce fairly accurately the natural variability of the experimental depositional system. 
Furthermore, the selected training images provide process information. They fall into three basic patterns: a channelized end member, a sheet flow end member, and one intermediate case. These represent the continuum between autogenic bypass or erosion, and net deposition.
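The snapshot-selection step, keeping only training images that are mutually dissimilar under a distance measure, can be sketched greedily; here the "images" are random vectors and the Euclidean distance and threshold are placeholder choices, not the study's dissimilarity measure:

```python
import numpy as np

def select_dissimilar(images, min_dist):
    """Greedily keep images whose distance to every kept image is >= min_dist."""
    chosen = [0]
    for i in range(1, len(images)):
        if all(np.linalg.norm(images[i] - images[j]) >= min_dist for j in chosen):
            chosen.append(i)
    return chosen

rng = np.random.default_rng(5)
snaps = rng.normal(size=(50, 64))      # 50 flattened "snapshots"
picked = select_dissimilar(snaps, min_dist=10.0)
print(len(picked))
```

By construction every pair of selected snapshots is at least `min_dist` apart, so the kept set spans the pattern variability without redundant near-duplicates.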
ERIC Educational Resources Information Center
Aydin, Oya Tamtekin; Bayir, Firat
2016-01-01
By examining the relevant literature, many factors can be determined as effecting factors on university choice process. However, existing literature does not fully explore the effect of demographic variables on these factors. This research is aimed at identifying the relationship between university selection criteria and demographic variables,…
NASA Astrophysics Data System (ADS)
Chen, Jie; Brissette, François P.; Lucas-Picher, Philippe
2016-11-01
Given the ever increasing number of climate change simulations being carried out, it has become impractical to use all of them to cover the uncertainty of climate change impacts. Various methods have been proposed to optimally select subsets of a large ensemble of climate simulations for impact studies. However, the behaviour of optimally-selected subsets of climate simulations for climate change impacts is unknown, since the transfer process from climate projections to the impact study world is usually highly non-linear. Consequently, this study investigates the transferability of optimally-selected subsets of climate simulations in the case of hydrological impacts. Two different methods were used for the optimal selection of subsets of climate scenarios, and both were found to be capable of adequately representing the spread of selected climate model variables contained in the original large ensemble. However, in both cases, the optimal subsets had limited transferability to hydrological impacts. To capture a similar variability in the impact model world, many more simulations have to be used than those that are needed to simply cover variability from the climate model variables' perspective. Overall, both optimal subset selection methods were better than random selection when small subsets were selected from a large ensemble for impact studies. However, as the number of selected simulations increased, random selection often performed better than the two optimal methods. To ensure adequate uncertainty coverage, the results of this study imply that selecting as many climate change simulations as possible is the best avenue. Where this was not possible, the two optimal methods were found to perform adequately.
Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L
2013-01-01
Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task-specific brain regions. We have hypothesized that the changes in neural activity observed during increased cholinergic function reflect an increase in neural efficiency that leads to improved task performance. The current study tested this hypothesis by assessing neural efficiency based on cholinergically-mediated effects on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover fMRI study. Following an infusion of physostigmine (1 mg/h) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions was reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Physostigmine administration also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. 
Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus processing regions support the hypothesis that cholinergic augmentation results in enhanced neural efficiency. This article is part of a Special Issue entitled 'Cognitive Enhancers'. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ricciardi, Emiliano; Handjaras, Giacomo; Bernardi, Giulio; Pietrini, Pietro; Furey, Maura L.
2012-01-01
Enhancing cholinergic function improves performance on various cognitive tasks and alters neural responses in task specific brain regions. Previous findings by our group strongly suggested that the changes in neural activity observed during increased cholinergic function may reflect an increase in neural efficiency that leads to improved task performance. The current study was designed to assess the effects of cholinergic enhancement on regional brain connectivity and BOLD signal variability. Nine subjects participated in a double-blind, placebo-controlled crossover functional magnetic resonance imaging (fMRI) study. Following an infusion of physostigmine (1mg/hr) or placebo, echo-planar imaging (EPI) was conducted as participants performed a selective attention task. During the task, two images comprised of superimposed pictures of faces and houses were presented. Subjects were instructed periodically to shift their attention from one stimulus component to the other and to perform a matching task using hand held response buttons. A control condition included phase-scrambled images of superimposed faces and houses that were presented in the same temporal and spatial manner as the attention task; participants were instructed to perform a matching task. Cholinergic enhancement improved performance during the selective attention task, with no change during the control task. Functional connectivity analyses showed that the strength of connectivity between ventral visual processing areas and task-related occipital, parietal and prefrontal regions was reduced significantly during cholinergic enhancement, exclusively during the selective attention task. Cholinergic enhancement also reduced BOLD signal temporal variability relative to placebo throughout temporal and occipital visual processing areas, again during the selective attention task only. 
Together with the observed behavioral improvement, the decreases in connectivity strength throughout task-relevant regions and BOLD variability within stimulus processing regions provide further support to the hypothesis that cholinergic augmentation results in enhanced neural efficiency. PMID:22906685
An Individual Differences Analysis of Double-Aspect Stimulus Perception.
ERIC Educational Resources Information Center
Forsyth, G. Alfred; Huber, R. John
Any theory of information processing must address both what is processed and how that processing takes place. Most studies investigating variables which alter physical dimension utilization have ignored the large individual differences in selective attention or cue utilization. A paradigm was developed using an individual focus on information…
Fluctuating environments, sexual selection and the evolution of flexible mate choice in birds.
Botero, Carlos A; Rubenstein, Dustin R
2012-01-01
Environmentally-induced fluctuation in the form and strength of natural selection can drive the evolution of morphology, physiology, and behavior. Here we test the idea that fluctuating climatic conditions may also influence the process of sexual selection by inducing unexpected reversals in the relative quality or sexual attractiveness of potential breeding partners. Although this phenomenon, known as 'ecological cross-over', has been documented in a variety of species, the extent to which it has driven the evolution of major interspecific differences in reproductive behavior remains unclear. We show that after controlling for potentially influential life history and demographic variables, there are significant positive associations between the variability and predictability of annual climatic cycles and the prevalence of infidelity and divorce within populations of a taxonomically diverse array of socially monogamous birds. Our results are consistent with the hypothesis that environmental factors have shaped the evolution of reproductive flexibility and suggest that in the absence of severe time constraints, secondary mate choice behaviors can help prevent, correct, or minimize the negative consequences of ecological cross-overs. Our findings also illustrate how a basic evolutionary process like sexual selection is susceptible to the increasing variability and unpredictability of climatic conditions resulting from climate change.
ERIC Educational Resources Information Center
Miehle, Caroline
This guide provides an overview of the language process, with sections focusing on the physiological aspects (organization of the brain), language development, environmental variables, cognition, language deficits and evaluation, language remediation, and implications of the reading process. Appendixes provide selected listings of developmental…
Fontana, Silvia Alicia; Raimondi, Waldina; Rizzo, María Laura
2014-09-05
Sleep quality refers not only to sleeping well at night but also to appropriate daytime functioning. Poor quality of sleep can affect a variety of attention processes. The aim of this investigation was to evaluate the relationship between perceived sleep quality and selective attention in a group of college students. A descriptive cross-sectional study was carried out in a group of 52 Argentinian college students of the Universidad Adventista del Plata. The Pittsburgh Sleep Quality Index, the Continuous Performance Test and the Trail Making Test were applied. The main results indicate that students sleep an average of 6.48 hours. About half of the population tested had a good quality of sleep; however, the dispersion seen in some components demonstrates the heterogeneity of the sample in these variables. The attention processes evaluated showed different levels of alteration in the total sample: greater alteration was detected in the selective-attention and divided-attention processes, and a lower percentage of alteration was observed in the attention-support process. Poor quality of sleep has more impact on the subprocesses with greater participation of corticocortical circuits (selective and divided attention) and greater involvement of the prefrontal cortex. Fewer difficulties were found in the attention-support processes, which rely on subcortical regions and have less frontal involvement.
Rahman, Ziyaur; Xu, Xiaoming; Katragadda, Usha; Krishnaiah, Yellela S R; Yu, Lawrence; Khan, Mansoor A
2014-03-03
Restasis is an ophthalmic cyclosporine emulsion used for the treatment of dry eye syndrome. There are no generic products for this product, probably because of the limitations on establishing in vivo bioequivalence methods and lack of alternative in vitro bioequivalence testing methods. The present investigation was carried out to understand and identify the appropriate in vitro methods that can discriminate the effect of formulation and process variables on critical quality attributes (CQA) of cyclosporine microemulsion formulations having the same qualitative (Q1) and quantitative (Q2) composition as that of Restasis. Quality by design (QbD) approach was used to understand the effect of formulation and process variables on critical quality attributes (CQA) of cyclosporine microemulsion. The formulation variables chosen were mixing order method, phase volume ratio, and pH adjustment method, while the process variables were temperature of primary and raw emulsion formation, microfluidizer pressure, and number of pressure cycles. The responses selected were particle size, turbidity, zeta potential, viscosity, osmolality, surface tension, contact angle, pH, and drug diffusion. The selected independent variables showed statistically significant (p < 0.05) effect on droplet size, zeta potential, viscosity, turbidity, and osmolality. However, the surface tension, contact angle, pH, and drug diffusion were not significantly affected by independent variables. In summary, in vitro methods can detect formulation and manufacturing changes and would thus be important for quality control or sameness of cyclosporine ophthalmic products.
USDA-ARS?s Scientific Manuscript database
The objective was to quantify the effect of marketing group (MG) on the variability of primal quality. Pigs (N=7,684) were slaughtered in 3 MGs from 8 barns. Pigs were from genetic selection programs focused on lean growth (L; group 1 n=1,131; group 2 n=1,466; group 3 n=1,030) or superior meat qua...
Region-to-area screening methodology for the Crystalline Repository Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1985-04-01
The purpose of this document is to describe the Crystalline Repository Project's (CRP) process for region-to-area screening of exposed and near-surface crystalline rock bodies in the three regions of the conterminous United States where crystalline rock is being evaluated as a potential host for the second nuclear waste repository (i.e., in the North Central, Northeastern, and Southeastern Regions). This document indicates how the US Department of Energy's (DOE) General Guidelines for the Recommendation of Sites for Nuclear Waste Repositories (10 CFR 960) were used to select and apply factors and variables for the region-to-area screening, explains how these factors and variables are to be applied in the region-to-area screening, and indicates how this methodology relates to the decision process leading to the selection of candidate areas. A brief general discussion of the screening process, from the national survey through area screening and site recommendation, is presented. This discussion sets the scene for the detailed discussions which follow concerning the region-to-area screening process, the guidance provided by the DOE Siting Guidelines for establishing disqualifying factors and variables for screening, and the application of the disqualifying factors and variables in the screening process. This document is complementary to the regional geologic and environmental characterization reports to be issued in the summer of 1985 as final documents. These reports will contain the geologic and environmental data base that will be used in conjunction with the methodology to conduct region-to-area screening.
[Development and validation of quality standards for colonoscopy].
Sánchez Del Río, Antonio; Baudet, Juan Salvador; Naranjo Rodríguez, Antonio; Campo Fernández de Los Ríos, Rafael; Salces Franco, Inmaculada; Aparicio Tormo, Jose Ramón; Sánchez Muñoz, Diego; Llach, Joseph; Hervás Molina, Antonio; Parra-Blanco, Adolfo; Díaz Acosta, Juan Antonio
2010-01-30
Before starting programs for colorectal cancer screening it is necessary to evaluate the quality of colonoscopy. Our objectives were to develop a group of easily applicable quality indicators for colonoscopy and to determine the variability of their achievement. After reviewing the literature we prepared 21 potential quality indicators, which were submitted to a selection process in which we measured their face validity, content validity, reliability and viability of measurement. We estimated the variability of their achievement by means of the coefficient of variation (CV) and the variability of the achievement of the standards by means of the chi-squared test. Six indicators passed the selection process: informed consent, medication administered, completed colonoscopy, complications, every polyp removed and recovered, and adenoma detection rate in patients older than 50 years. A total of 1928 colonoscopies from eight endoscopy units were included. Every unit contributed the same number of colonoscopies, selected by means of simple random sampling with substitution. There was important variability in the achievement of some indicators and standards: medication administered (CV 43%, p<0.01), complications registered (CV 37%, p<0.01), every polyp removed and recovered (CV 12%, p<0.01) and adenoma detection rate in patients older than fifty years (CV 2%, p<0.01). We have validated six easily measurable quality indicators for colonoscopy. Important variability exists in the achievement of some indicators and standards. Our data highlight the importance of developing continuous quality improvement programmes for colonoscopy before starting colorectal cancer screening. Copyright (c) 2009 Elsevier España, S.L. All rights reserved.
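The coefficient-of-variation screen used to quantify between-unit variability is straightforward to reproduce; the per-unit achievement rates below are invented for illustration:

```python
import numpy as np

# Hypothetical per-unit achievement rates for one indicator across 8 units.
rates = np.array([0.95, 0.40, 0.88, 0.91, 0.35, 0.90, 0.60, 0.85])
cv = rates.std(ddof=1) / rates.mean() * 100   # coefficient of variation, in %
print(f"CV = {cv:.0f}%")
```

A large CV, as here, flags an indicator whose achievement is heterogeneous across units and therefore a candidate for quality-improvement effort.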
Crew Interface Analysis: Selected Articles on Space Human Factors Research, 1987 - 1991
1993-07-01
recognitions to that distractor) suggest that the perceptual type of the graph has a strong representation in memory. We found that both training with... processing strategy. If my goal were to compare the value of variables or (possibly) to compare a trend, I would select a perceptual strategy. If... be needed to determine specific processing models for different questions using the perceptual strategy. In addition, predictions about the memory
Conjoint Analysis: A Study of the Effects of Using Person Variables.
ERIC Educational Resources Information Center
Fraas, John W.; Newman, Isadore
Three statistical techniques--conjoint analysis, a multiple linear regression model, and a multiple linear regression model with a surrogate person variable--were used to estimate the relative importance of five university attributes for students in the process of selecting a college. The five attributes include: availability and variety of…
Simulating tracer transport in variably saturated soils and shallow groundwater
USDA-ARS?s Scientific Manuscript database
The objective of this study was to develop a realistic model to simulate the complex processes of flow and tracer transport in variably saturated soils and to compare simulation results with the detailed monitoring observations. The USDA-ARS OPE3 field site was selected for the case study due to ava...
Role of environmental variability in the evolution of life history strategies.
Hastings, A; Caswell, H
1979-09-01
We reexamine the role of environmental variability in the evolution of life history strategies. We show that normally distributed deviations in the quality of the environment should lead to normally distributed deviations in the logarithm of year-to-year survival probabilities, which leads to interesting consequences for the evolution of annual and perennial strategies and reproductive effort. We also examine the effects of using differing criteria to determine the outcome of selection. Some predictions of previous theory are reversed, allowing distinctions between r and K theory and a theory based on variability. However, these distinctions require information about both the environment and the selection process not required by current theory.
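The premise above, that normally distributed deviations in environmental quality yield normally distributed deviations in log year-to-year survival, can be illustrated with a small simulation. This is a generic sketch of the idea, not the authors' model: the long-run (geometric-mean) growth criterion depends only on the mean of log survival, while arithmetic-mean survival is inflated by variance, so different selection criteria can rank strategies differently.

```python
import math
import random

random.seed(42)

def long_run_growth(mean_log_s, sigma, years=50000):
    """Average log growth when log year-to-year survival is
    Normal(mean_log_s, sigma); environmental variability enters
    only through sigma."""
    return sum(random.gauss(mean_log_s, sigma) for _ in range(years)) / years

# same median survival (0.5), different environmental variability
stable = long_run_growth(math.log(0.5), sigma=0.1)
variable = long_run_growth(math.log(0.5), sigma=1.0)

# Both estimates approach log(0.5): the geometric-mean criterion is
# insensitive to sigma.  The arithmetic-mean survival, by contrast,
# is E[s] = exp(mean_log_s + sigma**2 / 2), which grows with sigma,
# so the two criteria can disagree about which strategy wins.
```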
An Evaluation of the Air Force Institute of Technology Student Selection Criteria
1989-09-01
total score on the GMAT; TOEFL: student's score on the Test of English as a Foreign Language; • denotes an indicator variable. Criterion Variable: The... students, scores on the TOEFL. Any other possible predictors were not chosen because they are not used in the student selection process and therefore would... GMATQ 722 32.66 6.46 11.00 54.00; GMATT 731 537.07 68.84 275.00 740.00; TOEFL 59 521.46 128.87 80.00 780.00. * Effective October 1, 1981, the maximum
Cheng, Weiwei; Sun, Da-Wen; Pu, Hongbin; Wei, Qingyi
2017-04-15
The feasibility of hyperspectral imaging (HSI) (400-1000 nm) for tracing the chemical spoilage extent of the raw meat used for two kinds of processed meats was investigated. Calibration models established separately for salted and cooked meats using the full wavebands showed good results, with coefficients of determination in prediction (R(2)P) of 0.887 and 0.832, respectively. To simplify the calibration models, two variable selection methods were used and compared. The results showed that genetic algorithm-partial least squares (GA-PLS), with as many continuous wavebands selected as possible, always had better performance. The potential of HSI to develop one multispectral system for simultaneously tracing the chemical spoilage extent of the two kinds of processed meats was also studied. A good result with an R(2)P of 0.854 was obtained using GA-PLS as the dimension reduction method, which was then used to visualize total volatile base nitrogen (TVB-N) contents corresponding to each pixel of the image. Copyright © 2016 Elsevier Ltd. All rights reserved.
Roopa, N; Chauhan, O P; Raju, P S; Das Gupta, D K; Singh, R K R; Bawa, A S
2014-10-01
An osmotic-dehydration process protocol for carambola (Averrhoa carambola L.), an exotic star-shaped tropical fruit, was developed. The process was optimized using Response Surface Methodology (RSM) following a Central Composite Rotatable Design (CCRD). The experimental variables selected for the optimization were soak solution concentration (°Brix), soaking temperature (°C) and soaking time (min), with six experiments at the central point. The effect of the process variables on solid gain and water loss during osmotic dehydration was studied. The data obtained were analyzed employing multiple regression techniques to generate suitable mathematical models. Quadratic models were found to fit well (R(2) = 95.58-98.64%) in describing the effect of the variables on the responses studied. The optimized levels of the process variables were 70°Brix, 48 °C and 144 min for soak solution concentration, soaking temperature and soaking time, respectively. The predicted and experimental results at the optimized levels of the variables showed high correlation. The osmo-dehydrated product prepared under the optimized conditions showed a shelf life of 10, 8 and 6 months at 5 °C, ambient (30 ± 2 °C) and 37 °C, respectively.
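The RSM step described here amounts to fitting a full second-order polynomial in the three coded factors by least squares. A generic sketch with simulated data; the coefficients and design below are made up for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def quadratic_design(X):
    """Second-order RSM model matrix for three coded factors:
    intercept, linear, squared, and two-way interaction terms."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
        x1 * x2, x1 * x3, x2 * x3,
    ])

# coded settings for concentration, temperature and time (made-up design)
X = rng.uniform(-1, 1, size=(30, 3))
true_beta = np.array([5.0, 1.2, -0.8, 0.5, -0.6, -0.4, -0.2, 0.3, 0.0, 0.1])
y = quadratic_design(X) @ true_beta + rng.normal(0, 0.05, 30)

# least-squares fit of the quadratic response surface
beta_hat, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
```

The fitted surface is then maximized (analytically or over a grid) to locate the optimum factor settings, the analogue of the 70°Brix / 48 °C / 144 min optimum reported above.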
Relating brain signal variability to knowledge representation.
Heisz, Jennifer J; Shedden, Judith M; McIntosh, Anthony R
2012-11-15
We assessed the hypothesis that brain signal variability is a reflection of functional network reconfiguration during memory processing. In the present experiments, we use multiscale entropy to capture the variability of human electroencephalogram (EEG) while manipulating the knowledge representation associated with faces stored in memory. Across two experiments, we observed increased variability as a function of greater knowledge representation. In Experiment 1, individuals with greater familiarity for a group of famous faces displayed more brain signal variability. In Experiment 2, brain signal variability increased with learning after multiple experimental exposures to previously unfamiliar faces. The results demonstrate that variability increases with face familiarity; cognitive processes during the perception of familiar stimuli may engage a broader network of regions, which manifests as higher complexity/variability in spatial and temporal domains. In addition, effects of repetition suppression on brain signal variability were observed, and the pattern of results is consistent with a selectivity model of neural adaptation. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
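Multiscale entropy, as used in these experiments, computes sample entropy on progressively coarse-grained versions of the signal. A simplified, self-contained sketch; a real EEG analysis would use many more scales and a validated implementation:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Simplified SampEn: negative log of the conditional probability
    that subsequences matching for m points (within r * SD) also
    match for m + 1 points."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def matches(length):
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= tol:
                    count += 1
        return count

    b, a = matches(m), matches(m + 1)
    return math.log(b / a) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3)):
    """Coarse-grain the signal at each scale, then compute SampEn."""
    result = []
    for s in scales:
        grained = [sum(x[i:i + s]) / s for i in range(0, len(x) - s + 1, s)]
        result.append(sample_entropy(grained))
    return result
```

Higher values at a given scale indicate a less predictable signal; the experiments above interpret increases across scales as a signature of broader functional network engagement.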
Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ampomah, William; Balch, Robert; Will, Robert
2017-07-01
This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model, comprising Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio, among others, were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU.
The optimization process predicted more than 94% CO2 storage and, most importantly, about 28% incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.
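Latin hypercube sampling, one component of the uncertainty quantification model described above, stratifies each uncertain variable into equal-probability slices and draws exactly one sample per slice. A minimal sketch; the variable ranges below are placeholders, not values from the study:

```python
import random

random.seed(7)

def latin_hypercube(n_samples, bounds):
    """Draw one stratified sample per equal-probability slice of each
    variable, shuffling the slice order independently per dimension."""
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        slices = list(range(n_samples))
        random.shuffle(slices)
        for i, s in enumerate(slices):
            u = (s + random.random()) / n_samples  # point inside slice s
            samples[i][d] = lo + u * (hi - lo)
    return samples

# placeholder ranges for three uncertain variables, e.g. bottom hole
# injection pressure, WAG cycle length, and Kv/Kh anisotropy
bounds = [(3000.0, 5000.0), (1.0, 12.0), (0.01, 0.5)]
design = latin_hypercube(20, bounds)
```

Compared with plain Monte Carlo, every marginal distribution is covered evenly, so fewer reservoir simulations are needed for a stable sensitivity analysis.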
Craig, Marlies H; Sharp, Brian L; Mabaso, Musawenkosi LH; Kleinschmidt, Immo
2007-01-01
Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme, were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. 
Conclusion We have produced a highly plausible and parsimonious model of historical malaria risk for Botswana from point-referenced data from a 1961/2 prevalence survey of malaria infection in children aged 1–14 years. Starting from a list of 50 potential variables, we arrived at three highly plausible predictors by applying a systematic and repeatable staged variable selection procedure that included a spatial analysis; this procedure has application for other environmentally determined infectious diseases. All of this was accomplished using general-purpose statistical software. PMID:17892584
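The bootstrap step-wise ranking described above can be sketched generically: re-run a forward selection procedure on bootstrap resamples and rank candidates by how often they are selected. The sketch below uses AIC-based forward selection for ordinary least squares on simulated data; it stands in for, and is not identical to, the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_select(X, y, max_vars=3):
    """Greedy forward selection by AIC for ordinary least squares."""
    n, p = X.shape
    chosen, best_aic = [], np.inf  # first pick is always accepted
    while len(chosen) < max_vars:
        candidates = []
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n), X[:, chosen + [j]]])
            resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
            rss = float(resid @ resid)
            aic = n * np.log(rss / n) + 2 * (len(chosen) + 2)
            candidates.append((aic, j))
        aic, j = min(candidates)
        if aic >= best_aic:
            break  # no candidate improves AIC
        best_aic, chosen = aic, chosen + [j]
    return chosen

def selection_frequencies(X, y, n_boot=100):
    """Rank candidates by how often forward selection picks them
    across bootstrap resamples of the data."""
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        for j in forward_select(X[idx], y[idx]):
            freq[j] += 1
    return freq / n_boot

# simulated data: only predictors 0 and 1 carry signal (a stand-in
# for rainfall and temperature among many candidate covariates)
X = rng.normal(size=(120, 8))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.5, 120)
freq = selection_frequencies(X, y)
```

Truly informative predictors are selected in nearly every resample, while spuriously correlated ones appear only sporadically, which is exactly the property the staged procedure above exploits.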
A Socialization Perspective on Selected Consumer Characteristics of the Elderly.
ERIC Educational Resources Information Center
Smith, Ruth Belk; Moschis, George P.
1985-01-01
Examines the effects of selected antecedent variables and communication processes on the consumer behavior of the elderly. Results suggest that the mass media and the family may be instrumental in reinforcing or developing traditional sex-role stereotypes among the elderly, whereas consumer education may help them filter puffery in advertisements.…
A Case Study Analysis of Middle School Principals' Teacher Selection Criteria
ERIC Educational Resources Information Center
Woodburn, Jane Lai
2012-01-01
Is the hiring of middle school teachers who positively impact student achievement a process of teacher selection (at schools with low teacher turnover) or of teacher attraction (at schools with high teacher turnover)? Since research indicates that the most important variable influencing student achievement is having a highly…
Lecours, Vincent; Brown, Craig J; Devillers, Rodolphe; Lucieer, Vanessa L; Edinger, Evan N
2016-01-01
Selecting appropriate environmental variables is a key step in ecology. Terrain attributes (e.g. slope, rugosity) are routinely used as abiotic surrogates of species distribution and to produce habitat maps that can be used in decision-making for conservation or management. Selecting appropriate terrain attributes for ecological studies can be challenging and may lead users to a subjective, potentially sub-optimal combination of attributes for their applications. The objective of this paper is to assess the impacts of subjectively selecting terrain attributes for ecological applications by comparing the performance of different combinations of terrain attributes in the production of habitat maps and species distribution models. Seven different selections of terrain attributes, alone or in combination with other environmental variables, were used to map benthic habitats of German Bank (off Nova Scotia, Canada). Twenty-nine maps of potential habitats based on unsupervised classifications of biophysical characteristics of German Bank were produced, and 29 species distribution models of sea scallops were generated using MaxEnt. The performances of the 58 maps were quantified and compared to evaluate the effectiveness of the various combinations of environmental variables. One of the combinations of terrain attributes, recommended in a related study and comprising a measure of relative position, slope, two measures of orientation, topographic mean and a measure of rugosity, yielded better results than the other selections for both methodologies, confirming that together they best describe terrain properties. Important differences in performance (up to 47% in accuracy measurement) and spatial outputs (up to 58% in spatial distribution of habitats) highlighted the importance of carefully selecting variables for ecological applications.
This paper demonstrates that making a subjective choice of variables may reduce map accuracy and produce maps that do not adequately represent habitats and species distributions, thus having important implications when these maps are used for decision-making.
Instrument Selection for Randomized Controlled Trials Why This and Not That?
Records, Kathie; Keller, Colleen; Ainsworth, Barbara; Permana, Paska
2011-01-01
A fundamental linchpin for obtaining rigorous findings in quantitative research involves the selection of survey instruments. Psychometric recommendations are available for the processes for scale development and testing and guidance for selection of established scales. These processes are necessary to address the validity link between the phenomena under investigation, the empirical measures and, ultimately, the theoretical ties between these and the world views of the participants. Detailed information is most often provided about study design and protocols, but far less frequently is a detailed theoretical explanation provided for why specific instruments are chosen. Guidance to inform choices is often difficult to find when scales are needed for specific cultural, ethnic, or racial groups. This paper details the rationale underlying instrument selection for measurement of the major processes (intervention, mediator and moderator variables, outcome variables) in an ongoing study of postpartum Latinas, Madres para la Salud [Mothers for Health]. The rationale underpinning our choices includes a discussion of alternatives, when appropriate. These exemplars may provide direction for other intervention researchers who are working with specific cultural, racial, or ethnic groups or for other investigators who are seeking to select the ‘best’ instrument. Thoughtful consideration of measurement and articulation of the rationale underlying our choices facilitates the maintenance of rigor within the study design and improves our ability to assess study outcomes. PMID:21986392
Every Equation Tells a Story: Using Equation Dictionaries in Introductory Geophysics
ERIC Educational Resources Information Center
Caplan-Auerbach, Jacqueline
2009-01-01
Many students view equations as a series of variables and operators into which numbers should be plugged rather than as representative of a physical process. To solve a problem they may simply look for an equation with the correct variables and assume it meets their needs, rather than selecting an equation that represents the appropriate physical…
ERIC Educational Resources Information Center
Benfari, Robert C.; Eaker, Elaine
1984-01-01
Studied male smokers (N=182) at high risk of coronary heart disease to determine variables that discriminated between successful and nonsuccessful quitters. Analysis revealed that baseline level of smoking, life events, personal security, and selected group process variables were predictive of success or failure in the intervention program.…
Baldissera, Ronei; Rodrigues, Everton N L; Hartz, Sandra M
2012-01-01
The distribution of beta diversity is shaped by factors linked to environmental and spatial control. The relative importance of both processes in structuring spider metacommunities has not yet been investigated in the Atlantic Forest. The variance explained by purely environmental, spatially structured environmental, and purely spatial components was compared for a metacommunity of web spiders. The study was carried out in 16 patches of Atlantic Forest in southern Brazil. Field work was done in one landscape mosaic representing a slight gradient of urbanization. Environmental variables encompassed plot- and patch-level measurements and a climatic matrix, while principal coordinates of neighbor matrices (PCNMs) acted as spatial variables. A forward selection procedure was carried out to select environmental and spatial variables influencing web-spider beta diversity. Variation partitioning was used to estimate the contribution of pure environmental and pure spatial effects and their shared influence on beta-diversity patterns, and to estimate the relative importance of selected environmental variables. Three environmental variables (bush density, land use in the surroundings of patches, and shape of patches) and two spatial variables were selected by forward selection procedures. Variation partitioning revealed that 15% of the variation of beta diversity was explained by a combination of environmental and PCNM variables. Most of this variation (12%) corresponded to pure environmental and spatially environmental structure. The data indicated that (1) spatial legacy was not important in explaining the web-spider beta diversity; (2) environmental predictors explained a significant portion of the variation in web-spider composition; (3) one-third of environmental variation was due to a spatial structure that jointly explains variation in species distributions. 
We were able to detect important factors related to matrix management influencing the web-spider beta-diversity patterns, which are probably linked to historical deforestation events.
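The variation partitioning used in the spider study decomposes explained variation into pure environmental, shared (spatially structured environmental), and pure spatial fractions from three regression fits. A simplified single-response sketch with simulated data; the real analysis partitions multivariate community composition:

```python
import numpy as np

rng = np.random.default_rng(11)

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    tss = float((y - y.mean()) @ (y - y.mean()))
    return 1.0 - float(resid @ resid) / tss

def variation_partition(E, S, y):
    """Classic [a | b | c] decomposition: a = pure environmental,
    b = spatially structured environmental (shared), c = pure spatial."""
    r2_e = r_squared(E, y)
    r2_s = r_squared(S, y)
    r2_es = r_squared(np.column_stack([E, S]), y)
    a = r2_es - r2_s
    c = r2_es - r2_e
    b = r2_e + r2_s - r2_es
    return a, b, c

# toy data: one spatial gradient (a PCNM stand-in) and one
# environmental variable that partly tracks it
S = rng.normal(size=(100, 1))
E = 0.7 * S + 0.3 * rng.normal(size=(100, 1))
y = E[:, 0] + rng.normal(0, 0.5, 100)
a, b, c = variation_partition(E, S, y)
```

In this toy setup most explained variation lands in the shared fraction b, mirroring the study's finding that much of the environmental signal is itself spatially structured.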
Gaia DR1 documentation Chapter 6: Variability
NASA Astrophysics Data System (ADS)
Eyer, L.; Rimoldini, L.; Guy, L.; Holl, B.; Clementini, G.; Cuypers, J.; Mowlavi, N.; Lecoeur-Taïbi, I.; De Ridder, J.; Charnas, J.; Nienartowicz, K.
2017-12-01
This chapter describes the photometric variability processing of the Gaia DR1 data. Coordination Unit 7 is responsible for the variability analysis of over a billion celestial sources; in particular, for the definition, design, development, validation and provision of a software package for the data processing of photometrically variable objects. Data Processing Centre Geneva (DPCG) responsibilities cover all issues related to the computational part of the CU7 analysis. These span hardware provisioning, including selection, deployment and optimisation of suitable hardware; choosing and developing the software architecture; defining data and scientific workflows; and operational activities such as configuration management, data import, time series reconstruction, storage and processing handling, visualisation and data export. CU7/DPCG is also responsible for interaction with other DPCs and CUs, software and programming training for the CU7 members, scientific software quality control, and management of the software and data lifecycle. Details about the specific data treatment steps of the Gaia DR1 data products are found in Eyer et al. (2017) and are not repeated here. The variability content of Gaia DR1 focusses on a subsample of Cepheids and RR Lyrae stars around the South ecliptic pole, showcasing the performance of the Gaia photometry for variable objects.
Nixtamalized flour from quality protein maize (Zea mays L). optimization of alkaline processing.
Milán-Carrillo, J; Gutiérrez-Dorado, R; Cuevas-Rodríguez, E O; Garzón-Tiznado, J A; Reyes-Moreno, C
2004-01-01
The quality of maize proteins is poor: they are deficient in the essential amino acids lysine and tryptophan. Recently, 26 new nutritionally improved hybrids and cultivars, called quality protein maize (QPM), which contain greater amounts of lysine and tryptophan, were successfully developed in Mexico. Alkaline cooking of maize with lime (nixtamalization) is the first step for producing several maize products (masa, tortillas, flours, snacks). Processors adjust nixtamalization variables based on experience. The objective of this work was to determine the best combination of nixtamalization process variables for producing nixtamalized maize flour (NMF) from the QPM V-537 variety. Nixtamalization conditions were selected from factorial combinations of process variables: nixtamalization time (NT, 20-85 min), lime concentration (LC, 3.3-6.7 g Ca(OH)2/l, in distilled water), and steep time (ST, 8-16 hours). The nixtamalization temperature and ratio of grain to cooking medium were 85 degrees C and 1:3 (w/v), respectively. At the end of each cooking treatment, steeping started for the required time. Steeping was finished by draining the cooking liquor (nejayote). Nixtamal (alkaline-cooked maize kernels) was washed with running tap water. Wet nixtamal was dried (24 hours, 55 degrees C) and milled to pass through an 80-US mesh screen to obtain NMF. Response surface methodology (RSM) was applied as the optimization technique over four response variables: in vitro protein digestibility (PD), total color difference (deltaE), water absorption index (WAI), and pH. Predictive models for the response variables were developed as functions of the process variables. A conventional graphical method was applied to obtain maximum PD and WAI and minimum deltaE and pH.
Contour plots of each response variable were superimposed (superposition surface methodology) to observe and select the best combination of NT (31 min), LC (5.4 g Ca(OH)2/l), and ST (8.1 hours) for producing optimized NMF from QPM.
Boosted structured additive regression for Escherichia coli fed-batch fermentation modeling.
Melcher, Michael; Scharl, Theresa; Luchner, Markus; Striedner, Gerald; Leisch, Friedrich
2017-02-01
The quality of biopharmaceuticals and patients' safety are of the highest priority, and there are tremendous efforts to replace empirical production process designs by knowledge-based approaches. The main challenge in this context is that real-time access to process variables related to product quality and quantity is severely limited. To date, comprehensive on- and offline monitoring platforms are used to generate process data sets that allow for development of mechanistic and/or data-driven models for real-time prediction of these important quantities. The ultimate goal is to implement model-based feedback control loops that facilitate online control of product quality. In this contribution, we explore structured additive regression (STAR) models in combination with boosting as a variable selection tool for modeling the cell dry mass, product concentration, and optical density on the basis of online available process variables and two-dimensional fluorescence spectroscopic data. STAR models are powerful extensions of linear models allowing for inclusion of smooth effects or interactions between predictors. Boosting constructs the final model in a stepwise manner and provides a variable importance measure via predictor selection frequencies. Our results show that the cell dry mass can be modeled with a relative error of about ±3%, the optical density with ±6%, the soluble protein with ±16%, and the insoluble product with an accuracy of ±12%. Biotechnol. Bioeng. 2017;114: 321-334. © 2016 Wiley Periodicals, Inc.
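Componentwise boosting as a variable selection tool works roughly as sketched below: each iteration fits every candidate predictor alone to the current residuals, keeps the best one with a shrunken update, and the per-predictor selection counts serve as the importance measure. A toy L2-boosting illustration on simulated, centered linear predictors; the actual study boosts structured additive (smooth) model terms:

```python
import numpy as np

rng = np.random.default_rng(5)

def l2_boost(X, y, n_iter=100, nu=0.1):
    """Componentwise L2-boosting on centered predictors: each step
    fits every single predictor to the current residuals, keeps the
    best one, and applies a shrunken coefficient update.  Selection
    counts per predictor act as an importance measure."""
    n, p = X.shape
    coef = np.zeros(p)
    counts = np.zeros(p)
    resid = y - y.mean()
    norms = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        betas = X.T @ resid / norms                 # per-variable OLS fits
        rss = ((resid[:, None] - X * betas) ** 2).sum(axis=0)
        j = int(np.argmin(rss))                     # best single predictor
        coef[j] += nu * betas[j]
        resid = resid - nu * betas[j] * X[:, j]
        counts[j] += 1
    return coef, counts / n_iter

# toy analogue: an outcome like cell dry mass driven by two of ten
# online process signals
X = rng.normal(size=(150, 10))
X = X - X.mean(axis=0)                              # center predictors
y = 1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.3, 150)
coef, freq = l2_boost(X, y)
```

The shrinkage factor nu slows the fit so that informative predictors accumulate many selections before noise variables are touched, which is what makes the selection frequencies interpretable.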
Cavallo, Jaime A.; Roma, Andres A.; Jasielec, Mateusz S.; Ousley, Jenny; Creamer, Jennifer; Pichert, Matthew D.; Baalman, Sara; Frisella, Margaret M.; Matthews, Brent D.
2014-01-01
Background The purpose of this study was to evaluate the associations between patient characteristics or surgical site classifications and the histologic remodeling scores of synthetic meshes biopsied from their abdominal wall repair sites in the first attempt to generate a multivariable risk prediction model of non-constructive remodeling. Methods Biopsies of the synthetic meshes were obtained from the abdominal wall repair sites of 51 patients during a subsequent abdominal re-exploration. Biopsies were stained with hematoxylin and eosin, and evaluated according to a semi-quantitative scoring system for remodeling characteristics (cell infiltration, cell types, extracellular matrix deposition, inflammation, fibrous encapsulation, and neovascularization) and a mean composite score (CR). Biopsies were also stained with Sirius Red and Fast Green, and analyzed to determine the collagen I:III ratio. Based on univariate analyses between subject clinical characteristics or surgical site classification and the histologic remodeling scores, cohort variables were selected for multivariable regression models using a threshold p value of ≤0.200. Results The model selection process for the extracellular matrix score yielded two variables: subject age at time of mesh implantation, and mesh classification (c-statistic = 0.842). For CR score, the model selection process yielded two variables: subject age at time of mesh implantation and mesh classification (r2 = 0.464). The model selection process for the collagen III area yielded a model with two variables: subject body mass index at time of mesh explantation and pack-year history (r2 = 0.244). Conclusion Host characteristics and surgical site assessments may predict degree of remodeling for synthetic meshes used to reinforce abdominal wall repair sites. 
These preliminary results constitute the first steps in generating a risk prediction model that predicts the patients and clinical circumstances for which non-constructive remodeling of an abdominal wall repair site with synthetic mesh reinforcement is most likely to occur. PMID:24442681
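The univariate screening step used above (threshold p ≤ 0.200) can be sketched generically: regress the outcome on each candidate alone and keep those whose slope clears the threshold. The sketch below uses a normal approximation to the t-test and simulated data; it is an illustration of the screening idea, not the authors' analysis:

```python
import math

import numpy as np

rng = np.random.default_rng(9)

def univariate_screen(X, y, p_threshold=0.20):
    """Keep candidates whose single-variable OLS slope has a two-sided
    p-value <= threshold, using a normal approximation to the t-test
    (adequate at the moderate sample sizes assumed here)."""
    n, p = X.shape
    keep = []
    for j in range(p):
        A = np.column_stack([np.ones(n), X[:, j]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        s2 = float(resid @ resid) / (n - 2)
        se = math.sqrt(s2 * np.linalg.inv(A.T @ A)[1, 1])
        z = abs(beta[1]) / se
        pval = math.erfc(z / math.sqrt(2))  # 2 * (1 - Phi(z))
        if pval <= p_threshold:
            keep.append(j)
    return keep

# simulated cohort: the outcome depends on candidate variables 0 and 2
X = rng.normal(size=(80, 6))
y = 1.5 * X[:, 0] + 1.0 * X[:, 2] + rng.normal(0, 0.5, 80)
selected = univariate_screen(X, y)
```

The survivors then enter a multivariable model, as in the mesh-remodeling analysis; the deliberately loose 0.200 threshold keeps weak candidates in play for the joint fit.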
ERIC Educational Resources Information Center
Tadlock, James; Nesbit, Lamar
The Jackson Municipal Separate School District, Mississippi, has instituted a mixed-criteria reduction-in-force procedure emphasizing classroom performance to a greater degree than seniority, certification, and staff development participation. The district evaluation process--measuring classroom teaching performance--generated data for the present…
Song, Chuan-xia; Chen, Hong-mei; Dai, Yu; Kang, Min; Hu, Jia; Deng, Yun
2014-11-01
To optimize the enzymatic hydrolysis of icariin to baohuoside I by cellulase using Plackett-Burman design combined with Central Composite Design (CCD) response surface methodology. The main influencing factors were selected by Plackett-Burman design, and CCD response surface methodology was then used to optimize the hydrolysis process. Taking substrate concentration, buffer pH and reaction time as independent variables and the conversion rate of icariin as the dependent variable, a full quadratic response surface was fitted between the independent and dependent variables; the optimum conditions were analyzed intuitively from 3D surface charts, and verification tests and predictive analyses were performed. The best enzymatic hydrolysis conditions were as follows: substrate concentration 8.23 mg/mL, buffer pH 5.12, reaction time 35.34 h. The optimum process for the cellulase hydrolysis of icariin to baohuoside I was determined by Plackett-Burman design combined with CCD response surface methodology. The optimized enzymatic hydrolysis process is simple, convenient, accurate, reproducible and predictable.
A Review of Validation Research on Psychological Variables Used in Hiring Police Officers.
ERIC Educational Resources Information Center
Malouff, John M.; Schutte, Nicola S.
This paper reviews the methods and findings of published research on the validity of police selection procedures. As a preface to the review, the typical police officer selection process is briefly described. Several common methodological deficiencies of the validation research are identified and discussed in detail: (1) use of past-selection…
Tunable Patch Antennas Using Microelectromechanical Systems
2011-05-11
Figure 28, was selected as most suitable to this application. MetalMUMPs is a surface micromachining process with polysilicon, silicon nitride, nickel... yields. MEMS Variable Capacitor Design The MEMS capacitors reported here were an original design that features nickel and polysilicon layers as... the movable plates of a variable parallel plate capacitor. The polysilicon layer was embedded in silicon nitride for electrical isolation and suspended
Chee, H; Rampal, K
2003-01-01
Aims: To determine the relation between sick leave and selected exposure variables among women semiconductor workers. Methods: This was a cross sectional survey of production workers from 18 semiconductor factories. Those selected had to be women, direct production operators up to the level of line leader, and Malaysian citizens. Sick leave and exposure to physical and chemical hazards were determined by self reporting. Three sick leave variables were used; number of sick leave days taken in the past year was the variable of interest in logistic regression models where the effects of age, marital status, work task, work schedule, work section, and duration of work in factory and work section were also explored. Results: Marital status was strongly linked to the taking of sick leave. Age, work schedule, and duration of work in the factory were significant confounders only in certain cases. After adjusting for these confounders, chemical and physical exposures, with the exception of poor ventilation and smelling chemicals, showed no significant relation to the taking of sick leave within the past year. Work section was a good predictor for taking sick leave, as wafer polishing workers faced higher odds of taking sick leave for each of the three cut off points of seven days, three days, and not at all, while parts assembly workers also faced significantly higher odds of taking sick leave. Conclusion: In Malaysia, the wafer fabrication factories only carry out a limited portion of the work processes, in particular, wafer polishing and the processes immediately prior to and following it. This study, in showing higher illness rates for workers in wafer polishing compared to semiconductor assembly, has implications for the governmental policy of encouraging the setting up of wafer fabrication plants with the full range of work processes. PMID:12660374
Early efforts in wildlife management focused on reducing population variability and maximizing yields of select species. Aldo Leopold proposed the concept of habitat management as superior to population management. More recently, ecosystem management, whereby ecological processes...
ERIC Educational Resources Information Center
Melguizo, Tatiana
2010-01-01
The study takes advantage of the nontraditional selection process of the Gates Millennium Scholars (GMS) program to test the association of the selectivity of the 4-year institution attended, as well as other noncognitive variables, with the college completion rates of a sample of students of color. The results of logistic regression and propensity score…
Evolution of catalytic RNA in the laboratory
NASA Technical Reports Server (NTRS)
Joyce, Gerald F.
1992-01-01
We are interested in the biochemistry of existing RNA enzymes and in the development of RNA enzymes with novel catalytic function. The focal point of our research program has been the design and operation of a laboratory system for the controlled evolution of catalytic RNA. This system serves as a working model of RNA-based life and can be used to explore the catalytic potential of RNA. Evolution requires the integration of three chemical processes: amplification, mutation, and selection. Amplification results in additional copies of the genetic material. Mutation operates at the level of genotype to introduce variability, this variability in turn being expressed as a range of phenotypes. Selection operates at the level of phenotype to reduce variability by excluding those individuals that do not conform to the prevailing fitness criteria. These three processes must be linked so that only the selected individuals are amplified, subject to mutational error, to produce a progeny distribution of mutant individuals. We devised techniques for the amplification, mutation, and selection of catalytic RNA, all of which can be performed rapidly in vitro within a single reaction vessel. We integrated these techniques in such a way that they can be performed iteratively and routinely. This allowed us to conduct evolution experiments in response to artificially imposed selection constraints. Our objective was to develop novel RNA enzymes by altering the selection constraints in a controlled manner. In this way we were able to expand the catalytic repertoire of RNA. Our long-range objective is to develop an RNA enzyme with RNA replicase activity. If such an enzyme had the ability to produce additional copies of itself, then RNA evolution would operate autonomously and the origin of life would have been realized in the laboratory.
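The amplification-mutation-selection cycle described above can be mimicked in silico. The toy sketch below evolves random RNA-like strings toward a hypothetical motif; the target sequence, fitness function, and rate parameters are invented for illustration and carry no biochemical meaning.

```python
import random

random.seed(0)
ALPHABET = "ACGU"
TARGET = "GGAUCC"  # hypothetical selection motif, for illustration only

def fitness(seq):
    """Phenotype score: positions conforming to the selection constraint."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rate=0.05):
    """Amplification error: per-position substitution at the given rate."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in seq)

def evolve(pop, generations=40, keep=10, copies=9):
    for _ in range(generations):
        # selection at the level of phenotype...
        survivors = sorted(pop, key=fitness, reverse=True)[:keep]
        # ...then amplification of the selected individuals, with mutation
        # (survivors kept unmutated, so the best fitness never decreases)
        pop = survivors + [mutate(s) for s in survivors for _ in range(copies)]
    return max(pop, key=fitness)

pop = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
       for _ in range(100)]
init = max(pop, key=fitness)
best = evolve(pop)
print(fitness(init), fitness(best))
```

Altering `TARGET` mid-run is the in-silico analogue of changing the selection constraints in a controlled manner.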
NASA Astrophysics Data System (ADS)
Hadi, Sinan Jasim; Tombul, Mustafa
2018-06-01
Streamflow is an essential component of the hydrologic cycle at regional and global scales and the main source of fresh water supply. It is highly associated with natural disasters such as droughts and floods. Therefore, accurate streamflow forecasting is essential. Forecasting streamflow in general, and monthly streamflow in particular, is a complex process that cannot be handled by data-driven models (DDMs) alone and requires pre-processing. Wavelet transformation is a pre-processing technique; however, applying continuous wavelet transformation (CWT) produces many scales that degrade the performance of any DDM because of the large number of redundant variables. This study proposes multigene genetic programming (MGGP) as a selection tool: after the CWT analysis, it selects the important scales to be fed into the artificial neural network (ANN). A basin located in the southeast of Turkey is selected as a case study to demonstrate the forecasting ability of the proposed model. One-month-ahead downstream flow is used as the output, and downstream flow, upstream flow, rainfall, temperature, and potential evapotranspiration with associated lags are used as inputs. Before modeling, wavelet coherence transformation (WCT) analysis was conducted to analyze the relationship between the variables in the time-frequency domain. Several combinations were developed to investigate the effect of the variables on streamflow forecasting. The results indicated a high localized correlation between the streamflow and the other variables, especially the upstream flow. In the standalone layout, where the data were entered into the ANN and MGGP without CWT, performance was poor. In the best-scale layout, where the single CWT scale with the highest correlation is chosen and entered into the ANN and MGGP, performance increased slightly.
With the proposed model, performance improved dramatically, particularly in forecasting peak values, because several scales capturing seasonality and irregularity were included. Using hydrological and meteorological variables also improved streamflow forecasting.
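A minimal sketch of correlation-based scale selection follows. A centred moving average stands in for a true CWT scale (an assumption made to keep the example dependency-free), and the synthetic series and candidate widths are likewise hypothetical.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def smooth(x, w):
    """Crude stand-in for one wavelet scale: centred moving average."""
    h = w // 2
    return [sum(x[max(0, i - h):i + h + 1]) / len(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]

def select_scales(signal, target, widths, k=2):
    """Rank candidate scales by |correlation| with the target series."""
    scored = sorted(((abs(pearson(smooth(signal, w), target)), w)
                     for w in widths), reverse=True)
    return [w for _, w in scored[:k]]

# Synthetic series: an annual (period-12) component drives the target;
# a fast period-3 component is noise to be filtered out.
n = 120
annual = [math.sin(2 * math.pi * i / 12) for i in range(n)]
fast = [math.sin(2 * math.pi * i / 3) for i in range(n)]
signal = [a + f for a, f in zip(annual, fast)]
best = select_scales(signal, annual, widths=[1, 3, 13], k=1)
print(best)  # width 3 cancels the period-3 noise, keeping the annual signal
```

The study's MGGP step plays the role of `select_scales` here, but over genuine CWT scales and with a fitness measure learned by genetic programming rather than a fixed correlation.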
[The nature of personality: a co-evolutionary perspective].
Asendorpf, J B
1996-01-01
Personality psychologists' attempts to explain human diversity have traditionally focused upon processes of person-situation interaction and genotype-environment interaction. The great variability of genotypes and environments within cultures has remained unexplained in these efforts. Which processes may be responsible for the genetic and environmental variability within cultures? Answers to this question are sought in processes of genetic-cultural coevolution: mutation and sexual recombination of genes, innovation and synthesis of memes (units of cultural transmission), genotype→environment and meme→environment effects, and frequency-dependent natural and cultural selection. This twofold evolutionary explanation of personality differences within cultures suggests that a solid foundation of personality psychology requires bridging biology and cultural science.
An Investigation of Bilateral Symmetry During Manual Wheelchair Propulsion.
Soltau, Shelby L; Slowik, Jonathan S; Requejo, Philip S; Mulroy, Sara J; Neptune, Richard R
2015-01-01
Studies of manual wheelchair propulsion often assume bilateral symmetry to simplify data collection, processing, and analysis. However, the validity of this assumption is unclear. Most investigations of wheelchair propulsion symmetry have been limited by a relatively small sample size and a focus on a single propulsion condition (e.g., level propulsion at self-selected speed). The purpose of this study was to evaluate bilateral symmetry during manual wheelchair propulsion in a large group of subjects across different propulsion conditions. Three-dimensional kinematics and handrim kinetics along with spatiotemporal variables were collected and processed from 80 subjects with paraplegia while propelling their wheelchairs on a stationary ergometer during three different conditions: level propulsion at their self-selected speed (free), level propulsion at their fastest comfortable speed (fast), and propulsion on an 8% grade at their level, self-selected speed (graded). All kinematic variables had significant side-to-side differences, primarily in the graded condition. Push angle was the only spatiotemporal variable with a significant side-to-side difference, and only during the graded condition. No kinetic variables had significant side-to-side differences. The magnitudes of the kinematic differences were low, with only one difference exceeding 5°. With differences of such small magnitude, the bilateral symmetry assumption appears to be reasonable during manual wheelchair propulsion in subjects without significant upper-extremity pain or impairment. However, larger asymmetries may exist in individuals with secondary injuries and pain in their upper extremity and different etiologies of their neurological impairment. PMID:26125019
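A common way to quantify such side-to-side differences is a symmetry index normalized by the two-side mean. The sketch below uses this generic formulation with hypothetical push-angle values; it is not the study's actual statistical analysis.

```python
def symmetry_index(left, right):
    """Percent side-to-side difference relative to the two-side mean.
    0 indicates perfect bilateral symmetry."""
    return 100.0 * abs(left - right) / (0.5 * (left + right))

# Hypothetical push angles (degrees) for one subject, graded condition
left_push, right_push = 102.0, 98.0
print(symmetry_index(left_push, right_push))  # 4.0
```

An index near zero across subjects and conditions is what would justify collecting data from only one side.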
Mahdavi, Mahdi; Vissers, Jan; Elkhuizen, Sylvia; van Dijk, Mattees; Vanhala, Antero; Karampli, Eleftheria; Faubel, Raquel; Forte, Paul; Coroian, Elena; van de Klundert, Joris
2018-01-01
While health service provisioning for the chronic condition Type 2 Diabetes (T2D) often involves a network of organisations and professionals, most evidence on the relationships between the structures and processes of service provisioning and the outcomes considers single organisations or solo practitioners. Extending Donabedian's Structure-Process-Outcome (SPO) model, we investigate how differences in quality of life, effective coverage of diabetes, and service satisfaction are associated with differences in the structures, processes, and context of T2D services in six regions in Finland, Germany, Greece, Netherlands, Spain, and UK. Data collection consisted of: a) systematic modelling of provider network's structures and processes, and b) a cross-sectional survey of patient reported outcomes and other information. The survey resulted in data from 1459 T2D patients, during 2011-2012. Stepwise linear regression models were used to identify how independent cumulative proportion of variance in quality of life and service satisfaction are related to differences in context, structure and process. The selected context, structure and process variables are based on Donabedian's SPO model, a service quality research instrument (SERVQUAL), and previous organization and professional level evidence. Additional analysis deepens the possible bidirectional relation between outcomes and processes. The regression models explain 44% of variance in service satisfaction, mostly by structure and process variables (such as human resource use and the SERVQUAL dimensions). The models explained 23% of variance in quality of life between the networks, much of which is related to contextual variables. Our results suggest that effectiveness of A1c control is negatively correlated with process variables such as total hours of care provided per year and cost of services per year. 
While the selected structure and process variables explain much of the variance in service satisfaction, this is less the case for quality of life. Moreover, it appears that the effect of the clinical outcome A1c control on processes is stronger than the other way around, as poorer control seems to relate to more service use, and higher cost. The standardized operational models used in this research prove to form a basis for expanding the network level evidence base for effective T2D service provisioning. PMID:29447220
A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences
NASA Astrophysics Data System (ADS)
Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert
2011-09-01
Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. Various methods exist for the control of individual processes, such as classical state controllers from cybernetics or function-mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite variations in the input and disturbances. The interactions between the single processes are thereby neglected, but they play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) solving the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor, and (2) applying the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards, based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
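Phase (1), the backward Dynamic Programming sweep over the chain, can be sketched as follows. The discretized states, toy transformation, and cost terms are assumptions for illustration, not the sheet-metal chain's real transformation functions.

```python
def nearest(grid, x):
    """Snap a continuous successor state onto the discretized grid."""
    return min(grid, key=lambda g: abs(g - x))

def backward_dp(n_stages, states, controls, transform, stage_cost,
                terminal_cost):
    """Tabulate, backwards from the final quality requirement, the best
    control for every process stage and every incoming state."""
    V = {s: terminal_cost(s) for s in states}   # cost-to-go at chain end
    policy = []
    for _ in range(n_stages):
        newV, pol = {}, {}
        for s in states:
            costs = [(stage_cost(s, u) + V[nearest(states, transform(s, u))], u)
                     for u in controls]
            newV[s], pol[s] = min(costs)
        V = newV
        policy.insert(0, pol)                   # prepend: we sweep backwards
    return policy, V

states = [0, 1, 2, 3, 4]
controls = [0, 1, 2]
transform = lambda s, u: s + u              # toy process transformation
stage_cost = lambda s, u: u * u             # toy control effort
terminal = lambda s: 0 if s == 4 else 100   # quality requirement: end at 4
policy, V = backward_dp(2, states, controls, transform, stage_cost, terminal)
print(policy[0][0], policy[1][2])  # optimal controls along the path 0 -> 2 -> 4
```

Phase (2) is then a cheap table lookup at runtime: observe the actual end state of the predecessor and read off `policy[stage][state]`.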
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrating with partial least square (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least square method but not the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
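The idea of keeping only the expanded variables most correlated with the output can be illustrated with a small sketch. The functional-link basis and data below are assumed for the example; the actual SNEWHIOC-FLNN weighting scheme is not reproduced here.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def expansions(x):
    """A small assumed functional-link basis for one input variable."""
    return {
        "x":   list(x),
        "x^2": [v * v for v in x],
        "sin": [math.sin(v) for v in x],
        "cos": [math.cos(v) for v in x],
    }

def select_expanded(x, y, k=2):
    """Keep the k expansions most correlated with the output."""
    scored = sorted(((abs(pearson(f, y)), name)
                     for name, f in expansions(x).items()), reverse=True)
    return [name for _, name in scored[:k]]

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [v * v for v in x]             # output depends quadratically on input
print(select_expanded(x, y, k=1))  # the x^2 link is most output-relevant
```

The selected expansions would then serve as the FLNN inputs, with connection weights obtained by partial least squares rather than back-propagation, as the abstract describes.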
A decision tool for selecting trench cap designs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paige, G.B.; Stone, J.J.; Lane, L.J.
1995-12-31
A computer based prototype decision support system (PDSS) is being developed to assist the risk manager in selecting an appropriate trench cap design for waste disposal sites. The selection of the 'best' design among feasible alternatives requires consideration of multiple and often conflicting objectives. The methodology used in the selection process consists of: selecting and parameterizing decision variables using data, simulation models, or expert opinion; selecting feasible trench cap design alternatives; and ordering the decision variables and ranking the design alternatives. The decision model is based on multi-objective decision theory and uses a unique approach to order the decision variables and rank the design alternatives. Trench cap designs are evaluated based on federal regulations, hydrologic performance, cover stability and cost. Four trench cap designs, which were monitored for a four year period at Hill Air Force Base in Utah, are used to demonstrate the application of the PDSS and evaluate the results of the decision model. The results of the PDSS, using both data and simulations, illustrate the relative advantages of each of the cap designs and which cap is the 'best' alternative for a given set of criteria and a particular importance order of those decision criteria.
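One simple way to turn an importance order of criteria into a ranking of alternatives, in the spirit of such a PDSS, is rank-sum weighting followed by a weighted score. The cap-design names and ratings below are hypothetical; the actual PDSS decision model is not reproduced.

```python
def rank_order_weights(n):
    """Rank-sum weights: the criterion ranked first gets the largest weight."""
    total = n * (n + 1) / 2.0
    return [(n - i) / total for i in range(n)]

def rank_alternatives(scores, weights):
    """scores maps alternative -> ratings per criterion (1.0 = best)."""
    weighted = {alt: sum(w * s for w, s in zip(weights, vals))
                for alt, vals in scores.items()}
    return sorted(weighted, key=weighted.get, reverse=True)

# Criteria in importance order: regulations, hydrologic performance,
# cover stability, cost. Names and ratings are hypothetical.
w = rank_order_weights(4)
scores = {
    "soil-only":    [0.9, 0.5, 0.60, 0.9],
    "clay-barrier": [1.0, 0.9, 0.70, 0.5],
    "capillary":    [1.0, 0.8, 0.75, 0.6],
    "geomembrane":  [1.0, 1.0, 0.90, 0.3],
}
ranking = rank_alternatives(scores, w)
print(ranking)
```

Changing the importance order changes the weights and can change which design is 'best', which is exactly the sensitivity the PDSS lets the risk manager explore.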
Chabuk, Ali; Al-Ansari, Nadhir; Hussain, Hussain Musa; Knutsson, Sven; Pusch, Roland
2016-05-01
Al-Hillah Qadhaa is located in the central part of Iraq. It covers an area of 908 km² with a total population of 856,804 inhabitants. This Qadhaa is the capital of Babylon Governorate. Presently, no landfill site exists in that area based on scientific site selection criteria. For this reason, an attempt has been carried out to find the best locations for landfills. A total of 15 variables were considered in this process (groundwater depth, rivers, soil types, agricultural land use, land use, elevation, slope, gas pipelines, oil pipelines, power lines, roads, railways, urban centres, villages and archaeological sites) using a geographic information system. In addition, an analytical hierarchy process was used to identify the weight for each variable. Two suitable candidate landfill sites were determined that fulfil the requirements, with areas of 9.153 km² and 8.204 km². These sites can accommodate solid waste till 2030. © The Author(s) 2016.
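The analytical hierarchy process step can be sketched with the standard row-geometric-mean approximation of the priority vector. The 3×3 pairwise matrix below is a hypothetical stand-in for the study's full 15-variable comparison.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector via row geometric means."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparison of three criteria (e.g. groundwater depth vs
# rivers vs land use); entry a_ij = importance of criterion i over j.
M = [[1.0,   3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # weights sum to 1, ordered by importance
```

In a GIS workflow each variable's raster layer is then multiplied by its weight and the weighted layers are summed to produce the suitability map from which candidate sites are taken.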
Five Guidelines for Selecting Hydrological Signatures
NASA Astrophysics Data System (ADS)
McMillan, H. K.; Westerberg, I.; Branger, F.
2017-12-01
Hydrological signatures are index values derived from observed or modeled series of hydrological data such as rainfall, flow or soil moisture. They are designed to extract relevant information about hydrological behavior, such as to identify dominant processes, and to determine the strength, speed and spatiotemporal variability of the rainfall-runoff response. Hydrological signatures play an important role in model evaluation. They allow us to test whether particular model structures or parameter sets accurately reproduce the runoff generation processes within the watershed of interest. Most modeling studies use a selection of different signatures to capture different aspects of the catchment response, for example evaluating overall flow distribution as well as high and low flow extremes and flow timing. Such studies often choose their own set of signatures, or may borrow subsets of signatures used in multiple other works. The link between signature values and hydrological processes is not always straightforward, leading to uncertainty and variability in hydrologists' signature choices. In this presentation, we aim to encourage a more rigorous approach to hydrological signature selection, which considers the ability of signatures to represent hydrological behavior and underlying processes for the catchment and application in question. To this end, we propose a set of guidelines for selecting hydrological signatures. We describe five criteria that any hydrological signature should conform to: Identifiability, Robustness, Consistency, Representativeness, and Discriminatory Power. We describe an example of the design process for a signature, assessing possible signature designs against the guidelines above. Due to their ubiquity, we chose a signature related to the Flow Duration Curve, selecting the FDC mid-section slope as a proposed signature to quantify catchment overall behavior and flashiness. 
We demonstrate how assessment against each guideline could be used to compare or choose between alternative signature definitions. We believe that reaching a consensus on selection criteria for hydrological signatures will assist modelers to choose between competing signatures, facilitate comparison between hydrological studies, and help hydrologists to fully evaluate their models.
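The proposed FDC mid-section slope can be computed directly from a flow series. The sketch below uses one common definition, the log-flow difference between the 33% and 66% exceedance points, with hypothetical flow data; the presentation's exact definition may differ.

```python
import math

def flow_percentile(flows, p):
    """Flow exceeded a fraction p of the time (descending rank)."""
    ranked = sorted(flows, reverse=True)
    return ranked[min(len(ranked) - 1, int(p * len(ranked)))]

def fdc_midslope(flows, lo=0.33, hi=0.66):
    """Mid-section slope of the flow duration curve in log space."""
    return (math.log(flow_percentile(flows, lo))
            - math.log(flow_percentile(flows, hi))) / (hi - lo)

# Hypothetical daily flows: a flashy and a damped catchment
flashy = [0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50, 100, 200]
damped = [1.0 + 0.1 * i for i in range(11)]
print(round(fdc_midslope(flashy), 2), round(fdc_midslope(damped), 2))
```

A steeper mid-slope indicates a flashier catchment, which is why this signature is a candidate for quantifying overall behavior; the five guidelines would then be used to test how identifiable and robust this particular definition is.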
ERIC Educational Resources Information Center
Snow, Richard E.; And Others
This pilot study investigated some relationships between tested ability variables and processing parameters obtained from memory search and visual search tasks. The 25 undergraduates who participated had also participated in a previous investigation by Chiang and Atkinson. A battery of traditional ability tests and several film tests were…
ERIC Educational Resources Information Center
Dawson, Frances Trigg
A study was made to determine the relationships between (1) satisfaction of members with service club management processes and member's perception of management systems, (2) perception of service club management system to selected independent variables, and (3) satisfaction to perception of service club management systems with independent…
Method and apparatus for manufacturing gas tags
Gross, K.C.; Laug, M.T.
1996-12-17
For use in the manufacture of gas tags employed in a gas tagging failure detection system for a nuclear reactor, a plurality of commercial feed gases each having a respective noble gas isotopic composition are blended under computer control to provide various tag gas mixtures having selected isotopic ratios which are optimized for specified defined conditions such as cost. Using a new approach employing a discrete variable structure rather than the known continuous-variable optimization problem, the computer controlled gas tag manufacturing process employs an analytical formalism from condensed matter physics known as stochastic relaxation, which is a special case of simulated annealing, for input feed gas selection. For a tag blending process involving M tag isotopes with N distinct feed gas mixtures commercially available from an enriched gas supplier, the manufacturing process calculates the cost difference between multiple combinations and specifies gas mixtures which approach the optimum defined conditions. The manufacturing process is then used to control tag blending apparatus incorporating tag gas canisters connected by stainless-steel tubing with computer controlled valves, with the canisters automatically filled with metered quantities of the required feed gases. 4 figs.
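The stochastic-relaxation idea can be sketched as simulated annealing over discrete feed-gas assignments. The feed costs, isotopic ratios, and cost function below are invented for illustration and do not come from the patent.

```python
import math
import random

random.seed(1)

# Hypothetical feed gases: (cost per canister, isotopic ratio delivered).
FEEDS = [(5.0, 0.2), (9.0, 0.5), (15.0, 0.8)]
TARGET_RATIO = 0.5

def cost(combo):
    """Deviation from the target tag ratio, heavily weighted, plus price."""
    ratio = sum(FEEDS[i][1] for i in combo) / len(combo)
    price = sum(FEEDS[i][0] for i in combo)
    return 100.0 * abs(ratio - TARGET_RATIO) + price

def anneal(n_slots=3, steps=2000, t0=10.0):
    """Stochastic relaxation over discrete feed-gas assignments."""
    combo = [random.randrange(len(FEEDS)) for _ in range(n_slots)]
    best = combo[:]
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9       # linear cooling schedule
        cand = combo[:]
        cand[random.randrange(n_slots)] = random.randrange(len(FEEDS))
        delta = cost(cand) - cost(combo)
        # accept improvements always, worsenings with Boltzmann probability
        if delta < 0 or random.random() < math.exp(-delta / t):
            combo = cand
        if cost(combo) < cost(best):
            best = combo[:]
    return best

blend = anneal()
print(sorted(blend), round(cost(blend), 2))
```

The discrete-variable structure the patent describes corresponds to `combo` here: each slot picks one commercially available feed, rather than optimizing continuous mixing fractions.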
Data driven model generation based on computational intelligence
NASA Astrophysics Data System (ADS)
Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus
2010-05-01
The simulation of discharges at a local gauge or the modeling of large scale river catchments are effectively involved in estimation and decision tasks of hydrological research and practical applications like flood prediction or water resource management. However, modeling such processes using analytical or conceptual approaches is made difficult by both the complexity of process relations and the heterogeneity of processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can automatically be derived from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods including Fuzzy logic and artificial neural networks (ANN) in a comprehensive and automated manner. Methods We consider a closed concept for data-driven development of hydrological models based on measured (experimental) data. The concept is centered on a Fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure like R_i: IF q(t) = low AND ... THEN q(t+Δt) = a_i0 + a_i1·q(t) + a_i2·p(t−Δt_i1) + a_i3·p(t+Δt_i2) + .... The rule's premise part (IF) describes process states involving available process information, e.g. the actual outlet q(t) is low, where low is one of several Fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g. the actual outlet q(t) and previous and/or forecasted precipitation p(t−Δt_ik). In the case of river catchment modeling we use head gauges, tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1,...,N}, the space of process states can be covered as concisely as necessary.
Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion parameters with respect to a defined rating function and experimental data. To find A, we use for example a linear equation solver and an RMSE function. In practical process models, the number of Fuzzy sets and the corresponding number of rules is fairly low. Nevertheless, creating the optimal model requires some experience. Therefore, we improved this development step by methods for the automatic generation of Fuzzy sets, rules, and conclusions. Basically, the model achievement depends to a great extent on the selection of the conclusion variables. The aim is that variables having the most influence on the system reaction are considered and superfluous ones are neglected. At first, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables. A greedy algorithm selects a comprehensive set of dominant and uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g. Fuzzy C-means) and Fuzzy sets are then derived from cluster centers and outlines. The rule base is automatically constructed by permutation of the Fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated and the total coverage of the input space is iteratively tested with experimental data; rarely firing rules are combined, and coarse coverage of sensitive process states results in refined Fuzzy sets and rules. Results The described methods were implemented and integrated in a development system for process models. A series of models has already been built, e.g. for rainfall-runoff modeling or for flood prediction (up to 72 hours) in river catchments. The models required significantly less development effort and showed better simulation results compared to conventional models. The models can be used operationally, and simulation takes only a few minutes on a standard PC, e.g.
for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
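As an illustration of the rule structure described above, the following minimal sketch evaluates a Takagi-Sugeno-Kang system in Python. The triangular membership functions, the two example rules and all coefficient values are hypothetical, chosen only to show how premise activations weight the linear conclusions; the study's actual rule bases are generated automatically as described.

```python
import numpy as np

def tri_mf(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tsk_predict(q_t, p_lags, rules):
    """Takagi-Sugeno-Kang output: a weighted mean of linear conclusions,
    each weighted by its rule's premise activation on q(t)."""
    x = np.concatenate(([1.0, q_t], p_lags))   # regressors: 1, q(t), p-lags
    num = den = 0.0
    for mf_params, coeffs in rules:
        w = tri_mf(q_t, *mf_params)            # degree to which the rule fires
        num += w * float(np.dot(coeffs, x))
        den += w
    return num / den if den > 0 else 0.0

# two hypothetical rules over discharge states "low" and "high"
rules = [((0.0, 2.0, 6.0), np.array([0.1, 0.9, 0.3, 0.2])),   # q(t) is low
         ((2.0, 6.0, 10.0), np.array([0.5, 0.8, 0.5, 0.4]))]  # q(t) is high
q_next = tsk_predict(2.0, np.array([1.0, 0.5]), rules)
```

Each rule contributes its linear estimate of q(t+Δt) in proportion to how strongly its premise fires, which is what allows a small rule base to cover the space of process states.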
Selectivity of N170 for visual words in the right hemisphere: Evidence from single-trial analysis.
Yang, Hang; Zhao, Jing; Gaspar, Carl M; Chen, Wei; Tan, Yufei; Weng, Xuchu
2017-08-01
Neuroimaging and neuropsychological studies have identified the involvement of the right posterior region in the processing of visual words. Interestingly, in contrast, ERP studies of the N170 typically demonstrate selectivity for words more strikingly over the left hemisphere. Why is right hemisphere selectivity for words during the N170 epoch typically not observed, despite the clear involvement of this region in word processing? One possibility is that amplitude differences measured on averaged ERPs in previous studies may have been obscured by variation in peak latency across trials. This study examined this possibility by using single-trial analysis. Results show that words evoked greater single-trial N170s than control stimuli in the right hemisphere. Additionally, we observed larger trial-to-trial variability on N170 peak latency for words as compared to control stimuli over the right hemisphere. Results demonstrate that, in contrast to much of the prior literature, the N170 can be selective to words over the right hemisphere. This discrepancy is explained in terms of variability in trial-to-trial peak latency for responses to words over the right hemisphere. © 2017 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine
2014-10-01
The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution (VHR) satellite imagery, relying on typical variables such as crown diameter, tree height, trunk diameter, tree density and tree spacing. The emphasis is placed on automating the identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick texture features. The main drawback of this well-known texture representation is its underlying parameters, which are extremely difficult to set owing to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process is proposed, based on statistical modeling and exploring a wide range of parameter values. It provides texture measures over diverse spatial parameters, implicitly inducing a multi-scale texture analysis. A new feature selection technique, which we call Random PRiF, is proposed. It relies on random sampling in feature space and carefully addresses the multicollinearity issue in multiple linear regression while ensuring accurate prediction of the forest variables. Our automated forest variable estimation scheme was tested on QuickBird and Pléiades panchromatic and multispectral images, acquired at different periods over the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques, and has been successfully applied to identify the best texture features for modeling the five considered forest structure variables.
The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. An average prediction error of ~1.1 m is thus expected for crown diameter, ~0.9 m for tree spacing, ~3 m for height and ~0.06 m for diameter at breast height.
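The abstract does not give the Random PRiF algorithm itself, but its two ingredients, random sampling in feature space and guarding against multicollinearity in multiple linear regression, can be sketched as follows. The subset size, correlation threshold and selection criterion here are illustrative assumptions, not the published method.

```python
import numpy as np

def random_subset_selection(X, y, k=3, n_draws=200, max_corr=0.8, seed=0):
    """Draw random k-feature subsets, discard subsets with collinear
    members (pairwise |r| above max_corr), and keep the subset whose
    least-squares fit has the lowest residual sum of squares."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best, best_rss = None, np.inf
    for _ in range(n_draws):
        idx = np.sort(rng.choice(p, size=k, replace=False))
        r = np.corrcoef(X[:, idx], rowvar=False)
        if np.abs(r[np.triu_indices(k, 1)]).max() > max_corr:
            continue                              # collinear subset: skip it
        A = np.column_stack([np.ones(n), X[:, idx]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = float(np.sum((A @ beta - y) ** 2))
        if rss < best_rss:
            best, best_rss = idx, rss
    return best, best_rss
```

Rejecting highly correlated subsets before fitting is one simple way to keep the regression coefficients stable, which is the multicollinearity concern the abstract raises.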
Brown, Robin G.; Nichols, William D.
1990-01-01
Meteorological data were collected over bare soil at a low-level radioactive-waste burial site near Beatty, Nevada, from November 1977 to May 1980. The data include precipitation, windspeed, wind direction, incident solar radiation, reflected solar radiation, net radiation, dry- and wet-bulb air temperatures at three heights, soil temperature at five depths, and soil-heat flux at three depths. Mean relative humidity was computed for each day of the collection period for which data are available. A discussion is presented of the study site and of the instrumentation and procedures used for collecting and processing the data. Selected data from November 1977 to May 1980 are presented in tabular form, and diurnal fluctuations of selected meteorological variables for representative summer and winter periods are presented graphically. The effects of a partial solar eclipse on selected variables are also discussed.
Investigating the Spectroscopic Variability of Magnetically Active M Dwarfs in SDSS.
NASA Astrophysics Data System (ADS)
Ventura, Jean-Paul; Schmidt, Sarah J.; Cruz, Kelle; Rice, Emily; Cid, Aurora
2018-01-01
Magnetic activity, a wide range of observable phenomena produced in the outer atmospheres of stars, is currently not well understood for M dwarfs. In higher-mass stars, magnetic activity is powered by a dynamo process involving the differential rotation of a star's inner regions. This process generates a magnetic field, heats regions in the chromosphere, and produces Hα emission-line radiation from collisional excitation. Using spectroscopic data from the Sloan Digital Sky Survey (SDSS), we compare Hα emission-line strengths for a subsample of 12,000 photometric-variability-selected M dwarfs from Pan-STARRS1 with those of a known non-variable sample. Presumably, the photometric variability originates from star spots at the stellar surface, which are the result of an intense magnetic field and associated chromospheric heating. We carry out this work to test whether the photometric variability of the sample correlates with chromospheric Hα emission features. If not, we explore alternative explanations for the photometric variability (e.g. binarity or transiting planetary companions).
El-Naggar, Noura El-Ahmady; El-Shweihy, Nancy M; El-Ewasy, Sara M
2016-09-20
Due to the broad range of clinical and industrial applications of cholesterol oxidase, the isolation and screening of bacterial strains producing an extracellular form of cholesterol oxidase is of great importance. One hundred and thirty actinomycete isolates were screened for cholesterol oxidase activity. Among them, strain NEAE-42 displayed the highest extracellular cholesterol oxidase activity; it was selected and identified as Streptomyces cavourensis strain NEAE-42. The optimization of different process parameters for cholesterol oxidase production by Streptomyces cavourensis strain NEAE-42 was carried out using a Plackett-Burman experimental design and response surface methodology. Fifteen variables were screened using the Plackett-Burman design. Cholesterol, initial pH and (NH4)2SO4 were the most significant positive independent variables affecting cholesterol oxidase production. A central composite design was then chosen to elucidate the optimal concentrations of the selected process variables. After optimization, cholesterol oxidase production by Streptomyces cavourensis strain NEAE-42 reached 20.521 U/mL, a 6.19-fold increase over the result obtained with the basal medium before the Plackett-Burman screening (3.31 U/mL). The cholesterol oxidase production level obtained in this study (20.521 U/mL) by the statistical method is higher than many of the reported values.
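The Plackett-Burman screening mentioned here relies on a small orthogonal two-level design. As a sketch of the underlying construction (not the authors' 15-variable design itself), the classic 12-run design accommodating up to 11 factors is built from cyclic shifts of a published generator row plus a final all-minus row:

```python
import numpy as np

# Published Plackett-Burman generator row for N = 12 runs (up to 11 factors):
# the 11 cyclic shifts of this row plus a final all-minus row give a design
# whose columns are mutually orthogonal.
GEN = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

def plackett_burman_12():
    rows = [np.roll(GEN, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

X = plackett_burman_12()
# orthogonality check: each pair of factor columns is balanced, so X^T X = 12 I
assert np.array_equal(X.T @ X, 12 * np.eye(11, dtype=int))
```

Orthogonality is what lets each main effect be estimated independently from only 12 runs, which is why such designs are popular for screening many candidate medium components at once.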
Mate choice theory and the mode of selection in sexual populations.
Carson, Hampton L
2003-05-27
Indirect new data imply that mate and/or gamete choice are major selective forces driving genetic change in sexual populations. The system dictates nonrandom mating, an evolutionary process requiring both revised genetic theory and new data on heritability of characters underlying Darwinian fitness. Successfully reproducing individuals represent rare selections from among vigorous, competing survivors of preadult natural selection. Nonrandom mating has correlated demographic effects: reduced effective population size, inbreeding, low gene flow, and emphasis on deme structure. Characters involved in choice behavior at reproduction appear based on quantitative trait loci. This variability serves selection for fitness within the population, having only an incidental relationship to the origin of genetically based reproductive isolation between populations. The claim that extensive hybridization experiments with Drosophila indicate that selection favors a gradual progression of "isolating mechanisms" is flawed, because intra-group random mating is assumed. Over deep time, local sexual populations are strong, independent genetic systems that use rich fields of variable polygenic components of fitness. The sexual reproduction system thus particularizes, in small subspecific populations, the genetic basis of the grand adaptive sweep of selective evolutionary change, much as Darwin proposed.
Relapse Model among Iranian Drug Users: A Qualitative Study.
Jalali, Amir; Seyedfatemi, Naiemeh; Peyrovi, Hamid
2015-01-01
Relapse is a common problem in drug users' rehabilitation programs and is reported all over the country. An in-depth study of patients' experiences can be used to explore the relapse process among drug users. This study therefore proposes a model of the relapse process among Iranian drug users. In this qualitative study with a grounded theory approach, 22 participants with rich information about the phenomenon under study were selected using purposive, snowball and theoretical sampling methods. After obtaining informed consent, data were collected through face-to-face, in-depth, semi-structured interviews. All interviews were analyzed in three stages of open, axial and selective coding. Nine main categories emerged (avoidance of drugs, concerns about being accepted, family atmosphere, social conditions, mental challenge, self-management, self-deception, use, and remorse), along with a core category, feeling of loss. Mental challenge has two subcategories, evoking pleasure and craving. The relapse model is a dynamic and systematic process spanning cycles from drug avoidance to remorse, with feeling of loss as the core variable. The relapse process is a dynamic and systematic process that needs effective control, and determining a clear relapse model could be helpful in clinical sessions. The results of this research depict the relapse process among Iranian drug users through a conceptual model.
Patwardhan, Ketaki; Asgarzadeh, Firouz; Dassinger, Thomas; Albers, Jessica; Repka, Michael A
2015-05-01
In this study, the principles of quality by design (QbD) have been applied to a pharmaceutical melt extrusion process for an immediate-release formulation with a low-melting model drug, ibuprofen. Two qualitative risk assessment tools, a Fishbone diagram and a failure mode effect analysis, were utilized to strategically narrow down the most influential parameters. Selected variables were further assessed using a Plackett-Burman screening study, which was upgraded to a response surface design over the critical factors to study the interactions between the study variables. In-process torque, glass transition temperature (Tg) of the extrudates, assay, dissolution and phase change were measured as responses to evaluate the critical quality attributes (CQAs) of the extrudates. The effect of each study variable on the measured responses was analysed using multiple regression for the screening design and partial least squares for the optimization design. Experimental limits on the formulation and process parameters for optimum processing were outlined, and a design space plot was developed describing the domain of experimental variables within which the CQAs remained unchanged. A comprehensive approach to melt extrusion product development based on the QbD methodology has been demonstrated. Drug loading concentrations between 40-48% w/w and extrusion temperatures in the range of 90-130°C were found to be optimal. © 2015 Royal Pharmaceutical Society.
Social Comparison Processes in an Organizational Context: New Directions
ERIC Educational Resources Information Center
Goodman, Paul S.; Haisley, Emily
2007-01-01
The goal of this article is to frame some new directions for social comparison research in organizational settings. Four themes are developed. First, we examine the role of organizational variables in shaping the basic subprocesses in social comparison, such as the selection of referents. The second theme focuses on the meaning of level of…
A Study on Basic Process Skills of Turkish Primary School Students
ERIC Educational Resources Information Center
Aydogdu, Bulent
2017-01-01
Purpose: The purpose of this study was to find out primary school students' basic process skills (BPSs) in terms of selected variables. In addition, this study aims to investigate the relationship between BPSs and academic achievement. Research Methods: The study had a survey design and was conducted with 1272 primary school students. The study data…
Wood-based composites and panel products
John A. Youngquist
1999-01-01
Because wood properties vary among species, between trees of the same species, and between pieces from the same tree, solid wood cannot match reconstituted wood in the range of properties that can be controlled in processing. When processing variables are properly selected, the end result can sometimes surpass nature's best effort. With solid wood, changes in...
Low-sensitivity, frequency-selective amplifier circuits for hybrid and bipolar fabrication.
NASA Technical Reports Server (NTRS)
Pi, C.; Dunn, W. R., Jr.
1972-01-01
A network is described which is suitable for realizing a low-sensitivity high-Q second-order frequency-selective amplifier for high-frequency operation. Circuits are obtained from this network which are well suited for realizing monolithic integrated circuits and which do not require any process steps more critical than those used for conventional monolithic operational and video amplifiers. A single chip version using compatible thin-film techniques for the frequency determination elements is then feasible. Center frequency and bandwidth can be set independently by trimming two resistors. The frequency selective circuits have a low sensitivity to the process variables, and the sensitivity of the center frequency and bandwidth to changes in temperature is very low.
Reward speeds up and increases consistency of visual selective attention: a lifespan comparison.
Störmer, Viola; Eppinger, Ben; Li, Shu-Chen
2014-06-01
Children and older adults often show less favorable reward-based learning and decision making, relative to younger adults. It is unknown, however, whether reward-based processes that influence relatively early perceptual and attentional processes show similar lifespan differences. In this study, we investigated whether stimulus-reward associations affect selective visual attention differently across the human lifespan. Children, adolescents, younger adults, and older adults performed a visual search task in which the target colors were associated with either high or low monetary rewards. We discovered that high reward value speeded up response times across all four age groups, indicating that reward modulates attentional selection across the lifespan. This speed-up in response time was largest in younger adults, relative to the other three age groups. Furthermore, only younger adults benefited from high reward value in increasing response consistency (i.e., reduction of trial-by-trial reaction time variability). Our findings suggest that reward-based modulations of relatively early and implicit perceptual and attentional processes are operative across the lifespan, and the effects appear to be greater in adulthood. The age-specific effect of reward on reducing intraindividual response variability in younger adults likely reflects mechanisms underlying the development and aging of reward processing, such as lifespan age differences in the efficacy of dopaminergic modulation. Overall, the present results indicate that reward shapes visual perception across different age groups by biasing attention to motivationally salient events.
Nekkanti, Vijaykumar; Marwah, Ashwani; Pillai, Raviraj
2015-01-01
Design of experiments (DOE), a component of Quality by Design (QbD), is the systematic and simultaneous evaluation of process variables to develop a product with predetermined quality attributes. This article presents a case study on the effects of process variables in a bead milling process used for the manufacture of drug nanoparticles. Experiments were designed and results computed according to a 3-factor, 3-level face-centered central composite design (CCD). The factors investigated were motor speed, pump speed and bead volume; the responses analyzed for evaluating their effects and interactions were milling time, particle size and process yield. Process validation batches were executed using the optimum process conditions obtained from the Design-Expert® software to evaluate both the repeatability and the reproducibility of the bead milling technique. Milling time was optimized to <5 h to obtain the desired particle size (d90 < 400 nm). A desirability function was used to optimize the response variables, and the observed responses were in agreement with the predicted values. These results demonstrate the reliability of the selected model for the manufacture of drug nanoparticles with predictable quality attributes. The optimization of bead milling process variables by applying DOE resulted in a considerable decrease in milling time to achieve the desired particle size, indicating the applicability of the DOE approach to optimizing critical process parameters in the manufacture of drug nanoparticles.
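The 3-factor face-centered CCD referred to above has a standard structure that is easy to generate. The sketch below enumerates the coded design points; the number of center-point replicates is an assumption, and mapping the coded levels back to actual motor speed, pump speed and bead volume ranges is left to the experimenter.

```python
from itertools import product
import numpy as np

def face_centered_ccd(n_factors=3, n_center=3):
    """Face-centered central composite design in coded units:
    2^k factorial corners, 2k axial points on the cube faces
    (alpha = 1), plus replicated center points."""
    corners = list(product([-1, 1], repeat=n_factors))
    axial = []
    for i in range(n_factors):
        for a in (-1, 1):
            pt = [0] * n_factors
            pt[i] = a
            axial.append(tuple(pt))
    center = [(0,) * n_factors] * n_center
    return np.array(corners + axial + center)

design = face_centered_ccd()   # 8 corner + 6 axial + 3 center = 17 runs
```

Because the axial points sit on the cube faces (alpha = 1), every factor is run at only three coded levels (-1, 0, +1), matching the 3-level design described in the abstract.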
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, J.; Moon, T.J.; Howell, J.R.
This paper presents an analysis of the heat transfer occurring during an in-situ curing process in which infrared energy is applied to the surface of a polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites undergo an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding; the model incorporates heat generation due to the chemical reaction. Several assumptions lead to a two-dimensional thermochemical model, and for simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow a proper selection of process variables, such as infrared energy input and winding velocity, to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables, and a regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using the regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.
A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.
Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger
2018-04-19
Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting mutation rate for both regional based models and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than regional based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the regional based models. The possibility of including genomic variables on different scales and patient specific variables makes it a versatile framework for studying different mutational mechanisms. 
Our model can serve as the neutral null model for the mutational process; regions that deviate from the null model are candidates for elements that drive cancer development.
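The site-specific model described above is a multinomial logistic regression over per-site explanatory variables. A minimal self-contained sketch (plain gradient descent on synthetic data, not the paper's fitted model or its genomic covariates) looks like this:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.1, n_iter=2000):
    """Multinomial logistic regression by gradient descent on the
    negative log-likelihood; rows of X are per-site feature vectors
    (e.g. conservation score, replication timing, expression level)."""
    n, p = X.shape
    W = np.zeros((p, n_classes))
    Y = np.eye(n_classes)[y]                   # one-hot class targets
    for _ in range(n_iter):
        P = softmax(X @ W)                     # predicted class probabilities
        W -= lr * (X.T @ (P - Y)) / n          # average gradient step
    return W
```

Because the model conditions on features of each individual position, it avoids the averaging over regions that the abstract identifies as the main drawback of regional approaches.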
Optimal information networks: Application for data-driven integrated health in populations
Servadio, Joseph L.; Convertino, Matteo
2018-01-01
Development of composite indicators for integrated health in populations typically relies on a priori assumptions rather than model-free, data-driven evidence. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, instead considering only individual correlations. In addition, a unified method for assessing integrated health statuses of populations is lacking, making systematic comparison among populations impossible. We propose the use of maximum entropy networks (MENets) that use transfer entropy to assess interrelatedness among selected variables considered for inclusion in a composite indicator. We also define optimal information networks (OINs) that are scale-invariant MENets, which use the information in constructed networks for optimal decision-making. Health outcome data from multiple cities in the United States are applied to this method to create a systemic health indicator, representing integrated health in a city. PMID:29423440
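Transfer entropy, the quantity MENets are built on, measures how much the past of one series improves prediction of another beyond the latter's own past. A simple plug-in estimate for discretized series can be sketched as follows; the equal-frequency binning and single-lag history are simplifying assumptions, not the paper's estimator.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    """Plug-in transfer entropy T(X -> Y) in bits for two series,
    after equal-frequency binning:
    T = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]."""
    def disc(s):
        edges = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.searchsorted(edges, s)
    xd, yd = disc(x), disc(y)
    trip = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y_now, x_now)
    pair = Counter(zip(yd[1:], yd[:-1]))
    cond = Counter(zip(yd[:-1], xd[:-1]))
    marg = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in trip.items():
        p_joint = c / n
        p_y1_given_yx = c / cond[(y0, x0)]
        p_y1_given_y = pair[(y1, y0)] / marg[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te
```

Pairwise estimates like this could populate the edge weights of a network over candidate indicator variables, which is the role transfer entropy plays in the MENet construction.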
Kumwenda, Ben; Dowell, Jon; Husbands, Adrian
2013-07-01
The assessment of non-academic achievements through the personal statement remains part of the selection process at most UK medical and dental schools. Such statements offer applicants an opportunity to highlight their non-academic achievements, but the highly competitive nature of the process may tempt them to exaggerate their accomplishments. The challenge is that selectors cannot discern applicants' exaggerated claims from genuine accounts, and the system risks preferentially selecting dishonest applicants. The aims were to explore the level and perception of deception in UCAS personal statements among applicants to medical and dental schools, and to investigate the association between attitudes towards deception, various demographic variables, and cognitive ability via the UKCAT. An online survey was completed by first-year students from six UK medical schools and one dental school. Questionnaire items were classified into three categories: individual acts, how respondents suspect their peers behave, and overall perceptions of the influence of personal statements on the selection process. Descriptive statistics were used to investigate responses to questionnaire items, and t-tests were used to investigate the relationship between items, demographic variables and cognitive ability. Candidates recognized that putting fraudulent information in a UCAS personal statement or exaggerating one's experience is dishonest; however, there is a widespread belief that their peers do it. Female respondents and those with higher UKCAT scores were more likely to condemn deceptive practices. The existing selection process is open to abuse and may benefit dishonest applicants. Admission systems should consider investing in systems that can pursue traceable information provided by applicants, and nullify an application should it contain fraudulent information.
A Framework for Orbital Performance Evaluation in Distributed Space Missions for Earth Observation
NASA Technical Reports Server (NTRS)
Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Miller, David W.; de Weck, Olivier
2015-01-01
Distributed Space Missions (DSMs) are gaining momentum in their application to Earth science missions owing to their unique ability to increase observation sampling in the spatial, spectral and temporal dimensions simultaneously. DSM architectures have a large number of design variables, and since they are expected to increase mission flexibility, scalability, evolvability and robustness, their design is a complex problem with many variables and objectives affecting performance. Very few open-access tools are available that explore the tradespace of design variables, allow performance assessment, and plug easily into science goals so that the most optimal design can be selected. This paper presents a software tool, developed on the MATLAB engine and interfacing with STK, for DSM orbit design and selection. It is capable of generating thousands of homogeneous constellation or formation flight architectures based on predefined design variable ranges, and of sizing those architectures in terms of predefined performance metrics. The metrics can be input into observing system simulation experiments, as available from the science teams, allowing dynamic coupling of the science and engineering designs. Design variables include, but are not restricted to, constellation type, formation flight type, instrument field of view, altitude and inclination of the chief orbits, differential orbital elements, leader satellites, latitudes or regions of interest, and numbers of planes and satellites. Intermediate performance metrics include angular coverage, number of accesses, revisit coverage, and access deterioration over time at every point of the Earth's grid. The orbit design process can be streamlined, and the variables bounded more tightly along the way, owing to the availability of models ranging from low-fidelity, low-complexity ones such as the corrected HCW equations up to high-precision STK models with J2 and drag.
The tool can thus help any scientist or program manager select pre-Phase A, Pareto optimal DSM designs for a variety of science goals without having to delve into the details of the engineering design process.
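The tradespace generation step described above amounts to enumerating the Cartesian product of design-variable ranges and then sizing each resulting architecture. A minimal sketch (with hypothetical variable ranges, not the tool's actual defaults) is:

```python
from itertools import product

# hypothetical design-variable ranges for a homogeneous constellation
design_space = {
    "altitude_km":     [500, 600, 700],
    "inclination_deg": [45, 60, 90],
    "n_planes":        [1, 2, 3],
    "sats_per_plane":  [1, 2, 4],
}

# full-factorial enumeration of the tradespace: 3*3*3*3 = 81 architectures,
# each of which would then be sized against coverage and revisit metrics
architectures = [dict(zip(design_space, combo))
                 for combo in product(*design_space.values())]
```

Each enumerated dictionary is one candidate architecture; evaluating all of them against the performance metrics is what produces the Pareto front from which a design is selected.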
Urban Heat Wave Vulnerability Analysis Considering Climate Change
NASA Astrophysics Data System (ADS)
JE, M.; KIM, H.; Jung, S.
2017-12-01
Much attention has been paid to thermal environments in Seoul, South Korea, since 2016, when the city experienced its worst heatwave in 22 years. To cope with heatwave-related damage, it is necessary to provide selective measures by singling out vulnerable regions in advance. This study aims to analyze and categorize thermally vulnerable regions in Seoul and to discuss the contributing factors and risks for each type. To do this, the study conducted the following processes. First, based on a review of the literature, indices that can evaluate thermally vulnerable regions were collated; the indices were divided into a climate exposure index related to temperature, a sensitivity index including demographic, social, and economic indicators, and an adaptation index related to the urban environment and the status of climate adaptation policy. Second, significant variables for evaluating thermal vulnerability were derived from the summarized indices: the study analyzed the relationship between the number of heat-related patients in Seoul and the variables affecting that number, using multivariate statistical analysis. Third, the importance of each variable was quantified by integrating the statistical analysis results with the analytic hierarchy process (AHP) method. Fourth, the distribution of data for each index was identified based on the selected variables, and the indices were normalized and overlaid. Fifth, for the climate exposure index, evaluations were conducted in the same way as the current vulnerability evaluation, selecting the future temperature of Seoul predicted through the representative concentration pathways (RCP) climate change scenarios as the evaluation variable. The results of this study can be utilized as foundational data for establishing countermeasures against heatwaves in Seoul. Although heatwave occurrences themselves cannot be completely controlled, the environment can be improved for heatwave alleviation and response. In particular, if vulnerable regions can be identified and managed in advance, the results are expected to serve as a basis for policy in local communities.
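The AHP weighting step mentioned above can be sketched with Saaty's principal-eigenvector method. The pairwise-comparison matrix below, comparing the three index groups (exposure, sensitivity, adaptation), is a hypothetical example, not the judgments actually elicited in the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise-comparison matrix via the
    principal eigenvector (Saaty's method), plus a consistency ratio."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                       # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                                   # normalized priority weights
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # Saaty's random index
    return w, ci / ri                              # weights, consistency ratio

# hypothetical judgments: exposure vs sensitivity vs adaptation capacity
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgments; the resulting weights would multiply the normalized indices before they are overlaid on the map.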
Scale-Dependent Habitat Selection and Size-Based Dominance in Adult Male American Alligators
Strickland, Bradley A.; Vilella, Francisco J.; Belant, Jerrold L.
2016-01-01
Habitat selection is an active behavioral process that may vary across spatial and temporal scales. Animals choose an area of primary utilization (i.e., home range) then make decisions focused on resource needs within patches. Dominance may affect the spatial distribution of conspecifics and concomitant habitat selection. Size-dependent social dominance hierarchies have been documented in captive alligators, but evidence is lacking from wild populations. We studied habitat selection for adult male American alligators (Alligator mississippiensis; n = 17) on the Pearl River in central Mississippi, USA, to test whether habitat selection was scale-dependent and individual resource selectivity was a function of conspecific body size. We used K-select analysis to quantify selection at the home range scale and patches within the home range to determine selection congruency and important habitat variables. In addition, we used linear models to determine if body size was related to selection patterns and strengths. Our results indicated habitat selection of adult male alligators was a scale-dependent process. Alligators demonstrated greater overall selection for habitat variables at the patch level and less at the home range level, suggesting resources may not be limited when selecting a home range for animals in our study area. Further, diurnal habitat selection patterns may depend on thermoregulatory needs. There was no relationship between resource selection or home range size and body size, suggesting size-dependent dominance hierarchies may not have influenced alligator resource selection or space use in our sample. Though apparent habitat suitability and low alligator density did not manifest in an observed dominance hierarchy, we hypothesize that a change in either could increase intraspecific interactions, facilitating a dominance hierarchy. 
Due to the broad and diverse ecological roles of alligators, understanding the factors that influence their social dominance and space use can provide great insight into their functional role in the ecosystem. PMID:27588947
Tiller, Thomas; Schuster, Ingrid; Deppe, Dorothée; Siegers, Katja; Strohner, Ralf; Herrmann, Tanja; Berenguer, Marion; Poujol, Dominique; Stehle, Jennifer; Stark, Yvonne; Heßling, Martin; Daubert, Daniela; Felderer, Karin; Kaden, Stefan; Kölln, Johanna; Enzelberger, Markus; Urlinger, Stefanie
2013-01-01
This report describes the design, generation and testing of Ylanthia, a fully synthetic human Fab antibody library with 1.3E+11 clones. Ylanthia comprises 36 fixed immunoglobulin (Ig) variable heavy (VH)/variable light (VL) chain pairs, which cover a broad range of canonical complementarity-determining region (CDR) structures. The variable Ig heavy and Ig light (VH/VL) chain pairs were selected for biophysical characteristics favorable to manufacturing and development. The selection process included multiple parameters, e.g., assessment of protein expression yield, thermal stability and aggregation propensity in fragment antigen binding (Fab) and IgG1 formats, and relative Fab display rate on phage. The framework regions are fixed and the diversified CDRs were designed based on a systematic analysis of a large set of rearranged human antibody sequences. Care was taken to minimize the occurrence of potential posttranslational modification sites within the CDRs. Phage selection was performed against various antigens and unique antibodies with excellent biophysical properties were isolated. Our results confirm that quality can be built into an antibody library by prudent selection of unmodified, fully human VH/VL pairs as scaffolds. PMID:23571156
Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto
2010-01-11
In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables, with the aim of producing more parsimonious and possibly more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former comprising the independent variables and the latter the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted to produce a ranking of the variables on the basis of their class discrimination capabilities. Alternatively, the CMC index can be calculated for all possible combinations of variables and the variable subset with the maximal CMC selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable forward selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets. The results obtained were encouraging.
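The one-variable-at-a-time ranking described above can be sketched in Python. This is an illustrative stand-in, not the authors' exact CMC computation: for a single column against a one-hot class matrix, the first canonical correlation reduces to the multiple correlation of that column on the class indicators, i.e. the square root of eta squared. The function name and toy data are hypothetical.

```python
import numpy as np

def rank_by_class_correlation(X, y):
    """Rank variables one at a time by class-discrimination ability.

    For a single column against a one-hot class matrix, the first
    canonical correlation equals sqrt(between-class sum of squares /
    total sum of squares), i.e. sqrt(eta squared).
    """
    classes = np.unique(y)
    scores = []
    for j in range(X.shape[1]):
        x = X[:, j]
        grand = x.mean()
        between = sum((y == c).sum() * (x[y == c].mean() - grand) ** 2
                      for c in classes)
        total = ((x - grand) ** 2).sum()
        scores.append(np.sqrt(between / total) if total > 0 else 0.0)
    scores = np.array(scores)
    return np.argsort(scores)[::-1], scores

# Toy data: column 0 separates the two classes, column 1 is pure noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = np.column_stack([y + 0.1 * rng.standard_normal(100),
                     rng.standard_normal(100)])
order, scores = rank_by_class_correlation(X, y)
print(order[0])   # the discriminative column ranks first
```

Sorting the per-variable scores in descending order gives exactly the kind of discrimination ranking the abstract describes.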
Framework for adaptive multiscale analysis of nonhomogeneous point processes.
Helgason, Hannes; Bartroff, Jay; Abry, Patrice
2011-01-01
We develop the methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heart beat data. Modeling the process' non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to model-select from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
Variable Selection in the Presence of Missing Data: Imputation-based Methods.
Zhao, Yize; Long, Qi
2017-01-01
Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
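The first strategy can be sketched as follows. This is a minimal illustration, not a method from the paper: the imputation is a crude hot-deck draw from observed values (a principled analysis would use a proper multiple-imputation model), and the lasso is one of many selection methods that could be plugged in.

```python
import numpy as np
from sklearn.linear_model import LassoCV

def select_across_imputations(X_miss, y, m=5, threshold=0.5, seed=0):
    """Strategy 1 sketch: run a lasso-based selection on each of m
    imputed datasets, then keep variables selected in at least a
    `threshold` fraction of them. The imputation is a crude hot-deck
    draw from each column's observed values, purely for illustration."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X_miss.shape[1])
    for _ in range(m):
        X = X_miss.copy()
        for j in range(X.shape[1]):
            miss = np.isnan(X[:, j])
            X[miss, j] = rng.choice(X[~miss, j], miss.sum())
        coef = LassoCV(cv=5).fit(X, y).coef_
        counts += np.abs(coef) > 1e-8
    return np.where(counts / m >= threshold)[0]

# Toy example: y depends only on the first two of five variables,
# and ~10% of entries are missing completely at random (MCAR).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.1 * rng.standard_normal(200)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.1] = np.nan
selected = select_across_imputations(X_miss, y)
print(selected)
```

Raising `threshold` toward 1.0 demands that a variable survive selection in every imputed dataset, trading sensitivity for stability.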
Reimer, Christina B; Strobach, Tilo; Schubert, Torsten
2017-12-01
Visual attention and response selection are limited in capacity. Here, we investigated whether visual attention requires the same bottleneck mechanism as response selection in a dual-task of the psychological refractory period (PRP) paradigm. The dual-task consisted of an auditory two-choice discrimination Task 1 and a conjunction search Task 2, which were presented at variable temporal intervals (stimulus onset asynchrony, SOA). In conjunction search, visual attention is required to select items and to bind their features, resulting in a serial search process around the items in the search display (i.e., set size). We measured the reaction time of the visual search task (RT2) and the N2pc, an event-related potential (ERP), which reflects lateralized visual attention processes. If the response selection processes in Task 1 influence the visual attention processes in Task 2, N2pc latency and amplitude would be delayed and attenuated at short SOA compared to long SOA. The results, however, showed that latency and amplitude were independent of SOA, indicating that visual attention was deployed concurrently with response selection. Moreover, the RT2 analysis revealed an underadditive interaction of SOA and set size. We concluded that visual attention does not require the same bottleneck mechanism as response selection in dual-tasks.
Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh
2004-01-01
Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. This restriction is a major hindrance. Therefore, a switch adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method elicited a better recognition rate in comparison to alternative methods in the literature.
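The adaptive-filtering core of such a recognizer can be illustrated with a plain fixed-step LMS update. This is a simplification: the paper's variable degree variable step size variant also adapts the step size and filter order, and the system-identification toy below is a generic stand-in, not Morse timing data.

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive filter: iteratively nudges the
    weights w along the instantaneous error gradient so that
    w @ window tracks the desired signal d."""
    w = np.zeros(n_taps)
    errors = []
    for i in range(n_taps - 1, len(x)):
        window = x[i - n_taps + 1:i + 1][::-1]   # [x[i], x[i-1], ...]
        e = d[i] - w @ window                    # a-priori error
        w += 2 * mu * e * window                 # gradient-descent update
        errors.append(e)
    return w, np.array(errors)

# System-identification toy: recover a known 4-tap FIR filter
# from noisy input/output observations.
rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
true_w = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(2000)
w, errors = lms(x, d)
print(np.round(w, 2))   # close to true_w
```

A variable-step-size variant would replace the constant `mu` with one that grows when the error is large (fast tracking of a typist's drifting rate) and shrinks when it is small (low steady-state noise).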
Paradowska, Katarzyna; Jamróz, Marta Katarzyna; Kobyłka, Mariola; Gowin, Ewelina; Maczka, Paulina; Skibiński, Robert; Komsta, Łukasz
2012-01-01
This paper presents a preliminary study on building discriminant models from solid-state NMR spectrometry data to detect the presence of acetaminophen in over-the-counter pharmaceutical formulations. The dataset, containing 11 spectra of pure substances and 21 spectra of various formulations, was processed by partial least squares discriminant analysis (PLS-DA). The resulting model handled the discrimination well, and its quality parameters were acceptable. It was found that standard normal variate preprocessing had almost no influence on unsupervised investigation of the dataset. The influence of variable selection with the uninformative variable elimination by PLS method was studied; it reduced the dataset from 7601 variables to around 300 informative variables, but did not improve model performance. The results showed the possibility of constructing well-working PLS-DA models from such small datasets without a full experimental design.
A Call to Standardize Preanalytic Data Elements for Biospecimens, Part II.
Robb, James A; Bry, Lynn; Sluss, Patrick M; Wagar, Elizabeth A; Kennedy, Mary F
2015-09-01
Biospecimens must have appropriate clinical annotation (data) to ensure optimal quality for both patient care and research. Additional clinical preanalytic variables are the focus of this continuing study. To complete the identification of the essential preanalytic variables (data fields) that can, and in some instances should, be attached to every collected biospecimen by adding the additional specific variables for clinical chemistry and microbiology to our original 170 variables. The College of American Pathologists Diagnostic Intelligence and Health Information Technology Committee sponsored a second Biorepository Working Group to complete the list of preanalytic variables for annotating biospecimens. Members of the second Biorepository Working Group are experts in clinical pathology and microbiology. Additional preanalytic area-specific variables were identified and ranked along with definitions and potential negative impacts if the variable is not attached to the biospecimen. The draft manuscript was reviewed by additional national and international stakeholders. Four additional required preanalytic variables were identified specifically for clinical chemistry and microbiology biospecimens that can be used as a guide for site-specific implementation into patient care and research biorepository processes. In our collective experience, selecting which of the many preanalytic variables to attach to any specific set of biospecimens used for patient care and/or research is often difficult. The additional ranked list should be of practical benefit when selecting preanalytic variables for a given biospecimen collection.
Saraf-Sinik, Inbar; Assa, Eldad; Ahissar, Ehud
2015-06-10
Tactile perception is obtained by coordinated motor-sensory processes. We studied the processes underlying the perception of object location in freely moving rats. We trained rats to identify the relative location of two vertical poles placed in front of them and measured at high resolution the motor and sensory variables (19 and 2 variables, respectively) associated with this whiskers-based perceptual process. We found that the rats developed stereotypic head and whisker movements to solve this task, in a manner that can be described by several distinct behavioral phases. During two of these phases, the rats' whiskers coded object position by first temporal and then angular coding schemes. We then introduced wind (in two opposite directions) and remeasured their perceptual performance and motor-sensory variables. Our rats continued to perceive object location in a consistent manner under wind perturbations while maintaining all behavioral phases and relatively constant sensory coding. Constant sensory coding was achieved by keeping one group of motor variables (the "controlled variables") constant, despite the perturbing wind, at the cost of strongly modulating another group of motor variables (the "modulated variables"). The controlled variables included coding-relevant variables, such as head azimuth and whisker velocity. These results indicate that consistent perception of location in the rat is obtained actively, via a selective control of perception-relevant motor variables. Copyright © 2015 the authors.
Hadders-Algra, Mijna
2001-01-01
The Neuronal Group Selection Theory (NGST) could offer new insights into the mechanisms directing motor disorders, such as cerebral palsy and developmental coordination disorder. According to NGST, normal motor development is characterized by two phases of variability. Variation is not random but determined by criteria set by genetic information. Development starts with the phase of primary variability, during which variation in motor behavior is not geared to external conditions. At function-specific ages secondary variability starts, during which motor performance can be adapted to specific situations. In both forms of variability, selection on the basis of afferent information plays a significant role. From the NGST point of view, children with pre- or perinatally acquired brain damage, such as children with cerebral palsy and some children with developmental coordination disorder, suffer from stereotyped motor behavior, produced by a limited repertoire of primary (sub)cortical neuronal networks. These children also have problems in selecting the most efficient neuronal activity, due to deficits in the processing of sensory information. Therefore, NGST suggests that intervention in these children at an early age should aim at an enlargement of the primary neuronal networks. With increasing age, the emphasis of intervention could shift to the provision of ample opportunities for active practice, which might compensate for the impaired selection. PMID:11530887
Juhasz, Barbara J
2016-11-14
Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.
Quality control developments for graphite/PMR15 polyimide composites materials
NASA Technical Reports Server (NTRS)
Sheppard, C. H.; Hoggatt, J. T.
1979-01-01
The problem of lot-to-lot and within-lot variability of graphite/PMR-15 prepreg was investigated. The PMR-15 chemical characterization data were evaluated along with the processing conditions controlling the manufacture of PMR-15 resin and monomers. Manufacturing procedures were selected to yield a consistently reproducible graphite prepreg that could be processed into acceptable structural elements.
Butler, Emily E; Saville, Christopher W N; Ward, Robert; Ramsey, Richard
2017-01-01
The human face cues a range of important fitness information, which guides mate selection towards desirable others. Given humans' high investment in the central nervous system (CNS), cues to CNS function should be especially important in social selection. We tested if facial attractiveness preferences are sensitive to the reliability of human nervous system function. Several decades of research suggest an operational measure for CNS reliability is reaction time variability, which is measured by standard deviation of reaction times across trials. Across two experiments, we show that low reaction time variability is associated with facial attractiveness. Moreover, variability in performance made a unique contribution to attractiveness judgements above and beyond both physical health and sex-typicality judgements, which have previously been associated with perceptions of attractiveness. In a third experiment, we empirically estimated the distribution of attractiveness preferences expected by chance and show that the size and direction of our results in Experiments 1 and 2 are statistically unlikely without reference to reaction time variability. We conclude that an operating characteristic of the human nervous system, reliability of information processing, is signalled to others through facial appearance. Copyright © 2016 Elsevier B.V. All rights reserved.
Willecke, N; Szepes, A; Wunderlich, M; Remon, J P; Vervaet, C; De Beer, T
2017-04-30
The overall objective of this work is to understand how excipient characteristics influence the process and product performance for a continuous twin-screw wet granulation process. The knowledge gained through this study is intended to be used for a Quality by Design (QbD)-based formulation design approach and formulation optimization. A total of 9 preferred fillers and 9 preferred binders were selected for this study. The selected fillers and binders were extensively characterized regarding their physico-chemical and solid state properties using 21 material characterization techniques. Subsequently, principal component analysis (PCA) was performed on the data sets of filler and binder characteristics in order to reduce the variety of single characteristics to a limited number of overarching properties. Four principal components (PC) explained 98.4% of the overall variability in the fillers data set, while three principal components explained 93.4% of the overall variability in the data set of binders. Both PCA models allowed in-depth evaluation of similarities and differences in the excipient properties. Copyright © 2017. Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Lien, Mei-Ching; Proctor, Robert W.
2002-01-01
The purpose of this paper was to provide insight into the nature of response selection by reviewing the literature on stimulus-response compatibility (SRC) effects and the psychological refractory period (PRP) effect individually and jointly. The empirical findings and theoretical explanations of SRC effects that have been studied within a single-task context suggest that there are two response-selection routes-automatic activation and intentional translation. In contrast, all major PRP models reviewed in this paper have treated response selection as a single processing stage. In particular, the response-selection bottleneck (RSB) model assumes that the processing of Task 1 and Task 2 comprises two separate streams and that the PRP effect is due to a bottleneck located at response selection. Yet, considerable evidence from studies of SRC in the PRP paradigm shows that the processing of the two tasks is more interactive than is suggested by the RSB model and by most other models of the PRP effect. The major implication drawn from the studies of SRC effects in the PRP context is that response activation is a distinct process from final response selection. Response activation is based on both long-term and short-term task-defined S-R associations and occurs automatically and in parallel for the two tasks. The final response selection is an intentional act required even for highly compatible and practiced tasks and is restricted to processing one task at a time. Investigations of SRC effects and response-selection variables in dual-task contexts should be conducted more systematically because they provide significant insight into the nature of response-selection mechanisms.
Chemidlin Prévost-Bouré, Nicolas; Dequiedt, Samuel; Thioulouse, Jean; Lelièvre, Mélanie; Saby, Nicolas P. A.; Jolivet, Claudy; Arrouays, Dominique; Plassart, Pierre; Lemanceau, Philippe; Ranjard, Lionel
2014-01-01
Spatial scaling of microorganisms has been demonstrated over the last decade. However, the processes and environmental filters shaping soil microbial community structure on a broad spatial scale still need to be refined and ranked. Here, we compared bacterial and fungal community composition turnovers through a biogeographical approach on the same soil sampling design at a broad spatial scale (area range: 13300 to 31000 km2): i) to examine their spatial structuring; ii) to investigate the relative importance of environmental selection and spatial autocorrelation in determining their community composition turnover; and iii) to identify and rank the relevant environmental filters and scales involved in their spatial variations. Molecular fingerprinting of soil bacterial and fungal communities was performed on 413 soils from four French regions of contrasting environmental heterogeneity (Landes
Efficient Variable Selection Method for Exposure Variables on Binary Data
NASA Astrophysics Data System (ADS)
Ohno, Manabu; Tarumi, Tomoyuki
In this paper, we propose a new variable selection method for "robust" exposure variables. We define "robust" as the property that the same variables are selected from both the original data and perturbed data. Few studies have addressed effective methods for this selection. The problem of selecting exposure variables is almost the same as the problem of extracting correlation rules without robustness. [Brin 97] suggested that correlation rules can be extracted efficiently using the chi-squared statistic of a contingency table, which has a monotone property on binary data. However, the chi-squared value itself is not monotone, so as the dimension increases the method tends to judge a variable set as dependent even when it is completely independent, which makes it unusable for selecting robust exposure variables. To select robust independent variables, we assume an anti-monotone property for independence and apply the apriori algorithm. The apriori algorithm is one of the algorithms that find association rules in market basket data; it exploits the anti-monotone property of the support defined by association rules. Independence does not strictly have the anti-monotone property on the AIC of the independence probability model, but the tendency toward anti-monotonicity is strong. Therefore, variables selected under the anti-monotone property on the AIC are robust. Our method judges whether a given variable is an exposure variable for an independent variable by comparing the AIC values computed previously. Our numerical experiments show that our method selects robust exposure variables efficiently and precisely.
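The chi-squared independence statistic underlying the [Brin 97] approach can be computed directly for a pair of binary variables. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def chi2_2x2(a, b):
    """Chi-squared statistic for independence of two binary variables:
    sum over the 2x2 contingency table of (observed - expected)^2 / expected.
    Larger values are stronger evidence against independence."""
    n = len(a)
    table = np.array([[np.sum((a == i) & (b == j)) for j in (0, 1)]
                      for i in (0, 1)], dtype=float)
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    return ((table - expected) ** 2 / expected).sum()

rng = np.random.default_rng(3)
a = rng.integers(0, 2, 1000)
b_indep = rng.integers(0, 2, 1000)          # unrelated to a
b_dep = a ^ (rng.random(1000) < 0.1)        # agrees with a ~90% of the time
print(chi2_2x2(a, b_indep) < chi2_2x2(a, b_dep))   # prints True
```

The abstract's point is that this statistic, unlike support, is not monotone as variables are added, which is why the authors move to an AIC-based criterion with an (approximately) anti-monotone property for apriori-style pruning.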
NASA Astrophysics Data System (ADS)
García-Díaz, J. Carlos
2009-11-01
Fault detection and diagnosis is an important problem in process engineering, as process equipment is subject to malfunctions during operation. Galvanized steel is a value-added product, furnishing effective performance by combining the corrosion resistance of zinc with the strength and formability of steel. Fault detection and diagnosis is especially important in continuous hot dip galvanizing, and the increasingly stringent quality requirements of the automotive industry have also demanded ongoing efforts in process control to make the process more robust. When faults occur, they change the relationships among the observed variables. This work compares different statistical regression models proposed in the literature for estimating the quality of galvanized steel coils on the basis of short time histories. Data for 26 batches were available. Five variables were selected for monitoring the process: the steel strip velocity, four bath temperatures and bath level. The entire dataset of 48 galvanized steel coils was divided into two sets: a training set of 25 conforming coils and a second set of 23 nonconforming coils. Logistic regression is a modeling tool in which the dependent variable is categorical; in most applications it is binary. The results show that logistic generalized linear models provide good estimates of coil quality and can be useful for quality control in the manufacturing process.
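The logistic-regression modeling step can be sketched on synthetic data standing in for the coil measurements; the variable roles and coefficients below are assumptions for illustration, not the paper's actual data or fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in: 48 "coils" described by five process variables
# (in the spirit of strip velocity, bath temperatures, bath level),
# with a binary conforming (0) / nonconforming (1) outcome.
rng = np.random.default_rng(4)
X = rng.standard_normal((48, 5))
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]        # only two variables matter here
y = (rng.random(48) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
proba = model.predict_proba(X)[:, 1]         # estimated P(nonconforming)
print(model.score(X, y))                     # training accuracy
```

In practice one would evaluate on held-out coils (as the paper's two-set split does) rather than on the training data, and inspect the fitted coefficients to see which process variables drive nonconformity.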
NASA Astrophysics Data System (ADS)
Moll, Andreas; Stegert, Christoph
2007-01-01
This paper outlines an approach to couple a structured zooplankton population model with state variables for eggs, nauplii, two copepodite stages and adults adapted to Pseudocalanus elongatus into the complex marine ecosystem model ECOHAM2 with 13 state variables resolving the carbon and nitrogen cycle. Different temperature and food scenarios derived from laboratory culture studies were examined to improve the process parameterisation for copepod stage-dependent development processes. To study annual cycles under realistic weather and hydrographic conditions, the coupled ecosystem-zooplankton model is applied to a water column in the northern North Sea. The main ecosystem state variables were validated against observed monthly mean values. Then vertical profiles of selected state variables were compared to the physical forcing to study differences between zooplankton as one biomass state variable or partitioned into five population state variables. Simulated generation times are more affected by temperature than food conditions except during the spring phytoplankton bloom. Up to six generations within the annual cycle can be discerned in the simulation.
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
2008-09-01
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walko, Robert
2016-11-07
The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.
Kim, Keun Ho; Ku, Boncho; Kang, Namsik; Kim, Young-Su; Jang, Jun-Su; Kim, Jong Yeol
2012-01-01
In traditional Korean medicine, the voice has been used to classify the four constitution types and to assess a subject's health condition by extracting meaningful physical quantities. In this paper, we propose a method of selecting reliable variables from various voice features, such as frequency derivative features, frequency band ratios, and intensity, extracted from vowels and a sentence. Further, we suggest a process that extracts approximately independent variables by eliminating highly correlated explanatory variables, and that removes outlying data to enable reliable discriminant analysis. Moreover, the suitable division of data for analysis according to the gender and age of subjects is discussed. Finally, the vocal features are applied to a discriminant analysis to classify each constitution type. This method of voice classification can be widely used in u-Healthcare systems for personalized medicine and for improving diagnostic accuracy. PMID:22529874
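The pipeline described above, dropping one variable from each highly correlated pair, removing outlying samples, then running a discriminant analysis, can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code: the feature matrix, the 0.9 correlation threshold, and the z-score outlier rule are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical voice-feature matrix: 200 subjects x 6 features
X = rng.normal(size=(200, 6))
X[:, 1] = 0.95 * X[:, 0] + rng.normal(scale=0.1, size=200)  # redundant feature
y = (X[:, 0] + X[:, 2] > 0).astype(int)                     # 2 of 4 constitution types

def drop_correlated(X, threshold=0.9):
    """Greedily keep only features not highly correlated with an earlier kept one."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

keep = drop_correlated(X)
# Remove outlying subjects (|z| > 3 on any retained feature)
z = np.abs((X[:, keep] - X[:, keep].mean(0)) / X[:, keep].std(0))
mask = (z < 3).all(axis=1)

lda = LinearDiscriminantAnalysis().fit(X[mask][:, keep], y[mask])
print(keep, round(lda.score(X[mask][:, keep], y[mask]), 2))
```

The redundant feature (index 1) is eliminated before the discriminant step, which is the point of the decorrelation stage.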
Nielsen, Simon; Wilms, L Inge
2014-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach for structuring the relationships between measured variables and the cognitive functions they supposedly represent.
DOT National Transportation Integrated Search
2013-02-15
The technical tasks in this study included activities to characterize the impact of selected metallurgical processing and fabrication variables on ethanol stress corrosion cracking (ethanol SCC) of new pipeline steels, develop a better understand...
ERIC Educational Resources Information Center
GLASER, ROBERT
This chapter in a larger work on industrial psychology deals largely with the need to specify training objectives through job analysis, uses of testing in trainee selection, training variables and learning processes, training technology (mainly the characteristics of programed instruction), the evaluation of proficiency, the value of…
Selection by Certification: A Neglected Variable in Stratification Research.
ERIC Educational Resources Information Center
Faia, Michael A.
1981-01-01
Reviews literature on status attainment, with emphasis on the relationship between status and education in the United States. Concludes that the status attainment process in the United States may depart substantially from the rational choice model favored by human capital theory. (DB)
NASA Astrophysics Data System (ADS)
Gutiérrez, J. M.; Natxiondo, A.; Nieves, J.; Zabala, A.; Sertucha, J.
2017-04-01
The study of shrinkage incidence variations in nodular cast irons is an important aspect of manufacturing processes. These variations change the feeding requirements on castings, and the optimization of risers' size is consequently affected when avoiding the formation of shrinkage defects. The effect of a number of processing variables on the shrinkage size has been studied using a layout specifically designed for this purpose. The β parameter has been defined as the relative volume reduction from the pouring temperature down to room temperature. It is observed that shrinkage size and β decrease as effective carbon content increases and when inoculant is added in the pouring stream. A similar effect is found when the parameters selected from cooling curves show high graphite nucleation during solidification of cast irons for a given inoculation level. Pearson statistical analysis has been used to analyze the correlations among all involved variables, and a group of Bayesian networks has subsequently been built so as to obtain the most accurate model for predicting β as a function of the input processing variables. The developed models can be used in foundry plants to study shrinkage incidence variations in the manufacturing process and to optimize the related costs.
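The Pearson correlation screening step described in the abstract can be sketched as below. The data are synthetic and the effect sizes are invented for illustration; only the qualitative relationship (β falls as effective carbon content and inoculation rise) follows the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 60  # hypothetical number of melts
cec = rng.uniform(4.2, 4.6, n)         # effective carbon content, %
inoculant = rng.uniform(0.0, 0.3, n)   # inoculant added in the pouring stream, %
# Illustrative relation: beta (relative volume reduction) decreases with both
beta = 0.05 - 0.04 * (cec - 4.2) - 0.02 * inoculant + rng.normal(0, 0.001, n)

for name, x in [("CEC", cec), ("inoculant", inoculant)]:
    r, p = stats.pearsonr(x, beta)
    print(f"{name}: r = {r:+.2f}, p = {p:.1e}")
```

In the paper, such pairwise correlations motivate which variables enter the Bayesian network; here they simply confirm the signs built into the synthetic data.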
Biomotor status and kinesiological education of girls aged 10 to 12 years--example: volleyball.
Milić, Mirjana; Grgantov, Zoran; Katić, Ratko
2012-09-01
The aim of this study was to define processes of orientation and/or selection towards the sport of volleyball in schoolgirls of Kastela, aged 10-12, by examining the relations between regular classes of physical education (PE) and extracurricular sport activities. For this purpose, two morphological measures (body height and body mass) and a set of 11 motor tests (6 basic motor ability tests and 5 motor achievement tests) were applied to a sample of 242 girls aged 10-12, divided into a subsample of 42 girls participating in volleyball training (volleyball players) and a subsample of 200 girls not participating in volleyball training (volleyball non-players). Based on the comparison of test results of schoolgirls from Kastela with Croatian norms, factor analysis of the applied variables, and discriminant analysis of these variables between volleyball players and non-players, the processes and/or phases of selection in forming quality volleyball players were defined. Selection processes are preceded by orientation processes in physical education classes, i.e. choosing those sport activities which are in accordance with the biomotor status of students. Results have shown that orientation and initial selection in female volleyball need to be based on the motor set of psychomotor speed, repetitive strength of the trunk and flexibility (muscle tone regulation), together with body height. Volleyball training has affected muscle mass development and the development of strength factors, so that explosive strength of jumping and/or takeoff, along with body height, has predominantly differentiated female volleyball players from non-players aged 10 to 12, while serve and spike quality will have a dominant influence on match outcome.
NASA Astrophysics Data System (ADS)
Saprykin, A. A.; Sharkeev, Yu P.; Ibragimov, E. A.; Babakova, E. V.; Dudikhin, D. V.
2016-07-01
Alloys based on the titanium-niobium system are widely used in implant production, primarily because of their low modulus of elasticity and bio-inert properties. These alloys are especially important for tooth replacement and orthopedic surgery. At present, alloys based on the titanium-niobium system are produced mainly by conventional metallurgical methods, and the subsequent subtractive machining of an end product generates considerable waste, thereby increasing its cost. An alternative to these processes is additive manufacturing. Selective laser melting is a technology that makes it possible to synthesize products from metal powders and their blends: a laser melts one layer of powdered material, the sintered layer is then coated with the next layer of powder, and so on. Complex products and working prototypes are made on the basis of this technology. The authors of this paper address the issue of applying selective laser melting to synthesize a binary alloy from a composite powder based on the titanium-niobium system. A set of 10x10 mm samples was made under various process conditions on an experimental selective laser synthesis machine «VARISKAF-100MB». The machine provides adjustment of the following process variables: laser emission power, scanning rate and pitch, temperature of powder pre-heating, thickness of the layer to be deposited, and diameter of laser spot focusing. All samples were made in a preliminarily evacuated shielding atmosphere of argon. The porosity and thickness of the sintered layer are shown as functions of laser emission power at various scanning rates. It is revealed that scanning rate and laser emission power are the adjustable process variables having the greatest effect on formation of the sintered layer.
Mediterranean dunes on the go: Evidence from a short term study on coastal herbaceous vegetation
NASA Astrophysics Data System (ADS)
Prisco, Irene; Stanisci, Angela; Acosta, Alicia T. R.
2016-12-01
Detailed monitoring studies on permanent sites are a promising tool for an accurate evaluation of short, medium or long term vegetation dynamics. This work aims to evaluate short-term changes in coastal dune herbaceous plant species and EU Habitats through a multi-temporal analysis using permanent vegetation transects. In particular, (I) we analyze changes in species richness of coastal habitats; (II) we identify changes in plant cover of selected focal plants; and (III) we relate the changes to selected climatic variables and erosion/accretion processes. We selected one of the Italian peninsula's best-preserved coastal dune areas (ca. 50 km along the Adriatic sea) with a relatively homogeneous coastal zonation and low anthropic pressure but with different erosion/accretion processes. We explored changes in richness over time using generalized linear models (GLMs). We identified different ecological guilds: focal, ruderal and alien plant species, and investigated temporal trends in these guilds' species richness. We also applied GLMs to determine how plant cover of the most important focal species has changed over time. Overall, in this study we observed that the influence of climatic variables was relatively small. However, we found remarkably different trends in response to erosion/accretion processes both at the community and at the species level. Thus, our results highlight the importance of coastal dynamics in preserving not only coastal vegetation zonation, but also species richness and focal species cover. Moreover, we identified the dune grasslands as the most sensitive habitat for detecting the influence of climatic variables throughout a short term monitoring survey. Information from this study provides useful insights for detecting changes in vegetation, for establishing habitat protection priorities and for improving conservation efforts for these fragile ecosystems.
Commercially sterilized mussel meats (Mytilus chilensis): a study on process yield.
Almonacid, S; Bustamante, J; Simpson, R; Urtubia, A; Pinto, M; Teixeira, A
2012-06-01
The processing steps most responsible for yield loss in the manufacture of canned mussel meats are the thermal treatments of precooking to remove meats from shells, and thermal processing (retorting) to render the final canned product commercially sterile for long-term shelf stability. The objective of this study was to investigate and evaluate the impact of different combinations of process variables on the ultimate drained weight of the final mussel product (Mytilus chilensis), while verifying that any differences found were statistically and economically significant. The process variables selected for this study were precooking time, brine salt concentration, and retort temperature. Results indicated 2 combinations of process variables producing the widest difference in final drained weight, designated the best combination and the worst combination, with 35% and 29% yield, respectively. The significance of this difference was determined by employing a Bootstrap methodology, which assumes an empirical distribution of statistical error. A difference of nearly 6 percentage points in total yield was found, representing a 20% increase in annual sales from the same quantity of raw material. In addition to the increase in yield, the conditions for the best process included a retort process time 65% shorter than that for the worst process; this difference could have significant economic impact, important to the mussel canning industry. © 2012 Institute of Food Technologists®
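A bootstrap test of the yield difference, in the spirit of the methodology named in the abstract, can be sketched as follows. The yield samples are synthetic (means set to the reported 35% and 29%, spread and sample sizes assumed); the method simply resamples the empirical distributions and checks whether the confidence interval for the difference excludes zero.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical drained-weight yields (%) under the best and worst combinations
best = rng.normal(35.0, 2.0, 50)
worst = rng.normal(29.0, 2.0, 50)

observed = best.mean() - worst.mean()
# Bootstrap the difference in mean yield from the empirical distributions
diffs = np.array([
    rng.choice(best, best.size).mean() - rng.choice(worst, worst.size).mean()
    for _ in range(5000)
])
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"diff = {observed:.1f} points, 95% CI = ({lo:.1f}, {hi:.1f})")
```

Because the interval lies entirely above zero, the ~6-point difference would be declared statistically significant without any normality assumption, which is the appeal of the bootstrap here.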
Saini, Parmesh K; Marks, Harry M; Dreyfuss, Moshe S; Evans, Peter; Cook, L Victor; Dessai, Uday
2011-08-01
Measuring commonly occurring, nonpathogenic organisms on poultry products may be used for designing statistical process control systems that could result in reductions of pathogen levels. The extent of pathogen level reduction that could be obtained from actions resulting from monitoring these measurements over time depends upon the degree of understanding cause-effect relationships between processing variables, selected output variables, and pathogens. For such measurements to be effective for controlling or improving processing to some capability level within the statistical process control context, sufficiently frequent measurements would be needed to help identify processing deficiencies. Ultimately the correct balance of sampling and resources is determined by those characteristics of deficient processing that are important to identify. We recommend strategies that emphasize flexibility, depending upon sampling objectives. Coupling the measurement of levels of indicator organisms with practical emerging technologies and suitable on-site platforms that decrease the time between sample collections and interpreting results would enhance monitoring process control.
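The statistical process control idea above, monitoring indicator-organism counts over time and flagging measurements outside control limits as processing deficiencies, can be sketched with a Shewhart-style individuals chart. All counts and limits below are illustrative assumptions, not data from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical daily log10 counts of an indicator organism in carcass rinses
baseline = rng.normal(2.0, 0.2, 30)          # in-control history
center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # 3-sigma control limits

new = np.array([2.1, 1.9, 2.0, 3.1, 2.2])    # day 4 reflects a process upset
signals = np.flatnonzero((new > ucl) | (new < lcl))
print(f"UCL = {ucl:.2f}, out-of-control sample indices: {signals}")
```

As the abstract notes, how often such charts catch real deficiencies depends on sampling frequency and on how well the indicator tracks the underlying cause-effect chain to pathogens; the chart itself only detects statistical departure from the baseline.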
Parameters in selective laser melting for processing metallic powders
NASA Astrophysics Data System (ADS)
Kurzynowski, Tomasz; Chlebus, Edward; Kuźnicka, Bogumiła; Reiner, Jacek
2012-03-01
The paper presents results of studies on Selective Laser Melting. SLM is an additive manufacturing technology which may be used to process almost all metallic materials in the form of powder. Fiber lasers and/or Nd:YAG lasers with similar characteristics and a wavelength of 1.06-1.08 microns are the energy sources used primarily for processing metallic powder materials with high absorption of laser radiation. The paper presents results for selected variable parameters (laser power, scanning time, scanning strategy) and fixed parameters such as the protective atmosphere (argon, nitrogen, helium), temperature, and the type and shape of the powder material. The thematic scope is very broad, so the work focused on optimizing the process of selective laser micrometallurgy for producing fully dense parts. Density is closely linked with two other conditions: discontinuity of the microstructure (microcracks) and stability (repeatability) of the process. Materials used for the research were stainless steel 316L (AISI), tool steel H13 (AISI), and titanium alloy Ti6Al7Nb (ISO 5832-11). Studies were performed with a scanning electron microscope, a light microscope, a confocal microscope and a μCT scanner.
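In SLM parameter studies like the one above, the interplay of laser power, scan speed, hatch spacing and layer thickness is often summarized by a single screening quantity, the volumetric energy density E = P / (v · h · t). The formula is standard in the SLM literature; the specific 316L-like parameter values below are illustrative, not taken from this paper.

```python
def energy_density(power_w, speed_mm_s, hatch_mm, layer_mm):
    """Volumetric energy density E = P / (v * h * t), in J/mm^3."""
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Illustrative parameter set for a dense-part process window
e = energy_density(power_w=200, speed_mm_s=800, hatch_mm=0.12, layer_mm=0.03)
print(f"{e:.1f} J/mm^3")
```

Too low an energy density tends to leave unmelted powder and porosity; too high promotes keyholing and microcracks, which is why density, microstructure discontinuity and process stability are coupled in the abstract.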
Mental Status Documentation: Information Quality and Data Processes
Weir, Charlene; Gibson, Bryan; Taft, Teresa; Slager, Stacey; Lewis, Lacey; Staggers, Nancy
2016-01-01
Delirium is a fluctuating disturbance of cognition and/or consciousness associated with poor outcomes. Caring for patients with delirium requires integration of disparate information across clinicians, settings and time. The goal of this project was to characterize the information processes involved in nurses’ assessment, documentation, decision-making and communication regarding patients’ mental status in the inpatient setting. VA nurse managers of medical wards (n=18) were systematically selected across the US. A semi-structured telephone interview focused on current assessment, documentation, and communication processes, as well as clinical and administrative decision-making, was conducted, audio-recorded and transcribed. A thematic analytic approach was used. Five themes emerged: 1) Fuzzy Concepts, 2) Grey Data, 3) Process Variability, 4) Context is Critical and 5) Goal Conflict. This project describes the vague and variable information processes related to delirium and mental status that undermine effective risk, prevention, identification, communication and mitigation of harm. PMID:28269919
Equilibrium Noise in Ion Selective Field Effect Transistors.
1982-07-21
...face. These parameters have been evaluated for several ion-selective membranes. ...the dependence of the "integrated circuit" noise on the processing parameters, which were different for the two laboratories. This variability in the "integrated circuit"... systems and is useful in the identification of the parameters limiting the performance of these systems. In thermodynamic equilibrium, every...
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least squares (PLS) for parameter optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, the number of latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. Error propagation of the modeling parameters for water quantity in corn and geniposide quantity in Gardenia was characterized by both type I and type II error. For example, when the variable importance in the projection (VIP), interval partial least squares (iPLS) and backward interval partial least squares (BiPLS) variable selection algorithms were used for geniposide in Gardenia, the error weight varied from 5% to 65%, 55% and 15%, respectively, compared with synergy interval partial least squares (SiPLS). The results demonstrated how, and to what extent, the different modeling parameters affect error propagation of PLS for parameter optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials completed a robust process for developing PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, this could provide significant guidance for the selection of modeling parameters for other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
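The overall workflow, rank candidate inputs by an information criterion, keep a small subset, and fit an SVM-based model on it, can be sketched as below. This uses plain mutual information as a simplified stand-in for the paper's PMI algorithm (PMI additionally conditions on already-selected inputs), and all sensor variables and targets are invented.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(6)
# Hypothetical candidate inputs: 8 internal flowmeter variables, 300 operating points
X = rng.uniform(size=(300, 8))
# Target depends nonlinearly on candidates 0 and 4 only
y = np.sin(3 * X[:, 0]) + X[:, 4] ** 2 + 0.05 * rng.normal(size=300)

mi = mutual_info_regression(X, y, random_state=0)  # information-based relevance
ranked = np.argsort(mi)[::-1]
inputs = ranked[:2]                                # keep the two most informative
model = SVR().fit(X[:, inputs], y)                 # SVM-based data-driven model
print(inputs, round(model.score(X[:, inputs], y), 2))
```

Dropping the six irrelevant candidates before fitting is exactly the simplification benefit the abstract describes: a smaller input space for the SVM with essentially no loss of predictive information.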
NASA Astrophysics Data System (ADS)
Kal, Subhadeep; Mohanty, Nihar; Farrell, Richard A.; Franke, Elliott; Raley, Angelique; Thibaut, Sophie; Pereira, Cheryl; Pillai, Karthik; Ko, Akiteru; Mosden, Aelan; Biolsi, Peter
2017-04-01
Scaling beyond the 7nm technology node demands significant control over variability, down to a few angstroms, in order to achieve reasonable yield. For example, to meet current scaling targets it is highly desirable to achieve sub-30nm pitch line/space features at back end of line (BEOL) or front end of line (FEOL), and uniform and precise contact/hole patterning at middle of line (MOL). One of the quintessential requirements for such precise and possibly self-aligned patterning strategies is superior etch selectivity between the target films while other masks/films are exposed. The need for high etch selectivity becomes more evident for unit process development at MOL and BEOL, as a result of low density film choices (compared to FEOL film choices) due to a lower temperature budget. Low etch selectivity with conventional plasma and wet chemical etch techniques causes significant gouging (un-intended etching of the etch stop layer, as shown in Fig. 1), high line edge roughness (LER)/line width roughness (LWR), non-uniformity, etc. In certain circumstances this may lead to added downstream process stochastics. Furthermore, conventional plasma etches may also have the added disadvantage of plasma VUV damage and corner rounding (Fig. 1). Finally, the above mentioned factors can potentially compromise edge placement error (EPE) and/or yield. Therefore a process flow enabled with extremely highly selective etches, inherent to film properties and/or etch chemistries, is a significant advantage. To improve etch selectivity for certain etch steps during a process flow, we have to implement alternate highly selective, plasma-free techniques in conjunction with conventional plasma etches (Fig. 2). In this article, we present our plasma-free, chemical gas phase etch technique using chemistries that have high selectivity towards a spectrum of films owing to the reaction mechanism (as shown in Fig. 1).
Gas phase etches also help eliminate plasma damage to the features during the etch process. Herein we also demonstrate a test case of how a combination of plasma-assisted and plasma-free etch techniques has the potential to improve the process performance of 193nm immersion based self-aligned quadruple patterning (SAQP) for BEOL compliant films (an example is shown in Fig. 2). In addition, we present the application of gas etches for (1) profile improvement, (2) selective mandrel pull, and (3) critical dimension trim of mandrels, with an analysis of advantages over conventional techniques in terms of LER and EPE.
van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Lakie, Martin; Loram, Ian D.
2013-01-01
Modular organization in control architecture may underlie the versatility of human motor control, but the nature of the interface relating sensory input through task selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel. In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm, with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such a design could aid robots in reproducing the flexibility of human control. PMID:23675342
NASA Technical Reports Server (NTRS)
Smith, T. M.; Kloesel, M. F.; Sudbrack, C. K.
2017-01-01
Powder-bed additive manufacturing processes use fine powders to build parts layer by layer. For selective laser melted (SLM) Alloy 718, the powders that are available off-the-shelf are in the 10-45 or 15-45 micron size range. A comprehensive investigation of sixteen powders from these typical ranges and two off-nominal-sized powders is underway to gain insight into the impact of feedstock on the processing, durability and performance of 718 SLM space-flight hardware. This talk emphasizes one aspect of this work: the impact of powder variability on the microstructure and defects observed in the as-fabricated and fully heat-treated material, where lab-scale components were built using vendor-recommended parameters. These typical powders exhibit variation in composition, percentage of fines, roughness, morphology and particle size distribution. How these differences relate to melt-pool size, porosity, grain structure, precipitate distributions, and inclusion content will be presented and discussed in the context of build quality and powder acceptance.
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to a modular identifier which has been developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
Genetic variability and evolutionary dynamics of viruses of the family Closteroviridae
Rubio, Luis; Guerri, José; Moreno, Pedro
2013-01-01
RNA viruses have a great potential for genetic variation, rapid evolution and adaptation. Characterization of the genetic variation of viral populations provides relevant information on the processes involved in virus evolution and epidemiology, and it is crucial for designing reliable diagnostic tools and developing efficient and durable disease control strategies. Here we performed an updated analysis of sequences available in Genbank and reviewed present knowledge on the genetic variability and evolutionary processes of viruses of the family Closteroviridae. Several factors have shaped the genetic structure and diversity of closteroviruses. (1) Strong negative selection seems to be responsible for the high genetic stability in space and time of some viruses. (2) Long distance migration, probably by human transport of infected propagative plant material, has meant that genetically similar virus isolates are found in distant geographical regions. (3) Recombination between divergent sequence variants has generated new genotypes and plays an important role in the evolution of some viruses of the family Closteroviridae. (4) Interaction between virus strains, or between different viruses in mixed infections, may alter the accumulation of certain strains. (5) Host change or virus transmission by insect vectors induced changes in the viral population structure, due either to positive selection of sequence variants with higher fitness for host-virus or vector-virus interaction (adaptation) or to genetic drift caused by random selection of sequence variants during the population bottleneck associated with the transmission process. PMID:23805130
Wilimowska, J; Kłys, M; Jawień, W
2014-01-01
To compare the metabolic profile of valproic acid (VPA) in the studied groups of cases through an analysis of variability of concentrations of VPA with its selected metabolites (2-ene-VPA, 4-ene-VPA, 3-keto-VPA). Blood serum samples collected from 27 patients treated with VPA drugs in the Psychiatry Unit and in the Neurology and Cerebral Strokes Unit at the Ludwik Rydygier Provincial Specialist Hospital in Krakow, and blood serum samples collected from 26 patients hospitalized because of suspected acute VPA poisoning at the Toxicology Department, Chair of Toxicology and Environmental Diseases, Jagiellonian University Medical College in Krakow. The analysis of concentrations of VPA and its selected metabolites has shown that the metabolic profile of VPA determined in cases of acute poisoning is different from cases of VPA therapy. One of VPA's metabolic pathways - the process of desaturation - is unchanged in acute poisoning and prevails over the process of β-oxidation. The ingestion of toxic VPA doses results in an increased formation of 4-ene-VPA, proportional to an increase in VPA concentration. Acute VPA poisoning involves the saturation of VPA's metabolic transformations at the stage of β-oxidation. The process of oxidation of 2-ene-VPA to 3-keto-VPA is slowed down after the ingestion of toxic doses.
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
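The convolution filtering task this abstract mentions can be illustrated with a minimal modern sketch, rather than the original card-driven IDAMS implementation; names and kernel choice are ours:

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution with zero padding ('same' output size).
    The kernel is flipped, as in true convolution."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]
    out = np.zeros(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out
```

A task routine in the IDAMS spirit would apply such an operation to the input image selected by the task control card, with the kernel supplied via parameter cards.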
Sun, Wei; Huang, Guo H; Zeng, Guangming; Qin, Xiaosheng; Yu, Hui
2011-03-01
It is widely known that variation of the C/N ratio is dependent on many state variables during composting processes. This study attempted to develop a genetic algorithm aided stepwise cluster analysis (GASCA) method to describe the nonlinear relationships between the selected state variables and the C/N ratio in food waste composting. The experimental data from six bench-scale composting reactors were used to demonstrate the applicability of GASCA. Within the GASCA framework, the GA searched for optimal sets of both the specified state variables and SCA's internal parameters; SCA established statistical nonlinear relationships between the state variables and the C/N ratio; and, to avoid unnecessary and time-consuming calculation, a proxy table was introduced, saving around 70% of the computational effort. The obtained GASCA cluster trees had smaller sizes and higher prediction accuracy than the conventional SCA trees. Based on the optimal GASCA tree, the effects of the GA-selected state variables on the C/N ratio were ranked in descending order as: NH₄+-N concentration > Moisture content > Ash content > Mean temperature > Mesophilic bacteria biomass. This ranking implied that the variation of ammonium nitrogen concentration, the associated temperature and moisture conditions, the total loss of both organic matter and available mineral constituents, and the mesophilic bacterial activity were critical factors affecting the C/N ratio during the investigated food waste composting. This first application of GASCA to composting modelling indicated that more direct search algorithms could be coupled with SCA or other multivariate analysis methods to analyze complicated relationships in composting and many other environmental processes. Copyright © 2010 Elsevier B.V. All rights reserved.
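The GA-aided variable selection step can be caricatured with a toy sketch: a genetic algorithm searches over bit-masks marking which state variables enter the model. This is illustrative only; SCA itself is not implemented here, and `score` is a hypothetical stand-in for model quality:

```python
import random

def ga_select(n_vars, score, pop_size=30, generations=40, p_mut=0.1, seed=3):
    """Toy elitist GA over bit-masks: each individual marks which state
    variables are included; `score` evaluates a mask (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_vars)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < p_mut:          # occasional bit-flip mutation
                i = rng.randrange(n_vars)
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=score)
```

In the GASCA setting, evaluating `score` would mean building an SCA tree on the masked variables and measuring its prediction accuracy, which is exactly the expensive step the paper's proxy table mitigates.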
On the period determination of ASAS eclipsing binaries
NASA Astrophysics Data System (ADS)
Mayangsari, L.; Priyatikanto, R.; Putra, M.
2014-03-01
Variable stars, and eclipsing binaries in particular, are essential astronomical phenomena. Surveys are the backbone of astronomy, and many discoveries of variable stars are the results of surveys. The All-Sky Automated Survey (ASAS) is one of the observing projects whose ultimate goal is photometric monitoring of variable stars. Since its first light in 1997, ASAS has collected 50,099 variable stars, with 11,076 eclipsing binaries among them. In the present work we focus on the period determination of the eclipsing binaries. Since the number of data points in each ASAS eclipsing binary light curve is sparse, period determination of any system is not a straightforward process. For 30 samples of such systems we compare the Lomb-Scargle algorithm, which is Fast Fourier Transform (FFT) based, with the Phase Dispersion Minimization (PDM) method, which is not, to determine their periods. It is demonstrated that PDM performs better at handling eclipsing detached (ED) systems, whose variability is non-sinusoidal. Moreover, using semi-automatic recipes, we obtain better period solutions and satisfactorily improve the light curves of 53% of the selected objects, but fail for another 7%. In addition, we also highlight 4 interesting objects for further investigation.
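The PDM idea is simple to sketch: fold the observation times at a trial period, bin the resulting phases, and score the period by the ratio of pooled within-bin variance to total variance. A minimal illustrative version (not the survey's pipeline; bin count and grid are our choices) might look like:

```python
import numpy as np

def pdm_theta(times, mags, period, n_bins=10):
    """PDM statistic: pooled within-bin variance / total variance of the
    phase-folded light curve. Values near 0 indicate a good trial period."""
    phases = (times / period) % 1.0
    total_var = np.var(mags, ddof=1)
    num, den = 0.0, 0
    for b in range(n_bins):
        in_bin = mags[(phases >= b / n_bins) & (phases < (b + 1) / n_bins)]
        if len(in_bin) > 1:
            num += (len(in_bin) - 1) * np.var(in_bin, ddof=1)
            den += len(in_bin) - 1
    return (num / den) / total_var if den else np.inf

def best_period(times, mags, trial_periods):
    """Scan a grid of trial periods and return the one minimizing theta."""
    thetas = [pdm_theta(times, mags, p) for p in trial_periods]
    return trial_periods[int(np.argmin(thetas))]
```

Because the statistic makes no sinusoidal assumption, it copes with the sharp, narrow eclipses of detached systems better than an FFT-based periodogram, which is the behaviour the abstract reports.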
Cultural and Cognitive Considerations in the Prevention of American Indian Adolescent Suicide.
ERIC Educational Resources Information Center
La Framboise, Teresa D.; Big Foot, Delores Subia
1988-01-01
Describes cultural considerations associated with American Indian adolescents coping within a transactional, cognitive-phenomenological framework. Discusses select cultural values and beliefs of American Indians associated with death in terms of person variables and situational demand characteristics that interplay in coping process. Suggests…
A Selectionist Perspective on Systemic and Behavioral Change in Organizations
ERIC Educational Resources Information Center
Sandaker, Ingunn
2009-01-01
This article provides a discussion of how different dynamics in production processes and communication structures in the organization serve as different environmental contingencies favoring different behavioral patterns and variability of performance in organizations. Finally, an elaboration on a systems perspective on the selection of corporate…
Behavioral variability in an evolutionary theory of behavior dynamics.
Popa, Andrei; McDowell, J J
2016-03-01
McDowell's evolutionary theory of behavior dynamics (McDowell, 2004) instantiates populations of behaviors (abstractly represented by integers) that evolve under the selection pressure of the environment in the form of positive reinforcement. Each generation gives rise to the next via low-level Darwinian processes of selection, recombination, and mutation. The emergent patterns can be analyzed and compared to those produced by biological organisms. The purpose of this project was to explore the effects of high mutation rates on behavioral variability in environments that arranged different reinforcer rates and magnitudes. Behavioral variability increased with the rate of mutation. High reinforcer rates and magnitudes reduced these effects; low reinforcer rates and magnitudes augmented them. These results are in agreement with live-organism research on behavioral variability. Various combinations of mutation rates, reinforcer rates, and reinforcer magnitudes produced similar high-level outcomes (equifinality). These findings suggest that the independent variables that describe an experimental condition interact; that is, they do not influence behavior independently. These conclusions have implications for the interpretation of high levels of variability, mathematical undermatching, and the matching theory. The last part of the discussion centers on a potential biological counterpart for the rate of mutation, namely spontaneous fluctuations in the brain's default mode network. © 2016 Society for the Experimental Analysis of Behavior.
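The low-level Darwinian loop the theory instantiates, fitness-weighted selection, recombination, and point mutation over integer "behaviors", can be sketched minimally. This is an illustrative toy, not McDowell's actual implementation; all names and parameter values are assumptions:

```python
import random

def evolve(pop_size=100, target=500, mutation_rate=0.05,
           generations=200, bits=10, seed=1):
    """Minimal Darwinian loop over integer 'behaviors' in [0, 2**bits):
    fitness-proportional selection toward a reinforced target, bitwise
    recombination, and occasional single-bit mutation."""
    rng = random.Random(seed)
    pop = [rng.randrange(2 ** bits) for _ in range(pop_size)]
    for _ in range(generations):
        # selection: weight parents by closeness to the reinforced target
        weights = [1.0 / (1 + abs(x - target)) for x in pop]
        parents = rng.choices(pop, weights=weights, k=2 * pop_size)
        nxt = []
        for mom, dad in zip(parents[::2], parents[1::2]):
            mask = rng.randrange(2 ** bits)       # recombination mask
            child = (mom & mask) | (dad & ~mask)
            if rng.random() < mutation_rate:      # point mutation
                child ^= 1 << rng.randrange(bits)
            nxt.append(child)
        pop = nxt
    return pop
```

Under this sketch, raising the mutation rate raises the spread of the evolved population, mirroring the reported effect of mutation rate on behavioral variability.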
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study develops a method for analysing variability in model inputs together with model structure variability, in order to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
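The abstract does not reproduce the published QFD definition, so the following is only a hypothetical quantile-based deviation in its spirit: for each ensemble member, compare its flow quantiles against a reference series and average the absolute deviations. All names and the exact formula here are our assumptions, not the authors':

```python
import numpy as np

def quantile_flow_deviation(reference, ensemble,
                            quantiles=tuple(np.linspace(0.05, 0.95, 19))):
    """Hypothetical QFD-style score (NOT the published metric): mean absolute
    deviation, across flow quantiles, between each ensemble member and a
    reference flow series, averaged over the ensemble."""
    q = np.asarray(quantiles)
    ref_q = np.quantile(reference, q)
    devs = [np.mean(np.abs(np.quantile(member, q) - ref_q))
            for member in ensemble]
    return float(np.mean(devs))
```

A quantile-wise comparison of this kind is what lets uncertainty be attributed separately across low-flow and high-flow magnitudes, which is the behaviour the study examines.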
Charlesworth, Brian; Charlesworth, Deborah; Coyne, Jerry A; Langley, Charles H
2016-08-01
The 1966 GENETICS papers by John Hubby and Richard Lewontin were a landmark in the study of genome-wide levels of variability. They used the technique of gel electrophoresis of enzymes and proteins to study variation in natural populations of Drosophila pseudoobscura, at a set of loci that had been chosen purely for technical convenience, without prior knowledge of their levels of variability. Together with the independent study of human populations by Harry Harris, this seminal study provided the first relatively unbiased picture of the extent of genetic variability in protein sequences within populations, revealing that many genes had surprisingly high levels of diversity. These papers stimulated a large research program that found similarly high electrophoretic variability in many different species and led to statistical tools for interpreting the data in terms of population genetics processes such as genetic drift, balancing and purifying selection, and the effects of selection on linked variants. The current use of whole-genome sequences in studies of variation is the direct descendant of this pioneering work. Copyright © 2016 by the Genetics Society of America.
Greaves, Mel; Maley, Carlo C.
2012-01-01
Cancers evolve by a reiterative process of clonal expansion, genetic diversification and clonal selection within the adaptive landscapes of tissue ecosystems. The dynamics are complex with highly variable patterns of genetic diversity and resultant clonal architecture. Therapeutic intervention may decimate cancer clones, and erode their habitats, but inadvertently provides potent selective pressure for the expansion of resistant variants. The inherently Darwinian character of cancer lies at the heart of therapeutic failure but perhaps also holds the key to more effective control. PMID:22258609
How Distinctive Processing Enhances Hits and Reduces False Alarms
Hunt, R. Reed; Smith, Rebekah E.
2015-01-01
Distinctive processing is a concept designed to account for precision in memory, both correct responses and avoidance of errors. The principal question addressed in two experiments is how distinctive processing of studied material reduces false alarms to familiar distractors. Jacoby (Jacoby, Kelley, & McElree, 1999) has used the metaphors of early selection and late correction to describe two different types of control processes. Early selection refers to limitations on access, whereas late correction describes controlled monitoring of accessed information. The two types of processes are not mutually exclusive, and previous research has provided evidence for the operation of both. The data reported here extend previous work to a criterial recollection paradigm and to a recognition memory test. The results of both experiments show that variables that reduce false memory for highly familiar distractors continue to exert their effect under conditions of minimal post-access monitoring. Level of monitoring was reduced in the first experiment through test instructions and in the second experiment through speeded test responding. The results were consistent with the conclusion that both early selection and late correction operate to control accuracy in memory. PMID:26034343
Kljajic, Alen; Bester-Rogac, Marija; Klobcar, Andrej; Zupet, Rok; Pejovnik, Stane
2013-02-01
The active pharmaceutical ingredient orlistat is usually manufactured using a semi-synthetic procedure, producing crude product and complex mixtures of highly related impurities with minimal side-chain structure variability. It is therefore crucial for the overall success of industrial/pharmaceutical application to develop an effective purification process. In this communication, we present a newly developed crystallization process based on water-in-oil reversed micelles and a microemulsion system. Physicochemical properties of the crystallization media were varied through the surfactant and water composition, and the impact of these two parameters on efficiency was measured. Using precisely defined properties of the dispersed water phase in the crystallization media, a highly efficient separation process in terms of selectivity and yield was developed. Small-angle X-ray scattering, high-performance liquid chromatography, mass spectrometry, and scanning electron microscopy were used to monitor and analyze the separation processes and the orlistat products obtained. Typical process characteristics, especially selectivity and yield with regard to reference examples, were compared and discussed. Copyright © 2012 Wiley Periodicals, Inc.
Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?
Delhey, Kaspar; Peters, Anne
2008-01-01
Background Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales precluding the use of coefficients of variation. Methodology/Principal Findings Based on physiological models of avian colour vision we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. 
Increased variability of the known sexually-selected/quality-indicating colours in the studied species provides support for the predictions derived from sexual selection theory, but the lack of increased overall variability in males, or in dimorphic colours in general, indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766
Proteomic Prediction of Breast Cancer Risk: A Cohort Study
2007-03-01
(c) Data processing. Data analysis was performed using in-house software (Du P, Angeletti RH. Automatic deconvolution of isotope-resolved mass spectra using variable selection and quantized peptide mass distribution. Anal Chem, 78:3385-92, 2006; P Du, R Sudha, MB...). Reportable Outcomes: so far our publications have been on the development of algorithms for signal processing: 1. Du P, Angeletti RH
NASA Astrophysics Data System (ADS)
Rathod, Vishal
The objective of the present project was to develop Ibuprofen-loaded Nanostructured Lipid Carriers (IBU-NLCs) for topical ocular delivery, based on substantial pre-formulation screening of the components and an understanding of the interplay between the formulation and process variables. The BCS Class II drug ibuprofen was selected as the model drug for the current study. IBU-NLCs were prepared by the melt emulsification and ultrasonication technique. Extensive pre-formulation studies were performed to screen the lipid components (solid and liquid) based on the drug's solubility and affinity as well as component compatibility. The results from DSC and XRD assisted in selecting the most suitable ratio to be utilized for future studies. Dynasan® 114 was selected as the solid lipid and Miglyol® 840 as the liquid lipid based on preliminary lipid screening. The ratio of 6:4 was predicted to be the best based on its crystallinity index and thermal events. As many variables are involved in further optimization of the formulation, a single design approach is not always adequate. A hybrid-design approach was applied by employing the Plackett-Burman design (PBD) for preliminary screening of 7 critical variables, followed by the Box-Behnken design (BBD), a sub-type of response surface methodology (RSM) design, using 2 relatively significant variables from the former design and incorporating the surfactant/co-surfactant ratio as the third variable. Comparatively, Kolliphor® HS15 demonstrated lower mean particle size (PS) and polydispersity index (PDI), and Kolliphor® P188 resulted in a zeta potential (ZP) < -20 mV during the surfactant screening and stability studies. Hence, the surfactant/co-surfactant ratio was employed as the third variable to understand its synergistic effect on the response variables. We selected PS, PDI, and ZP as critical response variables in the PBD since they significantly influence the stability and performance of NLCs.
Formulations prepared using BBD were further characterized and evaluated with respect to PS, PDI, ZP and entrapment efficiency (EE) to identify the multi-factor interactions between the selected formulation variables. In vitro release studies were performed using a Spectra/Por dialysis membrane on a Franz diffusion cell with phosphate buffered saline (pH 7.4) as the medium. Samples for assay, EE, loading capacity (LC), solubility studies and in vitro release were filtered using Amicon 50K filters and analyzed via a UPLC system (Waters) at a detection wavelength of 220 nm. Significant variables were selected through the PBD, and the third variable was incorporated based on the surfactant screening and stability studies for the next design. The assay of the BBD-based formulations was found to be within 95-104% of the theoretically calculated values. The formulations were further investigated for PS, PDI, ZP and EE. PS was found to be in the range of 103-194 nm, with PDI ranging from 0.118 to 0.265. The ZP and EE were observed to be in the ranges of -22.2 to -11 mV and 90 to 98.7%, respectively. Drug release of 30% was observed from the optimized formulation in the first 6 h of in vitro studies, and the release profile showed sustained release of ibuprofen thereafter over several hours. These values also confirm that the production method, and all other selected variables, effectively promoted the incorporation of ibuprofen in the NLCs. The Quality by Design (QbD) approach was successfully implemented in developing a robust ophthalmic formulation with superior physicochemical and morphometric properties. NLCs as a nanocarrier demonstrated a promising perspective for topical delivery of poorly water-soluble drugs.
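The 7-factor screening step mentioned above uses the standard 8-run Plackett-Burman design, which can be generated from its cyclic generator row; a small sketch (the generator is the textbook one for N = 8, not data from this study):

```python
import numpy as np

def plackett_burman_8():
    """8-run Plackett-Burman design for up to 7 two-level factors (+1/-1).
    Rows: the 7 cyclic shifts of the standard generator, plus an
    all-minus run. Columns are balanced and mutually orthogonal."""
    gen = np.array([1, 1, 1, -1, 1, -1, -1])
    rows = [np.roll(gen, k) for k in range(7)]
    rows.append(-np.ones(7, dtype=int))
    return np.array(rows, dtype=int)
```

Each row is one experimental run; each column assigns the high/low level of one formulation or process variable, so main effects of all 7 screened variables can be estimated from only 8 runs.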
General shape optimization capability
NASA Technical Reports Server (NTRS)
Chargin, Mladen K.; Raasch, Ingo; Bruns, Rudolf; Deuermeyer, Dawson
1991-01-01
A method is described for calculating shape sensitivities, within MSC/NASTRAN, in a simple manner without resort to external programs. The method uses natural design variables to define the shape changes in a given structure. Once the shape sensitivities are obtained, the shape optimization process is carried out in a manner similar to property optimization processes. The capability of this method is illustrated by two examples: the shape optimization of a cantilever beam with holes, loaded by a point load at the free end (with the shape of the holes and the thickness of the beam selected as the design variables), and the shape optimization of a connecting rod subjected to several different loading and boundary conditions.
The importance of immune gene variability (MHC) in evolutionary ecology and conservation
Sommer, Simone
2005-01-01
Genetic studies have typically inferred the effects of human impact by documenting patterns of genetic differentiation and levels of genetic diversity among potentially isolated populations using selectively neutral markers such as mitochondrial control region sequences, microsatellites or single nucleotide polymorphisms (SNPs). However, evolutionarily relevant and adaptive processes within and between populations can only be reflected by coding genes. In vertebrates, growing evidence suggests that genetic diversity is particularly important at the level of the major histocompatibility complex (MHC). MHC variants influence many important biological traits, including immune recognition, susceptibility to infectious and autoimmune diseases, individual odours, mating preferences, kin recognition, cooperation and pregnancy outcome. These diverse functions and characteristics place genes of the MHC among the best candidates for studies of mechanisms and significance of molecular adaptation in vertebrates. MHC variability is believed to be maintained by pathogen-driven selection, mediated either through heterozygote advantage or frequency-dependent selection. Up to now, most of our knowledge has been derived from studies in humans or from model organisms under experimental, laboratory conditions. Empirical support for selective mechanisms in free-ranging animal populations in their natural environment is rare. In this review, I first introduce general information about the structure and function of MHC genes, as well as current hypotheses and concepts concerning the role of selection in the maintenance of MHC polymorphism. The evolutionary forces acting on the genetic diversity in coding and non-coding markers are compared. Then, I summarise empirical support for the functional importance of MHC variability in parasite resistance with emphasis on the evidence derived from free-ranging animal populations investigated in their natural habitat. 
Finally, I discuss the importance of adaptive genetic variability with respect to human impact and conservation, and implications for future studies. PMID:16242022
Kostanyan, Artak E; Erastov, Andrey A; Shishilov, Oleg N
2014-06-20
Multiple dual mode (MDM) counter-current chromatography separation processes consist of a succession of two isocratic counter-current steps and are characterized by the shuttle (forward and back) transport of the sample in chromatographic columns. In this paper, an improved MDM method based on variable duration of the alternating phase elution steps has been developed and validated, and the MDM separation processes with variable step durations are analyzed. Based on the cell model, analytical solutions are developed for impulse and non-impulse sample loading at the beginning of the column. Using the analytical solutions, a calculation program is presented to facilitate the simulation of MDM with variable duration of the phase elution steps, which can be used to select optimal process conditions for the separation of a given feed mixture. Two options of MDM separation are analyzed: (1) one-step solute elution, in which the separation is conducted such that the sample is transferred forward and back with the upper and lower phases inside the column until the desired separation of the components is reached, and then each individual component elutes entirely within one step; and (2) multi-step solute elution, in which the fractions of individual components are collected over several steps. It is demonstrated that proper selection of the duration of the individual cycles (phase flow times) can greatly increase the separation efficiency of CCC columns. Experiments were carried out using model mixtures of compounds from the GUESSmix with hexane/ethyl acetate/methanol/water solvent systems. The experimental results are compared to the predictions of the theory, and good agreement between theory and experiment has been demonstrated. Copyright © 2014 Elsevier B.V. All rights reserved.
Resolving the Conflict Between Associative Overdominance and Background Selection
Zhao, Lei; Charlesworth, Brian
2016-01-01
In small populations, genetic linkage between a polymorphic neutral locus and loci subject to selection, either against partially recessive mutations or in favor of heterozygotes, may result in an apparent selective advantage to heterozygotes at the neutral locus (associative overdominance) and a retardation of the rate of loss of variability by genetic drift at this locus. In large populations, selection against deleterious mutations has previously been shown to reduce variability at linked neutral loci (background selection). We describe analytical, numerical, and simulation studies that shed light on the conditions under which retardation vs. acceleration of loss of variability occurs at a neutral locus linked to a locus under selection. We consider a finite, randomly mating population initiated from an infinite population in equilibrium at a locus under selection. With mutation and selection, retardation occurs only when S, the product of twice the effective population size and the selection coefficient, is of order 1. With S >> 1, background selection always causes an acceleration of loss of variability. Apparent heterozygote advantage at the neutral locus is, however, always observed when mutations are partially recessive, even if there is an accelerated rate of loss of variability. With heterozygote advantage at the selected locus, loss of variability is nearly always retarded. The results shed light on experiments on the loss of variability at marker loci in laboratory populations and on the results of computer simulations of the effects of multiple selected loci on neutral variability. PMID:27182952
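The drift component of the process described above can be illustrated with a neutral Wright-Fisher simulation: heterozygosity at a neutral locus decays as roughly H_t ≈ H_0 (1 - 1/(2N))^t. This is a generic sketch, not the authors' simulation code, and it omits the linked selected locus entirely:

```python
import random

def mean_heterozygosity(n_diploid=50, p0=0.5, generations=100,
                        reps=300, seed=7):
    """Neutral Wright-Fisher drift: average expected heterozygosity
    2p(1-p) after `generations` generations, over `reps` replicate
    populations of n_diploid individuals."""
    rng = random.Random(seed)
    two_n = 2 * n_diploid
    total = 0.0
    for _ in range(reps):
        p = p0
        for _ in range(generations):
            # binomial sampling of 2N gametes from the current frequency
            p = sum(rng.random() < p for _ in range(two_n)) / two_n
            if p == 0.0 or p == 1.0:
                break  # allele lost or fixed; heterozygosity stays 0
        total += 2.0 * p * (1.0 - p)
    return total / reps
```

Adding a linked locus under selection to such a loop, and comparing the decay rate against this neutral baseline, is conceptually how retardation (associative overdominance) versus acceleration (background selection) of the loss of variability can be diagnosed.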
NASA Astrophysics Data System (ADS)
Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud
2014-02-01
The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
Olive fruits and vacuum impregnation, an interesting combination for dietetic iron enrichment.
Zunin, Paola; Turrini, Federica; Leardi, Riccardo; Boggia, Raffaella
2017-02-01
In this study, vacuum impregnation (VI) was employed for the iron enrichment of olive fruits, which, owing to the porosity of their pulp, are very interesting as a food vehicle for VI mineral supplementation. NaFeEDTA was chosen for olive fortification since it prevents iron from binding with compounds that could hinder its efficient absorption and since it causes few organoleptic problems. In order to improve the efficiency of the VI process, several parameters of the whole process were studied by design-of-experiments techniques. First, a D-optimal design was employed for a preliminary screening of the most significant process variables and showed that the concentration of the VI solution was by far the most significant process variable, although its contact time with the olives was also significant. A factorial design was then applied to the remaining variables and showed that the speed of addition of the VI solution was also significant. Finally, the application of a face-centered composite design to the three selected variables allowed the detection of processing conditions leading to final iron contents of 1.5-3 mg/g, corresponding to an intake of 10-15 mg Fe with four or five fortified olive fruits. No effect on olive taste was observed at these concentrations. The results showed that olive fruits were the most interesting vehicles for the supplementation of both iron and other minerals.
Medical Decision Making: A Selective Review for Child Psychiatrists and Psychologists
ERIC Educational Resources Information Center
Galanter, Cathryn A.; Patel, Vimla L.
2005-01-01
Physicians, including child and adolescent psychiatrists, show variability and inaccuracies in diagnosis and treatment of their patients and do not routinely implement evidenced-based medical and psychiatric treatments in the community. We believe that it is necessary to characterize the decision-making processes of child and adolescent…
Content Analysis Schedule for Bilingual Education Programs: Proyecto PAL.
ERIC Educational Resources Information Center
Gonzalez, Castor
This content analysis schedule for "Proyecto PAL" in San Jose, California, presents information on the history, funding, and scope of the project. Included are sociolinguistic process variables such as the native and dominant languages of students and their interaction. Information is provided on staff selection and the linguistic…
Gender Differences in Field-Dependence and Educational Style.
ERIC Educational Resources Information Center
Fritz, Robert L.
1994-01-01
Secondary marketing students (n=144) completed the Group Embedded Figures Test and Educational Style Preference Inventory. Gender differences were found in information processing strategies and on 12 of 19 conative variables representing the way moods and emotions act as filters to produce selective attention. These differences could be most…
Personal Variables and Bias in Educational Decision-Making.
ERIC Educational Resources Information Center
Huebner, E. Scott; And Others
1984-01-01
Findings regarding the influence of four potential sources of bias (sex, socioeconomic status, race, physical attractiveness) upon decision-making stages of the assessment process are selectively reviewed. It is concluded that, though further research is needed, convincing evidence of bias in later stages of decision making has yet to be…
16 CFR 1107.21 - Periodic testing.
Code of Federal Regulations, 2012 CFR
2012-01-01
... samples selected for testing pass the test, there is a high degree of assurance that the other untested... determining the testing interval include, but are not limited to, the following: (i) High variability in test... process management techniques and tests provide a high degree of assurance of compliance if they are not...
16 CFR § 1107.21 - Periodic testing.
Code of Federal Regulations, 2013 CFR
2013-01-01
... samples selected for testing pass the test, there is a high degree of assurance that the other untested... determining the testing interval include, but are not limited to, the following: (i) High variability in test... process management techniques and tests provide a high degree of assurance of compliance if they are not...
16 CFR 1107.21 - Periodic testing.
Code of Federal Regulations, 2014 CFR
2014-01-01
... samples selected for testing pass the test, there is a high degree of assurance that the other untested... determining the testing interval include, but are not limited to, the following: (i) High variability in test... process management techniques and tests provide a high degree of assurance of compliance if they are not...
Mindful Listening Instruction: Does It Make a Difference
ERIC Educational Resources Information Center
Anderson, William Todd
2013-01-01
This study examines the effect of mindfulness on student listening. Mindfulness is defined as "the process of noticing novel distinctions." Fifth grade students (N = 38) at a single school participated in this study, which used a posttest-only, random selection experimental design. The Independent Variable was exposure to mindful…
Examining Self Regulated Learning in Relation to Certain Selected Variables
ERIC Educational Resources Information Center
Johnson, N.
2012-01-01
Self-regulation is the controlling of a process or activity by the students who are involved in problem solving in physics rather than by an external agency (Johnson, 2011). Self-regulated learning consists of three main components: cognition, metacognition, and motivation. Cognition includes the skills necessary to encode, memorise, and recall…
Myths and realities about the recovery of L'Aquila after the earthquake
Contreras, Diana; Blaschke, Thomas; Kienberger, Stefan; Zeil, Peter
2014-01-01
There is a set of myths linked to the recovery of L'Aquila, such as: the L'Aquila recovery has come to a halt, it is still in an early recovery phase, and there is economic stagnation. The objective of this paper is threefold: (a) to identify and develop a set of spatial indicators for the case of L'Aquila, (b) to test the feasibility of a numerical assessment of these spatial indicators as a method to monitor the progress of a recovery process after an earthquake, and (c) to answer the question of whether the recovery process in L'Aquila stagnates or not. We hypothesize that, after an earthquake, the spatial distribution of expert-defined variables can constitute an index to assess the recovery process more objectively. In this article, we aggregated several indicators of building conditions to characterize the physical dimension, and we developed building use indicators to serve as proxies for the socio-economic dimension, while aiming for transferability of this approach. The methodology of this research entailed six steps: (1) fieldwork; (2) selection of a sampling area; (3) selection of the variables and indicators for the physical and socio-economic dimensions; (4) analysis of the recovery progress using spatial indicators, comparing the changes in the restricted core area as well as building use over time; (5) selection and integration of the results through expert weighting; and (6) determination of hotspots of recovery in L'Aquila. Eight categories of building conditions and twelve categories of building use were identified. Both indicators, building condition and building use, are aggregated into a recovery index. The reconstruction process in the city center of L'Aquila seems to stagnate, which is reflected by the following five variables: the percentage of buildings with on-going reconstruction, partial reconstruction, reconstruction projected, residential building use, and transport facilities.
These five factors were still at low levels within the core area in 2012. Nevertheless, we can conclude that the recovery process in L׳Aquila did not come to a halt but is still ongoing, albeit being slow. PMID:26779431
Do attentional capacities and processing speed mediate the effect of age on executive functioning?
Gilsoul, Jessica; Simon, Jessica; Hogge, Michaël; Collette, Fabienne
2018-02-06
Executive processes are well known to decline with age, and similar findings exist for attentional capacities and processing speed. We therefore investigated whether these two nonexecutive variables mediate the effect of age on executive functions (inhibition, shifting, updating, and dual-task coordination). We administered a large battery of executive, attentional, and processing speed tasks to 104 young and 71 older people, and we performed mediation analyses with variables showing a significant age effect. All executive and processing speed measures showed age-related effects, while only visual scanning task performance (selective attention) was explained by age when controlled for gender and educational level. Regarding the mediation analyses, visual scanning partially mediated the age effect on updating, while processing speed partially mediated the age effect on shifting, updating, and dual-task coordination. In a more exploratory way, inhibition was also found to partially mediate the effect of age on the three other executive functions. Attention did not greatly influence executive functioning in aging while, in agreement with the literature, processing speed seems to be a major mediator of the age effect on these processes. Interestingly, the global pattern of results also seems to indicate an influence of inhibition, but further studies are needed to confirm the role of that variable as a mediator and its relative importance in comparison with processing speed.
Soft sensor for real-time cement fineness estimation.
Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir
2015-03-01
This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when the information, for whatever reason, is not available by direct measurement. In this application, soft sensors are used to provide information on a process variable normally obtained from off-line laboratory tests performed at large time intervals. Cement fineness is one of the crucial parameters that define the quality of produced cement. Providing real-time information on cement fineness using soft sensors can overcome the limitations and problems that originate from a lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information-theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models that had the best performance and the best capacity to adapt to changes in the cement grinding circuit were selected to implement the soft sensors. The soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm the results obtained during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
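The information-theoretic input selection step can be sketched as follows; scikit-learn's mutual-information estimator stands in for whatever criterion the authors actually used, and the synthetic "fineness" relation below is hypothetical:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))          # six candidate process variables
# Hypothetical fineness target depending only on variables 0 and 2.
y = 2.0 * X[:, 0] + np.sin(2 * X[:, 2]) + 0.1 * rng.normal(size=n)

mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[::-1][:2]       # keep the two most informative inputs
print(sorted(int(i) for i in top))
```

The retained inputs would then feed a multi-layer perceptron regressor, as in the paper.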
Richer concepts are better remembered: number of features effects in free recall
Hargreaves, Ian S.; Pexman, Penny M.; Johnson, Jeremy C.; Zdrazilova, Lenka
2012-01-01
Many models of memory build in a term for encoding variability, the observation that there can be variability in the richness or extensiveness of processing at encoding, and that this variability has consequences for retrieval. In four experiments, we tested the expectation that encoding variability could be driven by the properties of the to-be-remembered item. Specifically, that concepts associated with more semantic features would be better remembered than concepts associated with fewer semantic features. Using feature listing norms we selected sets of items for which people tend to list higher numbers of features (high NoF) and items for which people tend to list lower numbers of features (low NoF). Results showed more accurate free recall for high NoF concepts than for low NoF concepts in expected memory tasks (Experiments 1–3) and also in an unexpected memory task (Experiment 4). This effect was not the result of associative chaining between study items (Experiment 3), and can be attributed to the amount of item-specific processing that occurs at study (Experiment 4). These results provide evidence that stimulus-specific differences in processing at encoding have consequences for explicit memory retrieval. PMID:22514526
Approximate techniques of structural reanalysis
NASA Technical Reports Server (NTRS)
Noor, A. K.; Lowder, H. E.
1974-01-01
A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of the Taylor series approximation in an iterative process. For the reduced basis, a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques for a wide range of variations in the design variables.
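A one-degree-of-freedom sketch of the proposed modification, with a hypothetical stiffness model: the first-order Taylor estimate of the response at the modified design seeds a stationary iteration that uses the original stiffness, and even a few cycles sharply reduce the error.

```python
# One-DOF illustration of iterative reanalysis (hypothetical stiffness model).
F = 10.0
k = lambda d: 200.0 * d            # stiffness as a function of design variable d
d0, d1 = 1.0, 1.3                  # original and modified designs

u0 = F / k(d0)                     # exact response of the analyzed design
dudd = -F / (200.0 * d0 ** 2)      # sensitivity du/dd at d0
u_taylor = u0 + dudd * (d1 - d0)   # first-order Taylor estimate at d1

# Start from the Taylor estimate and iterate with the *original*
# stiffness as preconditioner: u <- u + (F - k(d1) * u) / k(d0)
u = u_taylor
for _ in range(5):
    u = u + (F - k(d1) * u) / k(d0)

u_exact = F / k(d1)
print(abs(u_taylor - u_exact), abs(u - u_exact))
```

The iteration converges here because the stiffness ratio k(d1)/k(d0) stays below 2, mirroring the paper's point that the Taylor first estimate extends the range of design changes the reanalysis can handle.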
Atlı, Burcu; Yamaç, Mustafa; Yıldız, Zeki; Isikhuemhen, Omoanghe S
2016-01-01
In this study, culture conditions were optimized to improve lovastatin production by Omphalotus olearius, isolate OBCC 2002, using statistical experimental designs. The Plackett-Burman design was used to select important variables affecting lovastatin production. Accordingly, glucose, peptone, and agitation speed were determined as the variables that have influence on lovastatin production. In a further experiment, these variables were optimized with a Box-Behnken design and applied in a submerged process; this resulted in 12.51 mg/L lovastatin production on a medium containing glucose (10 g/L), peptone (5 g/L), thiamine (1 mg/L), and NaCl (0.4 g/L) under static conditions. This level of lovastatin production is eight times higher than that produced under unoptimized media and growth conditions by Omphalotus olearius. To the best of our knowledge, this is the first attempt to optimize submerged fermentation process for lovastatin production by Omphalotus olearius.
Impaired auditory temporal selectivity in the inferior colliculus of aged Mongolian gerbils.
Khouri, Leila; Lesica, Nicholas A; Grothe, Benedikt
2011-07-06
Aged humans show severe difficulties in temporal auditory processing tasks (e.g., speech recognition in noise, low-frequency sound localization, gap detection). A degradation of auditory function with age is also evident in experimental animals. To investigate age-related changes in temporal processing, we compared extracellular responses to temporally variable pulse trains and human speech in the inferior colliculus of young adult (3 month) and aged (3 years) Mongolian gerbils. We observed a significant decrease of selectivity to the pulse trains in neuronal responses from aged animals. This decrease in selectivity led, on the population level, to an increase in signal correlations and therefore a decrease in heterogeneity of temporal receptive fields and a decreased efficiency in encoding of speech signals. A decrease in selectivity to temporal modulations is consistent with a downregulation of the inhibitory transmitter system in aged animals. These alterations in temporal processing could underlie declines in the aging auditory system, which are unrelated to peripheral hearing loss. These declines cannot be compensated by traditional hearing aids (that rely on amplification of sound) but may rather require pharmacological treatment.
Ecological prediction with nonlinear multivariate time-frequency functional data models
Yang, Wen-Hsi; Wikle, Christopher K.; Holan, Scott H.; Wildhaber, Mark L.
2013-01-01
Time-frequency analysis has become a fundamental component of many scientific inquiries. Due to improvements in technology, the amount of high-frequency signals that are collected for ecological and other scientific processes is increasing at a dramatic rate. In order to facilitate the use of these data in ecological prediction, we introduce a class of nonlinear multivariate time-frequency functional models that can identify important features of each signal as well as the interaction of signals corresponding to the response variable of interest. Our methodology is of independent interest and utilizes stochastic search variable selection to improve model selection and performs model averaging to enhance prediction. We illustrate the effectiveness of our approach through simulation and by application to predicting spawning success of shovelnose sturgeon in the Lower Missouri River.
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antoniucci, S.; Giannini, T.; Li Causi, G.
2014-02-10
Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.
Gannu, Ramesh; Yamsani, Vamshi Vishnu; Palem, Chinna Reddy; Yamsani, Shravan Kumar; Yamsani, Madhusudan Rao
2010-01-01
The objective of the investigation was to optimize the iontophoresis process parameters of lisinopril (LSP) by 3 x 3 factorial design, Box-Behnken statistical design. LSP is an ideal candidate for iontophoretic delivery to avoid the incomplete absorption problem associated after its oral administration. Independent variables selected were current (X(1)), salt (sodium chloride) concentration (X(2)) and medium/pH (X(3)). The dependent variables studied were amount of LSP permeated in 4 h (Y(1): Q(4)), 24 h (Y(2): Q(24)) and lag time (Y(3)). Mathematical equations and response surface plots were used to relate the dependent and independent variables. The regression equation generated for the iontophoretic permeation was Y(1) = 1.98 + 1.23X(1) - 0.49X(2) + 0.025X(3) - 0.49X(1)X(2) + 0.040X(1)X(3) - 0.010X(2)X(3) + 0.58X(1)(2) - 0.17X(2)(2) - 0.18X(3)(2); Y(2) = 7.28 + 3.32X(1) - 1.52X(2) + 0.22X(3) - 1.30X(1)X(2) + 0.49X(1)X(3) - 0.090X(2)X(3) + 0.79X(1)(2) - 0.62X(2)(2) - 0.33X(3)(2) and Y(3) = 0.60 + 0.0038X(1) + 0.12X(2) - 0.011X(3) + 0.005X(1)X(2) - 0.018X(1)X(3) - 0.015X(2)X(3) - 0.00075X(1)(2) + 0.017X(2)(2) - 0.11X(3)(2). The statistical validity of the polynomials was established and optimized process parameters were selected by feasibility and grid search. Validation of the optimization study with 8 confirmatory runs indicated high degree of prognostic ability of response surface methodology. The use of Box-Behnken design approach helped in identifying the critical process parameters in the iontophoretic delivery of lisinopril.
Immigration, stress, and depressive symptoms in a Mexican-American community.
Golding, J M; Burnam, M A
1990-03-01
This study assessed levels of depressive symptomatology in a household probability sample of Mexico-born (N = 706) and U.S.-born (N = 538) Mexican Americans. We hypothesized that immigration status differences in acculturation, strain, social resources, and social conflict, as well as differences in the associations of these variables with depression, would account for differences in depression between U.S.-born and Mexico-born respondents. U.S.-born Mexican Americans had higher depression scores than those born in Mexico. When cultural and social psychological variables were controlled in a multiple regression analysis, the immigrant status difference persisted. Tests of interaction terms suggested greater vulnerability to the effects of low acculturation and low educational attainment among the U.S.-born relative to those born in Mexico; however, the immigrant status difference persisted after controlling for these interactions. Unmeasured variables such as selective migration of persons with better coping skills, selective return of depressed immigrants, or generational differences in social comparison processes may account for the immigration status difference.
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One well-known method applied in clustering is K-Means clustering. In its application, the choice of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroids determined randomly and with a KD-Tree method. On a data set of 1000 student academic records used to identify students at risk of dropping out, random initial centroid determination gave an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gave an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with initial KD-Tree centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
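A minimal sketch of the comparison, with synthetic two-dimensional data standing in for the student records and a farthest-point heuristic standing in for the KD-Tree seeding (both stand-ins are assumptions, not the paper's exact procedure):

```python
import numpy as np

def kmeans(X, centroids, iters=50):
    """Plain Lloyd iterations; returns final centroids and the SSE."""
    for _ in range(iters):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centroids)):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(0)
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return centroids, d.min(1).sum()

rng = np.random.default_rng(1)
# Three well-separated synthetic clusters (hypothetical feature space).
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0.0, 3.0, 6.0)])

# Random seeding vs. a spread-out (farthest-point) seeding.
_, sse_random = kmeans(X, X[rng.choice(len(X), 3, replace=False)].copy())
far = [X[0]]
for _ in range(2):
    dists = np.min([((X - c) ** 2).sum(1) for c in far], axis=0)
    far.append(X[dists.argmax()])
_, sse_spread = kmeans(X, np.array(far))
print(round(sse_random, 1), round(sse_spread, 1))
```

As in the paper, the seeding that spreads initial centroids across the data cannot do worse in expectation than a purely random pick, and the final SSE is the figure of merit.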
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series that include dependence components do not meet the data consistency assumption for hydrological computation. Both factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds for the correlation coefficient, this method divides the significance of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of the method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte-Carlo experiments classifying the relationship between the correlation coefficient and the auto-correlation coefficient. The method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in the hydrological process.
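For a first-order auto-regression model the proposed statistic can be sketched numerically: scaling the lagged series by the fitted (positive) coefficient does not change a correlation, so the correlation between the series and its dependence component reduces to the lag-1 autocorrelation. The data below are synthetic and the coefficient value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n, phi = 2000, 0.7
x = np.zeros(n)
for t in range(1, n):                 # simulate AR(1): x_t = phi*x_{t-1} + e_t
    x[t] = phi * x[t - 1] + rng.normal()

phi_hat = np.polyfit(x[:-1], x[1:], 1)[0]   # least-squares AR(1) coefficient
dep = phi_hat * x[:-1]                      # fitted dependence component

r = np.corrcoef(x[1:], dep)[0, 1]           # series vs. dependence component
r1 = np.corrcoef(x[1:], x[:-1])[0, 1]       # lag-1 autocorrelation
print(round(r, 3), round(r1, 3))
```

Thresholds on r (as the authors propose) would then grade the dependence from "no variability" up to "drastic variability".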
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from input-dimension limits and information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism with the addition of middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking into local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data. PMID:23573172
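The roulette wheel selection mechanism mentioned above can be sketched in a few lines; the fitness values here are hypothetical stand-ins for the reduct-quality scores a genetic algorithm would compute.

```python
import random

def roulette_select(fitness, rng):
    """Pick an index with probability proportional to its fitness."""
    total = sum(fitness)
    pick = rng.uniform(0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if pick <= acc:
            return i
    return len(fitness) - 1

rng = random.Random(0)
fitness = [1.0, 3.0, 6.0]            # hypothetical fitness of three candidates
counts = [0, 0, 0]
for _ in range(10000):
    counts[roulette_select(fitness, rng)] += 1
print(counts)  # roughly proportional to 1:3:6
```

Fitter candidates are selected more often but weaker ones keep a nonzero chance, which is what lets the modified mechanism escape local solutions.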
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
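A minimal sketch of the filtering idea, assuming a single dominant common factor: remove the top principal component from both the predictors and the output, then run ℓ1-regularized regression. The variable indices, factor structure, and regularization strength below are illustrative, not the paper's data or settings.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 10
z = rng.normal(size=(n, 1))
X = z + 0.3 * rng.normal(size=(n, p))     # all variables share one common factor
beta = np.zeros(p); beta[3] = 1.0         # only variable 3 is truly causal
y = X @ beta + 0.1 * rng.normal(size=n)

# Filtering step: project out the top principal component from X and y,
# reducing the correlations that make plain L1 selection pick decoys.
Xc = X - X.mean(0); yc = y - y.mean()
u, s, vt = np.linalg.svd(Xc, full_matrices=False)
pc = vt[0]                                 # top principal direction
Xf = Xc - np.outer(Xc @ pc, pc)            # remove the PC from the predictors
yf = yc - u[:, 0] * (u[:, 0] @ yc)         # remove its score from the output

coef = Lasso(alpha=0.02).fit(Xf, yf).coef_
print(int(np.abs(coef).argmax()))
```

After filtering, the decoy variables no longer share the common-factor correlation with the output, so the ℓ1 penalty concentrates weight on the truly influential variable.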
A review of covariate selection for non-experimental comparative effectiveness research.
Sauer, Brian C; Brookhart, M Alan; Roy, Jason; VanderWeele, Tyler
2013-11-01
This paper addresses strategies for selecting variables for adjustment in non-experimental comparative effectiveness research and uses causal graphs to illustrate the causal network that relates treatment to outcome. Variables in the causal network take on multiple structural forms. Adjustment for a common cause pathway between treatment and outcome can remove confounding, whereas adjustment for other structural types may increase bias. For this reason, variable selection would ideally be based on an understanding of the causal network; however, the true causal network is rarely known. Therefore, we describe more practical variable selection approaches based on background knowledge when the causal structure is only partially known. These approaches include adjustment for all observed pretreatment variables thought to have some connection to the outcome, all known risk factors for the outcome, and all direct causes of the treatment or the outcome. Empirical approaches, such as forward and backward selection and automatic high-dimensional proxy adjustment, are also discussed. As there is a continuum between knowing and not knowing the causal, structural relations of variables, we recommend addressing variable selection in a practical way that involves a combination of background knowledge and empirical selection and that uses high-dimensional approaches. This empirical approach can be used to select from a set of a priori variables based on the researcher's knowledge to be included in the final analysis or to identify additional variables for consideration. This more limited use of empirically derived variables may reduce confounding while simultaneously reducing the risk of including variables that may increase bias. Copyright © 2013 John Wiley & Sons, Ltd.
Rahman, Anisur; Faqeerzada, Mohammad A; Cho, Byoung-Kwan
2018-03-14
Allicin and soluble solid content (SSC) in garlic are responsible for its pungent flavor and odor. However, current conventional methods, such as high-pressure liquid chromatography and refractometry, have critical drawbacks: they are time-consuming, labor-intensive, and destructive. The present study aimed to predict allicin and SSC in garlic using hyperspectral imaging in combination with variable selection algorithms and calibration models. Hyperspectral images of 100 garlic cloves were acquired covering two spectral ranges, from which the mean spectra of each clove were extracted. The calibration models included partial least squares (PLS) and least squares-support vector machine (LS-SVM) regression, as well as different spectral pre-processing techniques, from which the highest performing pre-processing technique and spectral range were selected. Then, variable selection methods, such as regression coefficients, variable importance in projection (VIP), and the successive projections algorithm (SPA), were evaluated for the selection of effective wavelengths (EWs). Furthermore, PLS and LS-SVM regression methods were applied to quantitatively predict the quality attributes of garlic using the selected EWs. Of the established models, the SPA-LS-SVM model obtained an R²pred of 0.90 and a standard error of prediction (SEP) of 1.01% for SSC prediction, whereas the VIP-LS-SVM model produced the best result, with an R²pred of 0.83 and an SEP of 0.19 mg g⁻¹, for allicin prediction in the range 1000-1700 nm. Furthermore, chemical images of garlic were developed using the best predictive model to facilitate visualization of the spatial distributions of allicin and SSC. The present study clearly demonstrates that hyperspectral imaging combined with an appropriate chemometrics method can potentially be employed as a fast, non-invasive method to predict the allicin and SSC in garlic. © 2018 Society of Chemical Industry.
Thermo-Mechanical Processing in Friction Stir Welds
NASA Technical Reports Server (NTRS)
Schneider, Judy
2003-01-01
Friction stir welding is a solid-phase joining (welding) process that was invented in 1991 at The Welding Institute (TWI). The process is potentially capable of joining a wide variety of aluminum alloys that are traditionally difficult to fusion weld. The friction stir welding (FSW) process produces welds by moving a non-consumable rotating pin tool along a seam between work pieces that are firmly clamped to an anvil. At the start of the process, the rotating pin is plunged into the material to a pre-determined load. The required heat is produced by a combination of frictional and deformation heating. The shape of the tool shoulder and supporting anvil promotes a high hydrostatic pressure along the joint line as the tool shears and literally stirs the metal together. To produce a defect-free weld, process variables (RPM, traverse speed, and downward force) and tool pin design must be chosen carefully. An accurate model of the material flow during the process is necessary to guide process variable selection. At MSFC a plastic slip line model of the process has been synthesized based on macroscopic images of the resulting weld material. Although this model appears to have captured the main features of the process, material-specific interactions are not understood. The objective of the present research was to develop a basic understanding of the evolution of the microstructure to be able to relate it to the deformation process variables of strain, strain rate, and temperature.
Talent identification and early development of elite water-polo players: a 2-year follow-up study.
Falk, Bareket; Lidor, Ronnie; Lander, Yael; Lang, Benny
2004-04-01
The processes of talent detection and early development are critical in any sport programme. However, not much is known about the appropriate strategies to be implemented during these processes, and little scientific inquiry has been conducted in this area. The aim of this study was to identify variables of swimming, ball handling and physical ability, as well as game intelligence, which could assist in the selection process of young water-polo players. Twenty-four players aged 14-15 years underwent a battery of tests three times during a 2-year period, before selection to the junior national team. The tests included: freestyle swim for 50, 100, 200 and 400 m, 100-m breast-stroke, 100-m 'butterfly' (with breast-stroke leg motion), 50-m dribbling, throwing at the goal, throw for distance in the water, vertical 'jump' from the water, and evaluation of game intelligence by two coaches. A comparison of those players eventually selected to the team and those not selected demonstrated that, 2 years before selection, selected players were already superior on most of the swim tasks (with the exception of breast-stroke and 50-m freestyle), as well as dribbling and game intelligence. This superiority was maintained throughout the 2 years. Two-way tabulation revealed that, based on baseline scores, the prediction for 67% of the players was in agreement with the final selection to the junior national team. We recommend that fewer swim events be used in the process of selecting young water-polo players, and that greater emphasis should be placed on evaluation of game intelligence.
Selective attention deficits in obsessive-compulsive disorder: the role of metacognitive processes.
Koch, Julia; Exner, Cornelia
2015-02-28
While initial studies supported the hypothesis that cognitive characteristics that capture cognitive resources act as underlying mechanisms in memory deficits in obsessive-compulsive disorder (OCD), the influence of those characteristics on selective attention has not yet been studied. In this study, we examined the influence of cognitive self-consciousness (CSC), rumination and worrying on performance in selective attention in OCD and compared the results to a depressive and a healthy control group. We found that 36 OCD and 36 depressive participants were impaired in selective attention in comparison to 36 healthy controls. In all groups, hierarchical regression analyses demonstrated that age, intelligence and years in school significantly predicted performance in selective attention. However, only in the OCD group was the predictive power of the regression model improved when CSC, rumination and worrying were implemented as predictor variables. In contrast, in none of the three groups did the predictive power improve when indicators of severity of obsessive-compulsive (OC) and depressive symptoms and trait anxiety were introduced as predictor variables. Thus, our results support the assumption that mental characteristics that bind cognitive resources play an important role in the understanding of selective attention deficits in OCD and that this mechanism is especially relevant for OCD. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Natural selection reduced diversity on human y chromosomes.
Wilson Sayres, Melissa A; Lohmueller, Kirk E; Nielsen, Rasmus
2014-01-01
The human Y chromosome exhibits surprisingly low levels of genetic diversity. This could result from neutral processes if the effective population size of males is reduced relative to females due to a higher variance in the number of offspring from males than from females. Alternatively, selection acting on new mutations, and affecting linked neutral sites, could reduce variability on the Y chromosome. Here, using genome-wide analyses of X, Y, autosomal and mitochondrial DNA, in combination with extensive population genetic simulations, we show that low observed Y chromosome variability is not consistent with a purely neutral model. Instead, we show that models of purifying selection are consistent with observed Y diversity. Further, the number of sites estimated to be under purifying selection greatly exceeds the number of Y-linked coding sites, suggesting the importance of the highly repetitive ampliconic regions. While we show that purifying selection removing deleterious mutations can explain the low diversity on the Y chromosome, we cannot exclude the possibility that positive selection acting on beneficial mutations could have also reduced diversity in linked neutral regions, and may have contributed to lowering human Y chromosome diversity. Because the functional significance of the ampliconic regions is poorly understood, our findings should motivate future research in this area.
An, Ruopeng; Sturm, Roland
2017-03-01
A South African insurer launched a rebate program for healthy food purchases for its members, available only in program-designated supermarkets. To eliminate selection bias in program enrollment, we estimated the impact of subsidies in nudging the population towards a healthier diet using an instrumental variable approach. Data came from a health behavior questionnaire administered among members in the health promotion program. Individual and supermarket addresses were geocoded and differential distances from home to program-designated supermarkets versus competing supermarkets were calculated. Bivariate probit and linear instrumental variable models were performed to control for likely unobserved selection biases, employing differential distances as a predictor of program enrollment. For regular fast-food, processed meat, and salty food consumption, approximately two-thirds of the difference between participants and nonparticipants was attributable to the intervention and one-third to selection effects. For fruit/vegetable and fried food consumption, merely one-eighth of the difference was selection. The rebate reduced regular consumption of fast food by 15% and foods high in salt/sugar and fried foods by 22%-26%, and increased fruit/vegetable consumption by 21% (0.66 serving/day). Large population interventions are an essential complement to laboratory experiments, but selection biases require explicit attention in evaluation studies conducted in naturalistic settings.
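The instrumental-variable logic of this evaluation can be illustrated with a small two-stage least squares sketch. All data here are simulated (the enrollment equation, the seed, and the true effect, loosely echoing the reported 0.66 serving/day, are assumptions); differential distance plays the role of the instrument for program enrollment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                       # unobserved health-consciousness
dist = rng.normal(size=n)                    # differential distance (instrument)
enroll = ((-0.8 * dist + 0.5 * u + rng.normal(size=n)) > 0).astype(float)
y = 0.66 * enroll + 0.5 * u + rng.normal(size=n)   # fruit/veg servings per day

def two_sls(y, x, z):
    """2SLS with a constant, instrumenting endogenous regressor x with z."""
    Z = np.column_stack([np.ones_like(z), z])
    X = np.column_stack([np.ones_like(x), x])
    xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]    # first stage: project x on z
    return np.linalg.lstsq(xhat, y, rcond=None)[0]     # second stage

ols = np.linalg.lstsq(np.column_stack([np.ones(n), enroll]), y, rcond=None)[0]
iv = two_sls(y, enroll, dist)
```

Because health-conscious members both enroll more and eat better, the naive OLS slope overstates the program effect, while the 2SLS estimate recovers a value near the true 0.66.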
Application of Advanced Process Control techniques to a pusher type reheating furnace
NASA Astrophysics Data System (ADS)
Zanoli, S. M.; Pepe, C.; Barboni, L.
2015-11-01
In this paper an Advanced Process Control system aimed at controlling and optimizing a pusher type reheating furnace located in an Italian steel plant is proposed. The designed controller replaced the previous control system, based on PID controllers manually conducted by process operators. A two-layer Model Predictive Control architecture has been adopted that, exploiting a chemical, physical and economic modelling of the process, overcomes the limitations of plant operators’ mental model and knowledge. In addition, an ad hoc decoupling strategy has been implemented, allowing the selection of the manipulated variables to be used for the control of each single process variable. Finally, in order to improve the system flexibility and resilience, the controller has been equipped with a supervision module. A profitable trade-off between conflicting specifications, e.g. safety, quality and production constraints, energy saving and pollution impact, has been guaranteed. Simulation tests and real plant results demonstrated the soundness and the reliability of the proposed system.
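The decoupling step above, choosing which manipulated variable is used for the control of each process variable, is commonly guided by a relative gain array (RGA); the sketch below uses a hypothetical 3x3 steady-state gain matrix for the furnace zones (the numbers are illustrative, not plant data, and the RGA is a stand-in for the paper's ad hoc strategy).

```python
import numpy as np

# Hypothetical steady-state gains: rows = furnace zone temperatures,
# columns = manipulated variables (e.g. zone burner fuel flows)
G = np.array([[2.0, 0.4, 0.1],
              [0.5, 1.8, 0.3],
              [0.1, 0.6, 2.2]])

RGA = G * np.linalg.inv(G).T       # relative gain array: elementwise G * inv(G)^T
pairing = RGA.argmax(axis=1)       # pair each controlled variable with the MV
                                   # of largest relative gain
```

Each row of the RGA sums to one; a diagonally dominant RGA, as here, recommends the natural zone-by-zone pairing with little loop interaction.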
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson Point Process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
Selecting an oxygen plant for a copper smelter modernization
NASA Astrophysics Data System (ADS)
Larson, Kenneth H.; Hutchison, Robert L.
1994-10-01
The selection of an oxygen plant for the Cyprus Miami smelter modernization project began with a good definition of the use requirements and the smelter process variables that can affect oxygen demand. To achieve a reliable supply of oxygen with a reasonable amount of capital, critical equipment items were reviewed and reliability was added through the use of installed spares, the purchase of insurance spare parts, or the installation of equipment designed for 50 percent of the production capacity, so that the plant could operate with one unit while the other unit is being maintained. The operating range of the plant was selected to cover variability in smelter oxygen demand, and it was recognized that the broader operating range sacrificed about two to three percent in plant power consumption. Careful consideration of the plant "design point" was important to both the capital and operating costs of the plant, and a design point was specified that allowed a broad range of operation for maximum flexibility.
How holistic processing of faces relates to cognitive control and intelligence.
Gauthier, Isabel; Chua, Kao-Wei; Richler, Jennifer J
2018-04-16
The Vanderbilt Holistic Processing Test for faces (VHPT-F) is the first standard test designed to measure individual differences in holistic processing. The test measures failures of selective attention to face parts through congruency effects, an operational definition of holistic processing. However, this conception of holistic processing has been challenged by the suggestion that it may tap into the same selective attention or cognitive control mechanisms that yield congruency effects in Stroop and Flanker paradigms. Here, we report data from 130 subjects on the VHPT-F, several versions of Stroop and Flanker tasks, as well as fluid IQ. Results suggested a small degree of shared variance in Stroop and Flanker congruency effects, which did not relate to congruency effects on the VHPT-F. Variability on the VHPT-F was also not correlated with Fluid IQ. In sum, we find no evidence that holistic face processing as measured by congruency in the VHPT-F is accounted for by domain-general control mechanisms.
Clustering and variable selection in the presence of mixed variable types and missing data.
Storlie, C B; Myers, S M; Katusic, S K; Weaver, A L; Voigt, R G; Croarkin, P E; Stoeckel, R E; Port, J D
2018-05-17
We consider the problem of model-based clustering in the presence of many correlated, mixed continuous, and discrete variables, some of which may have missing values. Discrete variables are treated with a latent continuous variable approach, and the Dirichlet process is used to construct a mixture model with an unknown number of components. Variable selection is also performed to identify the variables that are most influential for determining cluster membership. The work is motivated by the need to cluster patients thought to potentially have autism spectrum disorder on the basis of many cognitive and/or behavioral test scores. There are a modest number of patients (486) in the data set along with many (55) test score variables (many of which are discrete valued and/or missing). The goal of the work is to (1) cluster these patients into similar groups to help identify those with similar clinical presentation and (2) identify a sparse subset of tests that inform the clusters in order to eliminate unnecessary testing. The proposed approach compares very favorably with other methods via simulation of problems of this type. The results of the autism spectrum disorder analysis suggested 3 clusters to be most likely, while only 4 test scores had high (>0.5) posterior probability of being informative. This will result in much more efficient and informative testing. The need to cluster observations on the basis of many correlated, continuous/discrete variables with missing values is a common problem in the health sciences as well as in many other disciplines. Copyright © 2018 John Wiley & Sons, Ltd.
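A simplified stand-in for the Dirichlet-process mixture clustering described above can be sketched with scikit-learn's `BayesianGaussianMixture` under a Dirichlet-process prior. The synthetic test-score data, seeds, and truncation level are assumptions; the paper's latent continuous treatment of discrete variables, its handling of missing values, and its variable selection step are all omitted here.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)
# Three latent patient groups, six continuous test-score variables
means = np.array([[0, 0, 0, 0, 0, 0],
                  [4, 4, 0, 0, 0, 0],
                  [0, 0, 4, 4, 0, 0]], dtype=float)
X = np.vstack([rng.normal(m, 1.0, size=(150, 6)) for m in means])

dpgmm = BayesianGaussianMixture(
    n_components=10,                      # truncation level; surplus components shrink away
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500,
    random_state=0,
).fit(X)

labels = dpgmm.predict(X)
n_clusters = len(np.unique(labels))       # number of components actually used
```

The Dirichlet-process prior lets the data decide how many of the ten available components carry weight, mirroring the "unknown number of components" formulation in the abstract.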
The relationship between Urbanisation and changes in flood regimes: the British case.
NASA Astrophysics Data System (ADS)
Prosdocimi, Ilaria; Miller, James; Kjeldsen, Thomas
2013-04-01
This pilot study investigates if long-term changes in observed series of extreme flood events can be attributed to changes in climate and land-use drivers. We investigate, in particular, changes of winter and summer peaks extracted from gauged instantaneous flows records in selected British catchments. Using a Poisson processes framework, the frequency and magnitude of extreme events above a threshold can be modelled simultaneously under the standard stationarity assumptions of constant location and scale. In the case of a non-stationary process, the framework was extended to include covariates to account for changes in the process parameters. By including covariates related to the physical process, such as increased urbanization or North Atlantic Oscillation (NAO) Index levels, rather than just time, an enhanced understanding of the changes in high flows is obtainable. Indeed some variability is expected in any natural process and can be partially explained by large scale measures like NAO Index. The focus of this study is to understand, once natural variability is taken into account, how much of the remaining variability can be explained by increased urbanization levels. For this study, catchments are selected that have experienced significant growth in urbanisation in the past decades, typically 1960s to present, and for which concurrent good quality high flow data are available. Temporal change in the urban extent within catchments is obtained using novel processing of historical mapping sources, whereby the urban, suburban and rural fractions are obtained for decadal periods. Suitable flow data from localised rural catchments are also included as control cases to compare observed changes in the flood regime of urbanised catchments against, and to provide evidence of changes in regional climate. 
Initial results suggest that the effect of urbanisation can be detected in the rate of occurrence of flood events, especially in summer, whereas the impact on flood magnitude is less pronounced. Further tests across a greater number of catchments are necessary to validate these results.
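The covariate-dependent Poisson framework described above can be sketched as a Poisson GLM fitted by iteratively reweighted least squares, with annual exceedance counts regressed on an urban-fraction trend and a NAO index. All data below are simulated and the coefficients and `poisson_irls` helper are illustrative, not results from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
years = 200
urban = np.linspace(0.05, 0.35, years)          # growing urban fraction over the record
nao = rng.normal(size=years)                    # winter NAO index
counts = rng.poisson(np.exp(0.2 + 3.0 * urban + 0.3 * nao))  # exceedances per year

def poisson_irls(X, y, iters=50):
    """Poisson GLM (log link) fitted by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu            # working response
        beta = np.linalg.solve((X.T * mu) @ X, X.T @ (mu * z))
    return beta

X = np.column_stack([np.ones(years), urban, nao])
beta = poisson_irls(X, counts)                  # beta[1]: urbanisation effect on the rate
```

Including the NAO covariate absorbs climate-driven variability, so the urbanisation coefficient isolates the land-use signal in the occurrence rate of flood peaks.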
Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D
2016-11-10
A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.
NASA Astrophysics Data System (ADS)
Petukhov, A. M.; Soldatov, E. Yu
2017-12-01
Separation of the electroweak component from the strong component of associated Zγ production at hadron colliders is a very challenging task due to the identical final states of these processes. The only difference is the origin of the two leading jets in the two processes. Rectangular cuts on jet kinematic variables from the ATLAS/CMS 8 TeV Zγ experimental analyses were improved using machine learning techniques. New selection variables were also tested. The expected significance of separation under LHC experimental conditions in the second data-taking period (Run 2) with 120 fb-1 of data reaches more than 5σ. Future experimental observation of electroweak Zγ production can also lead to the observation of physics beyond the Standard Model.
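A hedged sketch of replacing rectangular cuts with a machine-learned selection: a gradient-boosted classifier trained on two toy jet kinematic variables (dijet invariant mass and jet rapidity gap). The event distributions below are invented for illustration only and are not the ATLAS/CMS samples or the analysis' actual variables.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 4000
# Toy kinematics: electroweak Zgamma+2j events tend to have larger dijet mass
# and a wider rapidity gap between the two leading jets than the QCD component
mjj_qcd = rng.exponential(200.0, n); deta_qcd = np.abs(rng.normal(0.0, 1.5, n))
mjj_ew = rng.exponential(600.0, n); deta_ew = np.abs(rng.normal(0.0, 3.0, n))
X = np.vstack([np.column_stack([mjj_qcd, deta_qcd]),
               np.column_stack([mjj_ew, deta_ew])])
y = np.repeat([0, 1], n)                     # 0 = strong (QCD), 1 = electroweak

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, bdt.predict_proba(Xte)[:, 1])
```

The classifier exploits the full two-dimensional shape of the distributions, which is why such methods outperform independent rectangular cuts on each variable.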
NASA Astrophysics Data System (ADS)
Colette, Augustin; Bessagnet, Bertrand; Dangiola, Ariela; D'Isidoro, Massimo; Gauss, Michael; Granier, Claire; Hodnebrog, Øivind; Jakobs, Hermann; Kanakidou, Maria; Khokhar, Fahim; Law, Kathy; Maurizi, Alberto; Meleux, Frederik; Memmesheimer, Michael; Nyiri, Agnes; Rouil, Laurence; Stordal, Frode; Tampieri, Francesco
2010-05-01
With the growth of urban agglomerations, assessing the drivers of variability of air quality in and around the main anthropogenic emission hotspots has become a major societal concern as well as a scientific challenge. These drivers include emission changes and meteorological variability; both of them can be investigated by means of numerical modelling of trends over the past few years. A collaborative effort has been developed in the framework of the CityZen European project to address this question. Several chemistry and transport models (CTMs) are deployed in this activity: four regional models (BOLCHEM, CHIMERE, EMEP and EURAD) and three global models (CTM2, MOZART, and TM4). The period from 1998 to 2007 has been selected for the historic reconstruction. The focus for the present preliminary presentation is Europe. A consistent set of emissions is used by all partners (EMEP for the European domain and IPCC-AR5 beyond) while a variety of meteorological forcings is used to gain robustness in the ensemble spread amongst models. The results of this experiment will be investigated to address the following questions: - Is the envelope of models able to reproduce the observed trends of the key chemical constituents? - How does the variability amongst models change in time and space, and what does it tell us about the processes driving the observed trends? - Have chemical regimes and aerosol formation processes changed in selected hotspots? Answering the above questions will contribute to fulfil the ultimate goal of the present study: distinguishing the respective contributions of meteorological variability and emissions changes on air quality trends in major anthropogenic emission hotspots.
Content Analysis Schedule for Bilingual Education Programs: BICEP Intercambio de la Cultura.
ERIC Educational Resources Information Center
Shore, Marietta Saravia; Nafus, Charles
This content analysis schedule for BICEP Intercambio de la Cultura (San Bernardino, California), presents information on the history, funding, and scope of the project. Included are sociolinguistic process variables such as the native and dominant languages of students and their interaction. Information is provided on staff selection and the…
Content Analysis Schedule for Bilingual Education Programs: Bilingual Project Forward-Adelante.
ERIC Educational Resources Information Center
Figueroa, Ramon
This content analysis schedule for the Bilingual Project of Rochester, New York presents information on the history, funding, and scope of the project. Included are sociolinguistic process variables such as the native and dominant languages of students and their interaction. Information is provided on staff selection and the linguistic background…
The Impact of Education on Income Distribution.
ERIC Educational Resources Information Center
Tinbergen, Jan
The author's previously developed theory on income distribution, in which two of the explanatory variables are the average level and the distribution of education, is refined and tested on data selected and processed by the author and data from three studies by Americans. The material consists of data on subdivisions of three countries, the United…
ERIC Educational Resources Information Center
Fritz, Robert L.
A study examined the association between field-dependence and its related information processing characteristics, and educational cognitive style as a model of conative influence. Data were collected from 145 secondary marketing education students in northern Georgia during spring 1991. Descriptive statistics, Pearson product moment correlations,…
ERIC Educational Resources Information Center
Hess, Richard T.; And Others
This content analysis schedule for the Albuquerque (New Mexico) Public School Bicultural-Bilingual Program presents information on the history, funding, and scope of the project. Included are sociolinguistic process variables such as the native and dominant languages of students and their interaction. Information is provided on staff selection and…
Mode Selection Techniques in Variable Mass Flexible Body Modeling
NASA Technical Reports Server (NTRS)
Quiocho, Leslie J.; Ghosh, Tushar K.; Frenkel, David; Huynh, An
2010-01-01
In developing a flexible body spacecraft simulation for the Launch Abort System of the Orion vehicle, when a rapid mass depletion takes place, the dynamics problem with time varying eigenmodes had to be addressed. Three different techniques were implemented, with different trade-offs made between performance and fidelity. A number of technical issues had to be solved in the process. This paper covers the background of the variable mass flexibility problem, the three approaches to simulating it, and the technical issues that were solved in formulating and implementing them.
Evolutionary algorithm for vehicle driving cycle generation.
Perhinschi, Mario G; Marlowe, Christopher; Tamayo, Sergio; Tu, Jun; Wayne, W Scott
2011-09-01
Modeling transit bus emissions and fuel economy requires a large amount of experimental data over wide ranges of operational conditions. Chassis dynamometer tests are typically performed using representative driving cycles defined based on vehicle instantaneous speed as sequences of "microtrips", which are intervals between consecutive vehicle stops. Overall significant parameters of the driving cycle, such as average speed, stops per mile, kinetic intensity, and others, are used as independent variables in the modeling process. Performing tests at all the necessary combinations of parameters is expensive and time consuming. In this paper, a methodology is proposed for building driving cycles at prescribed independent variable values using experimental data through the concatenation of "microtrips" isolated from a limited number of standard chassis dynamometer test cycles. The selection of the adequate "microtrips" is achieved through a customized evolutionary algorithm. The genetic representation uses microtrip definitions as genes. Specific mutation, crossover, and karyotype alteration operators have been defined. The Roulette-Wheel selection technique with elitist strategy drives the optimization process, which consists of minimizing the errors to desired overall cycle parameters. This utility is part of the Integrated Bus Information System developed at West Virginia University.
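The evolutionary algorithm described above can be sketched compactly: genes are microtrip indices, fitness is the inverse of the relative error to the target cycle parameters, and selection is roulette-wheel with a one-member elitist strategy. The microtrip library, target values, and GA settings below are invented for illustration (the paper's karyotype-alteration operator and full parameter set are omitted).

```python
import random

random.seed(0)

# Hypothetical microtrip library: (duration s, distance m) per stop-to-stop interval
microtrips = [(60, 200), (90, 600), (120, 1200), (45, 100), (150, 2500),
              (80, 500), (200, 4000), (30, 50), (110, 900), (70, 350)]

TARGET_SPEED = 8.0           # desired cycle average speed, m/s
TARGET_STOPS = 1.2           # desired stops per km
N_GENES, POP, GENS = 8, 60, 200

def error(chrom):
    """Relative deviation of a candidate cycle from the target parameters."""
    t = sum(microtrips[g][0] for g in chrom)
    d = sum(microtrips[g][1] for g in chrom)
    return (abs(d / t - TARGET_SPEED) / TARGET_SPEED
            + abs(len(chrom) / (d / 1000.0) - TARGET_STOPS) / TARGET_STOPS)

def roulette(pop, fits):
    """Roulette-wheel selection proportional to fitness."""
    r = random.uniform(0.0, sum(fits))
    acc = 0.0
    for c, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return c
    return pop[-1]

pop = [[random.randrange(len(microtrips)) for _ in range(N_GENES)] for _ in range(POP)]
for _ in range(GENS):
    fits = [1.0 / (1e-9 + error(c)) for c in pop]
    nxt = [min(pop, key=error)[:]]                    # elitist strategy
    while len(nxt) < POP:
        p1, p2 = roulette(pop, fits), roulette(pop, fits)
        cut = random.randrange(1, N_GENES)            # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:                     # point mutation
            child[random.randrange(N_GENES)] = random.randrange(len(microtrips))
        nxt.append(child)
    pop = nxt

best = min(pop, key=error)   # microtrip sequence closest to the prescribed parameters
```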
Massey, Jessica S; Meares, Susanne; Batchelor, Jennifer; Bryant, Richard A
2015-07-01
Few studies have examined whether psychological distress and pain affect cognitive functioning in the acute to subacute phase (up to 30 days postinjury) following mild traumatic brain injury (mTBI). The current study explored whether acute posttraumatic stress, depression, and pain were associated with performance on a task of selective and sustained attention completed under conditions of increasing cognitive demands (standard, auditory distraction, and dual-task), and on tests of working memory, memory, processing speed, reaction time (RT), and verbal fluency. At a mean of 2.87 days (SD = 2.32) postinjury, 50 adult mTBI participants, consecutive admissions to a Level 1 trauma hospital, completed neuropsychological tests and self-report measures of acute posttraumatic stress, depression, and pain. A series of canonical correlation analyses was used to explore the relationships of a common set of psychological variables to various sets of neuropsychological variables. Significant results were found on the task of selective and sustained attention. Strong relationships were found between psychological variables and speed (r(c) = .56, p = .02) and psychological variables and accuracy (r(c) = .68, p = .002). Pain and acute posttraumatic stress were associated with higher speed scores (reflecting more correctly marked targets) under standard conditions. Acute posttraumatic stress was associated with lower accuracy scores across all task conditions. Moderate but nonsignificant associations were found between psychological variables and most cognitive tasks. Acute posttraumatic stress and pain show strong associations with selective and sustained attention following mTBI. (c) 2015 APA, all rights reserved.
Orlandini, S; Pasquini, B; Caprini, C; Del Bubba, M; Squarcialupi, L; Colotta, V; Furlanetto, S
2016-09-30
A comprehensive strategy involving the use of mixture-process variable (MPV) approach and Quality by Design principles has been applied in the development of a capillary electrophoresis method for the simultaneous determination of the anti-inflammatory drug diclofenac and its five related substances. The selected operative mode consisted in microemulsion electrokinetic chromatography with the addition of methyl-β-cyclodextrin. The critical process parameters included both the mixture components (MCs) of the microemulsion and the process variables (PVs). The MPV approach allowed the simultaneous investigation of the effects of MCs and PVs on the critical resolution between diclofenac and its 2-deschloro-2-bromo analogue and on analysis time. MPV experiments were used both in the screening phase and in the Response Surface Methodology, making it possible to draw MCs and PVs contour plots and to find important interactions between MCs and PVs. Robustness testing was carried out by MPV experiments and validation was performed following International Conference on Harmonisation guidelines. The method was applied to a real sample of diclofenac gastro-resistant tablets. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Arduini, R. F.; Aherron, R. M.; Samms, R. W.
1984-01-01
A computational model of the deterministic and stochastic processes involved in multispectral remote sensing was designed to evaluate the performance of sensor systems and data processing algorithms for spectral feature classification. Accuracy in distinguishing between categories of surfaces or between specific types is developed as a means to compare sensor systems and data processing algorithms. The model allows studies to be made of the effects of variability of the atmosphere and of surface reflectance, as well as the effects of channel selection and sensor noise. Examples of these effects are shown.
A meta-analysis of research on science teacher education practices associated with inquiry strategy
NASA Astrophysics Data System (ADS)
Sweitzer, Gary L.; Anderson, Ronald D.
A meta-analysis was conducted of studies of teacher education having as measured outcomes one or more variables associated with inquiry teaching. Inquiry addresses those teacher behaviors that facilitate student acquisition of concepts and processes through strategies such as problem solving, uses of evidence, logical and analytical reasoning, clarification of values, and decision making. Studies which contained sufficient data for the calculation of an effect size were coded for 114 variables. These variables were divided into the following six major categories: study information and design characteristics, teacher and teacher trainee characteristics, student characteristics, treatment description, outcome description, and effect size calculation. A total of 68 studies resulting in 177 effect size calculations were coded. Mean effect sizes broken down by selected variables were calculated.
Vanderhaeghe, F; Smolders, A J P; Roelofs, J G M; Hoffmann, M
2012-03-01
Selecting an appropriate variable subset in linear multivariate methods is an important methodological issue for ecologists. Interest often exists in obtaining general predictive capacity or in finding causal inferences from predictor variables. Because of a lack of solid knowledge on a studied phenomenon, scientists explore predictor variables in order to find the most meaningful (i.e. discriminating) ones. As an example, we modelled the response of the amphibious softwater plant Eleocharis multicaulis using canonical discriminant function analysis. We asked how variables can be selected through comparison of several methods: univariate Pearson chi-square screening, principal components analysis (PCA) and step-wise analysis, as well as combinations of some methods. We expected PCA to perform best. The selected methods were evaluated through fit and stability of the resulting discriminant functions and through correlations between these functions and the predictor variables. The chi-square subset, at P < 0.05, followed by a step-wise sub-selection, gave the best results. In contrast to expectations, PCA performed poorly, as did step-wise analysis. The different chi-square subset methods all yielded ecologically meaningful variables, while probable noise variables were also selected by PCA and step-wise analysis. We advise against the simple use of PCA or step-wise discriminant analysis to obtain an ecologically meaningful variable subset; the former because it does not take into account the response variable, the latter because noise variables are likely to be selected. We suggest that univariate screening techniques are a worthwhile alternative for variable selection in ecology. © 2011 German Botanical Society and The Royal Botanical Society of the Netherlands.
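The recommended two-step workflow, univariate chi-square screening followed by a discriminant model on the retained subset, can be sketched as below. The synthetic non-negative "environmental" scores, the presence/absence response, and the choice k = 4 are assumptions for illustration; scikit-learn's `chi2` scorer stands in for the Pearson chi-square screening.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(5)
n, p = 300, 12
X = rng.poisson(5.0, size=(n, p)).astype(float)     # non-negative environmental scores
y = (X[:, 0] + X[:, 3] + rng.normal(scale=2.0, size=n) > 10).astype(int)  # presence/absence

screen = SelectKBest(chi2, k=4).fit(X, y)           # univariate chi-square screening
keep = screen.get_support(indices=True)

lda = LinearDiscriminantAnalysis().fit(X[:, keep], y)   # discriminant model on the subset
acc = lda.score(X[:, keep], y)
```

Screening against the response before fitting the discriminant function is what keeps the selected subset ecologically interpretable, which is exactly the advantage the abstract claims over PCA.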
Application of Plackett-Burman experimental design in the development of muffin using adlay flour
NASA Astrophysics Data System (ADS)
Valmorida, J. S.; Castillo-Israel, K. A. T.
2018-01-01
The Plackett-Burman experimental design was applied to identify significant formulation and process variables in the development of muffin using adlay flour. Out of the seven screened variables, levels of sugar, levels of butter and baking temperature had the most significant influence on the product model in terms of physicochemical and sensory acceptability. Results of the experiment further demonstrate the effectiveness of Plackett-Burman design in choosing the best adlay variety for muffin production. Hence, the statistical method used in the study permits an efficient selection of important variables needed in the development of muffin from adlay which can be optimized using response surface methodology.
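A two-level screening design for seven factors in eight runs, of the kind Plackett-Burman screening uses, can be built from a Hadamard matrix. This is a generic sketch with an invented response, not the study's actual design or data:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of a Hadamard matrix; n must be a power of 2."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# 8-run two-level screening design for up to 7 factors:
# drop the all-ones column of the Hadamard matrix.
H = hadamard(8)
design = H[:, 1:]          # 8 runs x 7 factor columns, coded -1/+1

# Main effect of each factor = contrast of responses (hypothetical y,
# in which only the first factor is truly active)
rng = np.random.default_rng(1)
y = 10 + 3 * design[:, 0] + rng.normal(0, 0.5, 8)
effects = design.T @ y / 4          # mean at +1 minus mean at -1, per factor
print(np.round(effects, 2))
```

The orthogonality of the columns is what lets seven main effects be estimated from only eight runs.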
Toward a formalization of the process to select IMIA Yearbook best papers.
Lamy, J-B; Séroussi, B; Griffon, N; Kerdelhué, G; Jaulent, M-C; Bouaud, J
2015-01-01
Each year, the International Medical Informatics Association Yearbook recognizes significant scientific papers, labelled as "best papers", published the previous year in the subfields of biomedical informatics that correspond to the different section topics of the journal. For each section, about fifteen pre-selected "candidate" best papers are externally peer-reviewed to select the actual best papers. Although the pre-selection is based on the available literature, little is known about the process itself. To move toward an explicit formalization of the candidate best papers selection process to reduce variability in the literature search across sections and over years. A methodological framework is proposed to build for each section topic specific queries tailored to PubMed and Web of Science citation databases. The two sets of returned papers are merged and reviewed by two independent section editors and citations are tagged as "discarded", "pending", and "kept". A protocolized consolidation step is then jointly conducted to resolve conflicts. A bibliographic software tool, BibReview, was developed to support the whole process. The proposed search strategy was fully applied to the Decision Support section of the 2013 edition of the Yearbook. For this section, 1124 references were returned (689 PubMed-specific, 254 WoS-specific, 181 common to both databases) among which the 15 candidate best papers were selected. The search strategy for determining candidate best papers for an IMIA Yearbook's section is now explicitly specified and allows for reproducibility. However, some aspects of the whole process remain reviewer-dependent, mostly because there is no characterization of a "best paper".
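The merge-and-consolidate logic can be sketched in a few lines. The identifiers and tags below are hypothetical, and this is not the BibReview implementation:

```python
# Merge database-specific result sets, then reconcile two reviewers' tags.
pubmed = {"pmid:100", "pmid:101", "pmid:102"}
wos = {"pmid:101", "pmid:103"}

merged = sorted(pubmed | wos)   # union; shared references counted once

def consolidate(tags_a, tags_b):
    """Keep agreements between the two reviewers;
    disagreements go to a joint 'pending' pile for the consolidation step."""
    result = {}
    for ref in tags_a:
        result[ref] = tags_a[ref] if tags_a[ref] == tags_b[ref] else "pending"
    return result

tags_a = {r: "kept" for r in merged}
tags_b = dict(tags_a, **{"pmid:103": "discarded"})
print(consolidate(tags_a, tags_b))
```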
Disruptive chemicals, senescence and immortality
Carnero, Amancio; Blanco-Aparicio, Carmen; Kondoh, Hiroshi; Lleonart, Matilde E.; Martinez-Leal, Juan Fernando; Mondello, Chiara; Ivana Scovassi, A.; Bisson, William H.; Amedei, Amedeo; Roy, Rabindra; Woodrick, Jordan; Colacci, Annamaria; Vaccari, Monica; Raju, Jayadev; Al-Mulla, Fahd; Al-Temaimi, Rabeah; Salem, Hosni K.; Memeo, Lorenzo; Forte, Stefano; Singh, Neetu; Hamid, Roslida A.; Ryan, Elizabeth P.; Brown, Dustin G.; Wise, John Pierce; Wise, Sandra S.; Yasaei, Hemad
2015-01-01
Carcinogenesis is thought to be a multistep process, with clonal evolution playing a central role in the process. Clonal evolution involves the repeated ‘selection and succession’ of rare variant cells that acquire a growth advantage over the remaining cell population through the acquisition of ‘driver mutations’ enabling a selective advantage in a particular micro-environment. Clonal selection is the driving force behind tumorigenesis and possesses three basic requirements: (i) effective competitive proliferation of the variant clone when compared with its neighboring cells, (ii) acquisition of an indefinite capacity for self-renewal, and (iii) establishment of sufficiently high levels of genetic and epigenetic variability to permit the emergence of rare variants. However, several questions regarding the process of clonal evolution remain. Which cellular processes initiate carcinogenesis in the first place? To what extent are environmental carcinogens responsible for the initiation of clonal evolution? What are the roles of genotoxic and non-genotoxic carcinogens in carcinogenesis? What are the underlying mechanisms responsible for chemical carcinogen-induced cellular immortality? Here, we explore the possible mechanisms of cellular immortalization, the contribution of immortalization to tumorigenesis and the mechanisms by which chemical carcinogens may contribute to these processes. PMID:26106138
Some anthropologic characteristics of elite female handball players at different playing positions.
Rogulj, Nenad; Srhoj, Vatromir; Nazor, Mirjana; Srhoj, Ljerka; Cavala, Marijana
2005-12-01
Differences in motor and psychologic variables according to playing positions were analyzed in a sample of 53 elite female handball players, members of junior and senior national team. Motor status included 8 variables for assessment of explosive strength of landing and throwing, agility, speed strength, movement frequency, and flexibility. Psychologic status was analyzed through 4 dimensions according to Eysenck: extroversion, psychotic behavior, neurotic behavior, and lie. The anthropologic features analyzed showed statistically significant differences. Considering motor abilities, differences were recorded in the variables for assessment of speed strength, agility and leg movement frequency, where wings predominated, whereas goalkeepers showed predominance in flexibility. In psychologic status, differences were present in the variable for assessment of extroversion, which was most pronounced in wings, whereas psychotic behavior was more expressed in those at pivot position. The differences were primarily a consequence of the selection of players of a specific anthropologic profile for particular playing positions. The hypothesis of the impact of kinesiologic specificities of a particular playing position on the formation of the players' anthropologic profile should be scientifically tested. Study results may find application in training and contest practice, especially in forming anthropologic models for particular positions during the process of player selection.
ERIC Educational Resources Information Center
Brusco, Michael J.; Singh, Renu; Steinley, Douglas
2009-01-01
The selection of a subset of variables from a pool of candidates is an important problem in several areas of multivariate statistics. Within the context of principal component analysis (PCA), a number of authors have argued that subset selection is crucial for identifying those variables that are required for correct interpretation of the…
Cheng, George Shu-Xing; Mulkey, Steven L; Wang, Qiang; Chow, Andrew J
2013-11-26
A method and apparatus for intelligently controlling continuous process variables. A Dream Controller comprises an Intelligent Engine mechanism and a number of Model-Free Adaptive (MFA) controllers, each of which is suitable to control a process with specific behaviors. The Intelligent Engine can automatically select the appropriate MFA controller and its parameters so that the Dream Controller can be easily used by people with limited control experience and those who do not have the time to commission, tune, and maintain automatic controllers.
Casian, Tibor; Iurian, Sonia; Bogdan, Catalina; Rus, Lucia; Moldovan, Mirela; Tomuta, Ioan
2017-12-01
This study proposed the development of oral lyophilisates with respect to pediatric medicine development guidelines, by applying risk management strategies and DoE as an integrated QbD approach. Product critical quality attributes were overviewed by generating Ishikawa diagrams for risk assessment purposes, considering process, formulation and methodology related parameters. Failure Mode Effect Analysis was applied to highlight critical formulation and process parameters with an increased probability of occurrence and with a high impact on the product performance. To investigate the effect of qualitative and quantitative formulation variables D-optimal designs were used for screening and optimization purposes. Process parameters related to suspension preparation and lyophilization were classified as significant factors, and were controlled by implementing risk mitigation strategies. Both quantitative and qualitative formulation variables introduced in the experimental design influenced the product's disintegration time, mechanical resistance and dissolution properties selected as CQAs. The optimum formulation selected through Design Space presented ultra-fast disintegration time (5 seconds), a good dissolution rate (above 90%) combined with a high mechanical resistance (above 600 g load). Combining FMEA and DoE allowed the science based development of a product with respect to the defined quality target profile by providing better insights on the relevant parameters throughout development process. The utility of risk management tools in pharmaceutical development was demonstrated.
King, Adam C; Newell, Karl M
2015-10-01
The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selective enhancement in the visual feedback display of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the timescales in the learning and transfer of isometric force output frequency structures consistent with 1/f process models of the time scales of motor output variability.
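Selective enhancement of one frequency band of a displayed signal can be sketched with an FFT-based gain. The sampling rate, signal, and gain below are invented for illustration; this is not the experiment's display software:

```python
import numpy as np

def scale_band(signal, fs, band, gain):
    """Rescale the amplitude of Fourier components inside `band` (Hz)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs < band[1])
    spectrum[mask] *= gain
    return np.fft.irfft(spectrum, n=len(signal))

fs = 100.0                           # display update rate (hypothetical)
t = np.arange(0, 10, 1 / fs)
# slow tracking drift plus a small 6 Hz component
force = np.sin(2*np.pi*2*t) + 0.2*np.sin(2*np.pi*6*t)

# magnify the intermediate 4-8 Hz content, leaving the rest untouched
enhanced = scale_band(force, fs, band=(4, 8), gain=4.0)
```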
Wilson, Stephen M; Isenberg, Anna Lisette; Hickok, Gregory
2009-11-01
Word production is a complex multistage process linking conceptual representations, lexical entries, phonological forms and articulation. Previous studies have revealed a network of predominantly left-lateralized brain regions supporting this process, but many details regarding the precise functions of different nodes in this network remain unclear. To better delineate the functions of regions involved in word production, we used event-related functional magnetic resonance imaging (fMRI) to identify brain areas where blood oxygen level-dependent (BOLD) responses to overt picture naming were modulated by three psycholinguistic variables: concept familiarity, word frequency, and word length, and one behavioral variable: reaction time. Each of these variables has been suggested by prior studies to be associated with different aspects of word production. Processing of less familiar concepts was associated with greater BOLD responses in bilateral occipitotemporal regions, reflecting visual processing and conceptual preparation. Lower frequency words produced greater BOLD signal in left inferior temporal cortex and the left temporoparietal junction, suggesting involvement of these regions in lexical selection and retrieval and encoding of phonological codes. Word length was positively correlated with signal intensity in Heschl's gyrus bilaterally, extending into the mid-superior temporal gyrus (STG) and sulcus (STS) in the left hemisphere. The left mid-STS site was also modulated by reaction time, suggesting a role in the storage of lexical phonological codes.
NASA Astrophysics Data System (ADS)
Collins, Curtis Andrew
Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic foot volume (outside bark) and green weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest type model strata. These 36 models were then fit and reported across 10 variable sets and variable set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data variable types. As part of the modeling process, single variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility is flexible, as not all independent variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by coefficient of determination (R2), surpassed 0.5 in numerous models with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events reoccur.
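The weighting scheme described (one over the square root of the pre-storm volume) amounts to row-scaling the regression by the square root of the weights. A sketch on synthetic plot data, not the study's measurements:

```python
import numpy as np

def weighted_lstsq(X, y, w):
    """Weighted least squares via row-scaling by sqrt(w)."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

rng = np.random.default_rng(2)
pre_volume = rng.uniform(100, 5000, 200)          # pre-storm volume per plot
X = np.column_stack([np.ones(200), pre_volume])
# heteroscedastic damage: noise grows with plot size (hypothetical relation)
y = 5 + 0.3 * pre_volume + rng.normal(0, 0.02 * pre_volume)

w = 1.0 / np.sqrt(pre_volume)                     # weights as in the study
beta = weighted_lstsq(X, y, w)
print(np.round(beta, 3))
```

Down-weighting large plots counteracts their larger residual variance, which is what improves residual variance homogeneity.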
Model selection bias and Freedman's paradox
Lukacs, P.M.; Burnham, K.P.; Anderson, D.R.
2010-01-01
In situations where limited knowledge of a system exists and the ratio of data points to variables is small, variable selection methods can often be misleading. Freedman (Am Stat 37:152-155, 1983) demonstrated how common it is to select completely unrelated variables as highly "significant" when the number of data points is similar in magnitude to the number of variables. A new type of model averaging estimator based on model selection with Akaike's AIC is used with linear regression to investigate the problems of likely inclusion of spurious effects and model selection bias, the bias introduced while using the data to select a single seemingly "best" model from a (often large) set of models employing many predictor variables. The new model averaging estimator helps reduce these problems and provides confidence interval coverage at the nominal level while traditional stepwise selection has poor inferential properties. © The Institute of Statistical Mathematics, Tokyo 2009.
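AIC-based model averaging can be sketched as follows. This is a toy two-model example with synthetic data, illustrating Akaike weights generically rather than the specific estimator studied in the paper:

```python
import numpy as np

def aic(rss, n, k):
    """AIC for least-squares fits (k counts coefficients plus error variance)."""
    return n * np.log(rss / n) + 2 * k

def akaike_weights(aics):
    delta = np.array(aics) - np.min(aics)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

rng = np.random.default_rng(3)
n = 40
x1, x2 = rng.normal(size=(2, n))          # x2 is a pure noise predictor
y = 1.0 + 2.0 * x1 + rng.normal(size=n)

models = [np.column_stack([np.ones(n), x1]),
          np.column_stack([np.ones(n), x1, x2])]
fits = [np.linalg.lstsq(X, y, rcond=None) for X in models]
rss = [np.sum((y - X @ f[0]) ** 2) for X, f in zip(models, fits)]
aics = [aic(r, n, X.shape[1] + 1) for r, X in zip(rss, models)]
w = akaike_weights(aics)

# Model-averaged slope for x1 (present in both models at index 1)
slope_avg = sum(wi * f[0][1] for wi, f in zip(w, fits))
print(np.round(w, 3), round(slope_avg, 3))
```

Averaging over models rather than conditioning on a single selected "best" model is what restores honest interval coverage.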
Variables selection methods in near-infrared spectroscopy.
Xiaobo, Zou; Jiewen, Zhao; Povey, Malcolm J W; Holmes, Mel; Hanpin, Mao
2010-05-14
Near-infrared (NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, such as the petrochemical, pharmaceutical, environmental, clinical, agricultural, food and biomedical sectors during the past 15 years. A NIR spectrum of a sample is typically measured by modern scanning instruments at hundreds of equally spaced wavelengths. The large number of spectral variables in most data sets encountered in NIR spectral chemometrics often renders the prediction of a dependent variable unreliable. Recently, considerable effort has been directed towards developing and evaluating different procedures that objectively identify variables which contribute useful information and/or eliminate variables containing mostly noise. This review focuses on the variable selection methods in NIR spectroscopy. Selection methods include some classical approaches, such as the manual approach (knowledge-based selection), "Univariate" and "Sequential" selection methods; sophisticated methods such as successive projections algorithm (SPA) and uninformative variable elimination (UVE), elaborate search-based strategies such as simulated annealing (SA), artificial neural networks (ANN) and genetic algorithms (GAs) and interval-based algorithms such as interval partial least squares (iPLS), windows PLS and iterative PLS. Wavelength selection with B-splines, Kalman filtering, Fisher's weights, and Bayesian approaches is also mentioned. Finally, the websites of some variable selection software and toolboxes for non-commercial use are given. Copyright 2010 Elsevier B.V. All rights reserved.
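Of the methods listed, the successive projections algorithm (SPA) is compact enough to sketch: it greedily picks the variable least collinear with those already chosen. This is a simplified version on synthetic "wavelengths", not a reference implementation:

```python
import numpy as np

def spa(X, n_select, start=0):
    """Successive projections algorithm: greedily pick columns with the
    largest norm after projecting out the already-selected columns."""
    selected = [start]
    P = X.copy().astype(float)
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        # project every column onto the orthogonal complement of v
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0          # exclude already-chosen columns
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(4)
base = rng.normal(size=(50, 3))
# 10 "wavelengths": columns 0-2 informative, the rest near-duplicates of column 0
X = np.column_stack([base, base[:, [0]] + 0.01 * rng.normal(size=(50, 7))])
print(spa(X, n_select=3))
```

Because the near-duplicate columns collapse to almost nothing once column 0 is projected out, SPA avoids them, which is exactly its appeal for highly collinear NIR spectra.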
Crawford, John T; Loken, Luke C; Casson, Nora J; Smith, Colin; Stone, Amanda G; Winslow, Luke A
2015-01-06
Advanced sensor technology is widely used in aquatic monitoring and research. Most applications focus on temporal variability, whereas spatial variability has been challenging to document. We assess the capability of water chemistry sensors embedded in a high-speed water intake system to document spatial variability. This new sensor platform continuously samples surface water at a range of speeds (0 to >45 km h(-1)) resulting in high-density, mesoscale spatial data. These novel observations reveal previously unknown variability in physical, chemical, and biological factors in streams, rivers, and lakes. By combining multiple sensors into one platform, we were able to detect terrestrial-aquatic hydrologic connections in a small dystrophic lake, to infer the role of main-channel vs backwater nutrient processing in a large river and to detect sharp chemical changes across aquatic ecosystem boundaries in a stream/lake complex. Spatial sensor data were verified in our examples by comparing with standard lab-based measurements of selected variables. Spatial fDOM data showed strong correlation with wet chemistry measurements of DOC, and optical NO3 concentrations were highly correlated with lab-based measurements. High-frequency spatial data similar to our examples could be used to further understand aquatic biogeochemical fluxes, ecological patterns, and ecosystem processes, and will both inform and benefit from fixed-site data.
2014-01-01
Background The UK Clinical Aptitude Test (UKCAT) was introduced to facilitate widening participation in medical and dental education in the UK by providing universities with a continuous variable to aid selection; one that might be less sensitive to the sociodemographic background of candidates compared to traditional measures of educational attainment. Initial research suggested that males, candidates from more advantaged socioeconomic backgrounds and those who attended independent or grammar schools performed better on the test. The introduction of the A* grade at A level permits more detailed analysis of the relationship between UKCAT scores, secondary educational attainment and sociodemographic variables. Thus, our aim was to further assess whether the UKCAT is likely to add incremental value over A level (predicted or actual) attainment in the selection process. Methods Data relating to UKCAT and A level performance from 8,180 candidates applying to medicine in 2009 who had complete information relating to six key sociodemographic variables were analysed. A series of regression analyses were conducted in order to evaluate the ability of sociodemographic status to predict performance on two outcome measures: A level ‘best of three’ tariff score; and the UKCAT scores. Results In this sample A level attainment was independently and positively predicted by four sociodemographic variables (independent/grammar schooling, White ethnicity, age and professional social class background). These variables also independently and positively predicted UKCAT scores. There was a suggestion that UKCAT scores were less sensitive to educational background compared to A level attainment. In contrast to A level attainment, UKCAT score was independently and positively predicted by having English as a first language and male sex. 
Conclusions Our findings are consistent with a previous report; most of the sociodemographic factors that predict A level attainment also predict UKCAT performance. However, compared to A levels, males and those speaking English as a first language perform better on UKCAT. Our findings suggest that UKCAT scores may be more influenced by sex and less sensitive to school type compared to A levels. These factors must be considered by institutions utilising the UKCAT as a component of the medical and dental school selection process. PMID:24400861
Mueller, Martina; Wagner, Carol L; Annibale, David J; Knapp, Rebecca G; Hulsey, Thomas C; Almeida, Jonas S
2006-03-01
Approximately 30% of intubated preterm infants with respiratory distress syndrome (RDS) will fail attempted extubation, requiring reintubation and mechanical ventilation. Although ventilator technology and monitoring of premature infants have improved over time, optimal extubation remains challenging. Furthermore, extubation decisions for premature infants require complex informational processing, techniques implicitly learned through clinical practice. Computer-aided decision-support tools would benefit inexperienced clinicians, especially during peak neonatal intensive care unit (NICU) census. A five-step procedure was developed to identify predictive variables. Clinical expert (CE) thought processes comprised one model. Variables from that model were used to develop two mathematical models for the decision-support tool: an artificial neural network (ANN) and a multivariate logistic regression model (MLR). The ranking of the variables in the three models was compared using the Wilcoxon Signed Rank Test. The best performing model was used in a web-based decision-support tool with a user interface implemented in Hypertext Markup Language (HTML) and the mathematical model employing the ANN. CEs identified 51 potentially predictive variables for extubation decisions for an infant on mechanical ventilation. Comparisons of the three models showed a significant difference between the ANN and the CE (p = 0.0006). Of the original 51 potentially predictive variables, the 13 most predictive variables were used to develop an ANN as a web-based decision-tool. The ANN processes user-provided data and returns the prediction 0-1 score and a novelty index. The user then selects the most appropriate threshold for categorizing the prediction as a success or failure. 
Furthermore, the novelty index, indicating the similarity of the test case to the training case, allows the user to assess the confidence level of the prediction with regard to how much the new data differ from the data originally used for the development of the prediction tool. State-of-the-art machine-learning methods can be employed for the development of sophisticated tools to aid clinicians' decisions. We identified numerous variables considered relevant for extubation decisions for mechanically ventilated premature infants with RDS. We then developed a web-based decision-support tool for clinicians, which can be made widely available and potentially improve patient care worldwide.
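One simple way to realize a novelty index of the kind described, quantifying how far a new case lies from the training data, is via nearest-neighbour distances. This is an illustrative sketch with random stand-in data, not the authors' tool:

```python
import numpy as np

def novelty_index(x, X_train):
    """Distance from a new case to its nearest training case,
    scaled by the typical nearest-neighbour spacing within training."""
    d_new = np.min(np.linalg.norm(X_train - x, axis=1))
    # typical spacing: each training point's distance to its nearest neighbour
    D = np.linalg.norm(X_train[:, None] - X_train[None, :], axis=2)
    np.fill_diagonal(D, np.inf)
    return d_new / np.median(D.min(axis=1))

rng = np.random.default_rng(5)
X_train = rng.normal(size=(100, 13))    # 13 predictive variables, as in the study

typical = rng.normal(size=13)           # resembles the training population
outlier = 10 * np.ones(13)              # far from anything seen in training
print(novelty_index(typical, X_train), novelty_index(outlier, X_train))
```

A high index warns the user that the prediction is an extrapolation and should be trusted less.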
[Violent video games and aggression: long-term impact and selection effects].
Staude-Müller, Frithjof
2011-01-01
This study applied social-cognitive models of aggression in order to examine relations between video game use and aggressive tendencies and biases in social information processing. To this end, 499 secondary school students (aged 12-16) completed a survey on two occasions one year apart. Hierarchical regression analysis probed media effects and selection effects and included relevant contextual variables (parental monitoring of media consumption, impulsivity, and victimization). Results revealed that it was not the consumption of violent video games but rather an uncontrolled pattern of video game use that was associated with increasing aggressive tendencies. This increase was partly mediated by a hostile attribution bias in social information processing. The influence of aggressive tendencies on later video game consumption was also examined (selection path). Adolescents with aggressive traits intensified their video game behavior only in terms of their uncontrolled video game use. This was found even after controlling for sensation seeking and parental media control.
Data mining and statistical inference in selective laser melting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, Chandrika
2016-01-11
Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM comes associated with complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
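The surrogate-modeling idea, replacing expensive simulations with a cheap fitted model over process parameters, can be sketched with a quadratic least-squares surrogate. The power/speed/density relation below is invented for illustration and is not the paper's process map:

```python
import numpy as np

# Hypothetical process map: part density as a function of laser power and
# scan speed, peaking at an interior optimum (a stand-in for simulation data).
rng = np.random.default_rng(6)
power = rng.uniform(150, 400, 60)     # W
speed = rng.uniform(500, 2000, 60)    # mm/s
density = 99 - 1e-4*(power - 300)**2 - 2e-6*(speed - 1200)**2 \
          + rng.normal(0, 0.05, 60)

# Quadratic polynomial surrogate fitted by least squares
A = np.column_stack([np.ones(60), power, speed, power**2, speed**2, power*speed])
coef, *_ = np.linalg.lstsq(A, density, rcond=None)

def surrogate(p, s):
    """Cheap evaluation in place of an expensive simulation."""
    return coef @ np.array([1.0, p, s, p*p, s*s, p*s])

print(round(surrogate(300, 1200), 2), round(surrogate(180, 1900), 2))
```

Once fitted, the surrogate can be queried thousands of times to search the design space or feed an uncertainty analysis.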
Selection Practices of Group Leaders: A National Survey.
ERIC Educational Resources Information Center
Riva, Maria T.; Lippert, Laurel; Tackett, M. Jan
2000-01-01
Study surveys the selection practices of group leaders. Explores methods of selection, variables used to make selection decisions, and the types of selection errors that leaders have experienced. Results suggest that group leaders use clinical judgment to make selection decisions and endorse using some specific variables in selection. (Contains 22…
Pei, Yan-Ling; Wu, Zhi-Sheng; Shi, Xin-Yuan; Zhou, Lu-Wei; Qiao, Yan-Jiang
2014-09-01
The present paper first reviewed the research progress and main methods of NIR spectral assignment, together with our own research results. Principal component analysis focused on characteristic signal extraction to reflect spectral differences. The partial least squares method addressed variable selection to discover characteristic absorption bands. Two-dimensional correlation spectroscopy was mainly adopted for spectral assignment. Autocorrelation peaks were obtained from spectral changes induced by external factors, such as concentration, temperature and pressure. Density functional theory was used to calculate energy from substance structure to establish the relationship between molecular energy and spectral change. Based on the methods reviewed above, and taking the NIR spectral assignment of chlorogenic acid as an example, a reliable spectral assignment for critical quality attributes of Chinese materia medica (CMM) was established using deuterium technology and spectral variable selection. The results demonstrated the consistency of the assignment, based on the spectral features of different concentrations of chlorogenic acid and the variable selection region of the online NIR model of the extraction process. Although the spectral assignment was initially performed on a single active pharmaceutical ingredient, extending it to the complex components of CMM is a meaningful prospect. Therefore, it provided methodology for NIR spectral assignment of critical quality attributes in CMM.
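The synchronous and asynchronous 2D correlation spectra mentioned can be computed with Noda's formulas. A sketch on a synthetic concentration series, not the chlorogenic acid data:

```python
import numpy as np

def noda_2d(Y):
    """Synchronous and asynchronous 2D correlation spectra (Noda's rules).
    Y: m perturbation steps (e.g. concentrations) x n spectral variables."""
    m = Y.shape[0]
    Yc = Y - Y.mean(axis=0)                 # dynamic spectra
    sync = Yc.T @ Yc / (m - 1)
    # Hilbert-Noda transformation matrix for the asynchronous spectrum
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        N = np.where(j == k, 0.0, 1.0 / (np.pi * (k - j)))
    asyn = Yc.T @ N @ Yc / (m - 1)
    return sync, asyn

# Hypothetical band at variable 2 growing linearly with concentration
conc = np.linspace(0, 1, 9)
Y = np.outer(conc, np.eye(16)[2]) + 0.001 * np.random.default_rng(7).normal(size=(9, 16))
sync, asyn = noda_2d(Y)
print(round(sync[2, 2], 3))   # strong autopeak at the growing band
```

Autopeaks on the synchronous diagonal flag the spectral variables that respond most strongly to the perturbation, which is the basis of the assignment.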
Bayesian block-diagonal variable selection and model averaging
Papaspiliopoulos, O.; Rossell, D.
2018-01-01
Summary We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for block-diagonal designs. Posterior probabilities for any number of models are obtained by evaluating a single one-dimensional integral, and other quantities of interest such as variable inclusion probabilities and model-averaged regression estimates are obtained by an adaptive, deterministic one-dimensional numerical integration. The overall computational cost scales linearly with the number of blocks, which can be processed in parallel, and exponentially with the block size, rendering it most adequate in situations where predictors are organized in many moderately-sized blocks. For general designs, we approximate the Gram matrix by a block-diagonal matrix using spectral clustering and propose an iterative algorithm that capitalizes on the block-diagonal algorithms to explore efficiently the model space. All methods proposed in this paper are implemented in the R library mombf. PMID:29861501
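The key computational point, that with a block-diagonal Gram matrix the explained sum of squares adds across blocks so the best subset of any size can be assembled blockwise, can be sketched as follows. This is a small enumerate-within-blocks illustration on synthetic orthogonal blocks, not the mombf implementation:

```python
import numpy as np
from itertools import combinations

def best_subsets_per_block(X, y):
    """For one block: best explained sum of squares for each subset size."""
    p = X.shape[1]
    best = {0: (0.0, ())}
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            Xs = X[:, S]
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            ess = float(y @ Xs @ beta)      # y' P_S y; columns may be correlated
            if ess > best.get(k, (-np.inf,))[0]:
                best[k] = (ess, S)
    return best

def best_model_of_size(blocks, y, k_total):
    """With mutually orthogonal blocks, explained SS adds across blocks, so a
    small dynamic program over per-block tables finds the overall best subset."""
    table = {0: (0.0, [])}
    for X in blocks:
        per = best_subsets_per_block(X, y)
        new = {}
        for k0, (e0, sel0) in table.items():
            for k1, (e1, S) in per.items():
                k, e = k0 + k1, e0 + e1
                if e > new.get(k, (-np.inf,))[0]:
                    new[k] = (e, sel0 + [S])
        table = new
    return table[k_total]

rng = np.random.default_rng(8)
Q, _ = np.linalg.qr(rng.normal(size=(30, 6)))
blocks = [Q[:, :3], Q[:, 3:]]              # mutually orthogonal blocks
y = 2 * Q[:, 0] + 1.5 * Q[:, 4] + 0.1 * rng.normal(size=30)
print(best_model_of_size(blocks, y, k_total=2))
```

The cost is exponential only in the block size and linear in the number of blocks, mirroring the scaling described in the abstract.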
Guellouz, Asma; Valerio-Lepiniec, Marie; Urvoas, Agathe; Chevrel, Anne; Graille, Marc; Fourati-Kammoun, Zaineb; Desmadril, Michel; van Tilbeurgh, Herman; Minard, Philippe
2013-01-01
We previously designed a new family of artificial proteins named αRep based on a subgroup of thermostable helicoidal HEAT-like repeats. We have now assembled a large optimized αRep library. In this library, the side chains at each variable position are not fully randomized but instead encoded by a distribution of codons based on the natural frequency of side chains of the natural repeats family. The library construction is based on a polymerization of micro-genes and therefore results in a distribution of proteins with a variable number of repeats. We improved the library construction process using a "filtration" procedure to retain only fully coding modules that were recombined to recreate sequence diversity. The final library named Lib2.1 contains 1.7×10(9) independent clones. Here, we used phage display to select, from the previously described library or from the new library, new specific αRep proteins binding to four different non-related predefined protein targets. Specific binders were selected in each case. The results show that binders with various sizes are selected including relatively long sequences, with up to 7 repeats. ITC-measured affinities vary with Kd values ranging from micromolar to nanomolar ranges. The formation of complexes is associated with a significant thermal stabilization of the bound target protein. The crystal structures of two complexes between αRep and their cognate targets were solved and show that the new interfaces are established by the variable surfaces of the repeated modules, as well by the variable N-cap residues. These results suggest that αRep library is a new and versatile source of tight and specific binding proteins with favorable biophysical properties.
Scruggs, Stacie; Mama, Scherezade K; Carmack, Cindy L; Douglas, Tommy; Diamond, Pamela; Basen-Engquist, Karen
2018-01-01
This study examined whether a physical activity intervention affects transtheoretical model (TTM) variables that facilitate exercise adoption in breast cancer survivors. Sixty sedentary breast cancer survivors were randomized to a 6-month lifestyle physical activity intervention or standard care. TTM variables that have been shown to facilitate exercise adoption and progress through the stages of change, including self-efficacy, decisional balance, and processes of change, were measured at baseline, 3 months, and 6 months. Differences in TTM variables between groups were tested using repeated measures analysis of variance. The intervention group had significantly higher self-efficacy (F = 9.55, p = .003) and perceived significantly fewer cons of exercise (F = 5.416, p = .025) at 3 and 6 months compared with the standard care group. Self-liberation, counterconditioning, and reinforcement management processes of change increased significantly from baseline to 6 months in the intervention group, and self-efficacy and reinforcement management were significantly associated with improvement in stage of change. The stage-based physical activity intervention increased use of select processes of change, improved self-efficacy, decreased perceptions of the cons of exercise, and helped participants advance in stage of change. These results point to the importance of using a theory-based approach in interventions to increase physical activity in cancer survivors.
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
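The core of split point selection with a maximally selected rank statistic can be sketched as follows. This is a simplified illustration, not the paper's implementation: it uses a plain linear rank statistic (the outcome's rank scores) in place of survival log-rank scores, and omits the p-value approximations discussed in the abstract.

```python
import numpy as np

def max_selected_rank_stat(x, score):
    """Maximally selected rank statistic for choosing a split point.
    `score` plays the role of rank-based outcome scores (for survival
    data these would be log-rank scores). Returns the best cutpoint
    and the maximal standardized statistic."""
    n = len(x)
    r = np.asarray(score, dtype=float)
    mean_r, var_r = r.mean(), r.var()      # population variance (ddof=0)
    best, best_cut = -np.inf, None
    for cut in np.unique(x)[:-1]:          # candidate split points
        left = x <= cut
        n1 = left.sum()
        s = r[left].sum()
        # mean/variance of the linear rank statistic under the null
        # (simple random sampling of n1 scores without replacement)
        mu = n1 * mean_r
        sigma2 = n1 * (n - n1) / (n - 1) * var_r
        z = abs(s - mu) / np.sqrt(sigma2)
        if z > best:
            best, best_cut = z, cut
    return best_cut, best
```

Because every candidate variable is reduced to one standardized maximum (compared on the p-value scale), variables with many split points gain no automatic advantage, which is the bias the paper addresses.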
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
Data preprocessing methods of FT-NIR spectral data for the classification of cooking oil
NASA Astrophysics Data System (ADS)
Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli
2014-12-01
This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and their quality parameters using chemometric methods. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. Hence, this work investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling and single scaling combined with Standard Normal Variate (SNV). The combinations of these scaling methods affect both exploratory analysis and classification via the Principal Component Analysis (PCA) plot. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000-14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method, with the number of samples in each class kept equal to 2/3 of the size of the smallest class. A t-statistic was then employed as the variable selection method to determine which variables are significant for the classification models. The pre-processing strategies were evaluated using the modified silhouette width (mSW), PCA and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantially different model performance: the effects of row scaling, column standardisation and single scaling with SNV are indicated by mSW and %CC. With a two-PC model, all five classifiers gave high %CC except quadratic discriminant analysis.
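The SNV transform discussed above is simple to state: each spectrum is centred and scaled by its own mean and standard deviation. A minimal sketch (the row/column scaling combinations studied in the paper are not reproduced here):

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: centre and scale each spectrum (row)
    by its own mean and standard deviation, removing additive and
    multiplicative scatter effects."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std
```

After SNV every spectrum has zero mean and unit standard deviation, so differences between samples reflect band shape rather than baseline offset or path-length scaling.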
Assessment of Process Capability: the case of a Soft Drinks Processing Unit
NASA Astrophysics Data System (ADS)
Sri Yogi, Kottala
2018-03-01
Process capability studies play a significant role in investigating process variation, which is important in achieving product quality characteristics. Capability indices measure the inherent variability of a process and thus help improve process performance radically. The main objective of this paper is to understand how well the process of a soft drinks processing unit, one of the premier brands marketed in India, stays within specification. A few selected critical parameters in soft drinks processing were considered for this study: gas volume concentration, Brix concentration and crock torque. Relevant statistical parameters were assessed from a process capability perspective: short-term and long-term capability indices. The assessment used real-time data from a soft drinks bottling company located in the state of Chhattisgarh, India. The results suggested reasons for variations in the process, which were validated using ANOVA; a Taguchi cost function was also fitted and the associated waste was estimated in monetary terms, which the organization can use to improve its process parameters. This research work has substantially benefited the organization in understanding the variations of the selected critical parameters, toward achieving zero rejection.
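The capability indices mentioned here follow textbook definitions: Cp compares the specification width to six process standard deviations, while Cpk also penalises an off-centre mean. A minimal sketch (the specification limits below are illustrative, not the plant's actual values):

```python
import numpy as np

def capability_indices(samples, lsl, usl):
    """Process capability from a sample.
    Cp  = (USL - LSL) / (6 sigma)          -- potential capability
    Cpk = min(USL - mu, mu - LSL) / (3 sigma) -- actual capability,
    reduced when the process mean drifts off centre."""
    x = np.asarray(samples, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

For a perfectly centred process Cp equals Cpk; a common rule of thumb is that values above 1.33 indicate a capable process.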
NASA Astrophysics Data System (ADS)
Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban
2017-01-01
Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can prevent many unexpected behaviors; properly managed vibration monitoring ensures economic and safe operation. Potential for further improvement of vibration monitoring lies in improving current control strategies, one option being the introduction of model predictive control. Multistep-ahead predictive models of vibration are the starting point for a successful model predictive strategy. In this article, predictive models are created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro-fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmissions. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before an actual failure occurs. The ANFIS procedure for variable selection was applied to detect the predominant variables affecting the prediction of vibration, and to select a minimal input subset from the initial set of input variables: current and lagged values (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods by avoiding unnecessary input variables; models with fewer inputs are preferable because they reduce overfitting between training and testing data.
While the obtained results are promising, further work is required in order to get results that could be directly applied in practice.
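The current-plus-lagged input structure described above can be sketched as a helper that turns a vibration series into a supervised learning dataset (a generic illustration; the ANFIS model itself is not reproduced here):

```python
import numpy as np

def lag_matrix(series, n_lags, horizon=1):
    """Build (inputs, target) pairs from a time series: each input row
    holds the current value plus `n_lags` lagged values; the target is
    the value `horizon` steps ahead (multistep-ahead prediction)."""
    x = np.asarray(series, dtype=float)
    rows = []
    for t in range(n_lags, len(x) - horizon):
        rows.append(x[t - n_lags:t + 1])   # x[t-n_lags], ..., x[t]
    X = np.array(rows)
    y = x[n_lags + horizon:]               # x[t + horizon] for each row
    return X, y
```

A variable selection procedure such as the ANFIS-based one in the paper would then score the columns of X (current value and each lag) and keep only the minimal informative subset.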
Bonanno, Angelo; Giannoulaki, Marianna; Barra, Marco; Basilone, Gualtiero; Machias, Athanassios; Genovese, Simona; Goncharov, Sergey; Popov, Sergey; Rumolo, Paola; Di Bitetto, Massimiliano; Aronica, Salvatore; Patti, Bernardo; Fontana, Ignazio; Giacalone, Giovanni; Ferreri, Rosalia; Buscaino, Giuseppa; Somarakis, Stylianos; Pyrounaki, Maria-Myrto; Tsoukali, Stavroula; Mazzola, Salvatore
2014-01-01
A number of scientific papers in recent years have singled out the influence of environmental conditions on the spatial distribution of fish species, highlighting the need for the fisheries scientific community to investigate, besides biomass estimates, also the habitat selection of commercially important fish species. The Mediterranean Sea, although generally oligotrophic, is characterized by high habitat variability and represents an ideal study area to investigate the adaptive behavior of small pelagics under different environmental conditions. In this study the habitat selection of European anchovy Engraulis encrasicolus and European sardine Sardina pilchardus is analyzed in two areas of the Mediterranean Sea that differ greatly in their environmental regimes: the Strait of Sicily and the North Aegean Sea. A number of environmental parameters were used to investigate factors influencing anchovy and sardine habitat selection. Acoustic survey data, collected during the summer periods of 2002–2010, were used for this purpose. The quotient analysis was used to identify the association between high density values and environmental variables; it was applied to the entire dataset in each area in order to identify similarities or differences in the “mean” spatial behavioral pattern for each species. Principal component analysis was applied to selected environmental variables in order to identify those environmental regimes which drive each of the two ecosystems. The analysis revealed the effect of food availability along with bottom depth selection on the spatial distribution of both species. Furthermore, PCA results highlighted that the observed selectivity for shallower waters is mainly associated with specific environmental processes that locally increase productivity. 
The common trends in habitat selection of the two species, as observed in the two regions although they present marked differences in hydrodynamics, seem to be driven by the oligotrophic character of the study areas, highlighting the role of areas where the local environmental regimes meet ‘the ocean triad hypothesis’. PMID:24992576
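Quotient analysis, in its commonly used form, computes for each class of an environmental variable the share of total fish density divided by the share of samples falling in that class; values above 1 indicate positive habitat selection. A minimal sketch under that assumed definition (the paper's exact binning and smoothing are not reproduced):

```python
import numpy as np

def quotient_curve(env, density, bins):
    """Quotient analysis: for each class of an environmental variable,
    the fraction of total fish density divided by the fraction of
    samples. Q > 1 suggests the species selects that habitat class."""
    env = np.asarray(env, dtype=float)
    density = np.asarray(density, dtype=float)
    idx = np.digitize(env, bins)          # class index per sample
    nbins = len(bins) + 1
    q = np.full(nbins, np.nan)
    for b in range(nbins):
        in_bin = idx == b
        if in_bin.any():
            dens_share = density[in_bin].sum() / density.sum()
            samp_share = in_bin.sum() / len(env)
            q[b] = dens_share / samp_share
    return q
```

Plotting q against the class midpoints gives the quotient curve from which preferred environmental ranges are read off.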
Patient classification on weaning trials using neural networks and wavelet transform.
Arizmendi, Carlos; Viviescas, Juan; González, Hernando; Giraldo, Beatriz
2014-01-01
Determining the optimal time in the process of weaning patients from mechanical ventilation, distinguishing patients capable of maintaining spontaneous breathing from patients who fail to maintain it, is a very important task in the intensive care unit. Wavelet Transform (WT) and Neural Network (NN) techniques were applied to develop a classifier for the study of patients in the weaning trial process. The respiratory pattern of each patient was characterized through different time series. Genetic Algorithms (GA) and forward selection were used as feature selection techniques. A classification performance of 77.00±0.06% correctly classified patients was obtained using a NN and GA combination, with only 6 of the initial 14 variables.
NASA Technical Reports Server (NTRS)
Hughes, C. W.; Logan, A. H.
1981-01-01
Various candidate rotor systems were compared in an effort to select a modern four-bladed rotor for the RSRA. The YAH-64 rotor system was chosen as the candidate rotor system for further development for the RSRA. The process used to select the rotor system, studies conducted to mate the rotor with the RSRA and provide parametric variability, and the development plan which would be used to implement these studies are presented. Drawings are included.
NASA Astrophysics Data System (ADS)
Uma Maheswari, R.; Umamaheswari, R.
2017-02-01
A Condition Monitoring System (CMS) offers substantial economic benefits and enables prognostic maintenance in wind turbine-generator failure prevention. Vibration monitoring and analysis is a powerful tool in drive train CMS, enabling early detection of impending failure or damage. In variable speed drives such as wind turbine-generator drive trains, the acquired vibration signal is non-stationary and non-linear. Traditional stationary signal processing techniques are inefficient for diagnosing machine faults under time-varying conditions. Current research in CMS for drive trains focuses on developing and improving non-linear, non-stationary feature extraction and fault classification algorithms to improve fault detection/prediction sensitivity and selectivity, thereby reducing misdetection and false alarm rates. Stationary signal processing algorithms employed in vibration analysis have been reviewed extensively in the literature. In this paper, an attempt is made to review recent research advances in non-linear, non-stationary signal processing algorithms particularly suited to variable speed wind turbines.
Cortical processing of dynamic sound envelope transitions.
Zhou, Yi; Wang, Xiaoqin
2010-12-08
Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
Hand placement near the visual stimulus improves orientation selectivity in V2 neurons
Sergio, Lauren E.; Crawford, J. Douglas; Fallah, Mazyar
2015-01-01
Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally-relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much like how the oculomotor system can deploy attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach, but also the optimization of vision for their separate requirements for guiding movements. PMID:25717165
ERIC Educational Resources Information Center
Manning, Brad A.
The first section of this manual contains a selective review of organizational change literature which focuses on predictive institutional variables as they affect the adoption-diffusion process. The second section describes the development of the Trouble Shooting Checklist (TSC). The third section presents two Trouble Shooting Checklists (TSC-A…
ERIC Educational Resources Information Center
Stoffey, Ronald W.
Researchers are increasingly aware of the importance of job applicants' reactions to the personnel selection process. This study examines three variables in connection with drug testing policies: (1) the potential applicant's reactions to two different drug testing policies which varied in terms of drug policy characteristics and their impact on…
Multi-Stage Mental Process for Economic Choice in Capuchins
ERIC Educational Resources Information Center
Padoa-Schioppa, Camillo; Jandolo, Lucia; Visalberghi, Elisabetta
2006-01-01
We studied economic choice behavior in capuchin monkeys by offering them to choose between two different foods available in variable amounts. When monkeys selected between familiar foods, their choice patterns were well-described in terms of relative value of the two foods. A leading view in economics and biology is that such behavior results from…
Physiological Factors in Adult Learning and Instruction. Research to Practice Series.
ERIC Educational Resources Information Center
Verner, Coolie; Davison, Catherine V.
The physiological condition of the adult learner as related to his learning capability is discussed. The design of the instructional process, the selection of learning tasks, the rate at which instruction occurs, and the nature of the instructional setting may all be modified by the instructor to accommodate the variable physiological conditions of…
CORRELATION PURSUIT: FORWARD STEPWISE VARIABLE SELECTION FOR INDEX MODELS
Zhong, Wenxuan; Zhang, Tingting; Zhu, Yu; Liu, Jun S.
2012-01-01
In this article, a stepwise procedure, correlation pursuit (COP), is developed for variable selection under the sufficient dimension reduction framework, in which the response variable Y is influenced by the predictors X1, X2, …, Xp through an unknown function of a few linear combinations of them. Unlike linear stepwise regression, COP does not impose a special form of relationship (such as linear) between the response variable and the predictor variables. The COP procedure selects variables that attain the maximum correlation between the transformed response and the linear combination of the variables. Various asymptotic properties of the COP procedure are established; in particular, its variable selection performance under a diverging number of predictors and sample size has been investigated. The excellent empirical performance of the COP procedure in comparison with existing methods is demonstrated by both extensive simulation studies and a real example in functional genomics. PMID:23243388
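The greedy flavor of COP can be illustrated with a much-simplified stand-in: at each step, add the predictor that most increases the squared correlation between the response and the fitted linear combination of selected predictors. The sketch below uses plain OLS R² in place of COP's profile correlation with the transformed response, and has no test-based stopping rule:

```python
import numpy as np

def forward_select(X, y, k):
    """Greedy forward selection: at each step add the predictor that
    maximises R^2 of an OLS fit on the selected set (a simplification
    of COP's correlation criterion)."""
    n, p = X.shape
    tss = (y - y.mean()) @ (y - y.mean())
    selected = []
    for _ in range(k):
        best_r2, best_j = -1.0, None
        for j in range(p):
            if j in selected:
                continue
            A = np.column_stack([np.ones(n), X[:, selected + [j]]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1.0 - (resid @ resid) / tss
            if r2 > best_r2:
                best_r2, best_j = r2, j
        selected.append(best_j)
    return selected
```

Because R² of a linear fit is exactly the squared correlation between y and the fitted combination, this captures the "maximum correlation" idea, though not COP's nonparametric handling of the response.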
Coron, Camille
2016-01-01
We are interested in the long-time behavior of a diploid population with sexual reproduction and randomly varying population size, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a 3-dimensional birth-and-death process with competition, weak cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter K that goes to infinity, following a large population assumption. When the individual birth and natural death rates are of order K, the sequence of stochastic processes indexed by K converges toward a new slow-fast dynamics with variable population size. We indeed prove the convergence toward 0 of a fast variable giving the deviation of the population from quasi Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a 2-dimensional diffusion process that reaches (0,0) almost surely in finite time. The population size and the proportion of a given allele converge toward a Wright-Fisher diffusion with stochastically varying population size and diploid selection. We emphasize differences between haploid and diploid populations due to stochastic variability of the population size. Using a non-trivial change of variables, we study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular we prove that this diffusion, starting from any non-trivial state and conditioned on not hitting (0,0), admits a unique quasi-stationary distribution. We give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches.
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluation. It is important to select appropriate model inputs when many candidate explanatory variables are available, and model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact, so a procedure is developed to carry them out in sequence. The procedure first selects model input variables using a cross-validation method; an appropriate number of variables is identified to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method, and an approach using a cluster method is applied to enhance calibration practice based on the principle of selecting representative data. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data matters in addition to its size.
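The cluster-based calibration data selection can be sketched in one dimension, e.g. clustering storm events by a summary variable such as event volume and keeping the event nearest each cluster centre. This is an illustrative stand-in with a tiny k-means (quantile initialisation), not the paper's exact cluster method:

```python
import numpy as np

def representative_subset(values, k):
    """Pick k representative calibration events: 1-D k-means with
    centres initialised at evenly spaced quantiles, then return the
    index of the event closest to each converged centre."""
    x = np.asarray(values, dtype=float)
    centres = np.quantile(x, np.linspace(0, 1, k))
    for _ in range(50):
        label = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        new = np.array([x[label == c].mean() if (label == c).any()
                        else centres[c] for c in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return [int(np.argmin(np.abs(x - c))) for c in centres]
```

The idea is the one stated in the abstract: a calibration set chosen to span the clusters carries more information than a random subset of the same size.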
Geng, Zhigeng; Wang, Sijian; Yu, Menggang; Monahan, Patrick O.; Champion, Victoria; Wahba, Grace
2017-01-01
In many scientific and engineering applications, covariates are naturally grouped. When group structures are available among covariates, people are usually interested in identifying both important groups and important variables within the selected groups. Among existing successful group variable selection methods, some fail to conduct within-group selection; others can conduct both group and within-group selection, but their objective functions are non-convex, which may require extra numerical effort. In this article, we propose a novel Log-Exp-Sum (LES) penalty for group variable selection. The LES penalty is strictly convex. It can identify important groups as well as select important variables within a group. We develop an efficient group-level coordinate descent algorithm to fit the model. We also derive non-asymptotic error bounds and asymptotic group selection consistency for our method in the high-dimensional setting where the number of covariates can be much larger than the sample size. Numerical results demonstrate the good performance of our method in both variable selection and prediction. We applied the proposed method to an American Cancer Society breast cancer survivor dataset. The findings are clinically meaningful and may help design intervention programs to improve the quality of life of breast cancer survivors. PMID:25257196
2013-06-01
(Gunasekaran and Kobu, 2007). Gunasekaran and Kobu also presented six observations as they relate to these key performance indicators (KPI), as follows: 1. Internal business process (50% of the KPI) and customers (50% of the KPI) play a significant role in SC environments. This implies that internal business process PMs have a significant impact on operational performance. 2. The most widely used PM is financial performance (38% of the KPI). This…
Public goods games in populations with fluctuating size.
McAvoy, Alex; Fraiman, Nicolas; Hauert, Christoph; Wakeley, John; Nowak, Martin A
2018-05-01
Many mathematical frameworks of evolutionary game dynamics assume that the total population size is constant and that selection affects only the relative frequency of strategies. Here, we consider evolutionary game dynamics in an extended Wright-Fisher process with variable population size. In such a scenario, it is possible that the entire population becomes extinct. Survival of the population may depend on which strategy prevails in the game dynamics. Studying cooperative dilemmas, it is a natural feature of such a model that cooperators enable survival, while defectors drive extinction. Although defectors are favored for any mixed population, random drift could lead to their elimination and the resulting pure-cooperator population could survive. On the other hand, if the defectors remain, then the population will quickly go extinct because the frequency of cooperators steadily declines and defectors alone cannot survive. In a mutation-selection model, we find that (i) a steady supply of cooperators can enable long-term population survival, provided selection is sufficiently strong, and (ii) selection can increase the abundance of cooperators but reduce their relative frequency. Thus, evolutionary game dynamics in populations with variable size generate a multifaceted notion of what constitutes a trait's long-term success. Copyright © 2018 Elsevier Inc. All rights reserved.
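A toy version of such a process can be simulated. The sketch below makes illustrative assumptions that are not taken from the paper (an exponential payoff-to-fitness mapping, a crowding term with carrying capacity K, Poisson offspring numbers): cooperators pay a cost to raise the group payoff, defectors free-ride, and the expected population size rises with the average payoff, so an all-defector population drifts toward extinction.

```python
import numpy as np

def step(nC, nD, rng, b=3.0, c=1.0, delta=0.1, K=100):
    """One generation of a toy Wright-Fisher process with variable
    population size. Returns the next (cooperators, defectors)."""
    N = nC + nD
    if N == 0:
        return 0, 0                        # extinction is absorbing
    x = nC / N
    payC, payD = b * x - c, b * x          # defectors free-ride
    avg = x * payC + (1 - x) * payD
    # expected size rises with average payoff, falls with crowding
    N_next = rng.poisson(N * np.exp(delta * avg - N / K))
    # selection: the defectors' payoff advantage tilts composition
    wC, wD = np.exp(delta * payC), np.exp(delta * payD)
    p = x * wC / (x * wC + (1 - x) * wD)
    nC_next = rng.binomial(N_next, p) if N_next > 0 else 0
    return nC_next, N_next - nC_next
```

Iterating `step` exhibits the tension described in the abstract: defectors are favored within any mixed population (p < x whenever both types are present), yet only a cooperator-rich population has positive expected growth.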
An Empirical Modeling of Runoff in Small Watersheds Using LiDAR Data
NASA Astrophysics Data System (ADS)
Lopatin, J.; Hernández, J.; Galleguillos, M.; Mancilla, G.
2013-12-01
Hydrological models allow the simulation of natural water processes as well as the quantification and prediction of the effects of human impacts on runoff behavior. However, obtaining the information needed to apply these models can be costly in both time and resources, especially in large and difficult-to-access areas. The objective of this research was to integrate LiDAR data into the hydrological modeling of runoff in small watersheds, using derived hydrologic, vegetation and topographic variables. The study area includes 10 small forested headwater watersheds, between 2 and 16 ha, located in the south-central coastal range of Chile. In each of them, instantaneous rainfall and runoff flow were measured for a total of 15 rainfall events between August 2012 and July 2013, yielding a total of 79 observations. In March 2011 a Harrier 54/G4 Dual System was used to obtain a discrete-pulse LiDAR point cloud with an average of 4.64 points per square meter. A Digital Terrain Model (DTM) of 1 meter resolution was obtained from the point cloud, and 55 topographic variables were subsequently derived from it, such as physical watershed parameters and morphometric features. At the same time, 30 vegetation descriptive variables were obtained directly from the point cloud and from a Digital Canopy Model (DCM). The classification and regression "Random Forest" (RF) algorithm was used to select the most important variables for predicting water height (liters), and the "Partial Least Squares Path Modeling" (PLS-PM) algorithm was used to fit a model using the selected set of variables. Four latent variables were selected (outer model), related to climate, topography, vegetation and runoff, and to each one a group of the predictor variables selected by RF was assigned (inner model). The coefficient of determination (R2) and Goodness-of-Fit (GoF) of the final model were obtained. 
The best results were found when modeling using only the upper 50th percentile of rainfall events. The best variables selected by the RF algorithm were three topographic variables and three vegetation-related ones. We obtained an R2 of 0.82 and a GoF of 0.87 with a 95% confidence interval. This study shows that it is possible to predict the water harvested during a rainstorm event in forest environments using only LiDAR data. However, this type of methodology does not yield good results for flows produced by low-magnitude rainfall events, as these are more influenced by the initial conditions of soil, vegetation and climate, which make their behavior slower and more erratic.
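The variable-screening step can be sketched as follows: rank a large pool of candidate predictors by random-forest importance, then fit a simpler model on the survivors. The synthetic data, the choice of six retained variables, and the linear model standing in for PLS-PM are all assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
# Synthetic stand-in for the 85 candidate LiDAR predictors and 79
# observations: only the first three columns actually drive the response.
X = rng.normal(size=(79, 85))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2] + rng.normal(scale=0.1, size=79)

# Rank variables by random-forest importance and keep the six best,
# echoing the six variables retained in the study.
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:6]

# Fit a simple model on the reduced set (a linear stand-in for PLS-PM).
model = LinearRegression().fit(X[:, top], y)
```

The informative columns dominate the importance ranking, so the downstream model sees only a handful of relevant inputs instead of 85 candidates.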
Results from the VALUE perfect predictor experiment: process-based evaluation
NASA Astrophysics Data System (ADS)
Maraun, Douglas; Soares, Pedro; Hertig, Elke; Brands, Swen; Huth, Radan; Cardoso, Rita; Kotlarski, Sven; Casado, Maria; Pongracz, Rita; Bartholy, Judit
2016-04-01
Until recently, the evaluation of downscaled climate model simulations has typically been limited to surface climatologies, including long term means, spatial variability and extremes. But these aspects are often, at least partly, tuned in regional climate models to match observed climate. The tuning issue is of course particularly relevant for bias corrected regional climate models. In general, a good performance of a model for these aspects in present climate does therefore not imply a good performance in simulating climate change. It is now widely accepted that, to increase our confidence in climate change simulations, it is necessary to evaluate how climate models simulate relevant underlying processes. In other words, it is important to assess whether downscaling does the right thing for the right reasons. Therefore, VALUE has carried out a broad process-based evaluation study based on its perfect predictor experiment simulations: the downscaling methods are driven by ERA-Interim data over the period 1979-2008, reference observations are given by a network of 85 meteorological stations covering all European climates. More than 30 methods participated in the evaluation. In order to compare statistical and dynamical methods, only variables provided by both types of approaches could be considered. This limited the analysis to conditioning local surface variables on driving-process variables that are simulated by ERA-Interim. We considered the following types of processes: at the continental scale, we evaluated the performance of downscaling methods for positive and negative North Atlantic Oscillation, Atlantic ridge and blocking situations. At synoptic scales, we considered Lamb weather types for selected European regions such as Scandinavia, the United Kingdom, the Iberian Peninsula or the Alps. At regional scales we considered phenomena such as the Mistral, the Bora or the Iberian coastal jet. 
Such process-based evaluation helps to attribute biases in surface variables to underlying processes and ultimately to improve climate models.
Huntsman, Brock M; Falke, Jeffrey A; Savereide, James W; Bennett, Katrina E
2017-01-01
Density-dependent (DD) and density-independent (DI) habitat selection is strongly linked to a species' evolutionary history. Determining the relative importance of each is necessary because declining populations are not always the result of altered DI mechanisms but can often be the result of DD via a reduced carrying capacity. We developed spatially and temporally explicit models throughout the Chena River, Alaska to predict important DI mechanisms that influence Chinook salmon spawning success. We used resource-selection functions to predict suitable spawning habitat based on geomorphic characteristics, a semi-distributed water-and-energy balance hydrologic model to generate stream flow metrics, and modeled stream temperature as a function of climatic variables. Spawner counts were predicted throughout the core and periphery spawning sections of the Chena River from escapement estimates (DD) and DI variables. Additionally, we used isodar analysis to identify whether spawners actively defend spawning habitat or follow an ideal free distribution along the riverscape. Aerial counts were best explained by escapement and reference to the core or periphery, while no models with DI variables were supported in the candidate set. Furthermore, isodar plots indicated habitat selection was best explained by ideal free distributions, although there was strong evidence for active defense of core spawning habitat. Our results are surprising, given salmon commonly defend spawning resources, and are likely due to competition occurring at finer spatial scales than addressed in this study.
Huntsman, Brock M.; Falke, Jeffrey A.; Savereide, James W.; ...
2017-05-22
Density-dependent (DD) and density-independent (DI) habitat selection is strongly linked to a species’ evolutionary history. Determining the relative importance of each is necessary because declining populations are not always the result of altered DI mechanisms but can often be the result of DD via a reduced carrying capacity. Here, we developed spatially and temporally explicit models throughout the Chena River, Alaska to predict important DI mechanisms that influence Chinook salmon spawning success. We used resource-selection functions to predict suitable spawning habitat based on geomorphic characteristics, a semi-distributed water-and-energy balance hydrologic model to generate stream flow metrics, and modeled stream temperature as a function of climatic variables. Spawner counts were predicted throughout the core and periphery spawning sections of the Chena River from escapement estimates (DD) and DI variables. In addition, we used isodar analysis to identify whether spawners actively defend spawning habitat or follow an ideal free distribution along the riverscape. Aerial counts were best explained by escapement and reference to the core or periphery, while no models with DI variables were supported in the candidate set. Moreover, isodar plots indicated habitat selection was best explained by ideal free distributions, although there was strong evidence for active defense of core spawning habitat. These results are surprising, given salmon commonly defend spawning resources, and are likely due to competition occurring at finer spatial scales than addressed in this study.
Optimal timing in biological processes
Williams, B.K.; Nichols, J.D.
1984-01-01
A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean, nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
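For the observable-returns case, the optimal policy follows from backward induction: at time t the organism should act iff the observed return exceeds the continuation value V[t+1]. A minimal Monte Carlo sketch, assuming Gaussian time-specific return distributions (an illustrative choice, not the paper's):

```python
import numpy as np

def continuation_values(means, sds, n=200_000, seed=1):
    """Backward induction for a one-shot timing problem.

    At time t the organism observes R_t ~ Normal(means[t], sds[t]) and
    may act at most once before the horizon ends.  The recursion is
    V[t] = E[max(R_t, V[t+1])], and the optimal rule at time t is
    'act iff R_t >= V[t+1]', so the continuation values are exactly
    the selectivity thresholds.
    """
    rng = np.random.default_rng(seed)
    T = len(means)
    V = [0.0] * (T + 1)
    V[T] = float("-inf")   # action is effectively mandatory by the last step
    for t in range(T - 1, -1, -1):
        draws = rng.normal(means[t], sds[t], n)
        V[t] = float(np.mean(np.maximum(draws, V[t + 1])))
    return V
```

With identical return distributions at every step, the thresholds V[1] > V[2] > ... decline toward the horizon, reproducing the qualitative result that the organism should be very selective early and become less selective as time runs out.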
Learning-based controller for biotechnology processing, and method of using
Johnson, John A.; Stoner, Daphne L.; Larsen, Eric D.; Miller, Karen S.; Tolle, Charles R.
2004-09-14
The present invention relates to process control where some of the controllable parameters are difficult or impossible to characterize. In particular, the present invention relates to process control of such systems in biotechnology, including, but not limited to, biotechnological minerals processing. In the inventive method, an application of the present invention manipulates a minerals bioprocess to find local extrema (maxima or minima) for selected output variables/process goals by using a learning-based controller for bioprocess oxidation of minerals during hydrometallurgical processing. The learning-based controller operates with or without human supervision and works to find process optima without previously defined optima, owing to the non-characterized nature of the process being manipulated.
Chervyakov, Alexander V.; Sinitsyn, Dmitry O.; Piradov, Michael A.
2016-01-01
HIGHLIGHTS We suggest classifying variability of neuronal responses as follows: false (associated with a lack of knowledge about the influential factors), “genuine harmful” (noise), “genuine neutral” (synonyms, repeats), and “genuine useful” (the basis of neuroplasticity and learning). The genuine neutral variability is considered in terms of the phenomenon of degeneracy. Of particular importance is the genuine useful variability that is considered as a potential basis for neuroplasticity and learning. This type of variability is considered in terms of the neural Darwinism theory. In many cases, neural signals detected under the same external experimental conditions significantly change from trial to trial. The variability phenomenon, which complicates extraction of reproducible results and is ignored in many studies by averaging, has attracted attention of researchers in recent years. In this paper, we classify possible types of variability based on its functional significance and describe features of each type. We describe the key adaptive significance of variability at the neural network level and the degeneracy phenomenon that may be important for learning processes in connection with the principle of neuronal group selection. PMID:27932969
Approach–Avoidance Processes Contribute to Dissociable Impacts of Risk and Loss on Choice
Wright, Nicholas D.; Symmonds, Mkael; Hodgson, Karen; Fitzgerald, Thomas H. B.; Crawford, Bonni; Dolan, Raymond J.
2013-01-01
Value-based choices are influenced both by risk in potential outcomes and by whether outcomes reflect potential gains or losses. These variables are held to be related in a specific fashion, manifest in risk aversion for gains and risk seeking for losses. Instead, we hypothesized that there are independent impacts of risk and loss on choice such that, depending on context, subjects can show either risk aversion for gains and risk seeking for losses or the exact opposite. We demonstrate this independence in a gambling task, by selectively reversing a loss-induced effect (causing more gambling for gains than losses and the reverse) while leaving risk aversion unaffected. Consistent with these dissociable behavioral impacts of risk and loss, fMRI data revealed dissociable neural correlates of these variables, with parietal cortex tracking risk and orbitofrontal cortex and striatum tracking loss. Based on our neural data, we hypothesized that risk and loss influence action selection through approach–avoidance mechanisms, a hypothesis supported in an experiment in which we show valence and risk-dependent reaction time effects in line with this putative mechanism. We suggest that in the choice process risk and loss can independently engage approach–avoidance mechanisms. This can provide a novel explanation for how risk influences action selection and explains both classically described choice behavior as well as behavioral patterns not predicted by existing theory. PMID:22593069
Moreno-Martínez, Francisco Javier; Montoro, Pedro R
2012-01-01
This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data on seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each of which is known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls.
Al-Omar, Sally; Le Rolle, Virginie; Beuchée, Alain; Samson, Nathalie; Praud, Jean-Paul; Carrault, Guy
2018-05-10
A semi-automated processing approach was developed to assess the effects of early postnatal environmental tobacco smoke (ETS) on the cardiorespiratory control of newborn lambs. The system consists of several steps beginning with artifact rejection, followed by the selection of stationary segments, and ending with feature extraction. This approach was used in six lambs exposed to 20 cigarettes/day for the first 15 days of life, while another six control lambs were exposed to room air. On postnatal day 16, electrocardiograph and respiratory signals were obtained from a 6-h polysomnographic recording. The effects of postnatal ETS exposure on heart rate variability, respiratory rate variability, and cardiorespiratory interrelations were explored. The unique results suggest that early postnatal ETS exposure increases respiratory rate variability and decreases the coupling between cardiac and respiratory systems. Potentially harmful consequences in early life include unstable breathing and decreased adaptability of cardiorespiratory function, particularly during early life challenges, such as prematurity or viral infection.
Apparatus and method for microwave processing of materials
Johnson, A.C.; Lauf, R.J.; Bible, D.W.; Markunas, R.J.
1996-05-28
Disclosed is a variable frequency microwave heating apparatus designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity for testing or other selected applications. The variable frequency heating apparatus is used in the method of the present invention to monitor the resonant processing frequency within the furnace cavity depending upon the material, including the state thereof, from which the workpiece is fabricated. The variable frequency microwave heating apparatus includes a microwave signal generator and a high-power microwave amplifier or a microwave voltage-controlled oscillator. A power supply is provided for operation of the high-power microwave oscillator or microwave amplifier. A directional coupler is provided for detecting the direction and amplitude of signals incident upon and reflected from the microwave cavity. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load. 10 figs.
Chiarle, Alberto; Isaia, Marco
2013-07-01
In this study, we compare the courtship behaviours of Pardosa proxima and P. vlijmi, two species of wolf spiders up to now regarded as "ethospecies", by means of motion analysis methodologies. In particular, we investigate the features of the signals, aiming at understanding the evolution of the courtship and its role in species delimitation and speciation processes. In our model, we highlight a modular structure of the behaviours and the presence of recurring units and phases. Consistent with similar cases of animal communication, we observed one highly variable phase and one stereotyped phase in both species. The stereotyped phase is here regarded as a signal related to species identity or an honest signal linked directly to the quality of the signaler. By contrast, the variable phase serves to facilitate signal detection and assessment by the female, reducing choice costs or errors. Variable phases include cues arising from Fisherian runaway selection, female sensory exploitation and remnants of past selection. Copyright © 2013 Elsevier B.V. All rights reserved.
Automatic design of basin-specific drought indexes for highly regulated water systems
NASA Astrophysics Data System (ADS)
Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea Francesco; Pulido-Velazquez, Manuel
2018-04-01
Socio-economic costs of drought are progressively increasing worldwide due to undergoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail at detecting critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures. We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relying on the formulation of an ad hoc state index
is used for triggering drought management measures. The state index was constructed empirically through a trial-and-error process begun in the 1980s and finalized in 2007, guided by experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resulting FRIDA index outperforms the official state index both in accuracy in reproducing the target variable and in the cardinality of the selected input set.
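A greedy forward wrapper conveys the flavor of wrapper-based variable selection. Note that W-QEISS itself is multi-objective and evolutionary; this single-objective sketch (with an assumed linear wrapped model and synthetic data) only maximizes cross-validated accuracy while implicitly limiting cardinality:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_wrapper(X, y, k_max=3, cv=5):
    """Greedy forward wrapper selection: repeatedly add the variable
    that most improves the wrapped model's cross-validated R^2, and
    stop when no candidate improves the score or k_max is reached."""
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining and len(selected) < k_max:
        scores = {j: cross_val_score(LinearRegression(),
                                     X[:, selected + [j]], y, cv=cv).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break                       # no subset improves: stop early
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best
```

Because the wrapper scores whole subsets through the downstream model, it naturally trades accuracy against the number of selected variables, which is the core idea W-QEISS optimizes explicitly as separate objectives.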
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
The single-index varying-coefficient model is an important mathematical modeling method for nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric and parametric components. Under defined regularity conditions, and with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, owing to the robustness of the check loss function to outliers in finite samples, our proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
The variability of software scoring of the CDMAM phantom associated with a limited number of images
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying J.; Van Metter, Richard
2007-03-01
Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability for the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and the type of error estimated for experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
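The underestimation mentioned at the end can be seen in a few lines: repeatedly drawing 8 of the same 36 scored images reuses the same binomial outcomes, so the observed spread falls short of the fresh-image binomial spread. The per-image detection rate below is an assumed illustrative value:

```python
import numpy as np

rng = np.random.default_rng(7)
n_images, p_detect = 36, 0.65               # assumed per-image detection rate
scores = rng.random(n_images) < p_detect    # one simulated scoring per image

# 3000 trials of the protocol: pick 8 of the 36 images without
# replacement and record the detection fraction for each trial.
trials = np.array([
    scores[rng.choice(n_images, size=8, replace=False)].mean()
    for _ in range(3000)
])
resampled_sd = trials.std()

# Re-sampling a finite set shrinks the spread by the finite-population
# factor sqrt((36-8)/(36-1)) relative to scoring 8 fresh images, whose
# binomial spread is sqrt(p*(1-p)/8).
p_hat = scores.mean()
fresh_sd = np.sqrt(p_hat * (1 - p_hat) / 8)
```

Because the 3000 trials all recycle one fixed set of 36 outcomes, the between-trial spread reflects only the sampling of image subsets, not the underlying binomial detection process itself.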
Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhaq, Chun-Jiang
2014-10-01
In order to detect the soluble solids content (SSC) of apples conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apples. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectra of the SSC in apples based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables selected from the full spectrum, which had 1512 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403°Brix, respectively, for SSC. The proposed GA-CARS method could effectively simplify the portable detection model of SSC in apples based on near infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.
Decision tree modeling using R.
Zhang, Zhongheng
2016-08-01
In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because growing a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables to be selected at each split. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
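The article's examples use R (e.g., conditional inference trees in the party/partykit packages); an analogous sketch with scikit-learn's CART-based learners, which split on impurity rather than conditional inference tests, looks like this:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A single tree: recursive binary partitioning on the variable whose
# split best separates the response, until stopping criteria are met.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# A random forest adds the two sources of diversity described above:
# bootstrap sampling of rows and a restricted set of candidate
# variables ("sqrt" of the total) considered at each split.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                random_state=0).fit(X_tr, y_tr)
```

The forest averages many de-correlated trees, so its predictions are far less sensitive to small perturbations of the training data than any single tree's.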
PVD thermal barrier coating applications and process development for aircraft engines
NASA Astrophysics Data System (ADS)
Rigney, D. V.; Viguie, R.; Wortman, D. J.; Skelly, D. W.
1997-06-01
Thermal barrier coatings (TBCs) have been developed for application to aircraft engine components to improve service life in an increasingly hostile thermal environment. The choice of TBC type is related to the component, intended use, and economics. Selection of electron beam physical vapor deposition processing for turbine blades is due in part to part size, surface finish requirements, thickness control needs, and hole closure issues. Process development of PVD TBCs has been carried out at several different sites, including GE Aircraft Engines (GEAE). The influence of processing variables on microstructure is discussed, along with the GEAE development coater and initial experiences of pilot line operation.
The emerging conceptualization of groups as information processors.
Hinsz, V B; Tindale, R S; Vollrath, D A
1997-01-01
A selective review of research highlights the emerging view of groups as information processors. In this review, the authors include research on processing objectives, attention, encoding, storage, retrieval, processing, response, feedback, and learning in small interacting task groups. The groups as information processors perspective underscores several characteristic dimensions of variability in group performance of cognitive tasks, namely, commonality-uniqueness of information, convergence-diversity of ideas, accentuation-attenuation of cognitive processes, and belongingness-distinctiveness of members. A combination of contributions framework provides an additional conceptualization of information processing in groups. The authors also address implications, caveats, and questions for future research and theory regarding groups as information processors.
Multistage variable probability forest volume inventory. [the Defiance Unit of the Navajo Nation
NASA Technical Reports Server (NTRS)
Anderson, J. E. (Principal Investigator)
1979-01-01
An inventory scheme based on the use of computer processed LANDSAT MSS data was developed. Output from the inventory scheme provides an estimate of the standing net saw timber volume of a major timber species on a selected forested area of the Navajo Nation. Such estimates are based on the values of parameters currently used for scaled sawlog conversion to mill output. The multistage variable probability sampling appears capable of producing estimates which compare favorably with those produced using conventional techniques. In addition, the reduction in time, manpower, and overall costs lend it to numerous applications.
2008-01-01
Background Sperm morphology can be highly variable among species, but less is known about patterns of population differentiation within species. Most studies of sperm morphometric variation are done in species with internal fertilization, where sexual selection can be mediated by complex mating behavior and the environment of the female reproductive tract. Far less is known about patterns of sperm evolution in broadcast spawners, where reproductive dynamics are largely carried out at the gametic level. We investigated variation in sperm morphology of a broadcast spawner, the green sea urchin (Strongylocentrotus droebachiensis), within and among spawnings of an individual, among individuals within a population, and among populations. We also examined population-level variation between two reproductive seasons for one population. We then compared among-population quantitative genetic divergence (QST) for sperm characters to divergence at neutral microsatellite markers (FST). Results All sperm traits except total length showed strong patterns of high diversity among populations, as did overall sperm morphology quantified using multivariate analysis. We also found significant differences in almost all traits among individuals in all populations. Head length, axoneme length, and total length had high within-male repeatability across multiple spawnings. Only sperm head width had significant within-population variation across two reproductive seasons. We found signatures of directional selection on head length and head width, with strong selection possibly acting on head length between the Pacific and West Atlantic populations. We also discuss the strengths and limitations of the QST-FST comparison. Conclusion Sperm morphology in S. droebachiensis is highly variable, both among populations and among individuals within populations, and has low variation within an individual across multiple spawnings. 
Selective pressures acting among populations may differ from those acting within, with directional selection implicated in driving divergence among populations and balancing selection as a possible mechanism for producing variability among males. Sexual selection in broadcast spawners may be mediated by different processes from those acting on internal fertilizers. Selective divergence in sperm head length among populations is associated with ecological differences among populations that may play a large role in mediating sexual selection in this broadcast spawner. PMID:18851755
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
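The authors' Matlab implementation is linked above; as an illustration only, here is a loose NumPy sketch of the two ideas the abstract highlights, weighted binary matrix sampling and iterative shrinkage of the variable space. The synthetic data, least-squares sub-models, and all parameter choices are assumptions, not the VISSA code.

```python
# Sketch: inclusion weights per variable drive random binary sampling of
# sub-models; weights are updated from the best sub-models so that the
# variable space shrinks toward the informative variables.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]            # only 3 informative variables
y = X @ beta + 0.1 * rng.normal(size=n)

Xtr, Xte = X[:70], X[70:]              # holdout to score sub-models
ytr, yte = y[:70], y[70:]

def rmse(cols):
    """Holdout RMSE of a least-squares sub-model restricted to 'cols'."""
    coef, *_ = np.linalg.lstsq(Xtr[:, cols], ytr, rcond=None)
    return float(np.sqrt(np.mean((yte - Xte[:, cols] @ coef) ** 2)))

weights = np.full(p, 0.5)              # inclusion probability per variable
for _ in range(10):                    # each step shrinks the space
    # Weighted binary matrix sampling: each row is a random sub-model.
    masks = rng.random((200, p)) < weights
    masks[masks.sum(axis=1) == 0, 0] = True        # avoid empty models
    errs = np.array([rmse(np.flatnonzero(m)) for m in masks])
    best = masks[np.argsort(errs)[:20]]            # keep best sub-models
    weights = best.mean(axis=0)                    # new inclusion frequencies

selected = np.flatnonzero(weights > 0.5)
print(selected)
```

On this toy problem the inclusion weights of the three informative variables are driven toward one while the noise variables are shrunk away.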
Artificial neural networks for the performance prediction of heat pump hot water heaters
NASA Astrophysics Data System (ADS)
Mathioulakis, E.; Panaras, G.; Belessiotis, V.
2018-02-01
The rapid growth in the use of heat pumps, due to decreasing equipment costs together with the favourable economics of the consumed electrical energy, has been accompanied by the wide dissemination of air-to-water heat pumps (AWHPs) in the residential sector. The entrance of these systems into the commercial sector has made the modelling of the underlying processes important. In this work, the suitability of artificial neural networks (ANNs) for the modelling of AWHPs is investigated. The ambient air temperature at the evaporator inlet and the water temperature at the condenser inlet have been selected as the input variables; energy performance indices and quantities characterising the operation of the system have been selected as output variables. The results verify that the easy-to-implement trained ANN can represent an effective tool for predicting AWHP performance in various operating conditions and for the parametric investigation of their behaviour.
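A minimal sketch of the ANN idea with the two selected inputs mapped to a performance index. The synthetic COP relation, network size, and training settings below are illustrative assumptions, not the study's model or data.

```python
# Sketch: a small multilayer perceptron mapping (air inlet temperature,
# water inlet temperature) to a coefficient of performance (COP).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t_air = rng.uniform(-5, 35, 500)      # evaporator-inlet air temperature, degC
t_water = rng.uniform(20, 55, 500)    # condenser-inlet water temperature, degC
# Assumed toy relation: COP falls as the temperature lift grows.
cop = 6.0 + 0.05 * t_air - 0.06 * t_water + 0.02 * rng.normal(size=500)

X = np.column_stack([t_air, t_water])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # scale inputs before training

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                     random_state=0).fit(Xs, cop)
score = model.score(Xs, cop)                # R^2 on the training data
```

Once trained, the network can be queried across the operating envelope for the kind of parametric investigation the abstract describes.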
Pharmacogenetics and outcome with antipsychotic drugs.
Pouget, Jennie G; Shams, Tahireh A; Tiwari, Arun K; Müller, Daniel J
2014-12-01
Antipsychotic medications are the gold-standard treatment for schizophrenia, and are often prescribed for other mental conditions. However, the efficacy and side-effect profiles of these drugs are heterogeneous, with large interindividual variability. As a result, treatment selection remains a largely trial-and-error process, with many failed treatment regimens endured before finding a tolerable balance between symptom management and side effects. Much of the interindividual variability in response and side effects is due to genetic factors (heritability, h(2)~ 0.60-0.80). Pharmacogenetics is an emerging field that holds the potential to facilitate the selection of the best medication for a particular patient, based on his or her genetic information. In this review we discuss the most promising genetic markers of antipsychotic treatment outcomes, and present current translational research efforts that aim to bring these pharmacogenetic findings to the clinic in the near future.
A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
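A hedged sketch of the weighting idea before k-means; the exact formula used here (variance divided by squared range) and the synthetic data are assumptions for illustration, not necessarily the authors' procedure.

```python
# Sketch: variables exhibiting cluster structure concentrate mass away
# from the middle of their range, so a variance-to-range ratio upweights
# them relative to pure-noise variables before k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two clusters separated only on the first variable; the second is noise.
X = np.vstack([rng.normal([0, 0], [0.5, 3.0], size=(50, 2)),
               rng.normal([5, 0], [0.5, 3.0], size=(50, 2))])

span = X.max(axis=0) - X.min(axis=0)
weights = X.var(axis=0) / span**2       # variance-to-range ratio weights
Xw = X * np.sqrt(weights)               # weighting == rescaling features

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xw)
```

The clustered first variable receives the larger weight, steering k-means toward the structure-bearing dimension.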
A data mining approach to optimize pellets manufacturing process based on a decision tree algorithm.
Ronowicz, Joanna; Thommes, Markus; Kleinebudde, Peter; Krysiński, Jerzy
2015-06-20
The present study is focused on a thorough analysis of cause-effect relationships between pellet formulation characteristics (pellet composition as well as process parameters) and the selected quality attribute of the final product. The quality of the pellets was expressed by their shape, using the aspect ratio value. A data matrix for chemometric analysis consisted of 224 pellet formulations prepared with eight different active pharmaceutical ingredients and several various excipients, using different extrusion/spheronization process conditions. The data set contained 14 input variables (both formulation and process variables) and one output variable (pellet aspect ratio). A tree regression algorithm consistent with the Quality by Design concept was applied to obtain deeper understanding and knowledge of the formulation and process parameters affecting the final pellet sphericity. A clear, interpretable set of decision rules was generated. The spheronization speed, spheronization time, number of holes and water content of the extrudate have been recognized as the key factors influencing pellet aspect ratio. The most spherical pellets were achieved by using a large number of holes during extrusion, a high spheronizer speed and a longer spheronization time. The described data mining approach enhances knowledge about the pelletization process and simultaneously facilitates searching for the optimal process conditions which are necessary to achieve ideal spherical pellets, resulting in good flow characteristics. This data mining approach can be taken into consideration by industrial formulation scientists to support rational decision making in the field of pellet technology. Copyright © 2015 Elsevier B.V. All rights reserved.
A survey of variable selection methods in two Chinese epidemiology journals
2010-01-01
Background Although much has been written on developing better procedures for variable selection, there is little research on how it is practiced in actual studies. This review surveys the variable selection methods reported in two high-ranking Chinese epidemiology journals. Methods Articles published in 2004, 2006, and 2008 in the Chinese Journal of Epidemiology and the Chinese Journal of Preventive Medicine were reviewed. Five categories of methods were identified whereby variables were selected using: A - bivariate analyses; B - multivariable analysis; e.g. stepwise or individual significance testing of model coefficients; C - first bivariate analyses, followed by multivariable analysis; D - bivariate analyses or multivariable analysis; and E - other criteria like prior knowledge or personal judgment. Results Among the 287 articles that reported using variable selection methods, 6%, 26%, 30%, 21%, and 17% were in categories A through E, respectively. One hundred sixty-three studies selected variables using bivariate analyses, 80% (130/163) via multiple significance testing at the 5% alpha-level. Of the 219 multivariable analyses, 97 (44%) used stepwise procedures, 89 (41%) tested individual regression coefficients, but 33 (15%) did not mention how variables were selected. Sixty percent (58/97) of the stepwise routines also did not specify the algorithm and/or significance levels. Conclusions The variable selection methods reported in the two journals were limited in variety, and details were often missing. Many studies still relied on problematic techniques like stepwise procedures and/or multiple testing of bivariate associations at the 0.05 alpha-level. These deficiencies should be rectified to safeguard the scientific validity of articles published in Chinese epidemiology journals. PMID:20920252
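To make concrete the stepwise routines the survey found (and often found underreported), here is a sketch of forward selection with a 5% alpha entry criterion. The data, the OLS helper, and the threshold are illustrative assumptions; the surveyed articles used various software and settings.

```python
# Sketch: forward stepwise selection that, at each step, adds the
# candidate variable whose coefficient has the smallest p-value below
# the 0.05 entry threshold, stopping when none qualifies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)  # 2 true predictors

def fit_pvalues(cols):
    """OLS with intercept; return per-coefficient two-sided p-values."""
    Xd = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ coef
    df = n - Xd.shape[1]
    s2 = resid @ resid / df
    cov = s2 * np.linalg.inv(Xd.T @ Xd)
    t = coef / np.sqrt(np.diag(cov))
    return 2 * stats.t.sf(np.abs(t), df)

selected = []
while True:
    candidates = [j for j in range(p) if j not in selected]
    best, best_p = None, 0.05          # 5% alpha entry criterion
    for j in candidates:
        pv = fit_pvalues(selected + [j])
        if pv[-1] < best_p:            # p-value of the newly added term
            best, best_p = j, pv[-1]
    if best is None:
        break
    selected.append(best)
print(sorted(selected))
```

The repeated significance testing at each step is exactly the multiple-testing concern the review raises about such procedures.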
Additive Manufacturing in Production: A Study Case Applying Technical Requirements
NASA Astrophysics Data System (ADS)
Ituarte, Iñigo Flores; Coatanea, Eric; Salmi, Mika; Tuomi, Jukka; Partanen, Jouni
Additive manufacturing (AM) is expanding manufacturing capabilities. However, the quality of AM-produced parts depends on a number of machine, geometry and process parameters. The variability of these parameters affects manufacturing drastically, and therefore standardized processes and harmonized methodologies need to be developed to characterize the technology for end-use applications and enable it for manufacturing. This research proposes a composite methodology integrating Taguchi Design of Experiments, multi-objective optimization and statistical process control to optimize the manufacturing process and fulfil multiple requirements imposed on an arbitrary geometry. The proposed methodology aims to characterize AM technology depending upon manufacturing process variables as well as to perform a comparative assessment of three AM technologies (Selective Laser Sintering, Laser Stereolithography and Polyjet). Results indicate that only one machine, laser-based Stereolithography, was able to fulfil macro- and micro-level geometrical requirements simultaneously, but mechanical properties were not at the required level. Future research will study a single AM system at a time to characterize AM machine technical capabilities and stimulate pre-normative initiatives of the technology for end-use applications.
Gabriel, Alonzo A; Cayabyab, Jochelle Elysse C; Tan, Athalie Kaye L; Corook, Mark Lester F; Ables, Errol John O; Tiangson-Bayaga, Cecile Leah P
2015-06-15
A predictive response surface model for the influences of product (soluble solids and titratable acidity) and process (temperature and heating time) parameters on the degradation of ascorbic acid (AA) in heated simulated fruit juices (SFJs) was established. Physicochemical property ranges of freshly squeezed and processed juices, and previously established decimal reduction times of Escherichia coli O157:H7 at different heating temperatures, were used in establishing a Central Composite Design of Experiments that determined the combinations of product and process variables used in the model building. Only the individual linear effects of temperature and heating time significantly (P<0.05) affected AA reduction (%AAr). Validating systems either over- or underestimated actual %AAr, with bias factors of 0.80-1.20. However, all validating systems still resulted in acceptable predictive efficacy, with accuracy factors of 1.00-1.26. The model may be useful in establishing unique process schedules for specific products, for the simultaneous control and improvement of food safety and quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona
2012-01-01
Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory documents, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore, Guidance for Product Quality Review (2008), and International Organization for Standardization ISO-9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessments, normal probability distributions, control charts, and capability charts are employed for selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of production processes, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, forecasting of critical quality attributes, sigma process capability, and process stability were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles for the achievement of six sigma-capable processes is possible. Statistical process control is the most advantageous tool for determining the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment enables selection of critical quality attributes among the quality control parameters.
Sequential application of normal probability distributions, control charts, and capability analyses provides a valid statistical process control study of the process. Interpretation of such a study provides information about stability, process variability, changing trends, and quantification of process ability against defective production. Comparative evaluation of critical quality attributes by Pareto charts identifies the least capable and most variable process, which is liable for improvement. Statistical process control thus proves to be an important tool for six sigma-capable process development and continuous quality improvement.
Van Schuerbeek, Peter; Baeken, Chris; De Mey, Johan
2016-01-01
Concerns are rising about the large variability in reported correlations between gray matter morphology and affective personality traits such as 'Harm Avoidance' (HA). A recent review study (Mincic 2015) stipulated that this variability could stem from methodological differences between studies. In order to achieve more robust results by standardizing the data processing procedure, as a first step, we repeatedly analyzed data from healthy females while changing the processing settings (voxel-based morphometry (VBM) or region-of-interest (ROI) labeling, smoothing filter width, nuisance parameters included in the regression model, brain atlas, and multiple comparisons correction method). The heterogeneity in the obtained results clearly illustrates the dependency of the study outcome on the chosen analysis settings. Based on our results and the existing literature, we recommend the use of VBM over ROI labeling for whole brain analyses, with a small or intermediate smoothing filter (5-8 mm) and a model variable selection step included in the processing procedure. Additionally, it is recommended that ROI labeling should only be used in combination with a clear hypothesis, and authors are encouraged to report their results uncorrected for multiple comparisons as supplementary material to aid review studies. PMID:27096608
NASA Astrophysics Data System (ADS)
Kim, S.; Seo, D. J.
2017-12-01
When water temperature (TW) increases due to changes in hydrometeorological conditions, the overall ecological conditions change in the aquatic system. The changes can be harmful to human health and potentially fatal to fish habitat. Therefore, it is important to assess the impacts of thermal disturbances on in-stream processes of water quality variables and be able to predict the effectiveness of possible actions that may be taken for water quality protection. For skillful prediction of in-stream water quality processes, it is necessary for the watershed water quality models to be able to reflect such changes. Most of the currently available models, however, assume static parameters for the biophysiochemical processes and hence are not able to capture nonstationarities seen in water quality observations. In this work, we assess the performance of the Hydrological Simulation Program-Fortran (HSPF) in predicting algal dynamics following TW increase. The study area is located in the Republic of Korea where waterway change due to weir construction and drought concurrently occurred around 2012. In this work we use data assimilation (DA) techniques to update model parameters as well as the initial condition of selected state variables for in-stream processes relevant to algal growth. For assessment of model performance and characterization of temporal variability, various goodness-of-fit measures and wavelet analysis are used.
NASA Technical Reports Server (NTRS)
Kimes, Daniel S.; Nelson, Ross F.
1998-01-01
A number of satellite sensor systems will collect large data sets of the Earth's surface during NASA's Earth Observing System (EOS) era. Efforts are being made to develop efficient algorithms that can incorporate a wide variety of spectral data and ancillary data in order to extract vegetation variables required for global and regional studies of ecosystem processes, biosphere-atmosphere interactions, and carbon dynamics. These variables are, for the most part, continuous (e.g. biomass, leaf area index, fraction of vegetation cover, vegetation height, vegetation age, spectral albedo, absorbed photosynthetic active radiation, photosynthetic efficiency, etc.) and estimates may be made using remotely sensed data (e.g. nadir and directional optical wavelengths, multifrequency radar backscatter) and any other readily available ancillary data (e.g., topography, sun angle, ground data, etc.). Using these types of data, neural networks can: 1) provide accurate initial models for extracting vegetation variables when an adequate amount of data is available; 2) provide a performance standard for evaluating existing physically-based models; 3) invert multivariate, physically based models; 4) in a variable selection process, identify those independent variables which best infer the vegetation variable(s) of interest; and 5) incorporate new data sources that would be difficult or impossible to use with conventional techniques. In addition, neural networks employ a more powerful and adaptive nonlinear equation form as compared to traditional linear, index transformations, and simple nonlinear analyses. These neural networks attributes are discussed in the context of the authors' investigations of extracting vegetation variables of ecological interest.
Fayed, Mohamed H; Abdel-Rahman, Sayed I; Alanazi, Fars K; Ahmed, Mahrous O; Tawfeek, Hesham M; Al-Shedfat, Ramadan I
2017-01-01
Application of quality by design (QbD) to the high shear granulation process is critical and requires recognizing the correlation between the granulation process parameters and the properties of the intermediate (granules) and the corresponding final product (tablets). The present work examined the influence of water amount (X1) and wet massing time (X2) as independent process variables on the critical quality attributes of granules and corresponding tablets using the design of experiments (DoE) technique. A two-factor, three-level (3²) full factorial design was performed; each of these variables was investigated at three levels to characterize their strength and interaction. The dried granules were analyzed for their size distribution, density and flow pattern. Additionally, the produced tablets were investigated for weight uniformity, crushing strength, friability and percent capping, disintegration time and drug dissolution. A statistically significant impact (p < 0.05) of water amount was identified for granule growth, percent fines, distribution width and flow behavior. Granule density and compressibility were found to be significantly influenced (p < 0.05) by the two operating conditions. Water amount also had a significant effect (p < 0.05) on tablet weight uniformity, friability and percent capping. Moreover, tablet disintegration time and drug dissolution appeared to be significantly influenced (p < 0.05) by the two process variables. The relationship of the process parameters with the critical quality attributes of the granules and the final tablet product was identified and correlated. Ultimately, a judicious selection of process parameters in the high shear granulation process will allow providing a product of desirable quality.
Sharp, T G
1984-02-01
The study was designed to determine whether any one of seven selected variables, or a combination of them, is predictive of performance on the State Board Test Pool Examination. The selected variables studied were: high school grade point average (HSGPA); The University of Tennessee, Knoxville, College of Nursing grade point average (GPA); and American College Test Assessment (ACT) standard scores (English, ENG; mathematics, MA; social studies, SS; natural sciences, NSC; composite, COMP). Data utilized were from graduates of the baccalaureate program of The University of Tennessee, Knoxville, College of Nursing from 1974 through 1979. The sample of 322 was selected from a total population of 572. The Statistical Analysis System (SAS) was used to analyze the predictive relationship of each of the seven selected variables to State Board Test Pool Examination performance (pass or fail); a stepwise discriminant analysis was used to determine the predictive relationship of the strongest combination of the independent variables to overall performance (pass or fail); and stepwise multiple regression analysis was used to determine the strongest predictive combination of selected variables for each of the five subexams of the State Board Test Pool Examination. Each of the selected variables was found to be predictive of SBTPE performance (pass or fail). The strongest combination for predicting SBTPE performance was found to be GPA, MA, and NSC.
NASA Astrophysics Data System (ADS)
Di Benedetto, Francesco; D'Acapito, Francesco; Capacci, Fabio; Fornaciai, Gabriele; Innocenti, Massimo; Montegrossi, Giordano; Oberhauser, Werner; Pardi, Luca A.; Romanelli, Maurizio
2014-03-01
We investigated the speciation of Fe in bulk and in suspended respirable quartz dusts coming from ceramic and iron-casting industrial processes via X-ray absorption spectroscopy, with the aim of contributing to a better understanding of the variability of crystalline silica toxicity. Four different bulk industrial quartz powders, nominally pure quartz samples with Fe contents below 200 ppm, and three respirable dusts filters were selected. Fe speciation was determined in all samples through a coupled study of the X-ray absorption near-edge structure and extended X-ray absorption fine structure regions, operating at the Fe-K edge. Fe speciation revealed common features at the beginning of the different production processes, whereas significant differences were observed on both respirable dusts and bulk dusts exiting from the production process. Namely, a common pollution of the raw quartz dusts by elemental Fe was evidenced and attributed to residuals of the industrial production of quartz materials. Moreover, the respirable samples indicated that reactivity occurs after the suspension of the powders in air. The gravitational selection during the particle suspension consistently allowed us to clearly discriminate between suspended and bulk dusts. On the basis of the obtained results, we provide an apparent spectroscopic discrimination between the raw materials used in the considered industrial processes, and those that are effectively inhaled by workers. In particular, an amorphous Fe(III) oxide, with an unsaturated coordination sphere, can be related to silica reactivity (and health consequences).
Disruptive chemicals, senescence and immortality.
Carnero, Amancio; Blanco-Aparicio, Carmen; Kondoh, Hiroshi; Lleonart, Matilde E; Martinez-Leal, Juan Fernando; Mondello, Chiara; Scovassi, A Ivana; Bisson, William H; Amedei, Amedeo; Roy, Rabindra; Woodrick, Jordan; Colacci, Annamaria; Vaccari, Monica; Raju, Jayadev; Al-Mulla, Fahd; Al-Temaimi, Rabeah; Salem, Hosni K; Memeo, Lorenzo; Forte, Stefano; Singh, Neetu; Hamid, Roslida A; Ryan, Elizabeth P; Brown, Dustin G; Wise, John Pierce; Wise, Sandra S; Yasaei, Hemad
2015-06-01
Carcinogenesis is thought to be a multistep process, with clonal evolution playing a central role in the process. Clonal evolution involves the repeated 'selection and succession' of rare variant cells that acquire a growth advantage over the remaining cell population through the acquisition of 'driver mutations' enabling a selective advantage in a particular micro-environment. Clonal selection is the driving force behind tumorigenesis and possesses three basic requirements: (i) effective competitive proliferation of the variant clone when compared with its neighboring cells, (ii) acquisition of an indefinite capacity for self-renewal, and (iii) establishment of sufficiently high levels of genetic and epigenetic variability to permit the emergence of rare variants. However, several questions regarding the process of clonal evolution remain. Which cellular processes initiate carcinogenesis in the first place? To what extent are environmental carcinogens responsible for the initiation of clonal evolution? What are the roles of genotoxic and non-genotoxic carcinogens in carcinogenesis? What are the underlying mechanisms responsible for chemical carcinogen-induced cellular immortality? Here, we explore the possible mechanisms of cellular immortalization, the contribution of immortalization to tumorigenesis and the mechanisms by which chemical carcinogens may contribute to these processes. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Rapid high-throughput cloning and stable expression of antibodies in HEK293 cells.
Spidel, Jared L; Vaessen, Benjamin; Chan, Yin Yin; Grasso, Luigi; Kline, J Bradford
2016-12-01
Single-cell based amplification of immunoglobulin variable regions is a rapid and powerful technique for cloning antigen-specific monoclonal antibodies (mAbs) for purposes ranging from general laboratory reagents to therapeutic drugs. From the initial screening process involving small quantities of hundreds or thousands of mAbs through in vitro characterization and subsequent in vivo experiments requiring large quantities of only a few, having a robust system for generating mAbs from cloning through stable cell line generation is essential. A protocol was developed to decrease the time, cost, and effort required by traditional cloning and expression methods by eliminating bottlenecks in these processes. Removing the clonal selection steps from the cloning process using a highly efficient ligation-independent protocol and from the stable cell line process by utilizing bicistronic plasmids to generate stable semi-clonal cell pools facilitated an increased throughput of the entire process from plasmid assembly through transient transfections and selection of stable semi-clonal cell pools. Furthermore, the time required by a single individual to clone, express, and select stable cell pools in a high-throughput format was reduced from 4 to 6 months to only 4 to 6 weeks. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Theory of Laser-Stimulated Surface Processes.
1983-05-01
variables via the Fourier expansion of ΔW. The correlation function can be written in terms of specific collective properties of the phonons (Eq. (2.338)). Using the selection rule, Eq. (2.334), along with Eq. (2.323), this can be
ERIC Educational Resources Information Center
Adodo, S. O.
2012-01-01
The use of computer technologies has come to stay; any individual, group, or society that has yet to recognize this fact is merely existing. The introduction of Information and Communication Technology (ICT) into the education industry has transformed the instructional process. The study investigated the in-service teachers…
Apparatus and method for microwave processing of materials using field-perturbing tool
Tucker, Denise A.; Fathi, Zakaryae; Lauf, Robert J.
2001-01-01
A variable frequency microwave heating apparatus designed to allow modulation of the frequency of the microwaves introduced into a multi-mode microwave cavity for heating or other selected applications. A field-perturbing tool is disposed within the cavity to perturb the microwave power distribution in order to apply a desired level of microwave power to the workpiece.
ERIC Educational Resources Information Center
Johnson, Susan Michele
2014-01-01
The purpose of this study was to investigate selected variables among community college transfer students with or without associate's degrees and native students at a 4-year university to determine the impact of the articulation and transfer process on baccalaureate attainment. More specifically, the study examined the differences in demographic…
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated into an integrated research dataset based on the ordering of the data. The proposed time-series forecasting model has three main steps. First, this study uses five imputation methods to handle missing values rather than directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which was compared with the other listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
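The three-step pipeline described above (imputation, variable selection, Random Forest forecasting) can be sketched with scikit-learn. The synthetic stand-in data, the median-imputation choice, and the top-3 importance cutoff below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)

# Synthetic stand-in for the reservoir dataset: 200 days, 6 atmospheric
# variables with ~5% missing entries; water level depends on the first two.
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)
X[rng.random(X.shape) < 0.05] = np.nan

# Step 1: impute the missing values (the paper compares five imputation
# methods; median imputation is used here purely as a placeholder).
X_imp = SimpleImputer(strategy="median").fit_transform(X)

# Step 2: rank variables and drop the least important ones.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_imp, y)
keep = np.argsort(rf.feature_importances_)[::-1][:3]  # keep top 3

# Step 3: refit the forecasting model on the selected variables only.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_imp[:, keep], y)
score = model.score(X_imp[:, keep], y)
```

In this toy setup, the two truly informative variables survive the selection step and the reduced model fits well in-sample.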
Screening of the aerodynamic and biophysical properties of barley malt
NASA Astrophysics Data System (ADS)
Ghodsvali, Alireza; Farzaneh, Vahid; Bakhshabadi, Hamid; Zare, Zahra; Karami, Zahra; Mokhtarian, Mohsen; Carvalho, Isabel. S.
2016-10-01
An understanding of the aerodynamic and biophysical properties of barley malt is necessary for the appropriate design of equipment for the handling, shipping, dehydration, grading, sorting and warehousing of this strategic crop. Malting is a complex biotechnological process that includes steeping, germination and, finally, the dehydration of cereal grains under controlled temperature and humidity conditions. In this investigation, the biophysical properties of barley malt were predicted using two models: artificial neural networks and response surface methodology. Steeping time and germination time were selected as the independent variables, and 1000-kernel weight, kernel density and terminal velocity were selected as the dependent variables (responses). The obtained outcomes showed that the artificial neural network model, with a logarithmic sigmoid activation function, presents more precise results than the response surface model in the prediction of the aerodynamic and biophysical properties of the produced barley malt. This model presented the best result with 8 nodes in the hidden layer, and significant correlation coefficient values of 0.783, 0.767 and 0.991 were obtained for the responses 1000-kernel weight, kernel density, and terminal velocity, respectively. The outcomes indicated that this novel technique could be successfully applied in quantitative and qualitative monitoring within the malting process.
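The reported best architecture (one hidden layer of 8 nodes with a log-sigmoid activation) can be approximated with scikit-learn as follows; the input ranges and the linear response are invented for illustration and do not come from the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical stand-in data: steeping time and germination time in hours
# (inputs) versus terminal velocity (one of the three responses).
X = rng.uniform([24, 72], [72, 168], size=(60, 2))
y = 0.02 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(scale=0.05, size=60)

Xs = StandardScaler().fit_transform(X)

# One hidden layer of 8 nodes with a logistic (log-sigmoid) activation,
# mirroring the architecture the abstract reports as best.
net = MLPRegressor(hidden_layer_sizes=(8,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=1)
net.fit(Xs, y)
r = np.corrcoef(net.predict(Xs), y)[0, 1]  # correlation coefficient, as reported
```

The correlation coefficient computed here plays the role of the 0.783-0.991 values quoted in the abstract.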
Ormes, James D; Zhang, Dan; Chen, Alex M; Hou, Shirley; Krueger, Davida; Nelson, Todd; Templeton, Allen
2013-02-01
There has been a growing interest in amorphous solid dispersions for bioavailability enhancement in drug discovery. Spray drying, as shown in this study, is well suited to produce prototype amorphous dispersions in the Candidate Selection stage where drug supply is limited. This investigation mapped the processing window of a micro-spray dryer to achieve desired particle characteristics and optimize throughput/yield. Effects of processing variables on the properties of hypromellose acetate succinate were evaluated by a fractional factorial design of experiments. Parameters studied include solid loading, atomization, nozzle size, and spray rate. Response variables include particle size, morphology and yield. Unlike most other commercial small-scale spray dryers, the ProCepT was capable of producing particles over a relatively wide range of mean particle sizes, ca. 2-35 µm, allowing material properties to be tailored to support various applications. In addition, an optimized throughput of 35 g/hour with a yield of 75-95% was achieved, which is sufficient to support studies from lead identification/lead optimization to early safety studies. A regression model was constructed to quantify the relationship between processing parameters and the response variables. The response surface curves provide a useful tool to design processing conditions, leading to a reduction in development time and drug usage to support drug discovery.
Fink, Herbert; Panne, Ulrich; Niessner, Reinhard
2002-09-01
An experimental setup for direct elemental analysis of recycled thermoplasts from consumer electronics by laser-induced plasma spectroscopy (LIPS, or laser-induced breakdown spectroscopy, LIBS) was realized. The combination of a echelle spectrograph, featuring a high resolution with a broad spectral coverage, with multivariate methods, such as PLS, PCR, and variable subset selection via a genetic algorithm, resulted in considerable improvements in selectivity and sensitivity for this complex matrix. With a normalization to carbon as internal standard, the limits of detection were in the ppm range. A preliminary pattern recognition study points to the possibility of polymer recognition via the line-rich echelle spectra. Several experiments at an extruder within a recycling plant demonstrated successfully the capability of LIPS for different kinds of routine on-line process analysis.
Evaluation of variable selection methods for random forests and omics data sets.
Degenhardt, Frauke; Seifert, Stephan; Szymczak, Silke
2017-10-16
Machine learning methods and in particular random forests are promising approaches for prediction based on high dimensional omics data sets. They provide variable importance measures to rank predictors according to their predictive power. If building a prediction model is the main goal of a study, often a minimal set of variables with good prediction performance is selected. However, if the objective is the identification of involved variables to find active networks and pathways, approaches that aim to select all relevant variables should be preferred. We evaluated several variable selection procedures based on simulated data as well as publicly available experimental methylation and gene expression data. Our comparison included the Boruta algorithm, the Vita method, recurrent relative variable importance, a permutation approach and its parametric variant (Altmann) as well as recursive feature elimination (RFE). In our simulation studies, Boruta was the most powerful approach, followed closely by the Vita method. Both approaches demonstrated similar stability in variable selection, while Vita was the most robust approach under a pure null model without any predictor variables related to the outcome. In the analysis of the different experimental data sets, Vita demonstrated slightly better stability in variable selection and was less computationally intensive than Boruta. In conclusion, we recommend the Boruta and Vita approaches for the analysis of high-dimensional data sets. Vita is considerably faster than Boruta and thus more suitable for large data sets, but only Boruta can also be applied in low-dimensional settings. © The Author 2017. Published by Oxford University Press.
ERIC Educational Resources Information Center
Derry, Julie A.; Phillips, D. Allen
2004-01-01
The purpose of this study was to investigate selected student and teacher variables and compare the differences between these variables for female students and female teachers in coeducation and single-sex physical education classes. Eighteen female teachers and intact classes were selected; 9 teachers from coeducation and 9 teachers from…
Terra, Luciana A; Filgueiras, Paulo R; Tose, Lílian V; Romão, Wanderson; de Souza, Douglas D; de Castro, Eustáquio V R; de Oliveira, Mirela S L; Dias, Júlio C M; Poppi, Ronei J
2014-10-07
Negative-ion mode electrospray ionization, ESI(-), with Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) was coupled to a Partial Least Squares (PLS) regression and variable selection methods to estimate the total acid number (TAN) of Brazilian crude oil samples. Generally, ESI(-)-FT-ICR mass spectra offer a resolving power of ca. 500,000 and a mass accuracy of less than 1 ppm, producing a data matrix containing over 5700 variables per sample. These variables correspond to heteroatom-containing species detected as deprotonated molecules, [M - H](-) ions, which are identified primarily as naphthenic acids, phenols and carbazole analog species. The TAN values for all samples ranged from 0.06 to 3.61 mg of KOH g(-1). To facilitate the spectral interpretation, three methods of variable selection were studied: variable importance in the projection (VIP), interval partial least squares (iPLS) and elimination of uninformative variables (UVE). The UVE method seems to be more appropriate for selecting important variables, reducing the dimension of the variables to 183 and producing a root mean square error of prediction of 0.32 mg of KOH g(-1). By reducing the size of the data, it was possible to relate the selected variables with their corresponding molecular formulas, thus identifying the main chemical species responsible for the TAN values.
Osgood, D Wayne; Feinberg, Mark E; Ragan, Daniel T
2015-08-01
Seeking to reduce problematic peer influence is a prominent theme of programs to prevent adolescent problem behavior. To support the refinement of this aspect of prevention programming, we examined peer influence and selection processes for three problem behaviors (delinquency, alcohol use, and smoking). We assessed not only the overall strengths of these peer processes, but also their consistency versus variability across settings. We used dynamic stochastic actor-based models to analyze five waves of friendship network data across sixth through ninth grades for a large sample of U.S. adolescents. Our sample included two successive grade cohorts of youth in 26 school districts participating in the PROSPER study, yielding 51 longitudinal social networks based on respondents' friendship nominations. For all three self-reported antisocial behaviors, we found evidence of both peer influence and selection processes tied to antisocial behavior. There was little reliable variance in these processes across the networks, suggesting that the statistical imprecision of the peer influence and selection estimates in previous studies likely accounts for inconsistencies in results. Adolescent friendship networks play a strong role in shaping problem behavior, but problem behaviors also inform friendship choices. In addition to preferring friends with similar levels of problem behavior, adolescents tend to choose friends who engage in problem behaviors, thus creating broader diffusion.
A Selective Overview of Variable Selection in High Dimensional Feature Space
Fan, Jianqing
2010-01-01
High dimensional statistical problems arise from diverse fields of scientific research and technological development. Variable selection plays a pivotal role in contemporary statistical learning and scientific discoveries. The traditional idea of best subset selection methods, which can be regarded as a specific form of penalized likelihood, is computationally too expensive for many modern statistical applications. Other forms of penalized likelihood methods have been successfully developed over the last decade to cope with high dimensionality. They have been widely applied for simultaneously selecting important variables and estimating their effects in high dimensional statistical inference. In this article, we present a brief account of the recent developments of theory, methods, and implementations for high dimensional variable selection. What limits of the dimensionality such methods can handle, what the role of penalty functions is, and what the statistical properties are rapidly drive the advances of the field. The properties of non-concave penalized likelihood and its roles in high dimensional statistical modeling are emphasized. We also review some recent advances in ultra-high dimensional variable selection, with emphasis on independence screening and two-scale methods. PMID:21572976
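The canonical example of a penalized-likelihood method in this setting is the LASSO, an L1 penalty on a least-squares objective that selects variables and estimates their effects simultaneously. A minimal sketch in a p > n regime, with invented data:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)

# p > n setting: 50 observations, 200 candidate predictors, 3 truly active.
n, p = 50, 200
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + rng.normal(scale=0.5, size=n)

# The L1 penalty zeroes out most coefficients:
#   minimize ||y - Xb||^2 / (2n) + alpha * ||b||_1
lasso = Lasso(alpha=0.2).fit(X, y)
active = np.flatnonzero(lasso.coef_)  # indices of selected variables
```

With a moderate penalty, the three true predictors are recovered and the vast majority of the 200 candidates are set exactly to zero.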
Contrasting mode of evolution at a coat color locus in wild and domestic pigs.
Fang, Meiying; Larson, Greger; Ribeiro, Helena Soares; Li, Ning; Andersson, Leif
2009-01-01
Despite having only begun approximately 10,000 years ago, the process of domestication has resulted in a degree of phenotypic variation within individual species normally associated with much deeper evolutionary time scales. Though many variable traits found in domestic animals are the result of relatively recent human-mediated selection, uncertainty remains as to whether the modern ubiquity of long-standing variable traits such as coat color results from selection or drift, and whether the underlying alleles were present in the wild ancestor or appeared after domestication began. Here, through an investigation of sequence diversity at the porcine melanocortin receptor 1 (MC1R) locus, we provide evidence that wild and domestic pig (Sus scrofa) haplotypes from China and Europe are the result of strikingly different selection pressures, and that coat color variation is the result of intentional selection for alleles that appeared after the advent of domestication. Asian and European wild boar (evolutionarily distinct subspecies) differed only by synonymous substitutions, demonstrating that camouflage coat color is maintained by purifying selection. In domestic pigs, however, each of nine unique mutations altered the amino acid sequence thus generating coat color diversity. Most domestic MC1R alleles differed by more than one mutation from the wild-type, implying a long history of strong positive selection for coat color variants, during which time humans have cherry-picked rare mutations that would be quickly eliminated in wild contexts. This pattern demonstrates that coat color phenotypes result from direct human selection and not via a simple relaxation of natural selective pressures.
García-Fernández, Alfredo; Iriondo, Jose M; Escudero, Adrián; Aguilar, Javier Fuertes; Feliner, Gonzalo Nieto
2013-08-01
Mountain plants are among the species most vulnerable to global warming, because of their isolation, narrow geographic distribution, and limited geographic range shifts. Stochastic and selective processes can act on the genome, modulating genetic structure and diversity. Fragmentation and historical processes also have a great influence on current genetic patterns, but the spatial and temporal contexts of these processes are poorly known. We aimed to evaluate the microevolutionary processes that may have taken place in Mediterranean high-mountain plants in response to changing historical environmental conditions. Genetic structure, diversity, and loci under selection were analyzed using AFLP markers in 17 populations distributed over the whole geographic range of Armeria caespitosa, an endemic plant that inhabits isolated mountains (Sierra de Guadarrama, Spain). Differences in altitude, geographic location, and climate conditions were considered in the analyses, because they may play an important role in selective and stochastic processes. Bayesian clustering approaches identified nine genetic groups, although some discrepancies in assignment were found between alternative analyses. Spatially explicit analyses showed a weak relationship between genetic parameters and spatial or environmental distances. However, a large proportion of outlier loci were detected, and some outliers were related to environmental variables. A. caespitosa populations exhibit spatial patterns of genetic structure that cannot be explained by the isolation-by-distance model. Shifts along the altitude gradient in response to Pleistocene climatic oscillations and environmentally mediated selective forces might explain the resulting structure and genetic diversity values found.
Ruperto, Nicolino; Pistorio, Angela; Ravelli, Angelo; Rider, Lisa G.; Pilkington, Clarissa; Oliveira, Sheila; Wulffraat, Nico; Espada, Graciela; Garay, Stella; Cuttica, Ruben; Hofer, Michael; Quartier, Pierre; Melo-Gomes, Jose; Reed, Ann M.; Wierzbowska, Malgorzata; Feldman, Brian M.; Harjacek, Miroslav; Huppertz, Hans-Iko; Nielsen, Susan; Flato, Berit; Lahdenne, Pekka; Michels, Harmut; Murray, Kevin J.; Punaro, Lynn; Rennebohm, Robert; Russo, Ricardo; Balogh, Zsolt; Rooney, Madeleine; Pachman, Lauren M.; Wallace, Carol; Hashkes, Philip; Lovell, Daniel J.; Giannini, Edward H.; Martini, Alberto
2010-01-01
Objective To develop a provisional definition for the evaluation of response to therapy in juvenile dermatomyositis (JDM) based on the PRINTO JDM core set of variables. Methods Thirty-seven experienced pediatric rheumatologists from 27 countries achieved consensus on 128 difficult patient profiles as clinically improved or not improved using a stepwise approach (patient rating, statistical analysis, definition selection). Using the physicians' consensus ratings as the “gold-standard measure”, chi-square, sensitivity, specificity, false positive and negative rates, area under the ROC curve, and kappa agreement for candidate definitions of improvement were calculated. Definitions with kappa >0.8 were multiplied by the face validity score to select the top definitions. Results The top definition of improvement was: at least 20% improvement from baseline in 3 of 6 core set variables, with no more than 1 of the remaining variables worsening by more than 30%, which cannot be muscle strength. The second highest scoring definition was: at least 20% improvement from baseline in 3 of 6 core set variables, with no more than 2 of the remaining variables worsening by more than 25%, which cannot be muscle strength; this is definition P1 selected by the IMACS group. The third is similar to the second, with the maximum amount of worsening set to 30%. This indicates convergent validity of the process. Conclusion We propose a provisional data-driven definition of improvement that reflects well the consensus rating of experienced clinicians and incorporates clinically meaningful change in core set variables in a composite endpoint for the evaluation of global response to therapy in JDM. PMID:20583105
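The top definition is mechanical enough to express as a small decision rule. The function below is a hypothetical illustration only: it assumes a lower-is-better convention for all six core set variables, which is a simplification of the real core set (where directions are mixed).

```python
# Hypothetical helper implementing the top PRINTO definition: improved if
# >= 20% improvement from baseline in at least 3 of 6 core set variables,
# with no more than 1 of the remaining variables worsening by more than
# 30%, and that worsening variable must not be muscle strength.
def improved(baseline, followup, muscle_strength_index=0):
    # Illustrative convention: lower values = better for every variable.
    pct_change = [(f - b) / b * 100.0 for b, f in zip(baseline, followup)]
    n_improved = sum(1 for c in pct_change if c <= -20.0)
    worsened = [i for i, c in enumerate(pct_change) if c > 30.0]
    if n_improved < 3:
        return False                      # too few variables improved
    if len(worsened) > 1:
        return False                      # more than one variable worsened
    if muscle_strength_index in worsened:
        return False                      # muscle strength may not worsen
    return True
```

For example, a patient with three variables improved by 30% and one unrelated variable worsened by 40% still counts as improved, while the same worsening in muscle strength does not.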
Möltgen, C-V; Puchert, T; Menezes, J C; Lochmann, D; Reich, G
2012-04-15
Film coating of tablets is a multivariate pharmaceutical unit operation. In this study an innovative in-line Fourier-Transform Near-Infrared Spectroscopy (FT-NIRS) application is described which enables real-time monitoring of a full industrial scale pan coating process of heart-shaped tablets. The tablets were coated with a thin hydroxypropyl methylcellulose (HPMC) film of up to approx. 28 μm on the tablet face as determined by SEM, corresponding to a weight gain of 2.26%. For a better understanding of the aqueous coating process the NIR probe was positioned inside the rotating tablet bed. Five full scale experimental runs have been performed to evaluate the impact of process variables such as pan rotation, exhaust air temperature, spray rate and pan load and elaborate robust and selective quantitative calibration models for the real-time determination of both coating growth and tablet moisture content. Principal Component (PC) score plots allowed each coating step, namely preheating, spraying and drying to be distinguished and the dominating factors and their spectral effects to be identified (e.g. temperature, moisture, coating growth, change of tablet bed density, and core/coat interactions). The distinct separation of HPMC coating growth and tablet moisture in different PCs enabled a real-time in-line monitoring of both attributes. A PLS calibration model based on Karl Fischer reference values allowed the tablet moisture trajectory to be determined throughout the entire coating process. A 1-latent variable iPLS weight gain calibration model with calibration samples from process stages dominated by the coating growth (i.e. ≥ 30% of the theoretically applied amount of coating) was sufficiently selective and accurate to predict the progress of the thin HPMC coating layer. 
At-line NIR Chemical Imaging (NIR-CI) in combination with PLS Discriminant Analysis (PLSDA) verified the HPMC coating growth and physical changes at the core/coat interface during the initial stages of the coating process. In addition, inter- and intra-tablet coating variability throughout the process could be assessed. These results clearly demonstrate that in-line NIRS and at-line NIR-CI can be applied as complementary PAT tools to monitor a challenging pan coating process. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Schneider, Robert; Haberl, Alexander; Rascher, Rolf
2017-06-01
The trend in the optics industry shows that it is increasingly important to be able to manufacture complex lens geometries to a high level of precision. Beyond a certain limit on the required shape accuracy of optical workpieces, processing changes from two-dimensional to point-shaped processing. It is very important that the process is as stable as possible during point-shaped processing. To ensure stability, usually only one process parameter is varied during processing; commonly this parameter is the feed rate, which corresponds to the dwell time. In the research project ArenA-FOi (Application-oriented analysis of resource-saving and energy-efficient design of industrial facilities for the optical industry), a contact-based procedure with a point-shaped zone of attack is used, and it is examined closely whether changing several process parameters during processing is meaningful. The commercially available ADAPT tool in size R20 from Satisloh AG is used. The behavior of the tool is tested under constant conditions in the MCP 250 CNC machine by OptoTech GmbH. A series of experiments should enable the TIF (tool influence function) to be determined using three variable parameters. Furthermore, the maximum error frequency that can be processed is calculated as an example for one parameter set and serves as an outlook for further investigations. The test results serve as the basis for the later removal simulation, which must be able to deal with a variable TIF. This topic has already been successfully implemented in another research project of the Institute for Precision Manufacturing and High-Frequency Technology (IPH), and thus this algorithm can be used. The next step is the useful implementation of the collected knowledge. The TIF must be selected on the basis of the measured data; it is important to know the error frequencies in order to select the optimal TIF.
Thus, it is possible to compare the simulated results with real measurement data and to carry out a revision. From this point onwards, it is possible to evaluate the potential of this approach; in the ideal case it will be researched further and later adopted in production.
Jensen, Jacob S; Egebo, Max; Meyer, Anne S
2008-05-28
Accomplishment of fast tannin measurements is receiving increased interest as tannins are important for the mouthfeel and color properties of red wines. Fourier transform mid-infrared spectroscopy allows fast measurement of different wine components, but quantification of tannins is difficult due to interferences from spectral responses of other wine components. Four different variable selection tools were investigated for the identification of the most important spectral regions which would allow quantification of tannins from the spectra using partial least-squares regression. The study included the development of a new variable selection tool, iterative backward elimination of changeable size intervals PLS. The spectral regions identified by the different variable selection methods were not identical, but all included two regions (1485-1425 and 1060-995 cm(-1)), which therefore were concluded to be particularly important for tannin quantification. The spectral regions identified from the variable selection methods were used to develop calibration models. All four variable selection methods identified regions that allowed an improved quantitative prediction of tannins (RMSEP = 69-79 mg of CE/L; r = 0.93-0.94) as compared to a calibration model developed using all variables (RMSEP = 115 mg of CE/L; r = 0.87). Only minor differences in the performance of the variable selection methods were observed.
[Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].
Vanegas, Jairo; Vásquez, Fabián
Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting through self-testing. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result can identify relevant cut-off points in data series. It is rarely used in health research, so it is proposed here as a tool for the evaluation of relevant public health indicators. For demonstration purposes, data series on the mortality of children under 5 years of age in Costa Rica were used, covering the period 1978-2008. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
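MARS models are built from pairs of hinge (ramp) basis functions, h+(x) = max(0, x - t) and h-(x) = max(0, t - x), whose knot t marks a candidate cut-off point in a data series. A toy sketch, fitting y = |x - 2| exactly with two hinges at a knot of t = 2 (the knot is chosen by hand here; a real MARS implementation searches for it):

```python
import numpy as np

def hinge_pos(x, t):
    # h+(x) = max(0, x - t): active to the right of the knot.
    return np.maximum(0.0, x - t)

def hinge_neg(x, t):
    # h-(x) = max(0, t - x): active to the left of the knot.
    return np.maximum(0.0, t - x)

x = np.linspace(0, 4, 41)
y = np.abs(x - 2)

# Least-squares fit on the two hinge basis functions plus an intercept.
B = np.column_stack([np.ones_like(x), hinge_pos(x, 2.0), hinge_neg(x, 2.0)])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
fit = B @ coef
```

Because |x - 2| is exactly the sum of the two hinges, the recovered coefficients are an intercept of 0 and slopes of 1 on each hinge.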
Factors affecting medication-order processing time.
Beaman, M A; Kotzan, J A
1982-11-01
The factors affecting medication-order processing time at one hospital were studied. The order processing time was determined by directly observing the time to process randomly selected new drug orders on all three work shifts during two one-week periods. An order could list more than one drug for an individual patient. The observer recorded the nature, location, and cost of the drugs ordered, as well as the time to process the order. The time and type of interruptions also were noted. The time to process a drug order was classified as six dependent variables: (1) total time, (2) work time, (3) check time, (4) waiting time I--time from arrival on the dumbwaiter until work was initiated, (5) waiting time II--time between completion of the work and initiation of checking, and (6) waiting time III--time after the check was completed until the order left on the dumbwaiter. The significant predictors of each of the six dependent variables were determined using stepwise multiple regression. The total time to process a prescription order was 58.33 +/- 48.72 minutes; the urgency status of the order was the only significant determinant of total time. Urgency status also significantly predicted the three waiting-time variables. Interruptions and the number of drugs on the order were significant determinants of work time and check time. Each telephone interruption increased the work time by 1.72 minutes. While the results of this study cannot be generalized to other institutions, pharmacy managers can use the method of determining factors that affect medication-order processing time to identify problem areas in their institutions.
Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.
2016-01-01
Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients, lacking abrupt changes between adjacent classes; and as having a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Dataset (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables, to evaluate and possibly refine the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index as well as quantization and topographic error metrics. Cross validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods can refine the existing classification.
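Two of the unsupervised evaluation metrics named above are available in scikit-learn; the sketch below scores candidate cluster counts on synthetic stand-in data (the blobs and cluster counts are invented, not the USGS variables).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

# Toy stand-in for the seven physiographic input variables: 300 samples
# drawn from 4 well-separated clusters in 7-dimensional space.
X, _ = make_blobs(n_samples=300, centers=4, n_features=7,
                  cluster_std=0.5, random_state=0)

scores = {}
for k in (2, 3, 4, 5, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    # Silhouette: higher is better; Davies-Bouldin: lower is better.
    scores[k] = (silhouette_score(X, labels), davies_bouldin_score(X, labels))

best_by_silhouette = max(scores, key=lambda k: scores[k][0])
best_by_db = min(scores, key=lambda k: scores[k][1])
```

On this clean toy data both indices agree on the true number of clusters; on real landscape data with expansive gradients and no abrupt class boundaries, the indices can disagree, which is part of what the paper evaluates.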
Processing and domain selection: Quantificational variability effects
Harris, Jesse A.; Clifton, Charles; Frazier, Lyn
2014-01-01
Three studies investigated how readers interpret sentences with variable quantificational domains, e.g., The army was mostly in the capital, where mostly may quantify over individuals or parts (Most of the army was in the capital) or over times (The army was in the capital most of the time). It is proposed that a general conceptual economy principle, No Extra Times (Majewski 2006, in preparation), discourages the postulation of potentially unnecessary times, and thus favors the interpretation quantifying over parts. Disambiguating an ambiguously quantified sentence to a quantification over times interpretation was rated as less natural than disambiguating it to a quantification over parts interpretation (Experiment 1). In an interpretation questionnaire, sentences with similar quantificational variability were constructed so that both interpretations of the sentence would require postulating multiple times; this resulted in the elimination of the preference for a quantification over parts interpretation, suggesting the parts preference observed in Experiment 1 is not reducible to a lexical bias of the adverb mostly (Experiment 2). An eye movement recording study showed that, in the absence of prior evidence for multiple times, readers exhibit greater difficulty when reading material that forces a quantification over times interpretation than when reading material that allows a quantification over parts interpretation (Experiment 3). These experiments contribute to understanding readers’ default assumptions about the temporal properties of sentences, which is essential for understanding the selection of a domain for adverbial quantifiers and, more generally, for understanding how situational constraints influence sentence processing. PMID:25328262
Fukuchi, Claudiane A.; Duarte, Marcos
2017-01-01
Background The goals of this study were (1) to present the set of data evaluating running biomechanics (kinematics and kinetics), including data on running habits, demographics, and levels of muscle strength and flexibility made available at Figshare (DOI: 10.6084/m9.figshare.4543435); and (2) to examine the effect of running speed on selected gait-biomechanics variables related to both running injuries and running economy. Methods The lower-extremity kinematics and kinetics data of 28 regular runners were collected using a three-dimensional (3D) motion-capture system and an instrumented treadmill while the subjects ran at 2.5 m/s, 3.5 m/s, and 4.5 m/s wearing standard neutral shoes. Results A dataset comprising raw and processed kinematics and kinetics signals pertaining to this experiment is available in various file formats. In addition, a file of metadata, including demographics, running characteristics, foot-strike patterns, and muscle strength and flexibility measurements is provided. Overall, there was an effect of running speed on most of the gait-biomechanics variables selected for this study. However, the foot-strike patterns were not affected by running speed. Discussion Several applications of this dataset can be anticipated, including testing new methods of data reduction and variable selection; for educational purposes; and answering specific research questions. This last application was exemplified in the study’s second objective. PMID:28503379
Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.
2017-01-01
Assessment of water resources at a national scale is critical for understanding their vulnerability to future change in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output were aggregated to a predefined watershed-scale geospatial fabric and also used to calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of SDC categories to the topographic and climatic variables allows for national-scale categorization of snowmelt processes.
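The normalization step behind an SDC is simple to sketch: both series are scaled by their seasonal maxima so the (SWE*, SCA*) pairs are comparable across modeling units. The daily series below are invented for illustration; they merely mimic a melt season in which SCA stays near full cover until late in the melt.

```python
import numpy as np

def snow_depletion_curve(swe, sca):
    """Normalize melt-season SWE and SCA series by their maxima so the
    (SWE*, SCA*) pairs trace a snow depletion curve for one modeling unit."""
    swe = np.asarray(swe, dtype=float)
    sca = np.asarray(sca, dtype=float)
    return swe / swe.max(), sca / sca.max()

# Hypothetical daily melt-season series: SWE declines linearly while SCA
# stays near full cover until late in the melt (a convex SDC shape).
days = np.arange(101)
swe = np.maximum(100.0 - days, 0.0)
sca = np.clip(1.2 * (swe / 100.0) ** 0.3, 0.0, 1.0)
swe_n, sca_n = snow_depletion_curve(swe, sca)
print(swe_n[0], sca_n[0], swe_n[-1])  # 1.0, 1.0 at melt onset; 0.0 at the end
```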
Implementation of continuous-variable quantum key distribution with discrete modulation
NASA Astrophysics Data System (ADS)
Hirano, Takuya; Ichikawa, Tsubasa; Matsubara, Takuto; Ono, Motoharu; Oguri, Yusuke; Namiki, Ryo; Kasai, Kenta; Matsumoto, Ryutaroh; Tsurumaru, Toyohiro
2017-06-01
We have developed a continuous-variable quantum key distribution (CV-QKD) system that employs discrete quadrature-amplitude modulation and homodyne detection of coherent states of light. We experimentally demonstrated automated secure key generation with a rate of 50 kbps when the quantum channel is a 10 km optical fibre. The CV-QKD system utilises a four-state protocol with post-selection and generates a secure key against the entangling cloner attack. We used a pulsed light source of 1550 nm wavelength with a repetition rate of 10 MHz. A commercially available balanced receiver is used to realise shot-noise-limited pulsed homodyne detection. We used a non-binary LDPC code for error correction (reverse reconciliation) and the Toeplitz matrix multiplication for privacy amplification. A graphics processing unit (GPU) card is used to accelerate the software-based post-processing.
Linear and nonlinear pattern selection in Rayleigh-Benard stability problems
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1993-01-01
A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated and transient solutions from linear perturbation theory are compared with and contrasted to the full numerical simulations.
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex disease/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, thus leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features. Thus, it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this through two separate simulation analyses, and propose a collective feature selection approach that selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection, and took the union of the variables selected by the top-performing methods, based on a user-defined percentage of variants retained from each method, forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, even in low effect size datasets and across different genetic architectures.
Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of DiscovEHR collaboration). In this study, we were able to show that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection via simulation studies. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
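The union-of-top-features idea can be sketched with off-the-shelf selectors. The three methods below (univariate F-test, mutual information, random-forest importances) are illustrative stand-ins for the parametric, non-parametric, and data mining methods evaluated in the study, and the data are synthetic, with a main effect of variable 0 and an interaction between variables 3 and 7.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

rng = np.random.default_rng(1)
n, p, top_frac = 400, 50, 0.05
X = rng.normal(size=(n, p))
# Toy outcome: a main effect of variable 0 plus an interaction of 3 and 7.
logits = 1.5 * X[:, 0] + 2.0 * X[:, 3] * X[:, 7]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

k = max(1, int(top_frac * p))                 # keep the top 5% per method

def top_k(importances, k):
    return set(np.argsort(importances)[::-1][:k])

sel_f = top_k(f_classif(X, y)[0], k)          # univariate F-test
sel_mi = top_k(mutual_info_classif(X, y, random_state=0), k)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
sel_rf = top_k(rf.feature_importances_, k)    # tree-based importances

collective = sel_f | sel_mi | sel_rf          # union of the per-method top lists
print(sorted(collective))
```

The marginal F-test reliably surfaces the main-effect variable, while the interacting pair, if it is recovered at all, tends to enter through the tree-based importances; the union keeps both routes open.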
Ettner, Randi; Ettner, Frederic; White, Tonya
2016-01-01
Purpose: Selecting a healthcare provider is often a complicated process. Many factors appear to govern the decision as to how to select the provider in the patient-provider relationship. While the possibility of changing primary care physicians or specialists exists, decisions regarding surgeons are immutable once surgery has been performed. This study is an attempt to assess the importance attached to various factors involved in selecting a surgeon to perform gender affirmation surgery (GAS). It was hypothesized that owing to the intimate nature of the surgery, the expense typically involved, the emotional meaning attached to the surgery, and other variables, decisions regarding choice of surgeon for this procedure would involve factors other than those that inform more typical healthcare provider selection or surgeon selection for other plastic/reconstructive procedures. Methods: Questionnaires were distributed to individuals who had undergone GAS and individuals who had undergone elective plastic surgery to assess decision-making. Results: The results generally confirm previous findings regarding how patients select providers. Conclusion: Choosing a surgeon to perform gender-affirming surgery is a challenging process, but patients are quite rational in their decision-making. Unlike prior studies, we did not find a preference for gender-concordant surgeons, even though the surgery involves the genital area. Providing strategies and resources for surgical selection can improve patient satisfaction.
Border, Shana E
2018-01-01
Abstract Natural selection has been shown to drive population differentiation and speciation. The role of sexual selection in this process is controversial; however, most of the work has centered on mate choice while the role of male–male competition in speciation is relatively understudied. Here, we outline how male–male competition can be a source of diversifying selection on male competitive phenotypes, and how this can contribute to the evolution of reproductive isolation. We highlight how negative frequency-dependent selection (advantage of rare phenotype arising from stronger male–male competition between similar male phenotypes compared with dissimilar male phenotypes) and disruptive selection (advantage of extreme phenotypes) drives the evolution of diversity in competitive traits such as weapon size, nuptial coloration, or aggressiveness. We underscore that male–male competition interacts with other life-history functions and that variable male competitive phenotypes may represent alternative adaptive options. In addition to competition for mates, aggressive interference competition for ecological resources can exert selection on competitor signals. We call for a better integration of male–male competition with ecological interference competition since both can influence the process of speciation via comparable but distinct mechanisms. Altogether, we present a more comprehensive framework for studying the role of male–male competition in speciation, and emphasize the need for better integration of insights gained from other fields studying the evolutionary, behavioral, and physiological consequences of agonistic interactions. PMID:29492042
Spatio-Temporal Process Variability in Watershed Scale Wetland Restoration Planning
NASA Astrophysics Data System (ADS)
Evenson, G. R.
2012-12-01
Watershed scale restoration decision making processes are increasingly informed by quantitative methodologies providing site-specific restoration recommendations - sometimes referred to as "systematic planning." The more advanced of these methodologies are characterized by a coupling of search algorithms and ecological models to discover restoration plans that optimize environmental outcomes. Yet while these methods have exhibited clear utility as decision support toolsets, they may be critiqued for flawed evaluations of spatio-temporally variable processes fundamental to watershed scale restoration. Hydrologic and non-hydrologic mediated process connectivity along with post-restoration habitat dynamics, for example, are commonly ignored yet known to appreciably affect restoration outcomes. This talk will present a methodology to evaluate such spatio-temporally complex processes in the production of watershed scale wetland restoration plans. Using the Tuscarawas Watershed in Eastern Ohio as a case study, a genetic algorithm will be coupled with the Soil and Water Assessment Tool (SWAT) to reveal optimal wetland restoration plans as measured by their capacity to maximize nutrient reductions. Then, a so-called "graphical" representation of the optimization problem will be implemented in parallel to promote hydrologic and non-hydrologic mediated connectivity amongst existing wetlands and sites selected for restoration. Further, various search algorithm mechanisms will be discussed as a means of accounting for temporal complexities such as post-restoration habitat dynamics. Finally, generalized patterns of restoration plan optimality will be discussed as an alternative and possibly superior decision support toolset given the complexity and stochastic nature of spatio-temporal process variability.
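The genetic-algorithm side of such a coupling can be sketched independently of the ecological model. The toy below evolves a 0/1 chromosome (one bit per candidate wetland site) against an invented additive nutrient-reduction score under a fixed budget; in the actual methodology the fitness of a plan would come from a SWAT simulation, and the site scores, budget, and GA settings here are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites, budget = 30, 10
reduction = rng.uniform(1, 10, size=n_sites)   # toy per-site nutrient reduction

def fitness(plan):
    if plan.sum() > budget:                    # infeasible: over budget
        return -1.0
    return float(reduction[plan.astype(bool)].sum())

# Initial population of candidate plans, ~30% of sites selected per plan.
pop = (rng.uniform(size=(60, n_sites)) < 0.3).astype(int)
for _ in range(200):
    fit = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(fit)[-30:]]       # truncation selection: keep top half
    children = parents[rng.integers(0, 30, size=60)].copy()
    mutate = rng.uniform(size=children.shape) < 0.02
    children[mutate] = 1 - children[mutate]    # bit-flip mutation
    pop = children

best = pop[np.argmax([fitness(p) for p in pop])]
optimal = float(np.sort(reduction)[-budget:].sum())  # exact optimum for this toy
print(fitness(best), optimal)
```

With a separable objective the exact optimum is just the budgeted top sites, which makes the toy easy to check; the GA machinery only earns its keep once fitness comes from a nonlinear simulation.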
No difference in variability of unique hue selections and binary hue selections.
Bosten, J M; Lawrance-Owen, A J
2014-04-01
If unique hues have special status in phenomenological experience as perceptually pure, it seems reasonable to assume that they are represented more precisely by the visual system than are other colors. Following the method of Malkoc et al. (J. Opt. Soc. Am. A 22, 2154 (2005)), we gathered unique and binary hue selections from 50 subjects. For these subjects we repeated the measurements in two separate sessions, allowing us to measure test-retest reliabilities (0.52≤ρ≤0.78; p≪0.01). We quantified the within-individual variability for selections of each hue. Adjusting for the differences in variability intrinsic to different regions of chromaticity space, we compared the within-individual variability for unique hues to that for binary hues. Surprisingly, we found that selections of unique hues did not show consistently lower variability than selections of binary hues. We repeated hue measurements in a single session for an independent sample of 58 subjects, using a different relative scaling of the cardinal axes of MacLeod-Boynton chromaticity space. Again, we found no consistent difference in adjusted within-individual variability for selections of unique and binary hues. Our finding does not depend on the particular scaling chosen for the Y axis of MacLeod-Boynton chromaticity space.
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined if positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show four of five of the positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
Variable screening via quantile partial correlation
Ma, Shujie; Tsai, Chih-Ling
2016-01-01
In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
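A minimal sketch of the screening idea: the quantile correlation replaces the Pearson correlation with the covariance between the quantile score function and each predictor. This is the marginal version only; the paper's statistic is a quantile *partial* correlation that additionally adjusts for the remaining covariates, and the data below are synthetic.

```python
import numpy as np

def quantile_correlation(y, x, tau=0.5):
    """Marginal quantile correlation: covariance of the quantile score
    function with x, normalized so the statistic lies in [-1, 1]."""
    q = np.quantile(y, tau)
    psi = tau - (y < q).astype(float)          # quantile score function
    num = np.mean(psi * (x - x.mean()))
    den = np.sqrt((tau - tau**2) * np.var(x))
    return num / den

rng = np.random.default_rng(2)
n, p = 500, 20
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 4] + rng.normal(size=n)         # only variable 4 matters

scores = np.array([abs(quantile_correlation(y, X[:, j])) for j in range(p)])
print(int(np.argmax(scores)))                  # variable 4 tops the ranking
```

Screening would keep the variables with the largest |scores| and pass them to EBIC-based best subset selection.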
Kim, Wondae; Buchanan, John; Gabbard, Carl
2011-01-01
With an interest in identifying the variables that constrain arm choice when reaching, the authors had 11 right-handed participants perform free-choice and assigned-limb reaches at 9 object positions. The right arm was freely selected 100% of the time when reaching to positions at 30° and 40° into right hemispace. However, the left arm was freely selected to reach to positions at -30° and -40° in left hemispace 85% of the time. A comparison between free- and assigned-limb reaching kinematics revealed that free limb selection when reaching to the farthest positions was constrained by joint amplitude requirements and the time devoted to limb deceleration. Differences between free- and assigned-arm reaches were not evident when reaching to the midline and positions of ±10°, even though the right arm was freely selected most often for these positions. Different factors contribute to limb selection as a function of distance into a specific hemispace.
Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef
2012-10-01
The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is to pre-screen and select the key variables to be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters, which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. Twelve key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses.
Copyright © 2012 Elsevier B.V. All rights reserved.
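The clustering step described above, grouping similar variables via agglomerative hierarchical clustering on a Spearman correlation matrix and keeping one representative per cluster, can be sketched with SciPy. The data are synthetic (three latent factors, three noisy measurements of each) standing in for the 96 aerosol variables; the choice of average linkage and of the first cluster member as representative are assumptions for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

# Synthetic panel of correlated variables: variables 0-2, 3-5, and 6-8
# are noisy measurements of three latent factors.
rng = np.random.default_rng(3)
factors = rng.normal(size=(500, 3))
X = np.column_stack([factors[:, i // 3] + 0.3 * rng.normal(size=500)
                     for i in range(9)])

rho = spearmanr(X).correlation
dist = 1.0 - np.abs(rho)                  # similar variables -> small distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
labels = fcluster(Z, t=3, criterion="maxclust")

# One representative "key variable" per cluster (here: the first member).
reps = [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]
print(labels, reps)
```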
The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.
Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin
2018-04-05
This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computation burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method is to introduce some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use some continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process of the Cramér-Rao bound of the continuous system. It avoids the complex integral, which can reduce the computation burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
Reiter, Harold I; Lockyer, Jocelyn; Ziola, Barry; Courneya, Carol-Ann; Eva, Kevin
2012-04-01
Traditional medical school admissions assessment tools may be limiting diversity. This study investigates whether the Multiple Mini-Interview (MMI) is diversity-neutral and, if so, whether applying it with greater weight would dilute the anticipated negative impact of diversity-limiting admissions measures. Interviewed applicants to six medical schools in 2008 and 2009 underwent MMI. Predictor variables of MMI scores, grade point average (GPA), and Medical College Admission Test (MCAT) scores were correlated with diversity measures of age, gender, size of community of origin, income level, and self-declared aboriginal status. A subset of the data was then combined with variable weight assigned to predictor variables to determine whether weighting during the applicant selection process would affect diversity among chosen applicants. MMI scores were unrelated to gender, size of community of origin, and income level. They correlated positively with age and negatively with aboriginal status. GPA and MCAT correlated negatively with age and aboriginal status, GPA correlated positively with income level, and MCAT correlated positively with size of community of origin. Even extreme combinations of MMI and GPA weightings failed to increase diversity among applicants who would be selected on the basis of weighted criteria. MMI could not neutralize the diversity-limiting properties of academic scores as selection criteria to interview. Using academic scores in this way causes range restriction, counteracting attempts to enhance diversity using downstream admissions selection measures such as MMI. Diversity efforts should instead be focused upstream. These results lend further support for the development of pipeline programs.
Decision support for the selection of reference sites using 137Cs as a soil erosion tracer
NASA Astrophysics Data System (ADS)
Arata, Laura; Meusburger, Katrin; Bürge, Alexandra; Zehringer, Markus; Ketterer, Michael E.; Mabit, Lionel; Alewell, Christine
2017-08-01
The classical approach of using 137Cs as a soil erosion tracer is based on the comparison between stable reference sites and sites affected by soil redistribution processes; it enables the derivation of soil erosion and deposition rates. The method is associated with potentially large sources of uncertainty with major parts of this uncertainty being associated with the selection of the reference sites. We propose a decision support tool to Check the Suitability of reference Sites (CheSS). Commonly, the variation among 137Cs inventories of spatial replicate reference samples is taken as the sole criterion to decide on the suitability of a reference inventory. Here we propose an extension of this procedure using a repeated sampling approach, in which the reference sites are resampled after a certain time period. Suitable reference sites are expected to present no significant temporal variation in their decay-corrected 137Cs depth profiles. Possible causes of variation are assessed by a decision tree. More specifically, the decision tree tests for (i) uncertainty connected to small-scale variability in 137Cs due to its heterogeneous initial fallout (such as in areas affected by the Chernobyl fallout), (ii) signs of erosion or deposition processes and (iii) artefacts due to the collection, preparation and measurement of the samples; (iv) finally, if none of the above can be assigned, this variation might be attributed to turbation processes (e.g. bioturbation, cryoturbation and mechanical turbation, such as avalanches or rockfalls). CheSS was applied, as an example, in a Swiss alpine valley where the apparent temporal variability called into question the suitability of the selected reference sites. In general we suggest the application of CheSS as a first step towards a comprehensible approach to test for the suitability of reference sites.
NASA Technical Reports Server (NTRS)
McDonald, Kyle; Kimball, John; Zimmermann, Reiner; Way, JoBea; Frolking, Steve; Running, Steve
1999-01-01
Landscape freeze/thaw transitions coincide with marked shifts in albedo, surface energy and mass exchange, and associated snow dynamics. Monitoring landscape freeze/thaw dynamics would improve our ability to quantify the interannual variability of boreal hydrology and river runoff/flood dynamics. The annual duration of the frost-free period also bounds the period of photosynthetic activity in boreal and arctic regions, thus affecting the annual carbon budget and the interannual variability of regional carbon fluxes. In this study, we use the NASA scatterometer (NSCAT) to monitor the temporal change in the radar backscatter signature across selected ecoregions of the boreal zone. We have measured vegetation tissue temperatures, soil temperature profiles, and micrometeorological parameters in situ at selected sites along a north-south transect extending across Alaska from Prudhoe Bay to the Kenai Peninsula and in Siberia near the Yenisey River. Data from these stations have been used to quantify the scatterometer's sensitivity to freeze/thaw state under a variety of terrain and landcover conditions. Analysis of the NSCAT temporal response over the 1997 spring thaw cycle shows a 3 to 5 dB change in measured backscatter that is well correlated with the landscape springtime thaw process. Having verified the instrument's capability to monitor freeze/thaw transitions, regional scale mosaicked data are applied to derive temporal series of freeze/thaw transition maps for selected circumpolar high latitude regions. These maps are applied to derive areal extent of frozen and thawed landscape and demonstrate the utility of spaceborne radar for operational monitoring of seasonal freeze-thaw dynamics and associated biophysical processes for the circumpolar high latitudes.
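Detecting a 3-5 dB backscatter drop in a temporal series amounts to a simple step-edge test. The sketch below is a crude moving-window detector on an invented seasonal series (frozen ground near -8 dB, thawed near -12 dB); it is an illustration of the thresholding idea, not the classification actually applied to the NSCAT mosaics.

```python
import numpy as np

def detect_thaw(backscatter_db, drop_db=3.0, window=5):
    """Return the first index where the mean backscatter over the next
    `window` samples sits at least `drop_db` below the mean over the
    previous `window` samples (a crude step-edge detector)."""
    s = np.asarray(backscatter_db, dtype=float)
    for t in range(window, len(s) - window):
        if s[t - window:t].mean() - s[t:t + window].mean() >= drop_db:
            return t
    return None

# Hypothetical daily series: frozen (~-8 dB) for 60 days, thawed (~-12 dB) after.
rng = np.random.default_rng(4)
series = np.concatenate([-8 + 0.2 * rng.normal(size=60),
                         -12 + 0.2 * rng.normal(size=60)])
print(detect_thaw(series))  # flags the transition near day 60
```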
Bayesian Group Bridge for Bi-level Variable Selection.
Mallick, Himel; Yi, Nengjun
2017-06-01
A Bayesian bi-level variable selection method (BAGB: Bayesian Analysis of Group Bridge) is developed for regularized regression and classification. This new development is motivated by grouped data, where generic variables can be divided into multiple groups, with variables in the same group being mechanistically related or statistically correlated. As an alternative to frequentist group variable selection methods, BAGB incorporates structural information among predictors through a group-wise shrinkage prior. Posterior computation proceeds via an efficient MCMC algorithm. In addition to the usual ease-of-interpretation of hierarchical linear models, the Bayesian formulation produces valid standard errors, a feature that is notably absent in the frequentist framework. Empirical evidence of the attractiveness of the method is illustrated by extensive Monte Carlo simulations and real data analysis. Finally, several extensions of this new approach are presented, providing a unified framework for bi-level variable selection in general models with flexible penalties.
A High-Linearity Low-Noise Amplifier with Variable Bandwidth for Neural Recording Systems
NASA Astrophysics Data System (ADS)
Yoshida, Takeshi; Sueishi, Katsuya; Iwata, Atsushi; Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi
2011-04-01
This paper describes a low-noise amplifier with multiple adjustable parameters for neural recording applications. An adjustable pseudo-resistor implemented with cascaded metal-oxide-silicon field-effect transistors (MOSFETs) is proposed to achieve low signal distortion and a wide variable bandwidth range. The amplifier has been implemented in a 0.18 µm standard complementary metal-oxide-semiconductor (CMOS) process and occupies 0.09 mm² on chip. The amplifier achieved a selectable voltage gain of 28 and 40 dB, variable bandwidth from 0.04 to 2.6 Hz, total harmonic distortion (THD) of 0.2% with a 200 mV output swing, input-referred noise of 2.5 µVrms over 0.1-100 Hz, and 18.7 µW power consumption at a supply voltage of 1.8 V.
Environmental variability and acoustic signals: a multi-level approach in songbirds.
Medina, Iliana; Francis, Clinton D
2012-12-23
Among songbirds, growing evidence suggests that acoustic adaptation of song traits occurs in response to habitat features. Despite extensive study, most research supporting acoustic adaptation has only considered acoustic traits averaged for species or populations, overlooking intraindividual variation of song traits, which may facilitate effective communication in heterogeneous and variable environments. Fewer studies have explicitly incorporated sexual selection, which, if strong, may favour variation across environments. Here, we evaluate the prevalence of acoustic adaptation among 44 species of songbirds by determining how environmental variability and sexual selection intensity are associated with song variability (intraindividual and intraspecific) and short-term song complexity. We show that variability in precipitation can explain short-term song complexity among taxonomically diverse songbirds, and that precipitation seasonality and the intensity of sexual selection are related to intraindividual song variation. Our results link song complexity to environmental variability, something previously found for mockingbirds (Family Mimidae). Perhaps more importantly, our results illustrate that individual variation in song traits may be shaped by both environmental variability and strength of sexual selection.
Prunier, Jérôme G.; Dewulf, Alexandre; Kuhlmann, Michael; Michez, Denis
2017-01-01
Morphological traits can be highly variable over time in a particular geographical area. Understanding the different selective pressures that shape those traits is crucial in evolutionary biology. Among these traits, insect wing morphometry has already been widely used to describe phenotypic variability at the inter-specific level. By contrast, fewer studies have focused on intra-specific wing morphometric variability. Yet such investigations are relevant to studying potential convergences of variation that could highlight micro-evolutionary processes. The recent sampling and sequencing of three solitary bees of the genus Melitta across their entire species range provides an excellent opportunity to jointly analyse genetic and morphometric variability. In the present study, we first aim to analyse the spatial distribution of wing shape and centroid size (used as a proxy for body size) variability. Secondly, we aim to test different potential predictors of this variability at both the intra- and inter-population levels, including genetic variability, but also geographic locations and distances, elevation, annual mean temperature and precipitation. The comparison of the spatial distribution of intra-population morphometric diversity does not reveal any convergent pattern between species, thus undermining the assumption of a potential local and selective adaptation at the population level. Regarding intra-specific wing shape differentiation, our results reveal that some tested predictors, such as geographic and genetic distances, show significant correlations for some species. However, none of these predictors is systematically identified across the three species as an important factor that could explain the intra-specific morphometric variability.
In conclusion, for the three solitary bee species and at the scale of this study, our results tend to discard the assumption of a common intra-specific pattern underlying wing shape and body size variability. PMID:28273178
Exploring the repetition bias in voluntary task switching.
Mittelstädt, Victor; Dignath, David; Schmidt-Ott, Magdalena; Kiesel, Andrea
2018-01-01
In the voluntary task-switching paradigm, participants are required to select tasks at random. We reasoned that the consistent finding of a repetition bias (i.e., participants repeat tasks more often than expected by chance) reflects reasonable adaptive task selection behavior that balances the goal of random task selection with the goals of minimizing the time and effort of task performance. We conducted two experiments in which participants were provided with a variable amount of preview for the non-chosen task stimuli (i.e., potential switch stimuli). We assumed that switch stimuli would initiate some pre-processing, resulting in improved performance on switch trials. Results showed that reduced switch costs due to extra preview in advance of each trial were accompanied by more task switches. This finding is in line with the characteristics of rational adaptive behavior. However, participants were not biased to switch tasks more often than chance despite large switch benefits. We suggest that participants might avoid the effortful additional control processes that modulate the effects of preview on task performance and task choice.
Laminar Organization of Attentional Modulation in Macaque Visual Area V4.
Nandy, Anirvan S; Nassi, Jonathan J; Reynolds, John H
2017-01-04
Attention is critical to perception, serving to select behaviorally relevant information for privileged processing. To understand the neural mechanisms of attention, we must discern how attentional modulation varies by cell type and across cortical layers. Here, we test whether attention acts non-selectively across cortical layers or whether it engages the laminar circuit in specific and selective ways. We find layer- and cell-class-specific differences in several different forms of attentional modulation in area V4. Broad-spiking neurons in the superficial layers exhibit attention-mediated increases in firing rate and decreases in variability. Spike count correlations are highest in the input layer and attention serves to reduce these correlations. Superficial and input layer neurons exhibit attention-dependent decreases in low-frequency (<10 Hz) coherence, but deep layer neurons exhibit increases in coherence in the beta and gamma frequency ranges. Our study provides a template for attention-mediated laminar information processing that might be applicable across sensory modalities. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Khalilpourazari, Soheyl; Khalilpourazary, Saman
2017-05-01
In this article, a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables subject to process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finish grinding conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff formulation of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in grain size, grinding ratio, feed rate, labour cost per hour, workpiece length, wheel diameter and downfeed on each objective function value.
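The weighted Tchebycheff scalarization at the core of such approaches can be sketched as follows. The objective values, ideal point, and weights below are hypothetical, and the lexicographic refinement and the branch-and-cut solution in GAMS are not reproduced:

```python
def weighted_tchebycheff(f, ideal, weights):
    """Scalarize a vector of objectives f as the maximum weighted deviation
    from the ideal (utopia) point; minimizing this yields Pareto-optimal points."""
    return max(w * abs(fi - zi) for fi, zi, w in zip(f, ideal, weights))

# Hypothetical Pareto candidates as (time, cost) pairs, to be minimized.
candidates = [(10.0, 3.0), (8.0, 5.0), (12.0, 2.0)]
ideal = (8.0, 2.0)        # component-wise best values over the candidates
weights = (0.5, 0.5)      # decision maker's preference weights

# Pick the candidate closest to the ideal point in the weighted Chebyshev sense.
best = min(candidates, key=lambda f: weighted_tchebycheff(f, ideal, weights))
```

Varying the weights traces out different efficient solutions, which is how a trade-off front between conflicting objectives can be explored.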
Thiry, Valentine; Stark, Danica J; Goossens, Benoît; Slachmuylder, Jean-Louis; Vercauteren Drubbel, Régine; Vercauteren, Martine
2016-01-01
The choice of a sleeping site is crucial for primates and may influence their survival. In this study, we investigated several tree characteristics influencing sleeping site selection by proboscis monkeys (Nasalis larvatus) along the Kinabatangan River, in Sabah, Malaysia. We identified 81 sleeping trees used by one-male and all-male social groups from November 2011 to January 2012, and recorded 15 variables for each tree. Within sleeping sites, sleeping trees were taller and had larger trunks, with larger and higher first branches, than surrounding trees. The crown contained more mature leaves and more ripe and unripe fruits, but less often had vines, than surrounding trees. In this study, we also examined a larger scale, comparing sleeping and non-sleeping sites. Multivariate analyses highlighted a combination of 6 variables that revealed the significance of sleeping trees as well as surrounding trees in the selection process. During our boat surveys, we observed that adult females and young individuals stayed higher in the canopy than adult males. This pattern may be driven by their increased vulnerability to predation. Finally, we suggest that the selection of particular sleeping tree features (i.e. tall, with a high first branch) by proboscis monkeys is mostly influenced by antipredation strategies. © 2016 S. Karger AG, Basel.
Evaluation of RPE-Select: A Web-Based Respiratory Protective Equipment Selector Tool.
Vaughan, Nick; Rajan-Sithamparanadarajah, Bob; Atkinson, Robert
2016-08-01
This article describes the evaluation of an open-access web-based respiratory protective equipment selector tool (RPE-Select, accessible at http://www.healthyworkinglives.com/rpe-selector). This tool is based on the principles of the COSHH-Essentials (C-E) control banding (CB) tool, which was developed for the exposure risk management of hazardous chemicals in the workplace by small and medium sized enterprises (SMEs) and general practice H&S professionals. RPE-Select can be used for identifying adequate and suitable RPE for dusts, fibres, mist (solvent, water, and oil based), sprays, volatile solids, fumes, gases, vapours, and actual or potential oxygen deficiency. It can be applied for substances and products with safety data sheets as well as for a large number of commonly encountered process-generated substances (PGS), such as poultry house dusts or welding fume. Potential international usability has been built-in by using the Hazard Statements developed for the Globally Harmonised System (GHS) and providing recommended RPE in picture form as well as with a written specification. Illustration helps to compensate for the variabilities in assigned protection factors across the world. RPE-Select uses easily understandable descriptions/explanations and an interactive stepwise flow for providing input/answers at each step. The output of the selection process is a report summarising the user input data and a selection of RPE, including types of filters where applicable, from which the user can select the appropriate one for each wearer. In addition, each report includes 'Dos' and 'Don'ts' for the recommended RPE. RPE-Select outcomes, based on up to 20 hypothetical use scenarios, were evaluated in comparison with other available RPE selection processes and tools, and by 32 independent users with a broad range of familiarities with industrial use scenarios in general and respiratory protection in particular. 
For scenarios involving substances having safety data sheets, 87% of RPE-Select outcomes resulted in a 'safe' RPE selection, while 98% 'safe' outcomes were achieved for scenarios involving process-generated substances. Reasons for the outliers were examined. User comments and opinions on the mechanics and usability of RPE-Select are also presented. © Crown copyright 2016.
ERIC Educational Resources Information Center
Clarke, A. J. Benjamin; Ludington, Jason D.
2018-01-01
Normative databases containing psycholinguistic variables are commonly used to aid stimulus selection for investigations into language and other cognitive processes. Norms exist for many languages, but not for Thai. The aim of the present research, therefore, was to obtain Thai normative data for the BOSS, a set of 480 high resolution color…
ERIC Educational Resources Information Center
Rodriguez, Manuel; Wilder, David A.; Therrien, Kelly; Wine, Byron; Miranti, Reylissa; Daratany, Kenneth; Salume, Gloria; Baranovsky, Greg; Rodriquez, Matias
2006-01-01
The performance diagnostic checklist (PDC) was administered to examine the variables influencing the offering of promotional stamps by employees at two sites of a restaurant franchise. PDC results suggested that a lack of appropriate antecedents, equipment and processes, and consequences were responsible for the deficits. Based on these results,…
ERIC Educational Resources Information Center
Young, I. Phillip
2005-01-01
This study addresses the screening decisions for a national random sample of high school principals as viewed from the attraction-similarity theory of interpersonal perceptions. Independent variables are the sex of principals, sex of applicants, and the type of focal positions sought by hypothetical job applicants (teacher or counselor). Dependent…
Motion Picture Attendance and Factors Influencing Movie Selection among High School Students.
ERIC Educational Resources Information Center
Austin, Bruce A.
In an audience research study, 64 high school students responded to a questionnaire concerning their movie attendance habits and the importance of ten variables to their decision-making process when choosing a movie to see. The results indicated that 26.6% attended movies once a month, 23.4% twice monthly, 6.3% three times a month, 4.7% four times…
Parker, T H; Wilkin, T A; Barr, I R; Sheldon, B C; Rowe, L; Griffith, S C
2011-07-01
Avian plumage colours are some of the most conspicuous sexual ornaments, and yet standardized selection gradients for plumage colour have rarely been quantified. We examined patterns of fecundity selection on plumage colour in blue tits (Cyanistes caeruleus L.). When not accounting for environmental heterogeneity, we detected relatively few cases of selection. We found significant disruptive selection on adult male crown colour and yearling female chest colour and marginally nonsignificant positive linear selection on adult female crown colour. We discovered no new significant selection gradients with canonical rotation of the matrix of nonlinear selection. Next, using a long-term data set, we identified territory-level environmental variables that predicted fecundity to determine whether these variables influenced patterns of plumage selection. The first of these variables, the density of oaks within 50 m of the nest, influenced selection gradients only for yearling males. The second variable, an inverse function of nesting density, interacted with a subset of plumage selection gradients for yearling males and adult females, although the strength and direction of selection did not vary predictably with population density across these analyses. Overall, fecundity selection on plumage colour in blue tits appeared rare and inconsistent among sexes and age classes. © 2011 The Authors. Journal of Evolutionary Biology © 2011 European Society For Evolutionary Biology.
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
Wang, Pei; Zhang, Hui; Yang, Hailong; Nie, Lei; Zang, Hengchang
2015-02-25
Near-infrared (NIR) spectroscopy has been developed into an indispensable tool for both academic research and industrial quality control across a wide field of applications. In this work, we verified the feasibility of NIR spectroscopy for monitoring the concentrations of puerarin, daidzin, daidzein and total isoflavonoid (TIF) during the extraction process of kudzu (Pueraria lobata). NIR spectra were collected in transmission mode and pretreated with smoothing and derivative methods. Partial least squares regression (PLSR) was used to establish calibration models. Three different variable selection methods, including the correlation coefficient method, interval partial least squares (iPLS), and the successive projections algorithm (SPA), were performed and compared with models based on all of the variables. The results showed that the approach was efficient and environmentally friendly for rapid determination of the four quality indices (QIs) in the kudzu extraction process. The established method may have the potential to be used as a process analytical technology (PAT) tool in the future. Copyright © 2014 Elsevier B.V. All rights reserved.
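Of the three variable selection methods mentioned, the correlation coefficient method is the simplest: rank spectral variables by the absolute Pearson correlation of each wavelength with the reference concentrations. The toy spectra, dimensions, and top-k selection below are illustrative assumptions, not the paper's actual data or cutoffs:

```python
import numpy as np

def select_by_correlation(X, y, top_k):
    """Rank the columns of X (spectral variables) by |Pearson correlation|
    with the reference values y, and return the indices of the top_k columns."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc**2).sum(axis=0)) * np.sqrt((yc**2).sum())
    )
    return np.argsort(-np.abs(r))[:top_k]

# Hypothetical spectra: 3 wavelengths, only the first tracks concentration.
rng = np.random.default_rng(0)
y = np.linspace(0.1, 1.0, 20)                       # reference concentrations
X = np.column_stack([y * 2.0 + rng.normal(0, 0.01, 20),
                     rng.normal(0, 1, 20),
                     rng.normal(0, 1, 20)])
picked = select_by_correlation(X, y, top_k=1)       # keep the most informative variable
```

A PLSR model would then be calibrated on the selected columns only, which is what makes the comparison against full-spectrum models meaningful.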
Antwi, Philip; Li, Jianzheng; Boadi, Portia Opoku; Meng, Jia; Shi, En; Deng, Kaiwen; Bondinuba, Francis Kwesi
2017-03-01
Three-layered feedforward backpropagation (BP) artificial neural network (ANN) and multiple nonlinear regression (MnLR) models were developed to estimate biogas and methane yield in an upflow anaerobic sludge blanket (UASB) reactor treating potato starch processing wastewater (PSPW). Anaerobic process parameters were optimized to identify their importance for methanation. pH, total chemical oxygen demand, ammonium, alkalinity, total Kjeldahl nitrogen, total phosphorus, volatile fatty acids and hydraulic retention time, selected based on principal component analysis, were used as input variables, while biogas and methane yield were employed as target variables. The quasi-Newton and conjugate gradient backpropagation algorithms performed best among the eleven training algorithms tested. The coefficient of determination (R²) of the BP-ANN reached 98.72% and 97.93%, while the MnLR model attained 93.9% and 91.08%, for biogas and methane yield, respectively. Compared with the MnLR model, the BP-ANN model demonstrated superior performance, suggesting possible control of the anaerobic digestion process with the BP-ANN model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Robust model selection and the statistical classification of languages
NASA Astrophysics Data System (ADS)
García, J. E.; González-López, V. A.; Viola, M. L. L.
2012-10-01
In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data is contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the family of fixed order Markov chain models. We define the asymptotic breakdown point (ABDP) of a model selection procedure, and we derive the ABDP of our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q, and we conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure for dealing with this problem has been to choose a subset of the original sample that seems to best represent each language, with the selection made by listening to the samples. In our application we use the full dataset without any preselection of samples.
We apply our robust methodology, estimating a model that represents the main law for each language. Our findings agree with the linguistic conjecture regarding the rhythm of the languages included in our dataset.
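The sample-comparison step described above can be sketched by estimating relative entropies between the empirical transition laws of pairs of samples. First-order chains with add-one smoothing are a simplification here (the paper uses variable length Markov chains), and the sequences are illustrative:

```python
import math
from collections import Counter

def empirical_transitions(seq, alphabet):
    """First-order empirical transition probabilities with add-one smoothing,
    so every transition has nonzero probability and KL terms stay finite."""
    counts = Counter(zip(seq, seq[1:]))
    probs = {}
    for a in alphabet:
        row_total = sum(counts[(a, b)] for b in alphabet) + len(alphabet)
        probs[a] = {b: (counts[(a, b)] + 1) / row_total for b in alphabet}
    return probs

def relative_entropy(p, q, alphabet):
    """Average KL divergence between the transition rows of two chains;
    small values suggest the samples share the same underlying law."""
    kl = 0.0
    for a in alphabet:
        kl += sum(p[a][b] * math.log(p[a][b] / q[a][b]) for b in alphabet)
    return kl / len(alphabet)

# Hypothetical usage: two samples of the same alternating chain agree closely.
alphabet = "ab"
p = empirical_transitions("abababab", alphabet)
q = empirical_transitions("babababa", alphabet)
d = relative_entropy(p, q, alphabet)   # near zero for same-law samples
```

In the robust procedure, samples whose pairwise divergences cluster tightly would form the majority subset used to estimate the law Q.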
Oubel, Estanislao; Bonnard, Eric; Sueoka-Aragane, Naoko; Kobayashi, Naomi; Charbonnier, Colette; Yamamichi, Junta; Mizobe, Hideaki; Kimura, Shinya
2015-02-01
Lesion volume is considered a promising alternative to the Response Evaluation Criteria in Solid Tumors (RECIST) for making tumor measurements more accurate and consistent, which would enable earlier detection of temporal changes. In this article, we report the results of a pilot study aimed at evaluating the effects of a consensual lesion selection on volume-based response (VBR) assessments. Eleven patients with lung computed tomography scans acquired at three time points were selected from the Reference Image Database to Evaluate Response to therapy in lung cancer (RIDER) and proprietary databases. Images were analyzed according to RECIST 1.1 and VBR criteria by three readers working in different geographic locations. Cloud solutions were used to connect readers and carry out a consensus process on the selection of lesions used for computing response. Because there are no currently accepted thresholds for computing VBR, we applied a set of thresholds based on measurement variability (-35% and +55%). The benefit of this consensus was measured in terms of multiobserver agreement using Fleiss' kappa (κfleiss) and the corresponding standard errors (SE). VBR after consensual selection of target lesions yielded κfleiss = 0.85 (SE = 0.091), which increases up to 0.95 (SE = 0.092) if an extra consensus on new lesions is added. As a reference, the agreement when applying RECIST without consensus was κfleiss = 0.72 (SE = 0.088). These differences were found to be statistically significant according to a z-test. An agreement on the selection of lesions reduces inter-reader variability when computing VBR. Cloud solutions proved to be an interesting and feasible strategy for standardizing response evaluations, reducing variability, and increasing the consistency of results in multicenter clinical trials. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
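Fleiss' kappa, the agreement statistic reported in this study, can be computed from an items-by-categories matrix of rating counts. The ratings below are hypothetical (not the study's data), with 4 lesions, 3 readers, and 3 response categories:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (items x categories) matrix, where entry (i, j)
    is the number of raters assigning item i to category j."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]            # assumes equal raters per item
    p_j = counts.sum(axis=0) / counts.sum()     # overall category proportions
    # Per-item observed agreement among rater pairs.
    P_i = ((counts**2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar = P_i.mean()                          # mean observed agreement
    P_e = (p_j**2).sum()                        # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical: 4 lesions rated by 3 readers into {PR, SD, PD}.
ratings = [[3, 0, 0],
           [2, 1, 0],
           [0, 3, 0],
           [0, 0, 3]]
kappa = fleiss_kappa(ratings)
```

Comparing such kappa values before and after a consensus step is how the benefit of consensual lesion selection is quantified.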
Time course of word production in fast and slow speakers: a high density ERP topographic study.
Laganaro, Marina; Valente, Andrea; Perret, Cyril
2012-02-15
The transformation of an abstract concept into an articulated word is achieved through a series of encoding processes whose time course has been repeatedly investigated in the psycholinguistic and neuroimaging literature on single word production. Estimates of the time course from previous investigations reflect process durations at mean processing speed: as production speed varies significantly across speakers, a crucial question is how the timing of encoding processes varies with speed. Here we investigated whether between-subject variability in the speed of speech production is distributed along all encoding processes or is accounted for by a specific processing stage. We analysed event-related electroencephalographic (ERP) correlates during overt picture naming in 45 subjects divided into three speed subgroups according to their production latencies. Production speed modulated waveform amplitudes in the time window ranging from about 200 to 350 ms after picture presentation, as well as the duration of a stable electrophysiological spatial configuration in the same period. The remaining time windows, from picture onset to 200 ms before articulation, were unaffected by speed. By contrast, the manipulation of a psycholinguistic variable, word age of acquisition, modulated ERPs in all speed subgroups in a different and later time period, starting at around 400 ms after picture presentation and associated with phonological encoding processes. These results indicate that between-subject variability in the speed of single word production is principally accounted for by the timing of a stable electrophysiological activity in the 200-350 ms period, presumably associated with lexical selection. Copyright © 2011 Elsevier Inc. All rights reserved.
Age estimation using pulp/tooth area ratio in maxillary canines-A digital image analysis.
Juneja, Manjushree; Devi, Yashoda B K; Rakesh, N; Juneja, Saurabh
2014-09-01
Determination of the age of a subject is one of the most important aspects of medico-legal cases and anthropological research. Radiographs can be used to indirectly measure the rate of secondary dentine deposition, which is reflected in a reduction of the pulp area. In this study, 200 patients from Karnataka aged 18 to 72 years were selected. Panoramic radiographs were made and indirectly digitized. Radiographic images of maxillary canines (RIC) were processed using a computer-aided drafting program (ImageJ). The variables pulp/root length (p), pulp/tooth length (r), pulp/root width at the enamel-cementum junction (ECJ) level (a), pulp/root width at mid-root level (c), pulp/root width at the midpoint between the ECJ level and mid-root level (b) and pulp/tooth area ratio (AR) were recorded. All the morphological variables, including gender, were statistically analyzed to derive a regression equation for estimation of age. Two variables, 'AR' and 'b', contributed significantly to the fit and were included in the regression model, yielding the formula: Age = 87.305-480.455(AR)+48.108(b). Statistical analysis indicated that the regression equation with the selected variables explained 96% of total variance, with a median residual of 0.1614 years and a standard error of estimate of 3.0186 years. There is a significant correlation between age and the morphological variables 'AR' and 'b', and the derived population-specific regression equation can potentially be used for estimation of the chronological age of individuals of Karnataka origin.
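The reported regression equation can be applied directly once AR and b are measured from a radiograph. A minimal sketch, where the input measurements are hypothetical and the equation is specific to the Karnataka sample:

```python
def estimate_age(ar, b):
    """Age from pulp/tooth area ratio (AR) and pulp/root width at the
    midpoint level (b), per the study's regression:
    Age = 87.305 - 480.455*(AR) + 48.108*(b)."""
    return 87.305 - 480.455 * ar + 48.108 * b

# Hypothetical measurements from a digitized maxillary canine image.
age = estimate_age(ar=0.10, b=0.25)
```

Note the negative coefficient on AR: secondary dentine deposition shrinks the pulp area with age, so a smaller AR predicts an older subject.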
Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.
2015-01-01
In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687
2013-01-01
Background: High-throughput (HT) technologies provide huge amounts of gene expression data that can be used to identify biomarkers useful in clinical practice. The most frequently used approaches first select a set of genes (i.e. a gene signature) able to characterize differences between two or more phenotypical conditions, and then provide a functional assessment of the selected genes with an a posteriori enrichment analysis based on biological knowledge. However, this approach comes with some drawbacks. First, the gene selection procedure often requires tunable parameters that affect the outcome, typically producing many false hits. Second, a posteriori enrichment analysis is based on a mapping between biological concepts and gene expression measurements, which is hard to compute because of constant changes in biological knowledge and genome analysis. Third, such mapping is typically used to assess the coverage of a gene signature by biological concepts, which is either score-based or requires tunable parameters as well, limiting its power. Results: We present Knowledge Driven Variable Selection (KDVS), a framework that uses a priori biological knowledge in HT data analysis. The expression data matrix is transformed, according to prior knowledge, into smaller matrices that are easier to analyze and to interpret from both computational and biological viewpoints. Therefore KDVS, unlike most approaches, does not exclude a priori any function or process potentially relevant to the biological question under investigation. Unlike the standard approach, where gene selection and functional assessment are applied independently, KDVS embeds these two steps in a unified statistical framework, decreasing the variability derived from the threshold-dependent selection, the mapping to biological concepts, and the signature coverage. We present three case studies to assess the usefulness of the method.
Conclusions We showed that KDVS not only enables the accurate selection of known biological functionalities but also the identification of new ones. An efficient implementation of KDVS was devised to obtain results in a fast and robust way. Computing time is drastically reduced by the effective use of distributed resources. Finally, integrated visualization techniques immediately increase the interpretability of results. Overall, the KDVS approach can be considered a viable alternative to enrichment-based approaches. PMID:23302187
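The core KDVS idea of reshaping one large expression matrix into smaller, knowledge-driven submatrices can be sketched as follows; the gene names and annotation concepts are invented toy data, not a real ontology:

```python
# toy expression matrix: rows = samples, columns = genes
genes = ["g1", "g2", "g3", "g4"]
expr = [
    [1.0, 2.0, 0.5, 3.0],   # sample A
    [1.1, 1.8, 0.4, 2.9],   # sample B
]
# hypothetical a-priori annotation: biological concept -> member genes
annotation = {
    "GO:cell_cycle": ["g1", "g3"],
    "GO:apoptosis":  ["g2", "g3", "g4"],
}

def submatrices(expr, genes, annotation):
    # split the full matrix into one smaller matrix per concept (KDVS-style),
    # so every concept can be analyzed and interpreted on its own
    col = {g: i for i, g in enumerate(genes)}
    out = {}
    for concept, members in annotation.items():
        idx = [col[g] for g in members if g in col]
        out[concept] = [[row[i] for i in idx] for row in expr]
    return out
```

Each submatrix keeps all samples but only the genes annotated to that concept, so no potentially relevant concept is excluded up front.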
A comparative test of adaptive hypotheses for sexual size dimorphism in lizards.
Cox, Robert M; Skelly, Stephanie L; John-Alder, Henry B
2003-07-01
It is commonly argued that sexual size dimorphism (SSD) in lizards has evolved in response to two primary, nonexclusive processes: (1) sexual selection for large male size, which confers an advantage in intrasexual mate competition (intrasexual selection hypothesis), and (2) natural selection for large female size, which confers a fecundity advantage (fecundity advantage hypothesis). However, outside of several well-studied lizard genera, the empirical support for these hypotheses has not been examined with appropriate phylogenetic control. We conducted a comparative phylogenetic analysis to test these hypotheses using literature data from 497 lizard populations representing 302 species and 18 families. As predicted by the intrasexual selection hypothesis, male aggression and territoriality are correlated with SSD, but evolutionary shifts in these categorical variables each explain less than 2% of the inferred evolutionary change in SSD. We found stronger correlations between SSD and continuous estimates of intrasexual selection such as male to female home range ratio and female home range size. These results are consistent with the criticism that categorical variables may obscure much of the actual variation in intrasexual selection intensity needed to explain patterns in SSD. In accordance with the fecundity advantage hypothesis, SSD is correlated with clutch size, reproductive frequency, and reproductive mode (but not fecundity slope, reduced major axis estimator of fecundity slope, length of reproductive season, or latitude). However, evolutionary shifts in clutch size explain less than 8% of the associated change in SSD, which also varies significantly in the absence of evolutionary shifts in reproductive frequency and mode. A multiple regression model retained territoriality and clutch size as significant predictors of SSD, but only 16% of the variation in SSD is explained using these variables. 
Intrasexual selection for large male size and fecundity selection for large female size have undoubtedly helped to shape patterns of SSD across lizards, but the comparative data at present provide only weak support for these hypotheses as general explanations for SSD in this group. Future work would benefit from the consideration of alternatives to these traditional evolutionary hypotheses, and the elucidation of proximate mechanisms influencing growth and SSD within populations.
A Sensory Material Approach for Reducing Variability in Additively Manufactured Metal Parts.
Franco, B E; Ma, J; Loveall, B; Tapia, G A; Karayagiz, K; Liu, J; Elwany, A; Arroyave, R; Karaman, I
2017-06-15
Despite the recent growth in interest for metal additive manufacturing (AM) in the biomedical and aerospace industries, variability in the performance, composition, and microstructure of AM parts remains a major impediment to its widespread adoption. The underlying physical mechanisms, which cause variability, as well as the scale and nature of variability are not well understood, and current methods are ineffective at capturing these details. Here, a Nickel-Titanium alloy is used as a sensory material in order to quantitatively, and rather rapidly, observe compositional and/or microstructural variability in selective laser melting manufactured parts; thereby providing a means to evaluate the role of process parameters on the variability. We perform detailed microstructural investigations using transmission electron microscopy at various locations to reveal the origins of microstructural variability in this sensory material. This approach helped reveal how reducing the distance between adjacent laser scans below a critical value greatly reduces both the in-sample and sample-to-sample variability. Microstructural investigations revealed that when the laser scan distance is wide, there is an inhomogeneity in subgrain size, precipitate distribution, and dislocation density in the microstructure, responsible for the observed variability. These results provide an important first step towards understanding the nature of variability in additively manufactured parts.
Measurement of talent in team handball: the questionable use of motor and physical tests.
Lidor, Ronnie; Falk, Bareket; Arnon, Michal; Cohen, Yoram; Segal, Gil; Lander, Yael
2005-05-01
Testing for selection is one of the most important fundamentals in any multistep sport program. In most ball games, coaches assess motor, physical, and technical skills on a regular basis in early stages of talent identification and development. However, selection processes are complex, are often unstructured, and lack clear-cut theory-based knowledge. For example, little is known about the relevance of the testing process to the final selection of the young prospects. The purpose of this study was to identify motor, physical, and skill variables that could provide coaches with relevant information in the selection process of young team handball players. In total, 405 players (12-13 years of age at the beginning of the testing period) were recommended by their coaches to undergo a battery of tests prior to selection to the Junior National Team. This number is the sum of all players participating in the different phases of the program. However, not all of them took part in each testing phase. The battery included physical measurements (height and weight), a 4 x 10-m running test, explosive power tests (medicine ball throw and standing long jump), speed tests (a 20-m sprint from a standing position and a 20-m sprint with a flying start), and a slalom dribbling test. Comparisons between those players eventually selected to the Junior National Team 2-3 years later with those not selected demonstrated that only the skill test served as a good indicator. In all other measurements, a wide overlap could be seen between the results of the selected and nonselected players. It is suggested that future studies investigate the usefulness of tests reflecting more specific physical ability and cognitive characteristics.
Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul
2015-04-01
Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on the final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speeds and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured via a high-speed rotary tablet press using design of experiments (DoE). During each experiment, the volume of powder in the forced feeder was also measured. Analysis of the DoE revealed that paddle speeds are of minor importance for tablet weight but significantly affect the volume of powder inside the feeder in the case of powders with excellent flowability (DCP). The opposite effect of paddle speed was observed for fairly flowing powders (MCC). Tableting speed played a role in weight and weight variability, whereas changing fill depth exclusively influenced tablet weight. The DoE approach allowed prediction of the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessment of the probability of exceeding the acceptable response limits if factor settings were varied around their optimum. This multi-dimensional combination and interaction of input variables leading to response criteria with acceptable probability reflected the design space.
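The Monte Carlo step described above — estimating the probability of staying within acceptable response limits at a given factor setting — can be sketched with a hypothetical response model (the coefficients and specification limits below are invented, not the paper's fitted DoE model):

```python
import random

def tablet_weight(fill_depth, paddle_speed, rng):
    # hypothetical fitted response model (illustrative only, not the paper's)
    return 100.0 + 8.0 * fill_depth + 0.5 * paddle_speed + rng.gauss(0.0, 1.0)

def prob_within_spec(fill_depth, paddle_speed, lo=245.0, hi=255.0,
                     n=10000, seed=0):
    # Monte Carlo estimate of the probability that tablet weight stays in spec
    rng = random.Random(seed)
    hits = sum(lo <= tablet_weight(fill_depth, paddle_speed, rng) <= hi
               for _ in range(n))
    return hits / n
```

Settings whose estimated probability meets the acceptance criterion would belong to the design space; shifting a factor away from the optimum lowers that probability.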
Variable Selection for Nonparametric Quantile Regression via Smoothing Spline ANOVA
Lin, Chen-Yen; Bondell, Howard; Zhang, Hao Helen; Zou, Hui
2014-01-01
Quantile regression provides a more thorough view of the effect of covariates on a response. Nonparametric quantile regression has become a viable alternative that avoids restrictive parametric assumptions. The problem of variable selection for quantile regression is challenging, since important variables can influence various quantiles in different ways. We tackle the problem via regularization in the context of smoothing spline ANOVA models. The proposed sparse nonparametric quantile regression (SNQR) can identify important variables and provide flexible estimates for quantiles. Our numerical study suggests the promising performance of the new procedure in variable selection and function estimation. Supplementary materials for this article are available online. PMID:24554792
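The building blocks of penalized quantile regression can be illustrated with a minimal sketch: the pinball ("check") loss, whose minimizer is a quantile, and the soft-thresholding operator that L1 regularization uses to zero out variables. This is a simplified analogue, not the smoothing spline ANOVA machinery of SNQR:

```python
def pinball(residual, tau):
    # quantile ("check") loss: asymmetric absolute error
    return tau * residual if residual >= 0 else (tau - 1.0) * residual

def soft_threshold(w, lam):
    # L1 shrinkage operator that zeroes out small coefficients
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def empirical_quantile(ys, tau):
    # the tau-quantile minimizes the summed pinball loss over candidates
    return min(ys, key=lambda c: sum(pinball(y - c, tau) for y in ys))
```

Minimizing the pinball loss at tau = 0.5 recovers the median; at tau = 0.9, roughly the 90th percentile.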
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
Face-Likeness and Image Variability Drive Responses in Human Face-Selective Ventral Regions
Davidenko, Nicolas; Remus, David A.; Grill-Spector, Kalanit
2012-01-01
The human ventral visual stream contains regions that respond selectively to faces over objects. However, it is unknown whether responses in these regions correlate with how face-like stimuli appear. Here, we use parameterized face silhouettes to manipulate the perceived face-likeness of stimuli and measure responses in face- and object-selective ventral regions with high-resolution fMRI. We first use “concentric hyper-sphere” (CH) sampling to define face silhouettes at different distances from the prototype face. Observers rate the stimuli as progressively more face-like the closer they are to the prototype face. Paradoxically, responses in both face- and object-selective regions decrease as face-likeness ratings increase. Because CH sampling produces blocks of stimuli whose variability is negatively correlated with face-likeness, this effect may be driven by more adaptation during high face-likeness (low-variability) blocks than during low face-likeness (high-variability) blocks. We tested this hypothesis by measuring responses to matched-variability (MV) blocks of stimuli with similar face-likeness ratings as with CH sampling. Critically, under MV sampling, we find a face-specific effect: responses in face-selective regions gradually increase with perceived face-likeness, but responses in object-selective regions are unchanged. Our studies provide novel evidence that face-selective responses correlate with the perceived face-likeness of stimuli, but this effect is revealed only when image variability is controlled across conditions. Finally, our data show that variability is a powerful factor that drives responses across the ventral stream. This indicates that controlling variability across conditions should be a critical tool in future neuroimaging studies of face and object representation. PMID:21823208
Zhang, Xia; Hu, Changqin
2017-09-08
Penicillins are typical of complex ionic samples, which likely contain a large number of degradation-related impurities (DRIs) with different polarities and charge properties. It is often a challenge to develop selective and robust high performance liquid chromatography (HPLC) methods for the efficient separation of all DRIs. In this study, an analytical quality by design (AQbD) approach was proposed for stability-indicating method development for cloxacillin. The structures, retention and UV characteristics of penicillins and their impurities were summarized and served as useful prior knowledge. Through quality risk assessment and a screening design, 3 critical process parameters (CPPs) were defined, including 2 mixture variables (MVs) and 1 process variable (PV). A combined mixture-process variable (MPV) design was conducted to evaluate the 3 CPPs simultaneously, and response surface methodology (RSM) was used to obtain the optimal experimental parameters. A dual gradient elution was performed to change buffer pH, mobile-phase type and strength simultaneously. The design spaces (DSs) were evaluated using Monte Carlo simulation to estimate their probability of meeting the specifications of the critical quality attributes (CQAs). A Plackett-Burman design was performed to test the robustness around the working points and to decide the normal operating ranges (NORs). Finally, validation was performed following International Conference on Harmonisation (ICH) guidelines. To our knowledge, this is the first study to use an MPV design and dual gradient elution to develop HPLC methods and improve separations for complex ionic samples. Copyright © 2017 Elsevier B.V. All rights reserved.
Serotonin Decreases the Gain of Visual Responses in Awake Macaque V1.
Seillier, Lenka; Lorenz, Corinna; Kawaguchi, Katsuhisa; Ott, Torben; Nieder, Andreas; Pourriahi, Paria; Nienborg, Hendrikje
2017-11-22
Serotonin, an important neuromodulator in the brain, is implicated in affective and cognitive functions. However, its role even for basic cortical processes is controversial. For example, in the mammalian primary visual cortex (V1), heterogeneous serotonergic modulation has been observed in anesthetized animals. Here, we combined extracellular single-unit recordings with iontophoresis in awake animals. We examined the role of serotonin on well-defined tuning properties (orientation, spatial frequency, contrast, and size) in V1 of two male macaque monkeys. We find that in the awake macaque the modulatory effect of serotonin is surprisingly uniform: it causes a mainly multiplicative decrease of the visual responses and a slight increase in the stimulus-selective response latency. Moreover, serotonin neither systematically changes the selectivity or variability of the response, nor the interneuronal correlation unexplained by the stimulus ("noise-correlation"). The modulation by serotonin has qualitative similarities with that for a decrease in stimulus contrast, but differs quantitatively from decreasing contrast. It can be captured by a simple additive change to a threshold-linear spiking nonlinearity. Together, our results show that serotonin is well suited to control the response gain of neurons in V1 depending on the animal's behavioral or motivational context, complementing other known state-dependent gain-control mechanisms. SIGNIFICANCE STATEMENT Serotonin is an important neuromodulator in the brain and a major target for drugs used to treat psychiatric disorders. Nonetheless, surprisingly little is known about how it shapes information processing in sensory areas. Here we examined the serotonergic modulation of visual processing in the primary visual cortex of awake behaving macaque monkeys. We found that serotonin mainly decreased the gain of the visual responses, without systematically changing their selectivity, variability, or covariability.
This identifies a simple computational function of serotonin for state-dependent sensory processing, depending on the animal's affective or motivational state. Copyright © 2017 Seillier, Lorenz et al.
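A threshold-linear rate model of the kind invoked above can be sketched in a few lines (the parameter values are arbitrary illustrations, not fitted to the recordings):

```python
def rate(drive, gain=1.0, offset=0.0):
    # threshold-linear (rectified) spiking nonlinearity:
    # output is linear in the drive above zero, silent below
    x = gain * drive + offset
    return x if x > 0.0 else 0.0
```

A reduced gain scales every suprathreshold response multiplicatively, while a negative additive offset shifts the input to the rectifier and never increases the response.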
The politics of the face-in-the-crowd.
Mills, Mark; Smith, Kevin B; Hibbing, John R; Dodd, Michael D
2014-06-01
Recent work indicates that the more conservative one is, the faster one is to fixate on negative stimuli, whereas the less conservative one is, the faster one is to fixate on positive stimuli. The present series of experiments used the face-in-the-crowd paradigm to examine whether variability in the efficiency with which positive and negative stimuli are detected underlies such speed differences. Participants searched for a discrepant facial expression (happy or angry) amid a varying number of neutral distractors (Experiments 1 and 4). A combination of response time and eye movement analyses indicated that variability in search efficiency explained speed differences for happy expressions, whereas variability in post-selectional processes explained speed differences for angry expressions. These results appear to be emotionally mediated as search performance did not vary with political temperament when displays were inverted (Experiment 2) or when controlled processing was required for successful task performance (Experiment 3). Taken together, the present results suggest political temperament is at least partially instantiated by attentional biases for emotional material. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Moreno-Martínez, Francisco Javier; Montoro, Pedro R.
2012-01-01
This work presents a new set of 360 high quality colour images belonging to 23 semantic subcategories. Two hundred and thirty-six Spanish speakers named the items and also provided data for seven relevant psycholinguistic variables: age of acquisition, familiarity, manipulability, name agreement, typicality and visual complexity. Furthermore, we also present lexical frequency data derived from Internet search hits. Apart from the high number of variables evaluated, each of which is known to affect stimulus processing, this new set presents important advantages over other similar image corpora: (a) this corpus presents a broad number of subcategories and images; for example, this will permit researchers to select stimuli of appropriate difficulty as required (e.g., to deal with problems derived from ceiling effects); (b) the use of coloured stimuli provides a more realistic, ecologically valid representation of real-life objects. In sum, this set of stimuli provides a useful tool for research on visual object- and word-processing, both in neurological patients and in healthy controls. PMID:22662166
Apparatus and method for microwave processing of materials
Johnson, Arvid C.; Lauf, Robert J.; Bible, Don W.; Markunas, Robert J.
1996-01-01
A variable frequency microwave heating apparatus (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34) for testing or other selected applications. The variable frequency heating apparatus (10) is used in the method of the present invention to monitor the resonant processing frequency within the furnace cavity (34) depending upon the material, including the state thereof, from which the workpiece (36) is fabricated. The variable frequency microwave heating apparatus (10) includes a microwave signal generator (12) and a high-power microwave amplifier (20) or a microwave voltage-controlled oscillator (14). A power supply (22) is provided for operation of the high-power microwave oscillator (14) or microwave amplifier (20). A directional coupler (24) is provided for detecting the direction and amplitude of signals incident upon and reflected from the microwave cavity (34). A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load (28).
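The monitoring idea — sweep the source and track the frequency at which reflected power is minimal — can be sketched as follows; the Lorentzian notch model and the numbers are hypothetical illustrations, not taken from the patent:

```python
def reflected_fraction(freq_hz, f_res=2.45e9, bw=2.0e6):
    # hypothetical Lorentzian notch: reflected power dips at the cavity resonance
    x = (freq_hz - f_res) / (bw / 2.0)
    return 1.0 - 1.0 / (1.0 + x * x)

def track_resonance(f_lo, f_hi, steps=2001):
    # sweep the source frequency and keep the point of least reflected power,
    # as a directional coupler plus reflected-power meter would indicate
    fs = [f_lo + i * (f_hi - f_lo) / (steps - 1) for i in range(steps)]
    return min(fs, key=reflected_fraction)
```

As the workpiece heats and its dielectric properties change, the notch moves, and repeating the sweep re-centers the source on the new resonance.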
Personal Familiarity Influences the Processing of Upright and Inverted Faces in Infants
Balas, Benjamin J.; Nelson, Charles A.; Westerlund, Alissa; Vogel-Farley, Vanessa; Riggins, Tracy; Kuefner, Dana
2009-01-01
Infant face processing becomes more selective during the first year of life as a function of varying experience with distinct face categories defined by species, race, and age. Given that any individual face belongs to many such categories (e.g., a young Caucasian man's face), we asked how the neural selectivity for one aspect of facial appearance was affected by category membership along another dimension of variability. Six-month-old infants were shown upright and inverted pictures of either their own mother or a stranger while event-related potentials (ERPs) were recorded. We found that the amplitude of the P400 (a face-sensitive ERP component) was only sensitive to the orientation of the mother's face, suggesting that “tuning” of the neural response to faces is realized jointly across multiple dimensions of face appearance. PMID:20204154
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for the data processing procedure selection, SAR correlation function implementation, multiple array processors utilization, cornerturning, variable reference length azimuth processing, and range migration handling. The Interim Digital Processor (IDP) originally implemented for handling Seasat SAR data has been adapted for the SIR-B, and offers a resolution of 100 km using a processing procedure based on the Fast Fourier Transformation fast correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-5.
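The fast-correlation principle behind such FFT-based processors — multiply spectra and transform back, instead of sliding a reference through the time domain — can be sketched with a plain O(n²) DFT (a real processor would use an FFT):

```python
import cmath

def dft(xs, inverse=False):
    # O(n^2) discrete Fourier transform, for illustration only
    n = len(xs)
    s = 1 if inverse else -1
    out = [sum(x * cmath.exp(s * 2j * cmath.pi * k * t / n)
               for t, x in enumerate(xs)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def fast_correlate(a, b):
    # circular cross-correlation computed in the frequency domain:
    # the "fast correlation" principle used in FFT-based SAR correlators
    A, B = dft(a), dft(b)
    return [v.real for v in dft([x.conjugate() * y for x, y in zip(A, B)],
                                inverse=True)]
```

Correlating a reference pulse against a delayed copy of itself puts the correlation peak at the delay, which is how the echo timing is recovered.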
Optimization of porthole die geometrical variables by Taguchi method
NASA Astrophysics Data System (ADS)
Gagliardi, F.; Ciancio, C.; Ambrogio, G.; Filice, L.
2017-10-01
Porthole die extrusion is commonly used to manufacture hollow profiles made of lightweight alloys for numerous industrial applications. The reliability of extruded parts is strongly affected by the quality of the longitudinal and transversal seam welds. Accordingly, the die geometry must be designed correctly and the process parameters must be selected properly to achieve the desired product quality. In this study, numerical 3D simulations have been created and run to investigate the role of various geometrical variables on punch load and maximum pressure inside the welding chamber. These are important outputs to take into account, affecting, respectively, the necessary capacity of the extrusion press and the quality of the weld lines. The Taguchi technique has been used to reduce the number of numerical simulations necessary for considering the influence of twelve different geometric variables. Moreover, analysis of variance (ANOVA) has been implemented to analyze the effect of each input parameter on the two responses individually. Then, the methodology has been utilized to determine the optimal process configuration, optimizing the two investigated process outputs individually. Finally, the responses at the optimized parameter settings have been verified through finite element simulations, which closely matched the predicted values. This study shows the feasibility of the Taguchi technique for performance prediction and optimization, and therefore for improving the design of a porthole extrusion process.
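The main-effects arithmetic underlying a Taguchi analysis can be sketched with the smallest orthogonal array, L4 (three two-level factors in four runs); the response values in the test are synthetic, not the study's simulation results:

```python
# L4 orthogonal array: three two-level factors covered in only four runs
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(array, responses):
    # Taguchi-style main effect: mean response at level 1 minus mean at level 0,
    # computed independently for each factor thanks to the array's orthogonality
    k = len(array[0])
    effects = []
    for j in range(k):
        hi = [r for run, r in zip(array, responses) if run[j] == 1]
        lo = [r for run, r in zip(array, responses) if run[j] == 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

For twelve factors a larger array (e.g. L16 or L27) plays the same role: far fewer runs than a full factorial, while the per-factor effects remain separable.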
Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.
Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo
2014-05-19
Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
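The structure of such a recurrent virtual sensor — an internal state carried between time steps, fed by an easily measured signal — can be sketched with an untrained, scalar Elman-style unit (the weights are arbitrary; a real sensor would be trained and calibrated as the paper describes):

```python
import math

def elman_step(x, h, w_in, w_rec, w_out):
    # one step of a layer-recurrent (Elman-style) unit with scalar input/output
    h_new = math.tanh(w_in * x + w_rec * h)
    return w_out * h_new, h_new

def virtual_sensor(power_trace, w_in=0.5, w_rec=0.3, w_out=1.0):
    # map a spindle-power time series to an estimate of a slowly evolving
    # quantity such as wheel wear (illustration only, untrained toy weights)
    h, estimates = 0.0, []
    for p in power_trace:
        y, h = elman_step(p, h, w_in, w_rec, w_out)
        estimates.append(y)
    return estimates
```

Because the hidden state feeds back on itself, a constant power input produces a smoothly rising estimate rather than an instantaneous one — the memory that makes recurrent networks suitable for tracking wear.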
VARIABLE SELECTION FOR REGRESSION MODELS WITH MISSING DATA
Garcia, Ramon I.; Ibrahim, Joseph G.; Zhu, Hongtu
2009-01-01
We consider the variable selection problem for a class of statistical models with missing data, including missing covariate and/or response data. We investigate the smoothly clipped absolute deviation penalty (SCAD) and adaptive LASSO and propose a unified model selection and estimation procedure for use in the presence of missing data. We develop a computationally attractive algorithm for simultaneously optimizing the penalized likelihood function and estimating the penalty parameters. Particularly, we propose to use a model selection criterion, called the ICQ statistic, for selecting the penalty parameters. We show that the variable selection procedure based on ICQ automatically and consistently selects the important covariates and leads to efficient estimates with oracle properties. The methodology is very general and can be applied to numerous situations involving missing data, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Simulations are given to demonstrate the methodology and examine the finite sample performance of the variable selection procedures. Melanoma data from a cancer clinical trial is presented to illustrate the proposed methodology. PMID:20336190
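The adaptive LASSO penalty at the heart of the procedure can be sketched with a plain coordinate descent solver for complete data (the ICQ-based penalty tuning and the missing-data machinery are beyond this sketch; the per-coefficient weights would typically come from an initial consistent estimate):

```python
def adaptive_lasso(X, y, lam, weights, n_iter=200):
    # coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j weights[j]*|b_j|
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with feature j left out
            r = [y[i] - sum(b[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            t = lam * weights[j]
            if rho > t:
                b[j] = (rho - t) / z
            elif rho < -t:
                b[j] = (rho + t) / z
            else:
                b[j] = 0.0
    return b
```

A large weight on a coefficient (reflecting a small initial estimate) drives it exactly to zero, while lightly weighted true signals are barely shrunk — the mechanism behind the oracle property mentioned in the abstract.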
Use of Gene Expression Programming in regionalization of flow duration curve
NASA Astrophysics Data System (ADS)
Hashmi, Muhammad Z.; Shamseldin, Asaad Y.
2014-06-01
In this paper, a recently introduced artificial intelligence technique known as Gene Expression Programming (GEP) has been employed to perform symbolic regression for developing a parametric scheme of flow duration curve (FDC) regionalization, relating selected FDC characteristics to catchment characteristics. Stream flow records of selected catchments located in the Auckland Region of New Zealand were used. FDCs of the selected catchments were normalised by dividing the ordinates by their median value. Input for the symbolic regression analysis using GEP was (a) selected characteristics of the normalised FDCs; and (b) 26 catchment characteristics related to climate, morphology, soil properties and land cover properties obtained using the observed data and GIS analysis. Our study showed that applying this artificial intelligence technique expedites the selection of the most relevant independent variables from a large candidate set, because these are automatically selected through the GEP process. Values of the FDC characteristics obtained from the developed relationships have high correlations with the observed values.
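GEP's distinctive feature is its linear gene encoding, decoded breadth-first (Karva notation) into an expression tree; a minimal decoder and evaluator can be sketched as follows (the function set and the example genes are toys, not the study's evolved expressions):

```python
ARITY = {'+': 2, '-': 2, '*': 2}   # function set; 'a' and 'b' are terminals

def karva_decode(gene):
    # breadth-first (Karva) decoding of a GEP gene string into an expression tree;
    # symbols left over after the tree is complete form the gene's unused tail
    root = [gene[0], []]
    frontier, i = [root], 1
    while frontier:
        nxt = []
        for node in frontier:
            for _ in range(ARITY.get(node[0], 0)):
                child = [gene[i], []]
                i += 1
                node[1].append(child)
                nxt.append(child)
        frontier = nxt
    return root

def evaluate(node, env):
    # recursively evaluate the decoded tree under a variable assignment
    sym, kids = node
    if sym == '+':
        return evaluate(kids[0], env) + evaluate(kids[1], env)
    if sym == '-':
        return evaluate(kids[0], env) - evaluate(kids[1], env)
    if sym == '*':
        return evaluate(kids[0], env) * evaluate(kids[1], env)
    return env[sym]                # terminal: look up the variable's value
```

Because the genome is a flat string, mutation and crossover stay simple while every gene still decodes to a valid expression — the property that lets GEP explore candidate formulas (and, implicitly, variable subsets) efficiently.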
NASA Astrophysics Data System (ADS)
Sheykhizadeh, Saheleh; Naseri, Abdolhossein
2018-04-01
Variable selection plays a key role in classification and multivariate calibration. Variable selection methods aim to choose a set of variables, from a large pool of available predictors, relevant to the estimation of analyte concentrations or to achieving better classification results. Many variable selection techniques have now been introduced, among which those based on swarm intelligence optimization methodologies have received particular attention over the last few decades, since they are mainly inspired by nature. In this work, a simple new variable selection algorithm is proposed based on the invasive weed optimization (IWO) concept. IWO is a bio-inspired metaheuristic mimicking weeds' ecological behavior in colonizing and finding an appropriate place for growth and reproduction; it has been shown to be very adaptive and robust to environmental changes. In this paper, the first application of IWO, as a very simple and powerful method, to variable selection is reported using different experimental datasets, including FTIR and NIR data, for classification and multivariate calibration tasks. Accordingly, invasive weed optimization - linear discriminant analysis (IWO-LDA) and invasive weed optimization - partial least squares (IWO-PLS) are introduced for multivariate classification and calibration, respectively.
The Assessment of Climatological Impacts on Agricultural Production and Residential Energy Demand
NASA Astrophysics Data System (ADS)
Cooter, Ellen Jean
The assessment of climatological impacts on selected economic activities is presented as a multi-step, interdisciplinary problem. The assessment process addressed explicitly in this report focuses on (1) user identification, (2) direct impact model selection, (3) methodological development, (4) product development and (5) product communication. Two user groups of major economic importance were selected for study: agriculture and gas utilities. The broad agricultural sector is further defined as U.S.A. corn production. The general category of utilities is narrowed to Oklahoma residential gas heating demand. The CERES physiological growth model was selected as the process model for corn production. The statistical analysis for corn production suggests that (1) although this is a statistically complex model, it can yield useful impact information, (2) as a result of output distributional biases, traditional statistical techniques are not adequate analytical tools, (3) the model yield distribution as a whole is probably non-Gaussian, particularly in the tails, and (4) there appear to be identifiable weekly patterns of forecasted yields throughout the growing season. Agricultural quantities developed include point yield impact estimates and distributional characteristics, geographic corn weather distributions, return period estimates, decision-making criteria (confidence limits) and time series of indices. These products were communicated in economic terms through the use of a Bayesian decision example and an econometric model. The NBSLD energy load model was selected to represent residential gas heating consumption. A cursory statistical analysis suggests relationships among weather variables across the Oklahoma study sites. No linear trend was detected in "technology-free" modeled energy demand or input weather variables corresponding to that contained in observed state-level residential energy use.
It is suggested that this trend is largely the result of non-weather factors such as population and home usage patterns rather than regional climate change. Year-to-year changes in modeled residential heating demand on the order of 10^6 Btu per household were determined and later related to state-level components of the Oklahoma economy. Products developed include the definition of regional forecast areas, likelihood estimates of extreme seasonal conditions and an energy/climate index. This information is communicated in economic terms through an input/output model which is used to estimate changes in Gross State Product and Household Income attributable to weather variability.
Separation of plastics: The importance of kinetics knowledge in the evaluation of froth flotation.
Censori, Matteo; La Marca, Floriana; Carvalho, M Teresa
2016-08-01
Froth flotation is a promising technique to separate polymers of similar density. The present paper shows the need for performing kinetic tests to evaluate and optimize the process. In the experimental study, batch flotation tests were performed on samples of ABS and PS. The floated product was collected at increasing flotation times. Two variables were selected for modification: the concentration of the depressor (tannic acid) and the airflow rate. The former is associated with the chemistry of the process and the latter with the transport of particles. It was shown that, like mineral flotation, plastics flotation can adequately be modeled as a first-order rate process. The results of the kinetic tests showed that the kinetic parameters change with the operating conditions. When the depressing action is weak and the airflow rate is low, the kinetics are fast. Otherwise, the kinetics are slow and a variable percentage of the plastics never floats. Concomitantly, the time at which the maximum difference in the recovery of the plastics in the floated product is attained changes with the operating conditions. The prediction of flotation results, process evaluation and comparisons should be done considering the process kinetics. Copyright © 2016 Elsevier Ltd. All rights reserved.
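The first-order rate model referenced above can be written R(t) = R_inf(1 - e^(-kt)), where R_inf is the ultimate recovery and k the flotation rate constant. The sketch below uses invented parameter values, not the paper's data; it shows that when R_inf is known the model linearises as ln(1 - R/R_inf) = -kt, so k falls out of a one-line least-squares slope through the origin.

```python
import math

# Synthetic batch-flotation data from R(t) = R_inf * (1 - exp(-k t)).
K_TRUE, R_INF = 0.35, 0.90          # assumed rate constant (1/min), ultimate recovery
times = [0.5, 1.0, 2.0, 4.0, 8.0, 12.0]
recov = [R_INF * (1 - math.exp(-K_TRUE * t)) for t in times]

# Linearise: ln(1 - R/R_inf) = -k t, then estimate k as minus the
# least-squares slope through the origin.
ys = [math.log(1 - r / R_INF) for r in recov]
k_est = -sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)
```

With noise-free data the recovered rate constant matches the true value exactly; with real timed-concentrate data the same slope gives the least-squares estimate.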
GRCop-84 Rolling Parameter Study
NASA Technical Reports Server (NTRS)
Loewenthal, William S.; Ellis, David L.
2008-01-01
This report is a section of the final report on the GRCop-84 task of the Constellation Program and incorporates the results obtained between October 2000 and September 2005, when the program ended. NASA Glenn Research Center (GRC) has developed a new copper alloy, GRCop-84 (Cu-8 at.% Cr-4 at.% Nb), for rocket engine main combustion chamber components that will improve rocket engine life and performance. This work examines the sensitivity of GRCop-84 mechanical properties to rolling parameters as a means to better define rolling parameters for commercial warm rolling. Experimental variables studied were total reduction, rolling temperature, rolling speed, and post-rolling annealing heat treatment. The responses were tensile properties measured at 23 and 500 C, hardness, and creep at three stress-temperature combinations. Understanding these relationships will better define boundaries for a robust commercial warm rolling process. The four processing parameters were varied within limits consistent with typical commercial production processes. Testing revealed that the rolling-related variables selected have a minimal influence on tensile, hardness, and creep properties over the range of values tested. Annealing had the expected result of lowering room temperature hardness and strength while increasing room temperature elongation, with 600 C (1112 F) having the most effect. These results indicate that the process conditions to warm roll plate and sheet for these variables can range over wide levels without negatively impacting mechanical properties. Incorporating broader process ranges in future rolling campaigns should lower commercial rolling costs through increased productivity.
Kumar, Raushan; Xavier, Ka Martin; Lekshmi, Manjusha; Dhanabalan, Vignaesh; Thachil, Madonna T; Balange, Amjad K; Gudipati, Venkateshwarlu
2018-04-01
Functional extruded snacks were prepared using paste shrimp powder (Acetes spp.), which is rich in protein. The process variables required for the preparation of extruded snacks were optimized using response surface methodology. Extrusion temperature (130-144 °C), level of Acetes powder (100-200 g kg^-1) and feed moisture (140-200 g kg^-1) were selected as design variables, and expansion ratio, porosity, hardness, crispness and thiobarbituric acid reactive substance value were taken as the response variables. Extrusion temperature significantly influenced all the response variables, while Acetes inclusion influenced all variables except porosity. Feed moisture content showed a significant quadratic effect on all responses and an interactive effect on expansion ratio and hardness. Shrimp powder incorporation increased the protein and mineral content of the final product. The extruded snack made with the combination of extrusion temperature 144.59 °C, feed moisture 178.5 g kg^-1 and Acetes inclusion level 146.7 g kg^-1 was found to be the best based on sensory evaluation. The study suggests that use of Acetes species for the development of extruded snacks will serve as a means of utilizing Acetes as well as providing a rich source of protein for human consumption, which would otherwise remain unexploited as a by-catch. © 2017 Society of Chemical Industry.
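Response surface methodology fits a second-order polynomial in the design variables and then optimizes over it. The single-factor sketch below is a simplification (invented coefficients, extrusion temperature only, whereas the study used three factors): it recovers a quadratic response surface by solving the normal equations with a small Gauss-Jordan routine.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical one-factor response surface: response y versus extrusion
# temperature T, centred at 137 °C; the "true" coefficients are invented.
temps = [130.0, 134.0, 138.0, 142.0, 144.0]
xs = [t - 137.0 for t in temps]
ys = [10.0 - 0.3 * x - 0.05 * x * x for x in xs]

# Least-squares fit of y = a + b*x + c*x^2 via the normal equations X'X beta = X'y.
X = [[1.0, x, x * x] for x in xs]
XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
Xty = [sum(X[i][r] * ys[i] for i in range(len(X))) for r in range(3)]
a, b_, c_ = solve(XtX, Xty)
```

Centring the temperature keeps the normal equations well conditioned; with noise-free quadratic data the three coefficients are recovered exactly.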
NASA Astrophysics Data System (ADS)
Nikolić, Vlastimir; Petković, Dalibor; Lazov, Lyubomir; Milovančević, Miloš
2016-07-01
Water-jet assisted underwater laser cutting has shown some advantages, as it produces much less turbulence, gas bubbles and aerosols, resulting in a gentler process. However, this process has relatively low efficiency due to different losses in water, so it is important to determine which parameters matter most. This investigation analyzed the forecasting of water-jet assisted underwater laser cutting parameters based on different input parameters. The method of ANFIS (adaptive neuro-fuzzy inference system) was applied to the data in order to select the most influential factors for forecasting the water-jet assisted underwater laser cutting parameters. Three inputs were considered: laser power, cutting speed and water-jet speed. The ANFIS process for variable selection was implemented in order to detect the predominant factors affecting the forecasting of the water-jet assisted underwater laser cutting parameters. According to the results, laser power and cutting speed form the most influential combination for the prediction of water-jet assisted underwater laser cutting parameters. The best prediction was observed for the bottom kerf-width (R2 = 0.9653). The worst prediction was observed for dross area per unit length (R2 = 0.6804). According to the results, a greater improvement in estimation accuracy can be achieved by removing the unnecessary parameter.
Selecting clinical quality indicators for laboratory medicine.
Barth, Julian H
2012-05-01
Quality in laboratory medicine is often described as doing the right test at the right time for the right person. Laboratory processes currently operate under the oversight of an accreditation body which gives confidence that the process is good. However, there are aspects of quality that are not measured by these processes. These are largely focused on ensuring that the most clinically appropriate test is performed and interpreted correctly. Clinical quality indicators were selected through a two-phase process. Firstly, a series of focus groups of clinical scientists was held with the aim of developing a list of quality indicators. These were subsequently ranked in order by an expert panel of primary and secondary care physicians. The 10 top indicators included the communication of critical results, comprehensive education for all users and adequate quality assurance for point-of-care testing. Laboratories should ensure their tests are used to national standards, that they have clinical utility, are calibrated to national standards and have long-term stability for chronic disease management. Laboratories should have error logs and demonstrate evidence of measures introduced to reduce the chances of similar future errors. Laboratories should make a formal scientific evaluation of analytical quality. This paper describes the process of selection of quality indicators for laboratory medicine that have been validated sequentially by deliverers and users of the service. They now need to be converted into measurable variables related to outcome and validated in practice.
Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander
2016-01-01
Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.
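As a point of contrast with the bio-inspired searches, the LASSO baseline mentioned above selects variables by shrinking coefficients under an L1 penalty until the irrelevant ones become exactly zero. Below is a minimal coordinate-descent sketch on synthetic data (not the PLGA feature set); the data, penalty strength, and dimensions are all invented for illustration.

```python
import math
import random

random.seed(1)

# Synthetic data: y depends only on x0 and x2; x1 and x3 are noise features.
n, p = 200, 4
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [3.0 * row[0] - 2.0 * row[2] + random.gauss(0, 0.1) for row in X]

def soft_threshold(z, g):
    # Proximal operator of the L1 penalty.
    return math.copysign(max(abs(z) - g, 0.0), z)

def lasso_cd(X, y, lam, sweeps=100):
    # Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1.
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
    for _ in range(sweeps):
        for j in range(p):
            # Correlation of feature j with the partial residual (excluding j).
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j))
                for i in range(n))
            beta[j] = soft_threshold(rho, lam * n) / col_sq[j]
    return beta

beta = lasso_cd(X, y, lam=0.5)
```

The two informative coefficients survive (shrunk toward zero by the penalty), while the noise features are set exactly to zero, which is the variable-selection behaviour the comparison relies on.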
Temporary disaster debris management site identification using binomial cluster analysis and GIS.
Grzeda, Stanislaw; Mazzuchi, Thomas A; Sarkani, Shahram
2014-04-01
An essential component of disaster planning and preparation is the identification and selection of temporary disaster debris management sites (DMS). However, since DMS identification is a complex process involving numerous variable constraints, many regional, county and municipal jurisdictions initiate this process during the post-disaster response and recovery phases, typically a period of severely stressed resources. Hence, a pre-disaster approach in identifying the most likely sites based on the number of locational constraints would significantly contribute to disaster debris management planning. As disasters vary in their nature, location and extent, an effective approach must facilitate scalability, flexibility and adaptability to variable local requirements, while also being generalisable to other regions and geographical extents. This study demonstrates the use of binomial cluster analysis in potential DMS identification in a case study conducted in Hamilton County, Indiana. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.
Chapin, Thomas
2015-01-01
Hand-collected grab samples are the most common water sampling method but using grab sampling to monitor temporally variable aquatic processes such as diel metal cycling or episodic events is rarely feasible or cost-effective. Currently available automated samplers are a proven, widely used technology and typically collect up to 24 samples during a deployment. However, these automated samplers are not well suited for long-term sampling in remote areas or in freezing conditions. There is a critical need for low-cost, long-duration, high-frequency water sampling technology to improve our understanding of the geochemical response to temporally variable processes. This review article will examine recent developments in automated water sampler technology and utilize selected field data from acid mine drainage studies to illustrate the utility of high-frequency, long-duration water sampling.
Yoo, Jin Eun
2018-01-01
A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods and have focused on a select few indicators instead of utilizing the hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the machine learning technique selected in this study, takes advantage of both LASSO and ridge in terms of variable selection and multicollinearity, respectively. A logistic regression model was also employed to predict TIMSS 2011 Korean 4th graders' mathematics achievement. Ten-fold cross-validation with mean squared error was employed to determine the elastic net regularization parameter. Among the 162 TIMSS variables explored, 12 student and 5 teacher variables were selected in the elastic net model, and the prediction accuracy, sensitivity, and specificity were 76.06%, 70.23%, and 80.34%, respectively. This study showed that the elastic net method can be successfully applied to large-scale educational data by selecting a subset of variables with reasonable prediction accuracy and finding new variables to predict students' mathematics achievement. Variables newly found via machine learning can shed light on existing theories from a totally different perspective, which in turn can prompt the creation of new theories or the refinement of existing ones. This study also examined the current scale development convention from a machine learning perspective.
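The regularization-parameter tuning described above (ten-fold cross-validation minimizing mean squared error) can be sketched as follows. To stay dependency-free, the toy model is a one-parameter ridge regression rather than a full elastic net (whose coordinate-descent fit is beyond this sketch), and the data and penalty grid are invented, but the cross-validation mechanics are the same.

```python
import random

random.seed(2)

# Synthetic data: y = 2x + noise; model is a through-origin ridge fit
# with closed form b = Sxy / (Sxx + lam).
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]

def ridge_fit(xs, ys, lam):
    return sum(a * b for a, b in zip(xs, ys)) / (sum(a * a for a in xs) + lam)

def cv_mse(lam, k=10):
    # k-fold cross-validation: hold out each fold, fit on the rest,
    # accumulate squared prediction error on the held-out points.
    total = 0.0
    for start in range(k):
        test = set(range(start, n, k))
        xtr = [x[i] for i in range(n) if i not in test]
        ytr = [y[i] for i in range(n) if i not in test]
        b = ridge_fit(xtr, ytr, lam)
        total += sum((y[i] - b * x[i]) ** 2 for i in test)
    return total / n

grid = [0.0, 1.0, 10.0, 100.0, 1000.0]
scores = {lam: cv_mse(lam) for lam in grid}
best_lam = min(scores, key=scores.get)
```

On this lightly regularized problem the cross-validated error rises sharply for heavy penalties, so the selected lambda sits at the small end of the grid; with correlated, high-dimensional TIMSS-like data the minimum would typically move to a nonzero penalty.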
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Patterson, Fiona; Lopes, Safiatu; Harding, Stephen; Vaux, Emma; Berkin, Liz; Black, David
2017-02-01
The aim of this study was to follow up a sample of physicians who began core medical training (CMT) in 2009. This paper examines the long-term validity of CMT and GP selection methods in predicting performance in the Membership of Royal College of Physicians (MRCP(UK)) examinations. We performed a longitudinal study, examining the extent to which the GP and CMT selection methods (T1) predict performance in the MRCP(UK) examinations (T2). A total of 2,569 applicants from 2008-09 who completed CMT and GP selection methods were included in the study. Looking at MRCP(UK) part 1, part 2 written and PACES scores, both CMT and GP selection methods show evidence of predictive validity for the outcome variables, and hierarchical regressions show the GP methods add significant value to the CMT selection process. CMT selection methods predict performance in important outcomes and have good evidence of validity; the GP methods may have an additional role alongside the CMT selection methods. © Royal College of Physicians 2017. All rights reserved.
Baraki, Zeray; Girmay, Fiseha; Kidanu, Kalayou; Gerensea, Hadgu; Gezehgne, Dejen; Teklay, Hafte
2017-01-01
The nursing process is a systematic method of planning, delivering, and evaluating individualized care for clients in any state of health or illness. Many countries have adopted the nursing process as the standard of care to guide nursing practice; however, the problem is its implementation. If nurses fail to carry out the necessary nursing care through the nursing process, the effectiveness of patient progress may be compromised, which can lead to preventable adverse events. This study aimed to assess the implementation of the nursing process and associated factors among nurses working in selected hospitals of the central and northwest zones of Tigray, Ethiopia, 2015. A cross-sectional observational study design was utilized. Data were collected from 200 participants using a structured self-administered questionnaire which was contextually adapted from standardized, reliable and validated measures. The data were entered using Epi Info version 7 and analyzed using SPSS version 20 software. Data were summarized and described using descriptive statistics, and multivariate logistic regression was used to determine the relationships between the independent and dependent variables. Finally, data were presented in tables and graphs showing the frequencies and percentages of the different variables. Seventy (35%) of the participants had implemented the nursing process. Several factors showed significant associations. Nurses who worked in a stressful workplace atmosphere were 99% less likely to implement the nursing process than nurses who worked in a very good atmosphere. Nurses with an educational level of BSc degree were 6.972 times more likely to implement the nursing process than those who were diploma qualified. Nurses with no consistent material supply to use the nursing process were 95.1% less likely to implement the nursing process than nurses with a consistent material supply. The majority of the participants were not implementing the nursing process properly.
Many factors hinder them from applying the nursing process, of which level of education, knowledge of nurses, skill of nurses, atmosphere of the workplace, shortage of material supply to use the nursing process and high patient load were statistically significant in the association test.
Exhaustive Search for Sparse Variable Selection in Linear Regression
NASA Astrophysics Data System (ADS)
Igarashi, Yasuhiko; Takenaka, Hikaru; Nakanishi-Ohno, Yoshinori; Uemura, Makoto; Ikeda, Shiro; Okada, Masato
2018-04-01
We propose a K-sparse exhaustive search (ES-K) method and a K-sparse approximate exhaustive search (AES-K) method for selecting variables in linear regression. With these methods, K-sparse combinations of variables are tested exhaustively, assuming that the optimal combination of explanatory variables is K-sparse. By collecting the results of exhaustively computing ES-K, various approximate methods for selecting sparse variables can be summarized as a density of states. With this density of states, we can compare different methods for selecting sparse variables, such as relaxation and sampling. For large problems, where the combinatorial explosion of explanatory variables is crucial, the AES-K method enables the density of states to be effectively reconstructed by using the replica-exchange Monte Carlo method and the multiple histogram method. Applying the ES-K and AES-K methods to type Ia supernova data, we confirmed the conventional understanding in astronomy when an appropriate K is given beforehand. However, we found it difficult to determine K from the data. Using virtual measurement and analysis, we argue that this is caused by data shortage.
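The ES-K idea is simple to state: enumerate every K-sparse combination of explanatory variables, fit each by least squares, and keep the combination with the smallest residual. Here is a minimal sketch on invented, noise-free synthetic data with K = 2, so the per-subset fit reduces to a hand-solved 2x2 normal-equation system.

```python
import itertools
import random

random.seed(3)

# Noise-free synthetic data: y depends on exactly K = 2 of the p candidates.
n, p, K = 50, 6, 2
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [2.0 * row[1] - 1.0 * row[3] for row in X]

def rss(cols):
    # Least squares on the two chosen columns via the 2x2 normal equations.
    a, b = cols
    saa = sum(X[i][a] ** 2 for i in range(n))
    sbb = sum(X[i][b] ** 2 for i in range(n))
    sab = sum(X[i][a] * X[i][b] for i in range(n))
    say = sum(X[i][a] * y[i] for i in range(n))
    sby = sum(X[i][b] * y[i] for i in range(n))
    det = saa * sbb - sab * sab
    ca = (sbb * say - sab * sby) / det
    cb = (saa * sby - sab * say) / det
    return sum((y[i] - ca * X[i][a] - cb * X[i][b]) ** 2 for i in range(n))

# ES-K: test every K-sparse combination exhaustively and keep the best.
best = min(itertools.combinations(range(p), K), key=rss)
```

For real problem sizes the C(p, K) enumeration explodes, which is exactly the regime where the paper's AES-K variant, with replica-exchange Monte Carlo sampling, takes over.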
Lafuente, Victoria; Herrera, Luis J; Pérez, María del Mar; Val, Jesús; Negueruela, Ignacio
2015-08-15
In this work, near infrared spectroscopy (NIR) and an acoustic measure (AWETA), two non-destructive methods, were applied to Prunus persica fruit 'Calrico' (n = 260) to predict Magness-Taylor (MT) firmness. Separate and combined use of these measures was evaluated and compared using partial least squares (PLS) and least squares support vector machine (LS-SVM) regression methods. Also, a mutual-information-based variable selection method, seeking to find the most significant variables to produce optimal accuracy of the regression models, was applied to a joint set of variables (NIR wavelengths and the AWETA measure). The newly proposed combined NIR-AWETA model gave good values of the determination coefficient (R2) for the PLS and LS-SVM methods (0.77 and 0.78, respectively), improving the reliability of MT firmness prediction in comparison with separate NIR and AWETA predictions. The three variables selected by the variable selection method (the AWETA measure plus NIR wavelengths 675 and 697 nm) achieved R2 values of 0.76 and 0.77 for PLS and LS-SVM, respectively. These results indicated that the proposed mutual-information-based variable selection algorithm is a powerful tool for the selection of the most relevant variables. © 2014 Society of Chemical Industry.
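A histogram-based mutual information estimate is one simple way to rank candidate variables against the target, in the spirit of the selection step described above. The sketch below uses invented "firmness" data and an equal-width binning estimator; real NIR work would use the actual spectra and a more careful estimator, so treat this purely as an illustration of the ranking mechanics.

```python
import math
import random

random.seed(4)

n, BINS = 2000, 8

def mutual_info(xs, ys):
    # Equal-width-binned estimate of mutual information (in nats).
    m = len(xs)
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / BINS or 1.0
        return [min(int((t - lo) / w), BINS - 1) for t in v]
    bx, by = binned(xs), binned(ys)
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    total = 0.0
    for (a, b), c in pxy.items():
        # MI = sum p(a,b) * log( p(a,b) / (p(a) p(b)) ).
        total += (c / m) * math.log(c * m / (px[a] * py[b]))
    return total

firmness = [random.gauss(50, 10) for _ in range(n)]
relevant = [f + random.gauss(0, 2) for f in firmness]   # informative "wavelength"
noise = [random.gauss(0, 1) for _ in range(n)]          # uninformative variable

selected = max([("relevant", relevant), ("noise", noise)],
               key=lambda kv: mutual_info(kv[1], firmness))[0]
```

The informative variable scores well above the small positive bias that finite binning gives the independent one, so the ranking picks it out cleanly.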
Mukerjee, Shaibal; Smith, Luther A; Johnson, Mary M; Neas, Lucas M; Stallings, Casson A
2009-08-01
Passive ambient air sampling for nitrogen dioxide (NO(2)) and volatile organic compounds (VOCs) was conducted at 25 school and two compliance sites in Detroit and Dearborn, Michigan, USA during the summer of 2005. Geographic Information System (GIS) data were calculated at each of 116 schools. The 25 selected schools were monitored to assess and model intra-urban gradients of air pollutants to evaluate impact of traffic and urban emissions on pollutant levels. Schools were chosen to be statistically representative of urban land use variables such as distance to major roadways, traffic intensity around the schools, distance to nearest point sources, population density, and distance to nearest border crossing. Two approaches were used to investigate spatial variability. First, Kruskal-Wallis analyses and pairwise comparisons on data from the schools examined coarse spatial differences based on city section and distance from heavily trafficked roads. Secondly, spatial variation on a finer scale and as a response to multiple factors was evaluated through land use regression (LUR) models via multiple linear regression. For weeklong exposures, VOCs did not exhibit spatial variability by city section or distance from major roads; NO(2) was significantly elevated in a section dominated by traffic and industrial influence versus a residential section. Somewhat in contrast to coarse spatial analyses, LUR results revealed spatial gradients in NO(2) and selected VOCs across the area. The process used to select spatially representative sites for air sampling and the results of coarse and fine spatial variability of air pollutants provide insights that may guide future air quality studies in assessing intra-urban gradients.
Ruperto, Nicolino; Pistorio, Angela; Ravelli, Angelo; Rider, Lisa G; Pilkington, Clarissa; Oliveira, Sheila; Wulffraat, Nico; Espada, Graciela; Garay, Stella; Cuttica, Ruben; Hofer, Michael; Quartier, Pierre; Melo-Gomes, Jose; Reed, Ann M; Wierzbowska, Malgorzata; Feldman, Brian M; Harjacek, Miroslav; Huppertz, Hans-Iko; Nielsen, Susan; Flato, Berit; Lahdenne, Pekka; Michels, Harmut; Murray, Kevin J; Punaro, Lynn; Rennebohm, Robert; Russo, Ricardo; Balogh, Zsolt; Rooney, Madeleine; Pachman, Lauren M; Wallace, Carol; Hashkes, Philip; Lovell, Daniel J; Giannini, Edward H; Gare, Boel Andersson; Martini, Alberto
2010-11-01
To develop a provisional definition for the evaluation of response to therapy in juvenile dermatomyositis (DM) based on the Paediatric Rheumatology International Trials Organisation juvenile DM core set of variables. Thirty-seven experienced pediatric rheumatologists from 27 countries achieved consensus on 128 difficult patient profiles as clinically improved or not improved using a stepwise approach (patient rating, statistical analysis, definition selection). Using the physicians' consensus ratings as the "gold standard measure," chi-square, sensitivity, specificity, false-positive and false-negative rates, area under the receiver operating characteristic curve, and kappa agreement for candidate definitions of improvement were calculated. Definitions with kappa values >0.8 were multiplied by the face validity score to select the top definitions. The top definition of improvement was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 1 of the remaining worsening by more than 30%, which cannot be muscle strength. The second-highest scoring definition was at least 20% improvement from baseline in 3 of 6 core set variables with no more than 2 of the remaining worsening by more than 25%, which cannot be muscle strength (definition P1 selected by the International Myositis Assessment and Clinical Studies group). The third is similar to the second with the maximum amount of worsening set to 30%. This indicates convergent validity of the process. We propose a provisional data-driven definition of improvement that reflects well the consensus rating of experienced clinicians, which incorporates clinically meaningful change in core set variables in a composite end point for the evaluation of global response to therapy in juvenile DM. Copyright © 2010 by the American College of Rheumatology.
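The top-ranked definition can be expressed directly as a rule. The sketch below assumes, as a simplification, that all six core set variables are scored so that a percentage decrease means improvement; in the real core set the directions differ by variable (muscle strength, for instance, improves by increasing), so a faithful implementation would carry a per-variable direction flag.

```python
def improved(baseline, followup, strength_index=0):
    """Top-ranked provisional juvenile DM response definition:
    >= 20% improvement from baseline in at least 3 of 6 core set variables,
    with at most 1 of the remaining variables worsening by more than 30%,
    and that worsening variable must not be muscle strength.
    Simplifying assumption: lower score = better for every variable."""
    change = [(f - b) / b * 100.0 for b, f in zip(baseline, followup)]
    n_improved = sum(c <= -20.0 for c in change)       # >=20% decrease
    worsened = [i for i, c in enumerate(change) if c > 30.0]
    if n_improved < 3:
        return False
    if len(worsened) > 1:
        return False
    if strength_index in worsened:
        return False
    return True
```

For example, a profile with three variables improving by 25% and the rest stable counts as improved, while the same profile with a >30% worsening in the muscle-strength variable does not.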
Talent identification and selection in elite youth football: An Australian context.
O'Connor, Donna; Larkin, Paul; Mark Williams, A
2016-10-01
We identified the perceptual-cognitive skills and player history variables that differentiate players selected or not selected into an elite youth football (i.e., soccer) programme in Australia. A sample of elite youth male football players (n = 127) completed an adapted participation history questionnaire and video-based assessments of perceptual-cognitive skills. Following data collection, 22 of these players were offered a full-time scholarship for enrolment in an elite player residential programme. Participants selected for the scholarship programme recorded superior performance on the combined perceptual-cognitive skills tests compared with the non-selected group. There were no significant between-group differences on the player history variables. Stepwise discriminant function analysis identified four predictor variables that resulted in the best categorization of selected and non-selected players (i.e., recent match-play performance, region, number of other sports played, and combined perceptual-cognitive performance). The effectiveness of the discriminant function is reflected by 93.7% of players being correctly classified, with the four variables accounting for 57.6% of the variance. Our discriminating model for selection may provide a greater understanding of the factors that influence elite youth talent identification and selection.
Growth models of Rhizophora mangle L. seedlings in tropical southwestern Atlantic
NASA Astrophysics Data System (ADS)
Lima, Karen Otoni de Oliveira; Tognella, Mônica Maria Pereira; Cunha, Simone Rabelo; Andrade, Humber Agrelli de
2018-07-01
The present study selected and compared regression models that best describe the growth curves of Rhizophora mangle seedlings based on the height (cm) and time (days) variables. The Linear, Exponential, Power Law, Monomolecular, Logistic, and Gompertz models were fitted with non-linear formulations by minimizing the sum of squared residuals. The Akaike Information Criterion was used to select the best model for each seedling. After this selection, the coefficient of determination, which evaluates how well a model describes height variation as a function of time, was inspected. Differing from classic population ecology studies, the Monomolecular, three-parameter Logistic, and Gompertz models performed best in describing growth, suggesting they are the most adequate options for long-term studies. The different growth curves reflect the complexity of stem growth at the seedling stage for R. mangle. Analysis of the joint distribution of the parameters (initial height, growth rate, and asymptotic size) allowed study of the species' ecological attributes and observation of its intraspecific variability in each model. Our results provide a basis for interpreting the dynamics of seedling growth during establishment in a mature forest, as well as its regeneration processes.
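The fit-and-select step described above can be sketched with standard least-squares tools. This is a generic illustration, not the authors' code: the model forms and the AIC formula for least-squares fits are standard, but the parameter names, starting values, and data are ours.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate growth models: height as a function of time.
# a = asymptotic height, b = shape parameter, k = growth-rate parameter.
def logistic(t, a, b, k):
    return a / (1 + b * np.exp(-k * t))

def gompertz(t, a, b, k):
    return a * np.exp(-b * np.exp(-k * t))

def monomolecular(t, a, b, k):
    return a * (1 - b * np.exp(-k * t))

def aic(y, y_hat, n_params):
    # AIC for a least-squares fit: n * ln(RSS / n) + 2p
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

def best_model(t, h):
    """Fit each candidate to (time, height) data and return the AIC winner."""
    scores = {}
    for name, f in [("logistic", logistic), ("gompertz", gompertz),
                    ("monomolecular", monomolecular)]:
        params, _ = curve_fit(f, t, h, p0=[max(h), 1.0, 0.05], maxfev=10000)
        scores[name] = aic(h, f(t, *params), len(params))
    return min(scores, key=scores.get), scores
```

For each seedling, the model with the lowest AIC is retained; the R² of the winning fit can then be inspected, as the abstract describes.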
NASA Astrophysics Data System (ADS)
Kirby, Nicola Frances; Dempster, Edith Roslyn
2014-11-01
The Foundation Programme of the Centre for Science Access at the University of KwaZulu-Natal, South Africa provides access to tertiary science studies to educationally disadvantaged students who do not meet formal faculty entrance requirements. The low number of students proceeding from the programme into mainstream is of concern, particularly given the national imperative to increase participation and levels of performance in tertiary-level science. An attempt was made to understand foundation student performance in a campus of this university, with a view to identifying challenges and opportunities for remediation in the curriculum and processes of selection into the programme. A classification and regression tree analysis was used to identify which variables best described student performance. The explanatory variables included biographical and school-history data, performance in selection tests, and socio-economic data pertaining to their year in the programme. The results illustrate the prognostic reliability of the model used to select students, raise concerns about the inefficiency of school performance indicators as a measure of students' academic potential in the Foundation Programme, and highlight the importance of accommodation arrangements and financial support for student success in their access year.
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Abdelazim, Ahmed H.
2018-01-01
Three UV spectrophotometric methods, the first reported for these drugs, have been developed for the simultaneous determination of elbasvir and grazoprevir, two new FDA-approved drugs, in their combined pharmaceutical dosage form. These methods comprise the simultaneous equation method and partial least squares with and without a variable selection procedure (genetic algorithm). For the simultaneous equation method, the absorbance values at 369 nm (λmax of elbasvir) and 253 nm (λmax of grazoprevir) were selected to form the two simultaneous equations required for the mathematical processing and quantitative analysis of the studied drugs. Alternatively, partial least squares with and without the variable selection procedure (genetic algorithm) were applied to the spectral analysis, because the simultaneous inclusion of many wavelengths, rather than a single or dual wavelength, greatly increases the precision and predictive ability of the methods. The drugs were successfully assayed in their pharmaceutical formulation by the proposed methods, and the results were statistically compared with those of the manufacturing methods. Notably, there was no significant difference between the proposed methods and the manufacturing one with respect to the validation parameters.
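The simultaneous equation method reduces to solving a 2x2 linear system given by Beer's law at the two selected wavelengths. A minimal sketch, with invented absorptivity values used purely for illustration (the actual absorptivities must be determined from calibration standards):

```python
import numpy as np

def concentrations(A369, A253, E):
    """Solve the two simultaneous equations for the two concentrations.

    E is the 2x2 absorptivity matrix: rows = wavelengths (369 nm, 253 nm),
    columns = components (elbasvir, grazoprevir). Beer's law gives
    A(lambda) = sum_i E[lambda, i] * c_i, a linear system in c.
    """
    return np.linalg.solve(E, np.array([A369, A253]))

# Hypothetical absorptivities (1% 1 cm basis); real values come from calibration.
E = np.array([[0.045, 0.008],   # at 369 nm
              [0.012, 0.051]])  # at 253 nm
c = concentrations(0.50, 0.40, E)  # estimated concentrations of the two drugs
```

The mixture concentrations follow directly once the absorbances of the dosage-form solution are measured at the two wavelengths.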
Teaching undergraduates the process of peer review: learning by doing.
Rangachari, P K
2010-09-01
An active approach allowed undergraduates in Health Sciences to learn the dynamics of peer review at first hand. A four-stage process was used. In stage 1, students formed self-selected groups to explore specific issues. In stage 2, each group posted their interim reports online on a specific date. Each student read all the other reports and prepared detailed critiques. In stage 3, each report was discussed at sessions where the lead discussant was selected at random. All students participated in the peer review process. The written critiques were collated and returned to each group, who were asked to resubmit their revised reports within 2 wk. In stage 4, final submissions accompanied by rebuttals were graded. Student responses to a questionnaire were highly positive. They recognized the individual steps in the standard peer review, appreciated the complexities involved, and got a first-hand experience of some of the inherent variabilities involved. The absence of formal presentations and the opportunity to read each other's reports permitted them to study issues in greater depth.
Brambilla, Ada; Lo Scalzo, Roberto; Bertolo, Gianni; Torreggiani, Danila
2008-04-23
High-quality standards in blueberry juice can be achieved only by taking into account fruit compositional variability and its preservation along the processing chain. In this work, five highbush blueberry cultivars grown under the same environmental conditions were individually processed into juice after an initial blanching step, and the influence of the cultivar on juice phenolic content, distribution, and relative antioxidant activity, measured as scavenging capacity on the artificial free radical 2,2-diphenyl-1-picrylhydrazyl (DPPH*), was studied. A chromatographic protocol was developed to separate all the main phenolic compounds in berries. A total of 15 glycosylated anthocyanins, catechin, the galactoside, glucoside, and rhamnoside quercetin 3-derivatives, and the main benzoic and cinnamic acids were identified. The total content and relative distribution of anthocyanins, chlorogenic acid, and quercetin in each juice depended on the cultivar, and the total content was highly correlated (rxy = 0.97) with the antioxidant capacity. A selective protective effect of berry blanching during juice processing was observed on the more labile anthocyanin compounds.
Multicomponent Supramolecular Systems: Self-Organization in Coordination-Driven Self-Assembly
Zheng, Yao-Rong; Yang, Hai-Bo; Ghosh, Koushik; Zhao, Liang; Stang, Peter J.
2009-01-01
The self-organization of multicomponent supramolecular systems involving a variety of two-dimensional (2-D) polygons and three-dimensional (3-D) cages is presented. Nine self-organizing systems, SS1–SS9, have been studied. Each involves the simultaneous mixing of organoplatinum acceptors and pyridyl donors of varying geometry and their selective self-assembly into three to four specific 2-D (rectangular, triangular, and rhomboid) and/or 3-D (triangular prism and distorted and nondistorted trigonal bipyramidal) supramolecules. The formation of these discrete structures is characterized using NMR spectroscopy and electrospray ionization mass spectrometry (ESI-MS). In all cases, the self-organization process is directed by (1) the geometric information encoded within the molecular subunits and (2) a thermodynamically driven dynamic self-correction process. The result is the selective self-assembly of multiple discrete products from a randomly formed complex. The influence of key experimental variables (temperature and solvent) on the self-correction process and the fidelity of the resulting self-organization systems is also described. PMID:19544512
Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L
2016-10-01
We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures. © The Author(s) 2015.
NASA Technical Reports Server (NTRS)
Deland, R. J.
1974-01-01
The selection process for sector structure boundary crossings used in vorticity correlation studies is examined and the possible influence of ascending planetary scale waves is assessed. It is proposed that some of the observed correlations between geomagnetic and meteorological variations may be due to meteorological effects on the geomagnetic variables, rather than to a common solar origin.
ERIC Educational Resources Information Center
Kumar, R. Renjith
2017-01-01
The study of formal logic helps to improve the process of thinking and tries to refine and improve the thinking ability. The objectives of this study are to know the effectiveness of formal logic course and to determine the critical thinking variables that are effective and that are ineffective. A sample of 214 students is selected from all the…
ERIC Educational Resources Information Center
Starrett, Richard A.; And Others
The study examined relationships among factors influencing utilization of social services by Hispanic elderly, particularly factors categorized as: (1) informal, such as support groups of family, kin, neighbors, friends, and (2) quasi-formal, such as church groups. Thirty-seven variables and data selected from a 1979-80 15-state survey of 1,805…
[Selective attention and schizophrenia before the administration of neuroleptics].
Lussier, I; Stip, E
1999-01-01
In recent years, the presence of attention deficits has been recognized as a key feature of schizophrenia. Past studies reveal that selective attention, or the ability to select relevant information while simultaneously ignoring irrelevant information, is disturbed in schizophrenic patients. According to Treisman's feature-integration theory of selective attention, visual search for conjunctive targets (e.g., shape and color) requires controlled processes that necessitate attention and operate in a serial manner. Reaction times (RTs) are therefore a function of the number of stimuli in the display. When subjects are asked to detect the presence or absence of a target in an array of a variable number of stimuli, different performance patterns are expected for positive (target-present) and negative (target-absent) trials. For positive trials, a self-terminating search is triggered, that is, the search ends when the target is encountered. For negative trials, an exhaustive search strategy is displayed, where each stimulus is examined before the search can end; the RT slope is thus double that of the positive trials. To assess the integrity of these processes, thirteen drug-naive schizophrenic patients were compared with twenty normal control subjects. Neuroleptic-naive patients were chosen as subjects to avoid the potential influence of medication and chronicity-related factors on performance. The subjects had to indicate as quickly as possible the presence or absence of the target in a circular display containing a variable number of stimuli. Results showed that the patients can use self-terminating search strategies as well as normal control subjects can. However, their ability to trigger exhaustive search strategies is impaired. Not only were patients slower than controls, but their pattern of RT results was different.
These results argue in favor of an early impairment in selective attention capacities in schizophrenia, which appears before the introduction of neuroleptics. Attention performance was also shown to be associated with clinical symptoms.
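The doubled slope predicted for target-absent trials follows from counting the expected number of item comparisons in a serial search model; a minimal sketch of that arithmetic:

```python
# Expected number of item comparisons in a serial visual search, illustrating
# why the RT-vs-display-size slope on target-absent (exhaustive) trials is
# roughly double that on target-present (self-terminating) trials.
def expected_comparisons(n_items, target_present):
    if target_present:
        # Target equally likely at any position: mean of 1..n comparisons.
        return (n_items + 1) / 2
    # Every item must be checked before responding "absent".
    return n_items

# Present/absent comparison counts for three typical display sizes.
slopes = {n: (expected_comparisons(n, True), expected_comparisons(n, False))
          for n in (4, 8, 16)}
```

Adding one item to the display adds on average half a comparison on present trials but a full comparison on absent trials, which is the 2:1 slope ratio the abstract refers to.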
López Gialdi, A I; Moschen, S; Villán, C S; López Fernández, M P; Maldonado, S; Paniego, N; Heinz, R A; Fernandez, P
2016-09-01
Leaf senescence is a complex mechanism governed by multiple genetic and environmental variables that affect crop yields. It is the last stage in leaf development and is characterized by an active decline in photosynthetic rate, nutrient recycling, and cell death. The aim of this work was to identify contrasting sunflower inbred lines differing in leaf senescence and to deepen the study of this process in sunflower. Ten sunflower genotypes, previously selected by physiological analysis from 150 inbred genotypes, were evaluated under field conditions through physiological, cytological, and molecular analyses. The physiological measurements allowed the identification of two contrasting senescence inbred lines, R453 and B481-6, with an increase in yield in the senescence-delayed genotype. These findings were confirmed by cytological and molecular analysis using TUNEL, genomic DNA gel electrophoresis, flow sorting, and gene expression analysis by qPCR. These results allowed the selection of the two most promising contrasting genotypes, which enables future studies and the identification of new biomarkers associated with early senescence in sunflower. In addition, they allowed the tuning of cytological techniques for a non-model species and their integration with molecular variables. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pei, Ji; Wang, Wenjie; Yuan, Shouqi; Zhang, Jinfeng
2016-09-01
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies at 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of the impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% at 1.0Qd and 1.4Qd, respectively. A comparison of the inner flow between the original and optimized pumps illustrates the performance improvement. The optimization process can provide a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
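The sampling step of this workflow can be sketched as follows. This is a generic Latin hypercube implementation over the three design variables; the bounds are illustrative, not the paper's actual design space, and the paper uses the "optimal" LHS variant rather than this plain one:

```python
import numpy as np

def latin_hypercube(n, bounds, rng):
    """Draw n Latin hypercube samples over the given (low, high) bounds.

    Each variable's range is split into n equal strata; one uniform sample
    is drawn per stratum, then the strata are shuffled independently per
    variable, so every 1-D projection covers the range evenly.
    """
    d = len(bounds)
    samples = np.empty((n, d))
    edges = np.linspace(0.0, 1.0, n + 1)
    for j, (lo, hi) in enumerate(bounds):
        pts = rng.uniform(edges[:-1], edges[1:])  # one point per stratum
        rng.shuffle(pts)
        samples[:, j] = lo + pts * (hi - lo)
    return samples

rng = np.random.default_rng(42)
# Hypothetical bounds: b2 in mm, beta2 and phi in degrees.
designs = latin_hypercube(20, [(4.0, 8.0), (20.0, 35.0), (90.0, 130.0)], rng)
```

Each row of `designs` would then be evaluated by the CFD solver, the results used to fit a surrogate model, and the surrogate optimized (e.g. by particle swarm optimization) in place of further expensive simulations.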
NASA Astrophysics Data System (ADS)
Stopyra, Wojciech; Kurzac, Jarosław; Gruber, Konrad; Kurzynowski, Tomasz; Chlebus, Edward
2016-12-01
SLM technology allows production of fully functional objects from metal and ceramic powders, with true density of more than 99.9%. The quality of items manufactured by the SLM method is affected by more than 100 parameters, which can be divided into fixed and variable. Fixed parameters are those whose values should be defined before the process and maintained in an appropriate range during it, e.g. the chemical composition and morphology of the powder, the oxygen level in the working chamber, and the heating temperature of the substrate plate. In SLM technology, five parameters are variable, and an optimal set of them allows production of parts without defects (pores, cracks) at an acceptable speed. These parameters are: laser power, distance between points, exposure time, distance between lines, and layer thickness. To develop optimal parameters, thin-wall or single-track experiments are performed; selection of the best sets is narrowed to three parameters: laser power, exposure time, and distance between points. In this paper, the effect of laser power on the penetration depth and geometry of a scanned single track is shown. In the experiment, a titanium (grade 2) substrate plate was used and scanned by a fibre laser of 1064 nm wavelength. For each track, the width, height, and penetration depth of the laser beam were measured.
Gayán, E; Torres, J A; Alvarez, I; Condón, S
2014-02-01
The effect of bactericidal UV-C treatments (254 nm) on Escherichia coli O157:H7 suspended in apple juice increased synergistically with temperature up to a threshold value. The optimum UV-C treatment temperature was 55 °C, yielding a 58.9% synergistic lethal effect. Under these treatment conditions, the UV-heat (UV-H55 °C) lethal variability achieving 5-log reductions had a logistic distribution (α = 37.92, β = 1.10). Using this distribution, UV-H55 °C doses to achieve the required juice safety goal with 95, 99, and 99.9% confidence were 41.17, 42.97, and 46.00 J/ml, respectively, i.e., doses higher than the 37.58 J/ml estimated by a deterministic procedure. The public health impact of these results is that the larger UV-H55 °C dose required for achieving 5-log reductions with 95, 99, and 99.9% confidence would reduce the probability of hemolytic uremic syndrome in children by 76.3, 88.6, and 96.9%, respectively. This study illustrates the importance of including the effect of data variability when selecting operational parameters for novel and conventional preservation processes to achieve high food safety standards with the desired confidence level.
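The confidence-specific doses follow from the quantile function of the fitted logistic distribution. With the reported parameters (α = 37.92, β = 1.10) this reproduces the 95% and 99% doses to within rounding; the 99.9% figure is more sensitive to rounding of the fitted parameters, so the value below differs slightly from the abstract's 46.00 J/ml:

```python
import math

def dose_for_confidence(p, alpha=37.92, beta=1.10):
    """UV-H (55 deg C) dose achieving 5-log reductions with probability p.

    Uses the logistic distribution quantile function:
        x = alpha + beta * ln(p / (1 - p))
    with the location (alpha) and scale (beta) reported in the abstract.
    """
    return alpha + beta * math.log(p / (1 - p))

for p in (0.95, 0.99, 0.999):
    print(f"confidence {p:.3f}: {dose_for_confidence(p):.2f} J/ml")
```

For p = 0.5 the formula collapses to the location parameter, close to the 37.58 J/ml deterministic estimate, which is why the deterministic dose under-delivers at high confidence levels.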
Dynamics of excitatory and inhibitory networks are differentially altered by selective attention.
Snyder, Adam C; Morais, Michael J; Smith, Matthew A
2016-10-01
Inhibition and excitation form two fundamental modes of neuronal interaction, yet we understand relatively little about their distinct roles in service of perceptual and cognitive processes. We developed a multidimensional waveform analysis to identify fast-spiking (putative inhibitory) and regular-spiking (putative excitatory) neurons in vivo and used this method to analyze how attention affects these two cell classes in visual area V4 of the extrastriate cortex of rhesus macaques. We found that putative inhibitory neurons had both greater increases in firing rate and decreases in correlated variability with attention compared with putative excitatory neurons. Moreover, the time course of attention effects for putative inhibitory neurons more closely tracked the temporal statistics of target probability in our task. Finally, the session-to-session variability in a behavioral measure of attention covaried with the magnitude of this effect. Together, these results suggest that selective targeting of inhibitory neurons and networks is a critical mechanism for attentional modulation. Copyright © 2016 the American Physiological Society.
Langenheder, Silke; Bulling, Mark T; Prosser, James I; Solan, Martin
2012-07-30
Theory suggests that biodiversity can act as a buffer against disturbances and environmental variability via two major mechanisms: firstly, a stabilizing effect by decreasing the temporal variance in ecosystem functioning due to compensatory processes; and secondly, a performance-enhancing effect by raising the level of community response through the selection of better performing species. Empirical evidence for the stabilizing effect of biodiversity is readily available, whereas experimental confirmation of the performance-enhancing effect of biodiversity is sparse. Here, we test the effect of different environmental regimes (constant versus fluctuating temperature) on bacterial biodiversity-ecosystem functioning relations. We show that positive effects of species richness on ecosystem functioning are enhanced by stronger temperature fluctuations due to the increased performance of individual species. Our results provide evidence for the performance-enhancing effect and suggest that selection towards functionally dominant species is likely to benefit the maintenance of ecosystem functioning under more variable conditions.
Automatic measurement of images on astrometric plates
NASA Astrophysics Data System (ADS)
Ortiz Gil, A.; Lopez Garcia, A.; Martinez Gonzalez, J. M.; Yershov, V.
1994-04-01
We present some results on the process of automatic detection and measurement of objects in overlapped fields of astrometric plates. The main steps of our algorithm are the following: determination of the scale and tilt between the charge-coupled device (CCD) and microscope coordinate systems, and estimation of the signal-to-noise ratio in each field; image identification and improvement of its position and size; final image centering; and image selection and storage. Several parameters allow the use of variable criteria for image identification, characterization, and selection. Problems related to faint images and crowded fields will be approached by special techniques (morphological filters, histogram properties, and fitting models).
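One of the steps above, refining an image position, is commonly done with an intensity-weighted centroid. The abstract does not specify the authors' exact centering algorithm, so the following is only a generic sketch of that standard technique:

```python
import numpy as np

def centroid(window):
    """Intensity-weighted centroid of a small cutout around a detection.

    window: 2-D array of background-subtracted pixel intensities.
    Returns (row, col) of the center of light in pixel coordinates.
    """
    total = window.sum()
    ys, xs = np.indices(window.shape)
    return (ys * window).sum() / total, (xs * window).sum() / total
```

Iterating this over a window re-centered on each estimate converges quickly for well-exposed images; faint or blended images are where the special techniques mentioned in the abstract would take over.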
Influences on choice of surgery as a career: a study of consecutive cohorts in a medical school.
Sobral, Dejano T
2006-06-01
To examine the differential impact of person-based and programme-related features on graduates' dichotomous choice between surgical or non-surgical field specialties for first-year residency. A 10-year cohort study was conducted, following 578 students (55.4% male) who graduated from a university medical school during 1994-2003. Data were collected as follows: at the beginning of medical studies, on career preference and learning frame; during medical studies, on academic achievement, cross-year peer tutoring and selective clinical traineeship, and at graduation, on the first-year residency selected. Contingency and logistic regression analyses were performed, with graduates grouped by the dichotomous choice of surgery or not. Overall, 23% of graduates selected a first-year residency in surgery. Seven time-steady features related to this choice: male sex, high self-confidence, option of surgery at admission, active learning style, preference for surgery after Year 1, peer tutoring on clinical surgery, and selective training in clinical surgery. Logistic regression analysis, including all features, predicted 87.1% of the graduates' choices. Male sex, updated preference, peer tutoring and selective training were the most significant predictors in the pathway to choice. The relative roles of person-based and programme-related factors in the choice process are discussed. The findings suggest that for most students the choice of surgery derives from a temporal summation of influences that encompass entry and post-entry factors blended in variable patterns. It is likely that sex-unbiased peer tutoring and selective training supported the students' search process for personal compatibility with specialty-related domains of content and process.
Integrated control-structure design
NASA Technical Reports Server (NTRS)
Hunziker, K. Scott; Kraft, Raymond H.; Bossi, Joseph A.
1991-01-01
A new approach for the design and control of flexible space structures is described. The approach integrates the structure and controller design processes thereby providing extra opportunities for avoiding some of the disastrous effects of control-structures interaction and for discovering new, unexpected avenues of future structural design. A control formulation based on Boyd's implementation of Youla parameterization is employed. Control design parameters are coupled with structural design variables to produce a set of integrated-design variables which are selected through optimization-based methodology. A performance index reflecting spacecraft mission goals and constraints is formulated and optimized with respect to the integrated design variables. Initial studies have been concerned with achieving mission requirements with a lighter, more flexible space structure. Details of the formulation of the integrated-design approach are presented and results are given from a study involving the integrated redesign of a flexible geostationary platform.
Kerber, Kevin A; Hofer, Timothy P; Meurer, William J; Fendrick, A Mark; Morgenstern, Lewis B
2011-03-24
Clinical documentation systems, such as templates, have been associated with process utilization. The T-System emergency department (ED) templates are widely used, but analyses of the templates' association with care processes are lacking. This system is also unique because many different template options are available, so the selection of the template may itself be important. We aimed to describe the selection of templates in ED dizziness presentations and to investigate the association between items on templates and process utilization. Dizziness visits were captured from a population-based study of EDs that use documentation templates. Two relevant process outcomes were assessed: head computerized tomography (CT) scan and nystagmus examination. Multivariable logistic regression was used to estimate the probability of each outcome for patients who did or did not receive a relevant-item template. Propensity scores were also used to adjust for selection effects. The final cohort was 1,485 visits. Thirty-one different templates were used. Use of a template with a head CT item was associated with an increase in the adjusted probability of head CT utilization from 12.2% (95% CI, 8.9%-16.6%) to 29.3% (95% CI, 26.0%-32.9%). The adjusted probability of documentation of a nystagmus assessment increased from 12.0% (95% CI, 8.8%-16.2%) when a nystagmus-item template was not used to 95.0% (95% CI, 92.8%-96.6%) when a nystagmus-item template was used. The associations remained significant after propensity score adjustments. Providers use many different templates in dizziness presentations. Important differences exist among the various templates, and the template used likely impacts process utilization, even though its selection may be arbitrary. The optimal design and selection of templates may offer a feasible and effective opportunity to improve care delivery.
Sierra-de-Grado, Rosario; Pando, Valentín; Martínez-Zurimendi, Pablo; Peñalvo, Alejandro; Báscones, Esther; Moulia, Bruno
2008-06-01
Stem straightness is an important selection trait in Pinus pinaster Ait. breeding programs. Despite the stability of stem straightness rankings in provenance trials, the efficiency of breeding programs based on a quantitative index of stem straightness remains low. An alternative approach is to analyze biomechanical processes that underlie stem form. The rationale for this selection method is that genetic differences in the biomechanical processes that maintain stem straightness in young plants will continue to control stem form throughout the life of the tree. We analyzed the components contributing most to genetic differences among provenances in stem straightening processes by kinetic analysis and with a biomechanical model defining the interactions between the variables involved (Fournier's model). This framework was tested on three P. pinaster provenances differing in adult stem straightness and growth. One-year-old plants were tilted at 45 degrees, and individual stem positions and sizes were recorded weekly for 5 months. We measured the radial extension of reaction wood and the anatomical features of wood cells in serial stem cross sections. The integral effect of reaction wood on stem leaning was computed with Fournier's model. Responses driven by both primary and secondary growth were involved in the stem straightening process, but secondary-growth-driven responses accounted for most differences among provenances. Plants from the straight-stemmed provenance showed a greater capacity for stem straightening than plants from the sinuous provenances mainly because of (1) more efficient reaction wood (higher maturation strains) and (2) more pronounced secondary-growth-driven autotropic decurving. These two process-based traits are thus good candidates for early selection of stem straightness, but additional tests on a greater number of genotypes over a longer period are required.
NASA Astrophysics Data System (ADS)
Laute, Katja; Beylich, Achim A.
2014-05-01
The various rates at which mountain landscapes are changing today are a response to (i) the long-term landscape history, (ii) the contemporary imprint of current tectonic activity, (iii) climate variability and (iv) anthropogenic influences. Large areas of the Norwegian mountainous fjord landscapes are today occupied by hillslopes which reflect the influence of glacial inheritance from the Last Glacial Maximum (LGM) as a direct response to climate variability during the Pleistocene. This study deals with the quantitative analysis of denudational slope processes and relief development occurring in selected mountain valleys in western Norway from the end of the LGM until today. The main focus of this research is two-fold: (i) analyzing the complexity of hillslope development since the LGM in glacially formed valleys, and (ii) assessing the spatio-temporal variability, controls and rates of relevant denudational slope processes operating under Holocene to contemporary environmental conditions in western Norway. Five years of research (2009-2013) were conducted in two steep, parabolic-shaped and glacier-connected neighbouring drainage basins, Erdalen (79.5 km2) and Bødalen (60.1 km2), located on the western side of the Jostedalsbreen ice cap in western Norway. The process-based approach applied encompasses techniques and methods for geomorphic process analysis of past (e.g. DEM/GIS based spatial data analysis, geophysical investigation of slope storage) and contemporary (process monitoring in field, in-depth studies of relevant slope processes) denudational process activity. The main results of this work reveal that the Holocene to contemporary slope processes and the connected relief development during this time period are primarily controlled by the imprint of the glacial history of the study areas which can be seen as a direct response to climate variability. 
In addition, a significant influence of the Little Ice Age (LIA) glacier advance on hillslope morphometry was discovered: hillslope systems affected by the LIA advance show higher intensities of post-LIA denudation than non-affected systems. Distinct differences are found between single headwater systems of the Erdalen and Bødalen drainage basins (i) regarding the absolute and relative importance of different contemporary slope processes and (ii) with respect to the importance of sediment delivery from headwater systems for the sedimentary budgets of the entire drainage basin systems. The detected differences are seen as a direct consequence of the varying glacially inherited valley morphometries, which determine hillslope storage capacity, average process transport distances and the intensity of hillslope-channel coupling. A comparison with geomorphic process rates published for other cold-climate environments at high latitudes of the northern hemisphere shows that the general intensity of present-day denudational processes in Erdalen and Bødalen falls within a comparable range of magnitude. Even though denudational slope processes are leading to ongoing valley widening, the Holocene modification of the inherited glacial relief to date is regarded as minor. The results from the two selected typical drainage basins are considered representative on a regional scale for the mountainous fjord landscape of western Norway.
Schmutz, Joel A.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
Stochastic variation in survival rates is expected to decrease long-term population growth rates. This expectation influences both life-history theory and the conservation of species. From this expectation, Pfister (1998) developed the important life-history prediction that natural selection will have minimized variability in those elements of the annual life cycle (such as adult survival rate) with high sensitivity. This prediction has not been rigorously evaluated for bird populations, in part due to statistical difficulties related to variance estimation. I here overcome these difficulties, and in an analysis of 62 populations, I confirm her prediction by showing a negative relationship between the proportional sensitivity (elasticity) of adult survival and the proportional variance (CV) of adult survival. However, several species deviated significantly from this expectation, with more process variance in survival than predicted. For instance, projecting the magnitude of process variance in annual survival for American redstarts (Setophaga ruticilla) for 25 years resulted in a 44% decline in abundance without assuming any change in mean survival rate. For most of these species with high process variance, recent changes in harvest, habitats, or changes in climate patterns are the likely sources of environmental variability causing this variability in survival. Because of climate change, environmental variability is increasing on regional and global scales, which is expected to increase stochasticity in vital rates of species. Increased stochasticity in survival will depress population growth rates, and this result will magnify the conservation challenges we face.
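The depressive effect of process variance on long-term growth that motivates the prediction above can be illustrated with a toy projection: two survival sequences with the same arithmetic mean but different variance yield very different cumulative growth. The numbers below are illustrative, not the redstart data.

```python
def realized_decline(survival_rates):
    """Cumulative product of annual growth factors.

    Illustrative only: treats population growth each year as proportional
    to that year's adult survival rate (all other vital rates held fixed),
    normalized by the mean survival (0.8) so mean annual growth is 1.
    """
    pop = 1.0
    for s in survival_rates:
        pop *= s / 0.8
    return pop

# Constant survival at the mean (0.8): population is stable.
stable = realized_decline([0.8] * 24)

# Same arithmetic mean, but survival alternates 0.6 / 1.0 (process variance).
variable = realized_decline([0.6, 1.0] * 12)

# Variance alone drives a multi-year decline, with no change in mean survival.
print(round(stable, 3), round(variable, 3))
```

The variable sequence loses roughly half the population over 24 years despite an unchanged mean, which is the mechanism behind the projected redstart decline described in the abstract.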
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to selected circuits demonstrate the electrical impact of these variations and provide insight into variability aware compact models that capture the process dependent circuit behavior. These variability aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path level circuit performance with high accuracy and very little runtime overhead. The Interconnect Variability Characterization (IVC) framework maps lithography induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one dimensional repeater circuits patterned with both 90 nm single patterning and 32 nm double patterning technologies, in the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1-3%), due to self-compensating RC effects associated with dense layouts, and due to overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second-order polynomial model in focus, exposure, and misalignment, with 12 coefficients and residuals of less than 0.1 ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation.
The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography aware circuit analysis by extending it to cell-level applications, utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45 nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process parameter dependent response to be captured in a nine-term variability aware compact model based on Bossung fitting equations. For a two-input NAND gate, the variability aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up a new realm of circuit analysis and optimization and provides a foundation for path level variability analysis without the computational costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability aware compact models, the process dependent performance of a three-stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000. Path level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path.
By including these variability aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and in turn circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
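A nine-term Bossung-style compact model of the kind described above can be evaluated cheaply across a process window. The sketch below assumes a full bi-quadratic in focus and exposure (nine coefficients); the coefficient values are hypothetical, not taken from the dissertation.

```python
def bossung_delay(focus, exposure, coeffs):
    """Evaluate a nine-term bi-quadratic (Bossung-style) compact model.

    coeffs[i][j] multiplies focus**i * exposure**j for i, j in 0..2,
    giving the nine terms of the fit.
    """
    return sum(coeffs[i][j] * focus**i * exposure**j
               for i in range(3) for j in range(3))

# Hypothetical fitted coefficients for a gate delay in picoseconds.
C = [[12.0, 0.4, 0.02],   # constant, e, e^2
     [0.1,  0.05, 0.0],   # f, f*e, f*e^2
     [1.5,  0.0,  0.0]]   # f^2 terms (delay curves quadratically in focus)

# Worst-case delay over a sampled focus-exposure process window.
window = [(f / 10, e / 10) for f in range(-5, 6) for e in range(-5, 6)]
worst = max(bossung_delay(f, e, C) for f, e in window)
nominal = bossung_delay(0.0, 0.0, C)
print(nominal, round(worst, 4))
```

Because evaluation is a handful of multiply-adds, sweeping the model over a full process window is effectively free compared with joint process and circuit simulation, which is the speedup the abstract describes.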
Age-related decline in cognitive control: the role of fluid intelligence and processing speed
2014-01-01
Background Research on cognitive control suggests an age-related decline in proactive control abilities, whereas reactive control seems to remain intact. However, the reason for the differential age effect on cognitive control efficiency is still unclear. This study investigated the potential influence of fluid intelligence and processing speed on the selective age-related decline in proactive control. Eighty young and 80 healthy older adults were included in this study. Participants completed a working memory recognition paradigm assessing proactive and reactive cognitive control by manipulating the interference level across items. Results Repeated measures ANOVAs and hierarchical linear regressions indicated that the ability to appropriately use cognitive control processes during aging seems to be at least partially affected by the amount of available cognitive resources (assessed by fluid intelligence and processing speed abilities). Conclusions This study highlights the potential role of cognitive resources in the selective age-related decline in proactive control, suggesting the importance of a more exhaustive approach that considers these confounding variables during cognitive control assessment. PMID:24401034
NASA Astrophysics Data System (ADS)
Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.
2011-03-01
Atmospheric plasma spraying is used extensively to make Thermal Barrier Coatings of 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required qualities of coating. This problem can be solved by the development of empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and the execution of experiments by response surface methodology. This article highlights the use of response surface methodology by designing a five-factor five-level central composite rotatable design matrix with full replication for planning, conduction, execution, and development of empirical relationships. Further, response surface methodology was used for the selection of optimum process parameters to achieve desired quality of yttria-stabilized zirconia coating deposits.
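One common construction of a five-factor, five-level rotatable central composite design (a half-fraction factorial core plus axial and center points) can be sketched as follows. The run counts and the choice of six center points are illustrative, not the article's exact design matrix.

```python
from itertools import product

def central_composite(k=5, n_center=6):
    """Sketch of a rotatable central composite design for k factors.

    Factorial core: a 2^(k-1) half fraction (last factor aliased with the
    product of the others), axial points at +/-alpha, plus center runs.
    Rotatability sets alpha = (number of factorial points) ** 0.25.
    """
    n_f = 2 ** (k - 1)
    alpha = n_f ** 0.25          # = 2.0 for k = 5, giving five levels
    runs = []
    for levels in product((-1, 1), repeat=k - 1):
        e = 1
        for v in levels:
            e *= v               # defining relation: last factor = product
        runs.append(list(levels) + [e])
    for i in range(k):           # axial (star) points
        for sign in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = sign
            runs.append(pt)
    runs.extend([[0.0] * k for _ in range(n_center)])
    return runs, alpha

design, alpha = central_composite()
print(len(design), alpha)  # 16 factorial + 10 axial + 6 center = 32 runs
```

With alpha = 2, each factor takes exactly five coded levels (-2, -1, 0, 1, 2), matching the "five-factor five-level" description in the abstract.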
Scheperle, Rachel A; Abbas, Paul J
2015-01-01
The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.
Hydrological flow predictions in ungauged and sparsely gauged watersheds use regionalization or classification of hydrologically similar watersheds to develop empirical relationships between hydrologic, climatic, and watershed variables. The watershed classifications may be based...
A Variable-Selection Heuristic for K-Means Clustering.
ERIC Educational Resources Information Center
Brusco, Michael J.; Cradit, J. Dennis
2001-01-01
Presents a variable selection heuristic for nonhierarchical (K-means) cluster analysis based on the adjusted Rand index for measuring cluster recovery. Subjected the heuristic to Monte Carlo testing across more than 2,200 datasets. Results indicate that the heuristic is extremely effective at eliminating masking variables. (SLD)
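The adjusted Rand index the heuristic uses to score cluster recovery can be computed directly from a pair of labelings via the contingency-table formula; a minimal sketch:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two partitions of the same items.

    Returns 1.0 for identical partitions (up to relabeling) and is
    approximately 0.0 in expectation for independent random labelings.
    """
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in pairs.values())       # agreeing pairs
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)                  # chance agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 (pure relabeling)
print(adjusted_rand_index([0, 0, 1, 1], [0, 1, 0, 1]))  # below chance here
```

A variable-selection heuristic of the kind described would repeatedly recluster on candidate variable subsets and keep the subset maximizing this index against the reference partition.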
González-Rodríguez, M L; Barros, L B; Palma, J; González-Rodríguez, P L; Rabasco, A M
2007-06-07
In this paper, we have used statistical experimental design to investigate the effect of several factors in the coating process of lidocaine hydrochloride (LID) liposomes by a biodegradable polymer (chitosan, CH). These variables were the concentration of the CH coating solution, the dripping rate of this solution onto the liposome colloidal dispersion, the stirring rate, the time from liposome production to liposome coating and, finally, the amount of drug entrapped in the liposomes. The selected response variables were drug encapsulation efficiency (EE, %), coating efficiency (CE, %) and zeta potential. Liposomes were obtained by the thin-layer evaporation method. They were subsequently coated with CH according to the experimental plan provided by a fractional factorial (2(5-1)) screening matrix. We used spectroscopic methods to determine the zeta potential values. The EE (%) assay was carried out in dialysis bags, and the brilliant red probe was used to determine CE (%) owing to its property of forming molecular complexes with CH. The graphic analysis of the effects allowed the identification of the main formulation and technological factors through the analysis of the selected responses and permitted the determination of the proper level of these factors for response improvement. Moreover, the fractional design allowed quantification of the interactions between the factors, which will be considered in subsequent experiments. The results obtained indicated that the LID amount was the predominant factor increasing the drug entrapment capacity (EE). The CE (%) response was mainly affected by the concentration of the CH solution and the stirring rate, although all the interactions between the main factors were statistically significant.
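A 2^(5-1) screening matrix of the kind used above, together with the standard main-effect contrasts estimated from it, can be sketched as follows. The factor names follow the abstract; the responses are hypothetical.

```python
from itertools import product

FACTORS = ["CH_conc", "drip_rate", "stir_rate", "delay", "LID_amount"]

def half_fraction():
    """2^(5-1) screening design: 16 runs, fifth factor set equal to the
    product of the first four (defining relation I = ABCDE)."""
    return [list(base) + [base[0] * base[1] * base[2] * base[3]]
            for base in product((-1, 1), repeat=4)]

def main_effects(design, responses):
    """Average response at the high level minus at the low level, per
    factor -- the standard contrast estimate for a two-level design."""
    n = len(design)
    effects = {}
    for j, name in enumerate(FACTORS):
        hi = sum(r for row, r in zip(design, responses) if row[j] == 1)
        lo = sum(r for row, r in zip(design, responses) if row[j] == -1)
        effects[name] = (hi - lo) / (n / 2)
    return effects

design = half_fraction()
# Hypothetical EE(%) responses dominated by the drug amount (last factor).
responses = [50 + 10 * row[4] + 2 * row[0] for row in design]
print(main_effects(design, responses)["LID_amount"])  # 20.0
```

Because the design columns are mutually orthogonal, each main effect is estimated free of the others, which is what lets a 16-run screen rank five factors at once.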
Boykin, K.G.; Thompson, B.C.; Propeck-Gray, S.
2010-01-01
Despite widespread and long-standing efforts to model wildlife-habitat associations using remotely sensed and other spatially explicit data, there are relatively few evaluations of the performance of variables included in predictive models relative to actual features on the landscape. As part of the National Gap Analysis Program, we specifically examined physical site features at randomly selected sample locations in the Southwestern U.S. to assess degree of concordance with predicted features used in modeling vertebrate habitat distribution. Our analysis considered hypotheses about relative accuracy with respect to 30 vertebrate species selected to represent the spectrum of habitat generalist to specialist and categorization of site by relative degree of conservation emphasis accorded to the site. Overall comparison of 19 variables observed at 382 sample sites indicated ≥60% concordance for 12 variables. Directly measured or observed variables (slope, soil composition, rock outcrop) generally displayed high concordance, while variables that required judgments regarding descriptive categories (aspect, ecological system, landform) were less concordant. There were no differences detected in concordance among taxa groups, degree of specialization or generalization of selected taxa, or land conservation categorization of sample sites with respect to all sites. We found no support for the hypothesis that accuracy of habitat models is inversely related to degree of taxa specialization when model features for a habitat specialist could be more difficult to represent spatially. Likewise, we did not find support for the hypothesis that physical features will be predicted with higher accuracy on lands with greater dedication to biodiversity conservation than on other lands because of relative differences regarding available information. Accuracy generally was similar (>60%) to that observed for land cover mapping at the ecological system level. 
These patterns demonstrate resilience of gap analysis deductive model processes to the type of remotely sensed or interpreted data used in habitat feature predictions. © 2010 Elsevier B.V.
NASA Astrophysics Data System (ADS)
Dalla Rosa, Luciano; Ford, John K. B.; Trites, Andrew W.
2012-03-01
Humpback whales are common in feeding areas off British Columbia (BC) from spring to fall, and are widely distributed along the coast. Climate change and the increase in population size of North Pacific humpback whales may lead to increased anthropogenic impact and require a better understanding of species-habitat relationships. We investigated the distribution and relative abundance of humpback whales in relation to environmental variables and processes in BC waters using GIS and generalized additive models (GAMs). Six non-systematic cetacean surveys were conducted between 2004 and 2006. Whale encounter rates and environmental variables (oceanographic and remote sensing data) were recorded along transects divided into 4 km segments. A combined 3-year model and individual year models (two surveys each) were fitted with the mgcv R package. Model selection was based primarily on GCV scores. The explained deviance of our models ranged from 39% for the 3-year model to 76% for the 2004 model. Humpback whales were strongly associated with latitude and bathymetric features, including depth, slope and distance to the 100-m isobath. Distance to sea-surface-temperature fronts and salinity (climatology) were also consistently selected by the models. The shapes of smooth functions estimated for variables based on chlorophyll concentration or net primary productivity with different temporal resolutions and time lags were not consistent, even though higher numbers of whales seemed to be associated with higher primary productivity for some models. These and other selected explanatory variables may reflect areas of higher biological productivity that favor top predators. Our study confirms the presence of at least three important regions for humpback whales along the BC coast: south Dixon Entrance, middle and southwestern Hecate Strait and the area between La Perouse Bank and the southern edge of Juan de Fuca Canyon.
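In its simplest Gaussian form, the GCV criterion used above for model selection can be written GCV = n·RSS/(n − edf)², penalizing wiggliness through the effective degrees of freedom. The sketch below uses hypothetical candidate models and numbers, not the study's data.

```python
def gcv_score(rss, n, edf):
    """Generalized cross-validation score in its simplest Gaussian form:
    GCV = n * RSS / (n - edf)^2. Lower is better; extra effective degrees
    of freedom must reduce RSS enough to pay for themselves."""
    return n * rss / (n - edf) ** 2

# Hypothetical candidates fit to the same 400 transect segments:
# (residual sum of squares, n, effective degrees of freedom).
candidates = {"depth_only": (520.0, 400, 5.0),
              "depth_slope": (470.0, 400, 9.0),
              "saturated": (465.0, 400, 60.0)}
best = min(candidates, key=lambda m: gcv_score(*candidates[m]))
print(best)  # the saturated model barely improves RSS, so it loses
```

Here the mid-sized model wins: the saturated fit's tiny RSS gain cannot offset its 60 effective degrees of freedom, which is exactly the trade-off GCV-based selection automates.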
Analysis on electronic control unit of continuously variable transmission
NASA Astrophysics Data System (ADS)
Cao, Shuanggui
A continuously variable transmission (CVT) system can keep the engine operating along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful exhaust emissions. At the same time, a CVT makes changes in vehicle speed smoother and improves ride comfort. Although CVT technology has developed considerably, many shortcomings remain: the CVT systems of ordinary vehicles still suffer from low efficiency, poor starting performance, limited transmitted power, imperfect control and high cost. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission controlled by an electronic system can achieve automatic control of power transmission and fully exploit the characteristics of the engine, achieving optimal control of the powertrain so that the vehicle always operates close to its best condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and processes the signals sent by the sensors, such as throttle angle, brake signal, engine speed, speed signals of the transmission input and output shafts, manual shift signals, mode selection signals, gear position signal and speed ratio signal, and provides the corresponding processed values to the controller core.
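The input-module role described above (collect sensor signals, derive quantities such as the speed ratio, and hand them to the controller core) might be sketched, in simplified form with hypothetical signal names and units, as:

```python
from dataclasses import dataclass

@dataclass
class SensorInputs:
    """Raw signals the ECU input module collects (a subset; units hypothetical)."""
    throttle_angle: float   # degrees, 0-90
    brake_on: bool
    engine_rpm: float
    input_shaft_rpm: float
    output_shaft_rpm: float
    mode: str               # e.g. "economy" or "sport"

def condition(raw: SensorInputs) -> dict:
    """Input-module processing: derive the quantities the controller core
    consumes, such as the current transmission speed ratio and a
    normalized throttle position."""
    ratio = (raw.input_shaft_rpm / raw.output_shaft_rpm
             if raw.output_shaft_rpm > 0 else float("inf"))
    return {"throttle": max(0.0, min(1.0, raw.throttle_angle / 90.0)),
            "braking": raw.brake_on,
            "engine_rpm": raw.engine_rpm,
            "speed_ratio": ratio,
            "mode": raw.mode}

signals = condition(SensorInputs(45.0, False, 3000.0, 2400.0, 1200.0, "economy"))
print(signals["speed_ratio"], signals["throttle"])  # 2.0 0.5
```

A real ECU would add filtering, debouncing and fault detection per signal; the point here is only the separation between raw signal collection and the conditioned values the controller core acts on.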
Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite
NASA Astrophysics Data System (ADS)
Gupta, Anand; Soni, P. K.; Krishna, C. M.
2018-04-01
The machining of Al3030 based composites on Computer Numerical Control (CNC) high speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out research in this area, but very few have also taken step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high speed CNC milling machine over the speed range of 3000 to 5000 rpm. Step-over ratio, depth of cut and feed rate are the other input variables considered. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness is measured using a portable Coordinate Measuring Machine (CMM). Linear regression models are developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters to obtain the best machining outputs is the key contribution of this research work.
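A Taguchi L9 orthogonal array for four three-level factors (here: speed, step-over ratio, depth of cut, feed) can be constructed and checked for balance as follows; this is the standard construction, not the article's specific factor-level assignment.

```python
from itertools import product

def taguchi_l9():
    """L9 orthogonal array: 9 runs for four 3-level factors.

    Columns 3 and 4 are derived from the first two (mod-3 arithmetic)
    so that every column is balanced: each level appears exactly three
    times, and every pair of columns sees all nine level combinations.
    """
    rows = []
    for a, b in product(range(3), repeat=2):
        rows.append([a, b, (a + b) % 3, (a + 2 * b) % 3])
    return rows

L9 = taguchi_l9()
for col in range(4):
    counts = [sum(1 for row in L9 if row[col] == lvl) for lvl in range(3)]
    assert counts == [3, 3, 3]  # each level appears three times per column
print(len(L9))  # 9 runs instead of 3^4 = 81 for a full factorial
```

Mapping each coded level (0, 1, 2) to a physical setting (e.g. 3000, 4000, 5000 rpm for speed) yields a nine-run plan that still allows each factor's effect to be averaged free of the others.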
Neuropsychological and FDG-PET profiles in VGKC autoimmune limbic encephalitis.
Dodich, Alessandra; Cerami, Chiara; Iannaccone, Sandro; Marcone, Alessandra; Alongi, Pierpaolo; Crespi, Chiara; Canessa, Nicola; Andreetta, Francesca; Falini, Andrea; Cappa, Stefano F; Perani, Daniela
2016-10-01
Limbic encephalitis (LE) is characterized by an acute or subacute onset with memory impairments, confusional state and behavioral disorders, variably associated with seizures and dystonic movements. It is due to inflammatory processes that selectively affect the medial temporal lobe structures. Voltage-gated potassium channel (VGKC) autoantibodies are frequently observed. In this study, we assessed, at the individual level, FDG-PET brain metabolic dysfunctions and neuropsychological profiles in three autoimmune LE cases seropositive for neuronal VGKC-complex autoantibodies. LGI1 and CASPR2 potassium channel complex autoantibody subtyping was performed. Cognitive abilities were evaluated with an in-depth neuropsychological battery focused on episodic memory and affective recognition/processing skills. FDG-PET data were analyzed at the single-subject level according to a standardized and validated voxel-based Statistical Parametric Mapping (SPM) method. Patients showed severe episodic memory and fear recognition deficits at the neuropsychological assessment. No disorder of mentalizing processing was present. Variable patterns of increases and decreases of brain glucose metabolism emerged in the limbic structures, highlighting the pathology-driven selective vulnerability of this system. Additional involvement of cortical and subcortical regions, particularly in the sensorimotor system and basal ganglia, was found. Episodic memory and fear recognition deficits characterize the cognitive profile of LE. Commonalities and differences may occur in the brain metabolic patterns. Single-subject voxel-based analysis of FDG-PET imaging could be useful in the early detection of the metabolic correlates of cognitive and non-cognitive deficits characterizing the LE condition. Copyright © 2016 Elsevier Inc. All rights reserved.
Zeinab, Jalambadani; Gholamreza, Garmaroudi; Mehdi, Yaseri; Mahmood, Tavousi; Korush, Jafarian
2017-09-21
The Trans-Theoretical Model (TTM) and the Theory of Planned Behaviour (TPB) may be promising models for understanding and predicting reductions in fast food consumption. The aim of this study was to examine the applicability of the TTM, and the additional predictive role of subjective norms and perceived behavioural control, in predicting reduced fast food consumption among obese Iranian adolescent girls. A cross-sectional study was conducted in twelve randomly selected schools in Sabzevar, Iran, from 2015 to 2017. Four hundred eighty-five randomly selected students consented to participate in the study. Hierarchical regression models, computed using SPSS version 22, were used to assess the role of important variables that can influence reduced fast food consumption among students. Perceived behavioural control (r=0.58, P<0.001), subjective norms (r=0.51, P<0.001), self-efficacy (r=0.49, P<0.001), decisional balance pros (r=0.29, P<0.001), decisional balance cons (r=0.25, P<0.001) and stage of change (r=0.38, P<0.001) were significantly and positively correlated with the outcome, while experiential processes of change (r=0.08, P=0.135) and behavioural processes of change (r=0.09, P=0.145) were not significant. The study demonstrated that the TTM (except for the experiential and behavioural processes of change), together with perceived behavioural control and subjective norms, provides a useful model for reducing fast food consumption.
Parpinelli, R S; Ruvolo-Takasusuki, M C C; Toledo, V A A
2014-08-28
It is important to select the best honeybees that produce royal jelly to identify important molecular markers, such as major royal jelly proteins (MRJPs), and hence contribute to the development of new breeding strategies to improve the production of this substance. Therefore, this study focused on evaluating the genetic variability of mrjp3, mrjp5, and mrjp8 and associated allele maintenance during the process of selective reproduction in Africanized Apis mellifera individuals, which were chosen based on royal jelly production. The three loci analyzed were polymorphic, and produced a total of 16 alleles, with 4 new alleles, which were identified at mrjp5. The effective number of alleles at mrjp3 was 3.81. The observed average heterozygosity was 0.4905, indicating a high degree of genetic variability at these loci. The elevated FIS values for mrjp3, mrjp5, and mrjp8 (0.4188, 0.1077, and 0.2847, respectively) indicate an excess of homozygotes. The selection of Africanized A. mellifera queens for royal jelly production has maintained the mrjp3 C, D, and E alleles; although, the C allele occurred at a low frequency. The heterozygosity and FIS values show that the genetic variability of the queens is decreasing at the analyzed loci, generating an excess of homozygotes. However, the large numbers of drones that fertilize the queens make it difficult to develop homozygotes at mrjp3. Mating through instrumental insemination using the drones of known genotypes is required to increase the efficiency of Africanized A. mellifera-breeding programs, and to improve the quality and efficiency of commercial royal jelly apiaries.
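The summary statistics reported above (effective number of alleles, expected heterozygosity, FIS) follow directly from allele frequencies; a minimal sketch using a hypothetical three-allele locus, not the mrjp data:

```python
def effective_alleles(freqs):
    """Effective number of alleles: 1 / sum(p_i^2)."""
    return 1.0 / sum(p * p for p in freqs)

def expected_heterozygosity(freqs):
    """Hardy-Weinberg expected heterozygosity: 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def fis(h_obs, h_exp):
    """Inbreeding coefficient F_IS = 1 - Ho/He; positive values indicate
    an excess of homozygotes relative to Hardy-Weinberg expectations."""
    return 1.0 - h_obs / h_exp

# Hypothetical locus with three alleles at frequencies 0.5, 0.3, 0.2.
p = [0.5, 0.3, 0.2]
print(round(effective_alleles(p), 2))        # 2.63
print(round(expected_heterozygosity(p), 2))  # 0.62
print(round(fis(0.45, expected_heterozygosity(p)), 3))
```

With an observed heterozygosity of 0.45 against an expected 0.62, FIS is positive (about 0.27), the same kind of homozygote excess the abstract reports for the mrjp loci.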
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory-variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, with a strong relationship and equally important explanatory variables, the one-standard-error rule was more likely to choose the correct model than were the other tree-selection rules; it was also more likely to do so 1) with weaker relationships and equally important explanatory variables, and 2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
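The 'one standard error' tree-selection rule compared above can be sketched in a few lines: from cross-validated error estimates for trees of increasing size, pick the least complex tree whose error is within one standard error of the minimum. The CV numbers below are invented for illustration, not taken from the study.

```python
def one_se_rule(cv_errors, cv_ses, complexities):
    """Pick the least complex model whose CV error is within one
    standard error of the minimum CV error (the 'one standard error' rule)."""
    best = min(range(len(cv_errors)), key=lambda i: cv_errors[i])
    threshold = cv_errors[best] + cv_ses[best]
    eligible = [i for i in range(len(cv_errors)) if cv_errors[i] <= threshold]
    return min(eligible, key=lambda i: complexities[i])

# Hypothetical CV results for trees with 1..5 terminal nodes
errors = [10.0, 6.0, 4.1, 4.0, 4.2]
ses    = [0.5, 0.4, 0.3, 0.3, 0.3]
sizes  = [1, 2, 3, 4, 5]
print(sizes[one_se_rule(errors, ses, sizes)])
```

Here the minimum CV error belongs to the 4-node tree, but the 3-node tree falls within one standard error of it, so the smaller tree is chosen.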
Hamm, Jeremy M; Perry, Raymond P; Chipperfield, Judith G; Stewart, Tara L; Heckhausen, Jutta
2015-01-01
Developmental transitions are experienced throughout the life course and necessitate adapting to consequential and unpredictable changes that can undermine health. Our six-month study (n = 239) explored whether selective secondary control striving (motivation-focused thinking) protects against the elevated levels of stress and depressive symptoms increasingly common to young adults navigating the challenging school-to-university transition. Path analyses supplemented with tests of moderated mediation revealed that, for young adults who face challenging obstacles to goal attainment, selective secondary control indirectly reduced long-term stress-related physical and depressive symptoms through selective primary control and previously unexamined measures of discrete emotions. Results advance the existing literature by demonstrating that (a) selective secondary control has health benefits for vulnerable young adults and (b) these benefits are largely a consequence of the process variables proposed in Heckhausen et al.'s (2010) theory.
Sex ratio dynamics and fluctuating selection on personality.
Del Giudice, Marco
2012-03-21
Fluctuating selection has often been proposed as an explanation for the maintenance of genetic variation in personality. Here I argue that the temporal dynamics of the sex ratio can be a powerful source of fluctuating selection on personality traits, and develop this hypothesis with respect to humans. First, I review evidence that sex ratios modulate a wide range of social processes related to mating and parenting. Since most personality traits affect mating and parenting behavior, changes in the sex ratio can be expected to result in variable selection on personality. I then show that the temporal dynamics of the sex ratio are intrinsically characterized by fluctuations at various timescales. Finally, I address a number of evolutionary genetic challenges to the hypothesis. I conclude that the sex ratio hypothesis is a plausible explanation of genetic variation in human personality, and may be fruitfully applied to other species as well. Copyright © 2011 Elsevier Ltd. All rights reserved.
Novel Harmonic Regularization Approach for Variable Selection in Cox's Proportional Hazards Model
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, including the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-series methods. PMID:25506389
Covariate Selection for Multilevel Models with Missing Data
Marino, Miguel; Buxton, Orfeu M.; Li, Yi
2017-01-01
Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
NASA Technical Reports Server (NTRS)
Orcutt, John M.; Brenton, James C.
2016-01-01
An accurate database of meteorological data is essential for designing any aerospace vehicle and for preparing launch commit criteria. Meteorological instrumentation was recently placed on the three Lightning Protection System (LPS) towers at Kennedy Space Center (KSC) launch complex 39B (LC-39B), providing a unique meteorological dataset at the launch complex over an extensive altitude range. Data records of temperature, dew point, relative humidity, wind speed, and wind direction are produced at 40, 78, 116, and 139 m at each tower. The Marshall Space Flight Center Natural Environments Branch (EV44) received an archive that consists of one-minute averaged measurements for the period of record of January 2011 - April 2015. However, before the received database could be used, EV44 needed to remove any erroneous data through a comprehensive quality control (QC) process. The QC process applied to the LPS towers' meteorological data is similar to other QC processes developed by EV44, which were used in the creation of meteorological databases for other towers at KSC; it has been modified specifically for use with the LPS tower database. The QC process first includes a check of each individual sensor, removing any unrealistic data and checking the temporal consistency of each variable. Next, data from all three sensors at each height are checked against each other, checked against climatology, and checked for sensors that erroneously report a constant value. Then, a vertical consistency check of each variable at each tower is completed. Last, the upwind sensor at each level is selected to minimize the influence of the towers and other structures at LC-39B on the measurements. The selection process for the upwind sensor was informed by a study of tower-induced turbulence.
This paper describes in detail the QC process, QC results, and the attributes of the LPS towers meteorological database.
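The individual-sensor checks described above (realistic-range, temporal-consistency, and stuck-sensor tests) can be sketched generically; the thresholds and data below are hypothetical illustrations, not EV44's actual QC limits.

```python
def qc_flags(values, lo, hi, max_step, min_unique=2):
    """Flag indices failing basic QC: out-of-range values, implausible
    one-minute jumps (temporal consistency), and constant-value runs."""
    flagged = set()
    for i, v in enumerate(values):
        if not (lo <= v <= hi):                        # realistic-range check
            flagged.add(i)
    for i in range(1, len(values)):
        if abs(values[i] - values[i - 1]) > max_step:  # temporal-consistency check
            flagged.add(i)
    if len(set(values)) < min_unique:                  # stuck-sensor check
        flagged.update(range(len(values)))
    return sorted(flagged)

# Hypothetical one-minute temperatures (deg C) with a spike and an outlier
temps = [21.0, 21.1, 35.0, 21.2, -80.0, 21.3]
print(qc_flags(temps, lo=-20.0, hi=45.0, max_step=5.0))
```

The real QC process adds cross-sensor, climatological, and vertical-consistency checks on top of these per-sensor tests.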
Ozone delignification of pine and eucalyptus kraft pulps. 2: Selectivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simoes, R.M.S.; Castro, J.A.A.M.
1999-12-01
The selectivity of ozone in the delignification of unbleached pine and eucalyptus kraft pulps is studied at ultralow consistency in a stirred reactor under closely controlled experimental conditions. The effect of several operating variables is analyzed, but special attention is paid to the depolymerization rate of polysaccharides with the particular goal of evaluating the influence of the lignin contents on its kinetics. By using substantially different ozone concentrations in the pulp suspension and different reaction temperatures, it is possible to show that ozone selectivity can only be slightly improved by manipulating these operating variables. Furthermore, for the same type of material, it was observed that the initial rate of delignification plays the most important role on selectivity. In fact, for a given pulp, selectivity decreases with a decrease of the initial lignin contents, and such results can be well justified by the corresponding reduction of the initial rates of delignification. To further investigate the effect of lignin on pulp degradation, experiments were carried out at 4 C between ozone and holocellulose, which represent the polysaccharides of the unbleached pulps. The results suggest that molecular ozone can be responsible for an important part of the polysaccharides depolymerization during the delignification process. Moreover, the comparison of the kinetic behavior of holocellulose and of the corresponding unbleached pulp also reveals that the presence of lignin in the pulp enhances both the depolymerization and the degradation rates of polysaccharides.
Whittaker, Kerry A; Rynearson, Tatiana A
2017-03-07
The ability for organisms to disperse throughout their environment is thought to strongly influence population structure and thus evolution of diversity within species. A decades-long debate surrounds processes that generate and support high microbial diversity, particularly in the ocean. The debate concerns whether diversification occurs primarily through geographic partitioning (where distance limits gene flow) or through environmental selection, and remains unresolved due to lack of empirical data. Here we show that gene flow in a diatom, an ecologically important eukaryotic microbe, is not limited by global-scale geographic distance. Instead, environmental and ecological selection likely play a more significant role than dispersal in generating and maintaining diversity. We detected significantly diverged populations (FST > 0.130) and discovered temporal genetic variability at a single site that was on par with spatial genetic variability observed over distances of 15,000 km. Relatedness among populations was decoupled from geographic distance across the global ocean and instead correlated significantly with water temperature and whole-community chlorophyll a. Correlations with temperature point to the importance of environmental selection in structuring populations. Correlations with whole-community chlorophyll a, a proxy for autotrophic biomass, suggest that ecological selection via interactions with other plankton may generate and maintain population genetic structure in marine microbes despite global-scale dispersal. Here, we provide empirical evidence for global gene flow in a marine eukaryotic microbe, suggesting that everything holds the potential to be everywhere, with environmental and ecological selection rather than geography or dispersal dictating the structure and evolution of diversity over space and time.
Es'kov, E K; Es'kova, M D
2014-01-01
High variability of cell size is used selectively for the reproduction of worker bees and drones. A decrease in both the distance between cells and the cell size itself has similar effects on the body mass and morphometric traits of developing individuals. Adaptation of honey bees to living in shelters has made them tolerant to hypoxia. Improvement of the ethological and physiological mechanisms of thermal regulation is associated with a limitation of ecological valence and the acquisition of stenothermic features by the brood. Optimal thermal conditions for the brood are limited to the interval 33-34.5 degrees C. Deviations of temperature by 3-4 degrees C beyond this range have a minimal lethal effect at the embryonic stage of development and a medium effect at the pre-pupal and pupal stages. Development at the lower bound of the vital range increases, while development at the upper bound decreases, the mass of the body, the mandibular and hypopharyngeal glands, and other organs, which later affects the variability of these traits at the adult stage. The eliminative and teratogenic effects of ecological factors acting on the brood are most often manifested in underdevelopment of the wings. However, wing size (where wing laminas do form) is characterized by relatively low variability and size-dependent asymmetry. Asymmetry variability of the wings and other paired organs is expressed through a realignment of the size excess from the right side to the left as size increases. Selective elimination by those traits whose probability of emergence increases as developmental conditions deviate from the optimum restricts individual variability.
Physiological mechanisms that facilitate enhanced adaptability under conditions of increasing anthropogenic contamination of the environment and of the trophic substrates consumed by honey bees appear to be the accumulation of toxicants in the rectum and the ability of the crop to absorb contaminants from nectar during its processing into honey.
Predicting rates of inbreeding in populations undergoing selection.
Woolliams, J A; Bijma, P
2000-01-01
Tractable forms of predicting rates of inbreeding (DeltaF) in selected populations with general indices, nonrandom mating, and overlapping generations were developed, with the principal results assuming a period of equilibrium in the selection process. An existing theorem concerning the relationship between squared long-term genetic contributions and rates of inbreeding was extended to nonrandom mating and to overlapping generations. DeltaF was shown to be approximately (1/4)(1 - omega) times the expected sum of squared lifetime contributions, where omega is the deviation from Hardy-Weinberg proportions. This relationship cannot be used for prediction since it is based upon observed quantities. Therefore, the relationship was further developed to express DeltaF in terms of expected long-term contributions that are conditional on a set of selective advantages that relate the selection processes in two consecutive generations and are predictable quantities. With random mating, if selected family sizes are assumed to be independent Poisson variables then the expected long-term contribution could be substituted for the observed, provided that the factor (1/4) (since omega = 0) was increased to (1/2). Established theory was used to provide a correction term to account for deviations from the Poisson assumptions. The equations were successfully applied, using simple linear models, to the problem of predicting DeltaF with sib indices in discrete generations, since previously published solutions had proved complex. PMID:10747074
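The headline relationship — DeltaF is approximately (1/4)(1 - omega) times the expected sum of squared lifetime contributions — can be illustrated numerically. The contribution values below are invented for the sketch; with random mating omega = 0.

```python
def rate_of_inbreeding(lifetime_contributions, omega=0.0):
    """Approximate rate of inbreeding as (1/4)(1 - omega) times the sum
    of squared long-term genetic contributions, per the relationship above."""
    return 0.25 * (1.0 - omega) * sum(r * r for r in lifetime_contributions)

# Hypothetical long-term contributions of four selected parents (sum to 1)
r = [0.4, 0.3, 0.2, 0.1]
print(round(rate_of_inbreeding(r), 3))
```

Unequal contributions inflate the sum of squares, so a few dominant parents raise DeltaF relative to equal contributions (which would give 4 x 0.25^2 x 0.25 = 0.0625 here).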
Chipman, Hugh A.; Hamada, Michael S.
2016-06-02
Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. Here, we show how Bayesian variable selection can be used to analyze experiments that use such designs. In addition to sparsity and hierarchy, Bayesian variable selection naturally incorporates heredity. This prior information is used to identify the most likely combinations of active terms. We also demonstrate the method on simulated and real experiments.
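The heredity principle mentioned above — an interaction is entertained only when its parent main effects are active — can be sketched by enumerating the admissible models for a tiny design. This is a toy illustration of the constraint, not the authors' actual prior.

```python
from itertools import combinations

def heredity_models(mains, interactions):
    """Enumerate candidate models in which an interaction may enter only
    if both of its parent main effects are present (strong heredity)."""
    models = []
    for k in range(len(mains) + 1):
        for m in combinations(mains, k):
            active = set(m)
            allowed = [i for i in interactions
                       if set(i.split(':')) <= active]
            for j in range(len(allowed) + 1):
                for ints in combinations(allowed, j):
                    models.append(sorted(active) + list(ints))
    return models

models = heredity_models(['A', 'B'], ['A:B'])
print(len(models))                 # heredity-respecting models only
print(['A', 'B', 'A:B'] in models)
```

With two main effects and one interaction, heredity cuts the 8 unconstrained models down to 5: the interaction A:B appears only alongside both A and B.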
Variable Selection in Logistic Regression.
1987-06-01
Bai, Z. D.; Krishnaiah, P. R.; Zhao, L. C.
Center for Multivariate Analysis, University of Pittsburgh (Air Force contract F49620-85-C-0008).
Variable selection under multiple imputation using the bootstrap in a prognostic study
Heymans, Martijn W; van Buuren, Stef; Knol, Dirk L; van Mechelen, Willem; de Vet, Henrica CW
2007-01-01
Background Missing data is a challenging problem in many prognostic studies. Multiple imputation (MI) accounts for imputation uncertainty, which allows for adequate statistical testing. We developed and tested a methodology combining MI with bootstrapping techniques for studying prognostic variable selection. Method In our prospective cohort study we merged data from three different randomized controlled trials (RCTs) to assess prognostic variables for chronicity of low back pain. Among the outcome and prognostic variables, data were missing in the range of 0 to 48.1%. We used four methods to investigate the influence of sampling and imputation variation, respectively: MI only, bootstrap only, and two methods that combine MI and bootstrapping. Variables were selected based on the inclusion frequency of each prognostic variable, i.e. the proportion of times that the variable appeared in the model. The discriminative and calibrative abilities of prognostic models developed by the four methods were assessed at different inclusion levels. Results We found that the effect of imputation variation on the inclusion frequency was larger than the effect of sampling variation. When MI and bootstrapping were combined over the range of 0% (full model) to 90% of variable selection, bootstrap-corrected c-index values of 0.70 to 0.71 and slope values of 0.64 to 0.86 were found. Conclusion We recommend accounting for both imputation and sampling variation in sets of missing data. The new procedure of combining MI with bootstrapping for variable selection results in multivariable prognostic models with good performance and is therefore attractive to apply to data sets with missing values. PMID:17629912
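The inclusion-frequency idea — a variable enters the final model in proportion to how often it is selected across imputed or bootstrapped datasets — can be sketched as follows. The selector and data are toy stand-ins, not the study's actual prognostic models.

```python
def inclusion_frequencies(datasets, select):
    """Proportion of imputed/bootstrap datasets in which each variable is
    selected; variables above a chosen threshold enter the final model."""
    counts = {}
    for data in datasets:
        for var in select(data):
            counts[var] = counts.get(var, 0) + 1
    n = len(datasets)
    return {v: c / n for v, c in counts.items()}

# Toy selector: keep variables whose mean absolute value exceeds 1
# (a stand-in for a per-dataset stepwise or penalized selection step)
def toy_select(data):
    return [v for v, xs in data.items()
            if sum(abs(x) for x in xs) / len(xs) > 1]

imputed = [{'pain': [2, 3, 2], 'age': [0.1, 0.2, 0.1]} for _ in range(10)]
freqs = inclusion_frequencies(imputed, toy_select)
print(freqs)
```

In practice the datasets would differ (each imputation or bootstrap draw varies), and the final model keeps variables whose frequency clears a cutoff such as 50% or 90%.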
Maintenance of Genetic Variability under Strong Stabilizing Selection: A Two-Locus Model
Gavrilets, S.; Hastings, A.
1993-01-01
We study a two-locus model with additive contributions to the phenotype to explore the relationship between stabilizing selection and recombination. We show that if the double heterozygote has the optimum phenotype and the contributions of the loci to the trait are different, then any symmetric stabilizing selection fitness function can maintain genetic variability provided selection is sufficiently strong relative to linkage. We present results of a detailed analysis of the quadratic fitness function which show that selection need not be extremely strong relative to recombination for the polymorphic equilibria to be stable. At these polymorphic equilibria the mean value of the trait, in general, is not equal to the optimum phenotype, there exists a large level of negative linkage disequilibrium which "hides" additive genetic variance, and different equilibria can be stable simultaneously. We analyze dependence of different characteristics of these equilibria on the location of optimum phenotype, on the difference in allelic effect, and on the strength of selection relative to recombination. Our overall result that stabilizing selection does not necessarily eliminate genetic variability is compatible with some experimental results where the lines subject to strong stabilizing selection did not have significant reductions in genetic variability. PMID:8514145
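A minimal deterministic sketch of the standard two-locus gamete-frequency recursion with quadratic stabilizing selection, assuming the double heterozygote has the optimum phenotype and the allelic effects are unequal. Parameter values are invented; this is the textbook recursion, not the authors' exact analysis.

```python
def step(x, w, r):
    """One generation of the deterministic two-locus recursion with
    recombination rate r; x = frequencies of gametes (AB, Ab, aB, ab)."""
    D = x[0] * x[3] - x[1] * x[2]          # linkage disequilibrium
    wm = [sum(w[i][j] * x[j] for j in range(4)) for i in range(4)]
    wbar = sum(x[i] * wm[i] for i in range(4))
    eta = [1, -1, -1, 1]                   # sign of the recombination term
    return [(x[i] * wm[i] - eta[i] * r * w[0][3] * D) / wbar for i in range(4)]

def quadratic_fitness(a1, a2, s, opt):
    """Fitness 1 - s*(z - opt)^2 (truncated at 0) for all gamete pairs."""
    g = [a1 + a2, a1, a2, 0.0]             # additive gamete effects: AB, Ab, aB, ab
    return [[max(0.0, 1 - s * (g[i] + g[j] - opt) ** 2) for j in range(4)]
            for i in range(4)]

# Double heterozygote optimal (opt = a1 + a2), unequal effects,
# selection strong relative to recombination
w = quadratic_fitness(a1=1.0, a2=0.5, s=0.8, opt=1.5)
x = [0.3, 0.2, 0.2, 0.3]
for _ in range(500):
    x = step(x, w, r=0.05)
print([round(f, 3) for f in x])   # all four gametes persist: polymorphism maintained
```

Under these settings the population settles at a fully polymorphic equilibrium with negative linkage disequilibrium (the repulsion gametes Ab and aB predominate), consistent with the "hidden" additive variance described in the abstract.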
Do bioclimate variables improve performance of climate envelope models?
Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.
2012-01-01
Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.
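One common way to implement the "uncorrelated predictors" selection technique mentioned above is a greedy correlation filter: keep a predictor only if it is weakly correlated with every predictor already kept. The threshold and climate columns below are assumptions for the sketch, not necessarily the authors' procedure.

```python
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def select_uncorrelated(features, threshold=0.7):
    """Greedy filter: keep a predictor only if its absolute Pearson
    correlation with every already-kept predictor is below the threshold."""
    kept = []
    for name, col in features.items():
        if all(abs(pearson_r(col, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept

# Hypothetical monthly climate columns; tmin correlates perfectly with tmax
clim = {'tmax': [30, 32, 35, 28],
        'tmin': [20, 22, 25, 18],
        'precip': [80, 100, 5, 40]}
print(select_uncorrelated(clim))
```

The redundant tmin column is dropped while the weakly correlated precipitation column survives; the surviving set then feeds the random forest or GLM.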
Safari, Parviz; Danyali, Syyedeh Fatemeh; Rahimi, Mehdi
2018-06-02
Drought is the main abiotic stress seriously influencing wheat production. Information about the inheritance of drought tolerance is necessary to determine the most appropriate strategy for developing tolerant cultivars and populations. In this study, generation means analysis to identify the genetic effects controlling grain yield inheritance under water-deficit and normal conditions was treated as a model selection problem in a Bayesian framework. Stochastic search variable selection (SSVS) was applied to identify the most important genetic effects and the best-fitted models using different generations obtained from two crosses under two water regimes in two growing seasons. SSVS evaluates the effect of each variable on the dependent variable via posterior variable inclusion probabilities, and the model with the highest posterior probability is selected as the best model. In this study, grain yield was controlled by main effects (additive and non-additive) and by epistatic effects. The results demonstrate that breeding methods such as recurrent selection with a subsequent pedigree method, as well as hybrid production, can be useful for improving grain yield.
Engine Icing Data - An Analytics Approach
NASA Technical Reports Server (NTRS)
Fitzgerald, Brooke A.; Flegel, Ashlie B.
2017-01-01
Engine icing researchers at the NASA Glenn Research Center use the Escort data acquisition system in the Propulsion Systems Laboratory (PSL) to generate and collect a tremendous amount of data every day. Currently these researchers spend countless hours processing and formatting their data, selecting important variables, and plotting relationships between variables, all by hand, generally analyzing data in a spreadsheet-style program (such as Microsoft Excel). Though spreadsheet-style analysis is familiar and intuitive to many, processing data in spreadsheets is often unreproducible and small mistakes are easily overlooked. Spreadsheet-style analysis is also time inefficient. The same formatting, processing, and plotting procedure has to be repeated for every dataset, which leads to researchers performing the same tedious data munging process over and over instead of making discoveries within their data. This paper documents a data analysis tool written in Python hosted in a Jupyter notebook that vastly simplifies the analysis process. From the file path of any folder containing time series datasets, this tool batch loads every dataset in the folder, processes the datasets in parallel, and ingests them into a widget where users can search for and interactively plot subsets of columns in a number of ways with a click of a button, easily and intuitively comparing their data and discovering interesting dynamics. Furthermore, comparing variables across data sets and integrating video data (while extremely difficult with spreadsheet-style programs) is quite simplified in this tool. This tool has also gathered interest outside the engine icing branch, and will be used by researchers across NASA Glenn Research Center. This project exemplifies the enormous benefit of automating data processing, analysis, and visualization, and will help researchers move from raw data to insight in a much smaller time frame.
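The batch-ingestion step described above can be sketched with the standard library alone. This is a simplified, serial stand-in for the actual Jupyter tool (which also processes files in parallel and drives an interactive plotting widget); the file and column names are hypothetical.

```python
import csv
import tempfile
from pathlib import Path

def load_folder(folder):
    """Batch-load every CSV time-series file in a folder into a dict of
    dataset name -> {column: list of floats}."""
    datasets = {}
    for path in sorted(Path(folder).glob('*.csv')):
        with path.open(newline='') as f:
            rows = list(csv.DictReader(f))
        datasets[path.stem] = {
            col: [float(r[col]) for r in rows] for col in rows[0]
        }
    return datasets

# Build a tiny example folder, then ingest it
tmp = tempfile.mkdtemp()
Path(tmp, 'run1.csv').write_text('time,temp\n0,20.5\n1,20.7\n')
data = load_folder(tmp)
print(data['run1']['temp'])
```

Loading every dataset through one audited function is what makes the workflow reproducible, in contrast to repeating manual spreadsheet formatting for each run.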
The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth system models
NASA Astrophysics Data System (ADS)
Eichinger, R.; Jöckel, P.
2014-07-01
The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. 
The separation of the tendency of the specific humidity into the respective processes (large-scale clouds, convective clouds, large-scale advection, vertical diffusion and methane oxidation) show that the upward propagating water vapour signal dissolves mainly because of the chemical and the advective contribution. The TENDENCY submodel is part of version 2.42 or later of MESSy.
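The per-process tendency bookkeeping and the closure test can be sketched generically as follows. Process names and tendency values are hypothetical; the real TENDENCY submodel operates on the prognostic state variables inside EMAC rather than on scalars.

```python
class TendencyTracker:
    """Record per-process tendencies of one prognostic variable and verify
    closure: the stored contributions must sum to the total tendency."""
    def __init__(self):
        self.contributions = {}

    def add(self, process, tendency):
        self.contributions[process] = (
            self.contributions.get(process, 0.0) + tendency)

    def total(self):
        return sum(self.contributions.values())

    def closure_ok(self, reported_total, tol=1e-12):
        """Closure test: recorded contributions must reproduce the total."""
        return abs(self.total() - reported_total) < tol

# Hypothetical specific-humidity tendencies (kg/kg/s) from three processes
q = TendencyTracker()
q.add('large_scale_clouds', -2.0e-9)
q.add('advection', 1.5e-9)
q.add('vertical_diffusion', 0.7e-9)
print(q.closure_ok(0.2e-9))
```

Keeping the accounting outside the process operators, as TENDENCY does, means any submodel that forgets to register a contribution is caught by exactly this kind of closure check.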
Torija, Antonio J; Ruiz, Diego P
2015-02-01
The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS) and (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R^2 = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
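A wrapper for feature-subset selection (WFS) evaluates candidate subsets by fitting the predictive model itself. A minimal greedy forward version, wrapping ordinary least squares as the evaluated model, might look like this; the data are toy values, and the study's actual wrapper used MLP, SMO, or GPR regressors rather than OLS.

```python
def ols_sse(X_cols, y):
    """Sum of squared errors of an OLS fit (with intercept), solving the
    normal equations by Gaussian elimination with partial pivoting."""
    n = len(y)
    X = [[1.0] + [col[i] for col in X_cols] for i in range(n)]
    k = len(X[0])
    A = [[sum(X[i][a] * X[i][c] for i in range(n)) for c in range(k)]
         for a in range(k)]
    b = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda row: abs(A[row][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for row in range(c + 1, k):
            f = A[row][c] / A[c][c]
            for cc in range(c, k):
                A[row][cc] -= f * A[c][cc]
            b[row] -= f * b[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (b[c] - sum(A[c][j] * beta[j]
                              for j in range(c + 1, k))) / A[c][c]
    return sum((y[i] - sum(be * X[i][j] for j, be in enumerate(beta))) ** 2
               for i in range(n))

def wrapper_forward_select(features, y, n_keep):
    """Greedy wrapper: repeatedly add the feature that most reduces the
    fitted model's SSE (a minimal stand-in for wrapper subset selection)."""
    selected = []
    remaining = dict(features)
    while remaining and len(selected) < n_keep:
        name = min(remaining, key=lambda f: ols_sse(
            [features[s] for s in selected] + [remaining[f]], y))
        selected.append(name)
        del remaining[name]
    return selected

# Toy data: y depends on x1 only; x2 is noise
x1 = [0, 1, 2, 3, 4, 5]
x2 = [3, 1, 4, 1, 5, 9]
y = [0.1, 2.0, 4.1, 5.9, 8.0, 10.1]
print(wrapper_forward_select({'x1': x1, 'x2': x2}, y, n_keep=1))
```

Because the wrapper scores subsets through the model's own fit rather than a correlation proxy (as CFS does), it tends to find subsets tailored to the chosen regressor, at a higher computational cost.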
Trevisan, Francesco; Calignano, Flaviana; Lorusso, Massimo; Pakkanen, Jukka; Aversa, Alberta; Ambrosio, Elisa Paola; Lombardi, Mariangela; Fino, Paolo; Manfredi, Diego
2017-01-01
The aim of this review is to analyze and summarize the state of the art in the processing of aluminum alloys, and in particular of the AlSi10Mg alloy, by the Additive Manufacturing (AM) technique known as Selective Laser Melting (SLM). This process is gaining interest worldwide thanks to the possibility of combining freeform fabrication with the high mechanical properties associated with a very fine microstructure. However, SLM is physically very complex, owing to the interaction between a concentrated laser source and metallic powders, and to the extremely rapid melting and subsequent fast solidification. This review analyzes the effects of the main process variables on the properties of the final parts: from the starting powder properties, such as shape and powder size distribution, to the main process parameters, such as laser power and speed, layer thickness, and scanning strategy. Furthermore, a detailed overview of the microstructure of the AlSi10Mg material is presented, with the related tensile and fatigue properties of the final SLM parts, in some cases after different heat treatments. PMID:28772436
Fatty acid methyl ester analysis to identify sources of soil in surface water.
Banowetz, Gary M; Whittaker, Gerald W; Dierksen, Karen P; Azevedo, Mark D; Kennedy, Ann C; Griffith, Stephen M; Steiner, Jeffrey J
2006-01-01
Efforts to improve land-use practices to prevent contamination of surface waters with soil are limited by an inability to identify the primary sources of soil present in these waters. We evaluated the utility of fatty acid methyl ester (FAME) profiles of dry reference soils for multivariate statistical classification of soils collected from surface waters adjacent to agricultural production fields and a wooded riparian zone. Trials that compared approaches to concentrate soil from surface water showed that aluminum sulfate precipitation provided yields comparable to those obtained by vacuum filtration and was more suitable for handling large numbers of samples. Fatty acid methyl ester profiles were developed from reference soils collected from contrasting land uses in different seasons to determine whether specific fatty acids would consistently serve as variables in multivariate statistical analyses to permit reliable classification of soils. We used a Bayesian method and an independent iterative process to select appropriate fatty acids and found that variable selection was strongly impacted by the season during which soil was collected. The apparent seasonal variation in the occurrence of marker fatty acids in FAME profiles from reference soils prevented preparation of a standardized set of variables. Nevertheless, accurate classification of soil in surface water was achieved utilizing fatty acid variables identified in seasonally matched reference soils. Correlation analysis of entire chromatograms and subsequent discriminant analyses utilizing a restricted number of fatty acid variables showed that FAME profiles of soils exposed to the aquatic environment still had utility for classification at least 1 wk after submersion.
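The general idea of selecting a restricted set of discriminating variables and then classifying samples can be sketched as follows (an illustrative stand-in, not the authors' Bayesian pipeline): rank variables by a between/within-class variance ratio and classify with nearest class centroids. The class names, variable counts, and intensity values are fictitious placeholders for real FAME chromatogram data.

```python
# Sketch: Fisher-ratio variable selection followed by nearest-centroid
# classification, on simulated "FAME intensity" data for two soil sources.
import random

def fisher_ratio(values_by_class):
    """Between-class over within-class variance for one variable."""
    means = [sum(v) / len(v) for v in values_by_class]
    grand = (sum(sum(v) for v in values_by_class)
             / sum(len(v) for v in values_by_class))
    between = sum(len(v) * (m - grand) ** 2
                  for v, m in zip(values_by_class, means))
    within = sum(sum((x - m) ** 2 for x in v)
                 for v, m in zip(values_by_class, means))
    return between / within if within else float("inf")

def select_variables(X, labels, classes, n_keep):
    """Keep the n_keep variables with the highest Fisher ratio."""
    scores = []
    for j in range(len(X[0])):
        per_class = [[x[j] for x, l in zip(X, labels) if l == c]
                     for c in classes]
        scores.append((fisher_ratio(per_class), j))
    return [j for _, j in sorted(scores, reverse=True)[:n_keep]]

def nearest_centroid(X, labels, classes, cols, sample):
    """Assign sample to the class with the closest centroid (selected cols)."""
    def dist(c):
        rows = [[x[j] for j in cols] for x, l in zip(X, labels) if l == c]
        cen = [sum(col) / len(col) for col in zip(*rows)]
        return sum((sample[j] - m) ** 2 for j, m in zip(cols, cen))
    return min(classes, key=dist)

random.seed(1)
classes = ["field", "riparian"]
X, labels = [], []
for c, shift in zip(classes, (0.0, 2.0)):
    for _ in range(30):
        # Variables 0 and 1 separate the classes; 2-5 are noise.
        X.append([shift + random.gauss(0, 0.3), -shift + random.gauss(0, 0.3)]
                 + [random.gauss(0, 1) for _ in range(4)])
        labels.append(c)
cols = select_variables(X, labels, classes, 2)
pred = nearest_centroid(X, labels, classes, cols,
                        [2.0, -2.0, 0.0, 0.0, 0.0, 0.0])
```

The seasonal-variation finding in the abstract corresponds to the selected column set changing when the reference data change, which is why a fixed standardized variable list could not be prepared.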
Juhasz, Barbara J; Lai, Yun-Hsuan; Woodcock, Michelle L
2015-12-01
Since the work of Taft and Forster (1976), a growing literature has examined how English compound words are recognized and organized in the mental lexicon. Much of this research has focused on whether compound words are decomposed during recognition by manipulating the word frequencies of their lexemes. However, many variables may impact morphological processing, including relational semantic variables such as semantic transparency, as well as additional form-related and semantic variables. In the present study, ratings were collected on 629 English compound words for six variables [familiarity, age of acquisition (AoA), semantic transparency, lexeme meaning dominance (LMD), imageability, and sensory experience ratings (SER)]. All of the compound words selected for this study are contained within the English Lexicon Project (Balota et al., 2007), which made it possible to use a regression approach to examine the predictive power of these variables for lexical decision and word naming performance. Analyses indicated that familiarity, AoA, imageability, and SER were all significant predictors of both lexical decision and word naming performance when they were added separately to a model containing the length and frequency of the compounds, as well as the lexeme frequencies. In addition, rated semantic transparency also predicted lexical decision performance. The database of English compound words should be beneficial to word recognition researchers who are interested in selecting items for experiments on compound words, and it will also allow researchers to conduct further analyses using the available data combined with word recognition times included in the English Lexicon Project.
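The regression logic described above, testing whether a rating variable predicts performance beyond a baseline model, can be sketched with one residualization step of a hierarchical regression. All values below (simulated "RT", "frequency", "familiarity") are made up for illustration and are not drawn from the English Lexicon Project.

```python
# Sketch: does "familiarity" explain reaction-time variance beyond
# "frequency"? Fit the baseline, then regress the residuals of both the
# outcome and the new predictor on the baseline predictor.
import random

def simple_fit(x, y):
    """Slope and intercept of y ~ x by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def r_squared(x, y):
    slope, icept = simple_fit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((b - (slope * a + icept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot

def residuals(x, y):
    slope, icept = simple_fit(x, y)
    return [b - (slope * a + icept) for a, b in zip(x, y)]

random.seed(3)
freq = [random.uniform(0, 5) for _ in range(300)]
fam = [0.5 * f + random.gauss(0, 1) for f in freq]   # correlated with frequency
rt = [800 - 40 * f - 25 * fa + random.gauss(0, 20)
      for f, fa in zip(freq, fam)]
# Step 1: baseline model RT ~ frequency.
base_r2 = r_squared(freq, rt)
# Step 2: familiarity's contribution beyond frequency, via residualization.
extra_r2 = r_squared(residuals(freq, fam), residuals(freq, rt))
```

In the study itself the baseline model contains compound length, compound frequency, and both lexeme frequencies, and each rating variable is added in turn; the sketch shows the simplest one-predictor case of that logic.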
Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory
Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo
2013-01-01
Recently, airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality for use in operational-scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables, to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR-based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based both on spatial representativity and on coverage of the variability of the measured forest features. In the former case, it is also important to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature-space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition to be met by the set of plots collected.
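One simple way to realize the feature-space-coverage idea above is farthest-point sampling: each new calibration plot is the candidate farthest from the plots already chosen. This is an illustrative sketch, not the study's actual selection methods, and the three "LiDAR metrics" per plot are synthetic placeholders.

```python
# Sketch: greedy max-min (farthest-point) selection of calibration plots
# in a LiDAR feature space, so the chosen plots spread over the
# variability of the candidate pool.
import random

def farthest_point_sample(features, n_plots, start=0):
    """Greedy max-min selection: each new plot maximizes its squared
    distance to the nearest already-selected plot."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    selected = [start]
    while len(selected) < n_plots:
        best = max((i for i in range(len(features)) if i not in selected),
                   key=lambda i: min(d2(features[i], features[j])
                                     for j in selected))
        selected.append(best)
    return selected

random.seed(2)
# 100 candidate plots, 3 plot-level metrics each (e.g. height percentiles).
pool = [[random.uniform(0, 30), random.uniform(0, 1), random.uniform(0, 10)]
        for _ in range(100)]
chosen = farthest_point_sample(pool, 10)
```

A spatially aware variant would add geographic coordinates to the feature vectors (suitably weighted), which is one way to account for the spatial auto-correlation the abstract mentions.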
NASA Astrophysics Data System (ADS)
Simonton, Dean Keith
2010-06-01
Campbell (1960) proposed that creative thought should be conceived as a blind-variation and selective-retention process (BVSR). This article reviews the developments that have taken place in the half century that has elapsed since his proposal, with special focus on the use of combinatorial models as formal representations of the general theory. After defining the key concepts of blind variants, creative thought, and disciplinary context, the combinatorial models are specified in terms of individual domain samples, variable field size, ideational combination, and disciplinary communication. Empirical implications are then derived with respect to individual, domain, and field systems. These abstract combinatorial models are next provided substantive reinforcement with respect to findings concerning the cognitive processes, personality traits, developmental factors, and social contexts that contribute to creativity. The review concludes with some suggestions regarding future efforts to explicate creativity according to BVSR theory.
Montoya, Joseph H; Tsai, Charlie; Vojvodic, Aleksandra; Nørskov, Jens K
2015-07-08
The electrochemical production of NH3 under ambient conditions represents an attractive prospect for sustainable agriculture, but electrocatalysts that selectively reduce N2 to NH3 remain elusive. In this work, we present insights from DFT calculations that describe limitations on the low-temperature electrocatalytic production of NH3 from N2. In particular, we highlight the linear scaling relations of the adsorption energies of intermediates that can be used to model the overpotential requirements in this process. By using a two-variable description of the theoretical overpotential, we identify fundamental limitations on N2 reduction analogous to those present in processes such as oxygen evolution. Using these trends, we propose new strategies for catalyst design that may help guide the search for an electrocatalyst that can achieve selective N2 reduction. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
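The structure of a scaling-relation argument like the one above can be sketched generically: if two intermediates' adsorption energies are tied by a linear relation, then scanning the single descriptor traces a volcano whose peak bounds the best achievable limiting potential for any catalyst on that line. Every coefficient below is a made-up placeholder, not a DFT result from the paper.

```python
# Sketch of a generic two-step volcano constrained by a linear scaling
# relation. All numbers are illustrative placeholders.

def limiting_potential(dE_N, a=0.8, b=1.2):
    """Two hypothetical electrochemical steps whose free-energy changes
    move in opposite directions as the descriptor dE_N (binding energy
    of the first intermediate, eV) varies, with the second intermediate
    tied to the first by the scaling relation dE_NH = a*dE_N + b."""
    step1 = dE_N - 0.5            # uphill when the intermediate binds weakly
    step2 = -(a * dE_N + b)       # uphill when it binds strongly
    # The limiting potential is set by the most uphill step (eV per e-).
    return -max(step1, step2)

# Scan the descriptor: the best achievable potential is bounded by scaling,
# no matter which catalyst (i.e. which dE_N) is chosen.
grid = [i / 10 for i in range(-30, 31)]
best = max(limiting_potential(d) for d in grid)
```

Because both steps are functions of the single descriptor, improving one necessarily worsens the other, which is the sense in which scaling relations impose a fundamental limit analogous to the oxygen-evolution case.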