NASA Astrophysics Data System (ADS)
Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer
2015-03-01
Liquid behaviors are very important in many areas, especially in mechanical engineering. A high-speed camera is one way to observe and study liquid behavior: the camera traces dust or colored markers travelling in the liquid and takes as many pictures per second as possible. Every image carries a large amount of data due to its resolution. For fast liquid velocities, it is not easy to produce a fluent sequence from the captured images. Artificial intelligence is widely used in science to solve nonlinear problems, and the adaptive neural fuzzy inference system is a common technique in the literature. Any particle in a liquid has a two-dimensional velocity and its derivatives. In this study, an adaptive neural fuzzy inference system was used offline to create artificial frames between the previous and subsequent real frames, using velocities and vorticities to estimate a crossing-point vector between the previous and subsequent points. Filling virtual frames in among the real frames improves image continuity and makes the images much more understandable at chaotic or vortical points. After the adaptive neural fuzzy inference system is executed, the image dataset doubles in size, alternating virtual and real frames. The obtained success is evaluated using R2 testing and mean squared error. R2, a statistical measure of similarity, reached 0.82, 0.81, 0.85 and 0.80 for the velocities and their derivatives, respectively.
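The two evaluation statistics named in this abstract (R2 and mean squared error) can be sketched in a few lines; the velocity values below are illustrative, not from the study.

```python
# Minimal sketch of the two evaluation metrics above: mean squared error (MSE)
# and the coefficient of determination (R^2) between predicted and observed
# values. The data values are invented for illustration.

def mse(actual, predicted):
    """Mean squared error between two equal-length sequences."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Example: velocities predicted for a virtual frame vs. the true frame
actual = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.9]
print(round(mse(actual, predicted), 4))        # → 0.0175
print(round(r_squared(actual, predicted), 4))  # → 0.986
```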
San, Phyo Phyo; Ling, Sai Ho; Nguyen, Hung T
2012-01-01
Hypoglycemia, or low blood glucose, is the most common complication experienced by Type 1 diabetes mellitus (T1DM) patients. It is dangerous and can result in unconsciousness, seizures and even death. The physiological parameters most commonly affected by a hypoglycemic reaction are the heart rate (HR) and corrected QT interval (QTc) of the electrocardiogram (ECG) signal. Based on these physiological parameters, an intelligent diagnostic system using the hybrid approach of an adaptive neural fuzzy inference system (ANFIS) is developed to recognize the presence of hypoglycemia. The proposed ANFIS combines adaptive neural network capabilities with a fuzzy inference system. To optimize the membership functions and adaptive network parameters, a global learning optimization algorithm called hybrid particle swarm optimization with wavelet mutation (HPSOWM) is used. For the clinical study, 15 children with Type 1 diabetes volunteered for an overnight study. All the real data sets were collected from the Department of Health, Government of Western Australia. Several experiments were conducted with 5 patients each for a randomly selected training set (184 data points), validation set (192 data points) and testing set (153 data points). The effectiveness of the proposed detection method is found to be satisfactory, giving good sensitivity of 79.09% and acceptable specificity of 51.82%. PMID:23367375
NASA Astrophysics Data System (ADS)
Tien Bui, Dieu; Pradhan, Biswajeet; Nampak, Haleh; Bui, Quang-Thanh; Tran, Quynh-An; Nguyen, Quoc-Phi
2016-09-01
This paper proposes a new artificial intelligence approach based on a neural fuzzy inference system and metaheuristic optimization for flood susceptibility modeling, namely MONF. In the new approach, the neural fuzzy inference system was used to create an initial flood susceptibility model, and the model was then optimized using two metaheuristic algorithms, Evolutionary Genetic and Particle Swarm Optimization. A high-frequency tropical cyclone area, the Tuong Duong district in Central Vietnam, was used as a case study. First, a GIS database for the study area was constructed. The database, which includes 76 historical flood-inundated areas and ten flood-influencing factors, was used to develop and validate the proposed model. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the Receiver Operating Characteristic (ROC) curve, and the area under the ROC curve (AUC) were used to assess the model's performance and prediction capability. Experimental results showed that the proposed model performs well on both the training (RMSE = 0.306, MAE = 0.094, AUC = 0.962) and validation datasets (RMSE = 0.362, MAE = 0.130, AUC = 0.911). The usability of the proposed model was evaluated by comparison with state-of-the-art benchmark soft computing techniques such as J48 Decision Tree, Random Forest, Multi-layer Perceptron Neural Network, Support Vector Machine, and Adaptive Neuro Fuzzy Inference System. The results show that the proposed MONF model outperforms these benchmark models; we conclude that the MONF model is a new alternative tool for flood susceptibility mapping. The results of this study are useful for planners and decision makers in the sustainable management of flood-prone areas.
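The validation statistics used in this study (RMSE, MAE, AUC) can be sketched as follows; the AUC here is the rank-based (Mann-Whitney) formulation, and the susceptibility scores are invented for illustration.

```python
import math

# Hedged sketch of the three validation statistics named above: RMSE, MAE,
# and the area under the ROC curve (AUC), computed as the probability that
# a random positive example is scored above a random negative one.

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def auc(scores, labels):
    """Mann-Whitney formulation: ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Susceptibility scores for 3 flooded (1) and 3 non-flooded (0) locations
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
labels = [1, 1, 1, 0, 0, 0]
print(round(auc(scores, labels), 3))  # → 0.889
```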
Kwong, C K; Fung, K Y; Jiang, Huimin; Chan, K Y; Siu, Kin Wai Michael
2013-01-01
Affective design is an important aspect of product development for achieving a competitive edge in the marketplace. A neural-fuzzy network approach has recently been attempted for modeling customer satisfaction in affective design, and it has proved effective in dealing with the fuzziness and non-linearity of the modeling as well as in generating explicit customer satisfaction models. However, such an approach has two limitations. First, it is not suitable for modeling problems that involve a large number of inputs. Second, it cannot adapt to new data sets, given that its structure is fixed once it has been developed. In this paper, a modified dynamic evolving neural-fuzzy approach is proposed to address these limitations. A case study on the affective design of mobile phones was conducted to illustrate the effectiveness of the proposed methodology. Validation tests were conducted, and the results indicated that: (1) the conventional Adaptive Neuro-Fuzzy Inference System (ANFIS) failed to run due to the large number of inputs; (2) the proposed dynamic neural-fuzzy model outperforms the subtractive clustering-based ANFIS model and the fuzzy c-means clustering-based ANFIS model in terms of modeling accuracy and computational effort. PMID:24385884
Neural fuzzy modeling of anaerobic biological wastewater treatment systems
Tay, J.H.; Zhang, X.
1999-12-01
Anaerobic biological wastewater treatment systems are difficult to model because their performance is complex and varies significantly with different reactor configurations, influent characteristics, and operational conditions. Instead of conventional kinetic modeling, advanced neural fuzzy technology was employed to develop a conceptual adaptive model for anaerobic treatment systems. The conceptual neural fuzzy model combines the robustness of fuzzy systems with the learning ability of neural networks, and can adapt to various situations. The conceptual model was used to simulate the daily performance of two high-rate anaerobic wastewater treatment systems, and satisfactory results were obtained.
Heddam, Salim
2014-01-01
In this study, we present the application of an artificial intelligence (AI) technique called the dynamic evolving neural-fuzzy inference system (DENFIS), based on an evolving clustering method (ECM), for modelling dissolved oxygen concentration in a river. To demonstrate the forecasting capability of DENFIS, hourly experimental water quality data covering one year (1 January 2009 to 30 December 2009), collected by the United States Geological Survey (USGS Station No: 420853121505500) at the Klamath River at Miller Island Boat Ramp, OR, USA, were used for model development. Two DENFIS-based models are presented and compared: (1) an offline system, named DENFIS-OF, and (2) an online system, named DENFIS-ON. The input variables used for the two models are water pH, temperature, specific conductance, and sensor depth. The performances of the models are evaluated using root mean square error (RMSE), mean absolute error (MAE), the Willmott index of agreement (d) and the correlation coefficient (CC). The lowest root mean square error and highest correlation coefficient values were obtained with the DENFIS-ON method. The results obtained with the DENFIS models are compared with linear (multiple linear regression, MLR) and nonlinear (multi-layer perceptron neural network, MLPNN) methods. This study demonstrates that DENFIS-ON outperforms all the other techniques investigated for DO modelling. PMID:24705953
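Of the four statistics used in this study, the Willmott index of agreement (d) is the least standard; it and the Pearson correlation coefficient can be sketched as below, with invented dissolved-oxygen values.

```python
# Sketch of the Willmott index of agreement (d) and Pearson correlation
# coefficient (CC) used to evaluate the models above. Data are illustrative.

def willmott_d(obs, pred):
    """d = 1 - sum((O-P)^2) / sum((|P-Obar| + |O-Obar|)^2)."""
    mean_o = sum(obs) / len(obs)
    num = sum((o - p) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_o) + abs(o - mean_o)) ** 2
              for o, p in zip(obs, pred))
    return 1.0 - num / den

def pearson_cc(obs, pred):
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    sp = sum((p - mp) ** 2 for p in pred) ** 0.5
    return cov / (so * sp)

# Hypothetical observed vs. predicted dissolved oxygen (mg/L)
obs = [6.1, 7.0, 8.2, 7.5]
pred = [6.3, 6.9, 8.0, 7.6]
print(round(willmott_d(obs, pred), 3), round(pearson_cc(obs, pred), 3))
```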
Multimodel inference and adaptive management
Rehme, S.E.; Powell, L.A.; Allen, C.R.
2011-01-01
Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
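The core multimodel-inference computation, calculating likelihoods among competing hypotheses, is commonly done via Akaike weights; a minimal sketch follows, with invented AIC values.

```python
import math

# Illustrative sketch of multimodel inference via Akaike weights: each
# competing model's AIC is converted to a relative likelihood, then
# normalized. Similar weights across models indicate weak (equivocal)
# inference, as discussed above. AIC values are invented.

def akaike_weights(aic_scores):
    best = min(aic_scores)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_scores]
    total = sum(rel)
    return [r / total for r in rel]

# Three competing habitat models with nearly equal support
weights = akaike_weights([100.0, 100.5, 101.0])
print([round(w, 3) for w in weights])
```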
Cheu, Eng Yeow; Quek, Chai; Ng, See Kiong
2012-02-01
Appetitive operant conditioning in Aplysia for feeding behavior via the electrical stimulation of the esophageal nerve contingently reinforces each spontaneous bite during the feeding process. This results in the acquisition of operant memory by the contingently reinforced animals. Analysis of the cellular and molecular mechanisms of the feeding motor circuitry revealed that activity-dependent neuronal modulation occurs at the interneurons that mediate feeding behaviors. This provides evidence that interneurons are possible loci of plasticity and constitute another mechanism for memory storage in addition to memory storage attributed to activity-dependent synaptic plasticity. In this paper, an associative ambiguity correction-based neuro-fuzzy network, called appetitive reward-based pseudo-outer-product-compositional rule of inference [ARPOP-CRI(S)], is trained based on an appetitive reward-based learning algorithm which is biologically inspired by the appetitive operant conditioning of the feeding behavior in Aplysia. A variant of the Hebbian learning rule called Hebbian concomitant learning is proposed as the building block in the neuro-fuzzy network learning algorithm. The proposed algorithm possesses the distinguishing features of the sequential learning algorithm. In addition, the proposed ARPOP-CRI(S) neuro-fuzzy system encodes fuzzy knowledge in the form of linguistic rules that satisfies the semantic criteria for low-level fuzzy model interpretability. ARPOP-CRI(S) is evaluated and compared against other modeling techniques using benchmark time-series datasets. Experimental results are encouraging and show that ARPOP-CRI(S) is a viable modeling technique for time-variant problem domains.
Automated adaptive inference of phenomenological dynamical models
Daniels, Bryan C.; Nemenman, Ilya
2015-01-01
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508
eFSM--a novel online neural-fuzzy semantic memory model.
Tung, Whye Loon; Quek, Chai
2010-01-01
Fuzzy rule-based systems (FRBSs) have been successfully applied to many areas. However, traditional fuzzy systems are often manually crafted, and their rule bases, which represent the acquired knowledge, are static and cannot be trained to improve the modeling performance. This has led to intensive research on the autonomous construction and tuning of a fuzzy system directly from the observed training data to address the knowledge acquisition bottleneck, resulting in well-established hybrids such as neural-fuzzy systems (NFSs) and genetic fuzzy systems (GFSs). However, the complex and dynamic nature of real-world problems demands that fuzzy rule-based systems and models be able to adapt their parameters and ultimately evolve their rule bases to address the nonstationary (time-varying) characteristics of their operating environments. Recently, considerable research effort has been directed to the study of evolving Takagi-Sugeno (T-S)-type NFSs based on the concept of incremental learning. In contrast, very few incremental-learning Mamdani-type NFSs have been reported in the literature. Hence, this paper presents the evolving neural-fuzzy semantic memory (eFSM) model, a neural-fuzzy Mamdani architecture with a data-driven, progressively adaptive structure (i.e., rule base) based on incremental learning. Issues related to the incremental learning of the eFSM rule base are carefully investigated, and a novel parameter learning approach is proposed for the tuning of the fuzzy set parameters in eFSM. The proposed eFSM model elicits highly interpretable semantic knowledge in the form of Mamdani-type if-then fuzzy rules from low-level numeric training data. These Mamdani fuzzy rules define the computing structure of eFSM and are incrementally learned with the arrival of each training data sample. New rules are constructed from the emergence of novel training data, and obsolete fuzzy rules that no longer describe the recently observed data trends are pruned.
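The rule-base evolution described here (grow a rule when a novel sample arrives, prune rules that go stale) can be sketched in highly simplified form; this is an illustration of the principle, not the eFSM algorithm, and all thresholds are invented.

```python
import math

# Highly simplified sketch of incremental rule-base adaptation: a new rule
# is created when no existing rule fires strongly on an incoming sample,
# and rules that have not fired for a long time are pruned as obsolete.

def gaussian_fire(x, center, width=0.5):
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def update_rule_base(rules, x, novelty_threshold=0.5, max_age=100):
    """rules: list of dicts {center, age}. Returns the updated rule list."""
    fired = False
    for r in rules:
        if gaussian_fire(x, r["center"]) >= novelty_threshold:
            r["age"] = 0          # rule covers the sample: refresh it
            fired = True
        else:
            r["age"] += 1         # rule did not fire: it grows stale
    if not fired:                 # novel region: grow the rule base
        rules.append({"center": x, "age": 0})
    return [r for r in rules if r["age"] <= max_age]   # prune obsolete rules

rules = []
for sample in [0.0, 0.1, 5.0, 5.2, 0.05]:
    rules = update_rule_base(rules, sample)
print(len(rules))  # → 2 (one rule near 0, one near 5)
```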
Network inference via adaptive optimal design
2012-01-01
Background Current research in network reverse engineering for genetic or metabolic networks very often does not include a proper experimental and/or input design. In this paper we address this issue in more detail and suggest a method that includes an iterative design of experiments based on the most recent data that become available. The presented approach allows a reliable reconstruction of the network and addresses an important issue, namely the analysis and propagation of uncertainties as they exist in both the data and in our own knowledge. These two types of uncertainties have immediate ramifications for the uncertainties in the parameter estimates and are hence taken into account from the very beginning of our experimental design. Findings The method is demonstrated for two small networks: a genetic network for mRNA synthesis and degradation, and an oscillatory molecular network underlying adenosine 3'-5' cyclic monophosphate (cAMP) signalling as observed in populations of Dictyostelium cells. In both cases a substantial reduction in parameter uncertainty was observed. Extension to larger-scale networks is possible but needs a more rigorous parameter estimation algorithm that includes sparsity as a constraint in the optimization procedure. Conclusion We conclude that careful experimental design very often (but not always) pays off in terms of reliability of the inferred network topology. For large-scale networks a better parameter estimation algorithm is required that includes sparsity as an additional constraint. Such algorithms are available in the literature and can also be used in an adaptive optimal design setting, as demonstrated in this paper. PMID:22999252
Active Inference, homeostatic regulation and adaptive behavioural control
Pezzulo, Giovanni; Rigoli, Francesco; Friston, Karl
2015-01-01
We review a theory of homeostatic regulation and adaptive behavioural control within the Active Inference framework. Our aim is to connect two research streams that are usually considered independently; namely, Active Inference and associative learning theories of animal behaviour. The former uses a probabilistic (Bayesian) formulation of perception and action, while the latter calls on multiple (Pavlovian, habitual, goal-directed) processes for homeostatic and behavioural control. We offer a synthesis of these classical processes and cast them as successive hierarchical contextualisations of sensorimotor constructs, using the generative models that underpin Active Inference. This dissolves any apparent mechanistic distinction between the optimization processes that mediate classical control or learning. Furthermore, we generalize the scope of Active Inference by emphasizing interoceptive inference and homeostatic regulation. The ensuing homeostatic (or allostatic) perspective provides an intuitive explanation for how priors act as drives or goals to enslave action, and emphasises the embodied nature of inference. PMID:26365173
Using accelerometers for physical actions recognition by a neural fuzzy network.
Liu, Shing-Hong; Chang, Yuan-Jen
2009-11-01
Triaxial accelerometers were employed to monitor human actions under various conditions. This study aimed to determine an optimum classification scheme and sensor placement positions for recognizing different types of physical action. Three triaxial accelerometers were placed on the chest, waist, and thigh, and their abilities to recognize the three actions of walking, sitting down, and falling were determined. The features of the resultant triaxial signals from each accelerometer were extracted with an autoregression (AR) model. A self-constructing neural fuzzy inference network (SONFIN) was used to recognize the three actions. The performance of the SONFIN was assessed based on statistical parameters: sensitivity, specificity, and total classification accuracy. The results show that the SONFIN provided stable total classification accuracies of 96.3% and 88.7% for the training and testing data, respectively, when the parameters of a 60th-order AR model were used as the input feature vector and the accelerometer was placed anywhere on the abdomen. For seven elderly individuals performing the three basic actions, the testing data yielded 80.4% accuracy.
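The AR feature-extraction step can be sketched as below. The study uses a 60th-order model; order 2 here keeps the least-squares algebra explicit, and the signal is synthetic.

```python
# Sketch of extracting autoregressive (AR) coefficients as features from a
# single accelerometer axis: fit x[t] ~ a1*x[t-1] + a2*x[t-2] by least
# squares, solving the 2x2 normal equations via Cramer's rule.

def ar2_coefficients(x):
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t - 1] * x[t - 1]
        s12 += x[t - 1] * x[t - 2]
        s22 += x[t - 2] * x[t - 2]
        b1 += x[t] * x[t - 1]
        b2 += x[t] * x[t - 2]
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

# A synthetic signal that exactly follows x[t] = 1.5*x[t-1] - 0.7*x[t-2]
x = [1.0, 1.0]
for _ in range(20):
    x.append(1.5 * x[-1] - 0.7 * x[-2])

a1, a2 = ar2_coefficients(x)
print(round(a1, 3), round(a2, 3))  # → 1.5 -0.7
```

These recovered coefficients (one pair per axis, at higher order in the study) form the feature vector fed to the classifier.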
Adaptive inference for distinguishing credible from incredible patterns in nature
Holling, Crawford S.; Allen, C.R.
2002-01-01
Strong inference is a powerful and rapid tool that can be used to identify and explain patterns in molecular biology, cell biology, and physiology. It is effective where causes are single and separable and where discrimination between pairwise alternative hypotheses can be determined experimentally by a simple yes or no answer. But causes in ecological systems are multiple and overlapping and are not entirely separable. Frequently, competing hypotheses cannot be distinguished by a single unambiguous test, but only by a suite of tests of different kinds, that produce a body of evidence to support one line of argument and not others. We call this process "adaptive inference". Instead of pitting each member of a pair of hypotheses against each other, adaptive inference relies on the exuberant invention of multiple, competing hypotheses, after which carefully structured comparative data are used to explore the logical consequences of each. Herein we present an example that demonstrates the attributes of adaptive inference that have developed out of a 30-year study of the resilience of ecosystems.
Do people treat missing information adaptively when making inferences?
Garcia-Retamero, Rocio; Rieskamp, Jörg
2009-10-01
When making inferences, people are often confronted with situations with incomplete information. Previous research has led to a mixed picture about how people react to missing information. Options include ignoring missing information, treating it as either positive or negative, using the average of past observations for replacement, or using the most frequent observation of the available information as a placeholder. The accuracy of these inference mechanisms depends on characteristics of the environment. When missing information is uniformly distributed, it is most accurate to treat it as the average, whereas when it is negatively correlated with the criterion to be judged, treating missing information as if it were negative is most accurate. Whether people treat missing information adaptively according to the environment was tested in two studies. The results show that participants were sensitive to how missing information was distributed in an environment and most frequently selected the mechanism that was most adaptive. From these results the authors conclude that reacting to missing information in different ways is an adaptive response to environmental characteristics.
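The replacement mechanisms enumerated in this abstract can be written out as simple functions; `None` marks a missing observation, and all names and values are illustrative.

```python
from collections import Counter

# The missing-information strategies described above, as simple functions:
# treat missing as negative, replace with the average of past observations,
# or replace with the most frequent available value.

def treat_as_negative(cue_values, negative=0):
    return [negative if v is None else v for v in cue_values]

def replace_with_average(cue_values, past_values):
    avg = sum(past_values) / len(past_values)
    return [avg if v is None else v for v in cue_values]

def replace_with_most_frequent(cue_values):
    present = [v for v in cue_values if v is not None]
    mode = Counter(present).most_common(1)[0][0]
    return [mode if v is None else v for v in cue_values]

cues = [1, None, 1, 0, None]
print(treat_as_negative(cues))           # → [1, 0, 1, 0, 0]
print(replace_with_most_frequent(cues))  # → [1, 1, 1, 0, 1]
```

Which strategy is most accurate depends on the environment, as the abstract notes: averaging suits uniformly distributed missingness, while treating missing values as negative suits environments where missingness correlates negatively with the criterion.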
Adaptive gene expression divergence inferred from population genomics.
Holloway, Alisha K; Lawniczak, Mara K N; Mezey, Jason G; Begun, David J; Jones, Corbin D
2007-10-01
Detailed studies of individual genes have shown that gene expression divergence often results from adaptive evolution of regulatory sequence. Genome-wide analyses, however, have yet to unite patterns of gene expression with polymorphism and divergence to infer population genetic mechanisms underlying expression evolution. Here, we combined genomic expression data--analyzed in a phylogenetic context--with whole genome light-shotgun sequence data from six Drosophila simulans lines and reference sequences from D. melanogaster and D. yakuba. These data allowed us to use molecular population genetics to test for neutral versus adaptive gene expression divergence on a genomic scale. We identified recent and recurrent adaptive evolution along the D. simulans lineage by contrasting sequence polymorphism within D. simulans to divergence from D. melanogaster and D. yakuba. Genes that evolved higher levels of expression in D. simulans have experienced adaptive evolution of the associated 3' flanking and amino acid sequence. Concomitantly, these genes are also decelerating in their rates of protein evolution, which is in agreement with the finding that highly expressed genes evolve slowly. Interestingly, adaptive evolution in 5' cis-regulatory regions did not correspond strongly with expression evolution. Our results provide a genomic view of the intimate link between selection acting on a phenotype and associated genic evolution.
Bayesian inference with an adaptive proposal density for GARCH models
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2010-04-01
We perform Bayesian inference of a GARCH model by the Metropolis-Hastings algorithm with an adaptive proposal density. The adaptive proposal density is assumed to be the Student's t-distribution, and the distribution parameters are evaluated using the data sampled during the simulation. We apply the method to the QGARCH model, one of the asymmetric GARCH models, and carry out empirical studies for the Nikkei 225, DAX and Hang Seng indexes. We find that autocorrelation times from our method are very small; thus the method is very efficient for generating uncorrelated Monte Carlo data. The results from the QGARCH model show that all three indexes exhibit the leverage effect, i.e., volatility is high after negative observations.
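The sampling scheme can be illustrated with a toy independence-chain Metropolis-Hastings sampler whose Student's-t proposal is re-estimated from the chain. This is a sketch of the idea only: the target below is a stand-in unnormalized density, not a GARCH posterior, and all tuning constants are invented.

```python
import math
import random

# Toy Metropolis-Hastings with an adaptive Student's-t proposal
# (independence chain): proposal location and scale are periodically
# re-estimated from the samples drawn so far.

def target(x):                    # stand-in unnormalized posterior density
    return math.exp(-0.5 * x * x)

def t_density(x, mu, scale, df):  # unnormalized Student-t density
    z = (x - mu) / scale
    return (1.0 + z * z / df) ** (-(df + 1) / 2) / scale

def t_sample(mu, scale, df):
    v = random.gammavariate(df / 2.0, 2.0)   # chi-squared(df) variate
    return mu + scale * random.gauss(0.0, 1.0) / math.sqrt(v / df)

def adaptive_mh(n_steps, df=5.0):
    mu, scale, x = 0.0, 2.0, 0.0
    chain = []
    for step in range(n_steps):
        cand = t_sample(mu, scale, df)
        # independence-chain acceptance ratio (proposal terms do not cancel)
        ratio = ((target(cand) * t_density(x, mu, scale, df)) /
                 (target(x) * t_density(cand, mu, scale, df)))
        if random.random() < min(1.0, ratio):
            x = cand
        chain.append(x)
        if step and step % 500 == 0:          # adapt proposal to the chain
            mean = sum(chain) / len(chain)
            var = sum((c - mean) ** 2 for c in chain) / len(chain)
            mu, scale = mean, max(math.sqrt(var), 0.1)
    return chain

random.seed(42)
chain = adaptive_mh(5000)
print(round(sum(chain) / len(chain), 2))
```

A well-matched adapted proposal gives high acceptance rates and hence the short autocorrelation times the abstract reports.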
Seizure prediction using adaptive neuro-fuzzy inference system.
Rabbi, Ahmed F; Azinfar, Leila; Fazel-Rezai, Reza
2013-01-01
In this study, we present a neuro-fuzzy approach to seizure prediction from invasive electroencephalogram (EEG) recordings by applying an adaptive neuro-fuzzy inference system (ANFIS). Three nonlinear seizure-predictive features were extracted from a patient's data obtained from the European Epilepsy Database, one of the most comprehensive EEG databases for epilepsy research. A total of 36 hours of recordings including 7 seizures was used for analysis. The nonlinear features used in this study were the similarity index, phase synchronization, and nonlinear interdependence. We designed an ANFIS classifier that takes these features as input. Fuzzy if-then rules were generated by the ANFIS classifier using the complex relationships of the feature space provided during training. Membership function optimization was conducted with a hybrid learning algorithm. The proposed method achieved a maximum sensitivity of 80% with a false prediction rate as low as 0.46 per hour. PMID:24110134
A neural fuzzy controller learning by fuzzy error propagation
NASA Technical Reports Server (NTRS)
Nauck, Detlef; Kruse, Rudolf
1992-01-01
In this paper, we describe a procedure for integrating techniques for the adaptation of membership functions into a linguistic-variable-based fuzzy control environment using neural network learning principles. This is an extension of our previous work. We solve this problem by defining a fuzzy error that is propagated back through the architecture of our fuzzy controller. According to this fuzzy error and the strength of its antecedent, each fuzzy rule determines its share of the error. Depending on the current state of the controlled system and the control action derived from the conclusion, each rule tunes the membership functions of its antecedent and its conclusion. In this way we obtain an unsupervised learning technique that enables a fuzzy controller to adapt to a control task knowing only the global state and the fuzzy error.
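The idea of distributing a fuzzy error to rules in proportion to their firing strength can be sketched as below. This illustrates the principle only, not the authors' exact update rule; the triangular memberships, learning rate, and input are invented.

```python
# Loose sketch of fuzzy error propagation: each rule receives a share of a
# scalar "fuzzy error" proportional to its firing strength at the current
# input, and shifts the center of its triangular membership accordingly.

def triangle(x, left, center, right):
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def tune_rule_centers(rules, x, error, rate=0.1):
    """rules: list of (left, center, right) triangles for rule antecedents."""
    strengths = [triangle(x, *r) for r in rules]
    total = sum(strengths) or 1.0
    tuned = []
    for (l, c, r), s in zip(rules, strengths):
        share = s / total               # this rule's portion of the error
        tuned.append((l, c + rate * share * error, r))
    return tuned

rules = [(-1.0, 0.0, 1.0), (0.0, 1.0, 2.0)]
tuned = tune_rule_centers(rules, x=0.5, error=1.0)
print([round(c, 3) for _, c, _ in tuned])  # → [0.05, 1.05]
```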
A neural-fuzzy system for congestion control in ATM networks.
Lee, S J; Hou, C L
2000-01-01
We propose the use of a neural-fuzzy scheme for rate-based feedback congestion control in asynchronous transfer mode (ATM) networks. Available bit rate (ABR) traffic is not guaranteed quality of service (QoS) at connection setup, and it can dynamically share the available bandwidth. Therefore, congestion can be controlled, to a certain degree, by regulating the source rate according to the current traffic flow. Traditional methods perform congestion control by monitoring the queue length: the source rate is decreased by a fixed amount when the queue length exceeds a prespecified threshold. However, it is difficult to obtain a rate suited to the degree of traffic congestion this way. We employ a neural-fuzzy mechanism to control the source rate. Through learning, membership values can be generated and cell loss can be predicted from the status of the queue length. An explicit rate is then calculated and the source rate is controlled appropriately. Simulation results have shown that our method is effective compared with traditional methods.
Keefe, Bruce D; Wincenciak, Joanna; Jellema, Tjeerd; Ward, James W; Barraclough, Nick E
2016-07-01
When observing another individual's actions, we can both recognize their actions and infer their beliefs concerning the physical and social environment. The extent to which visual adaptation influences action recognition and conceptually later stages of processing involved in deriving the belief state of the actor remains unknown. To explore this we used virtual reality (life-size photorealistic actors presented in stereoscopic three dimensions) to see how visual adaptation influences the perception of individuals in naturally unfolding social scenes at increasingly higher levels of action understanding. We presented scenes in which one actor picked up boxes (of varying number and weight), after which a second actor picked up a single box. Adaptation to the first actor's behavior systematically changed perception of the second actor. Aftereffects increased with the duration of the first actor's behavior, declined exponentially over time, and were independent of view direction. Inferences about the second actor's expectation of box weight were also distorted by adaptation to the first actor. Distortions in action recognition and actor expectations did not, however, extend across different actions, indicating that adaptation is not acting at an action-independent abstract level but rather at an action-dependent level. We conclude that although adaptation influences more complex inferences about belief states of individuals, this is likely to be a result of adaptation at an earlier action recognition stage rather than adaptation operating at a higher, more abstract level in mentalizing or simulation systems. PMID:27472496
Integrating evolutionary and functional approaches to infer adaptation at specific loci.
Storz, Jay F; Wheat, Christopher W
2010-09-01
Inferences about adaptation at specific loci are often exclusively based on the static analysis of DNA sequence variation. Ideally, population-genetic evidence for positive selection serves as a stepping-off point for experimental studies to elucidate the functional significance of the putatively adaptive variation. We argue that inferences about adaptation at specific loci are best achieved by integrating the indirect, retrospective insights provided by population-genetic analyses with the more direct, mechanistic insights provided by functional experiments. Integrative studies of adaptive genetic variation may sometimes be motivated by experimental insights into molecular function, which then provide the impetus to perform population genetic tests to evaluate whether the functional variation is of adaptive significance. In other cases, studies may be initiated by genome scans of DNA variation to identify candidate loci for recent adaptation. Results of such analyses can then motivate experimental efforts to test whether the identified candidate loci do in fact contribute to functional variation in some fitness-related phenotype. Functional studies can provide corroborative evidence for positive selection at particular loci, and can potentially reveal specific molecular mechanisms of adaptation.
Specificity and timescales of cortical adaptation as inferences about natural movie statistics
Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia
2016-01-01
Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416
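The core computation in the model above is divisive normalization gated by inferred statistical dependence. A minimal numerical sketch of that idea follows; the function name, the squared-drive form, and the scalar dependence weight are illustrative simplifications, not the paper's fitted model:

```python
import numpy as np

def divisive_normalization(present, past, dependence, sigma=1.0):
    """Normalize the present drive by a pool built from past inputs.
    `dependence` in [0, 1] encodes how statistically dependent the past
    is inferred to be on the present; 0 disables adaptation entirely."""
    norm_pool = sigma**2 + dependence * np.sum(past**2)
    return present**2 / norm_pool
```

With `dependence = 0` the past has no effect; as inferred dependence grows, the same present input evokes a weaker normalized response, which is the signature of adaptation.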
Review of Medical Image Classification using the Adaptive Neuro-Fuzzy Inference System
Hosseini, Monireh Sheikh; Zekri, Maryam
2012-01-01
Image classification is a task that draws on image processing, pattern recognition and classification methods. Automatic medical image classification is a growing area within image classification and is expected to develop further, since automatic diagnosis can assist pathologists by providing second opinions and reducing their workload. This paper reviews the application of the adaptive neuro-fuzzy inference system (ANFIS) as a classifier in medical image classification during the past 16 years. ANFIS is a fuzzy inference system (FIS) implemented in the framework of an adaptive fuzzy neural network. It combines the explicit knowledge representation of an FIS with the learning power of artificial neural networks. The objective of ANFIS is to integrate the best features of fuzzy systems and neural networks. A brief comparison with other classifiers, along with the main advantages and drawbacks of this classifier, is also presented. PMID:23493054
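For readers unfamiliar with the ANFIS architecture that this review (and several records below) relies on, the forward pass of a small first-order Sugeno system can be sketched as follows. This is a generic illustration with two inputs, two Gaussian membership functions per input, and four rules; all names, shapes, and parameter values are assumptions for the sketch, not any specific paper's model:

```python
import numpy as np

def gaussmf(x, centers, sigma):
    """Gaussian membership degrees of scalar x at each center."""
    return np.exp(-0.5 * ((x - centers) / sigma) ** 2)

def anfis_forward(x1, x2, centers, sigma, consequents):
    # Layer 1: fuzzification (2 membership degrees per input)
    mu1 = gaussmf(x1, centers[0], sigma)
    mu2 = gaussmf(x2, centers[1], sigma)
    # Layer 2: rule firing strengths via product T-norm (4 rules)
    w = np.outer(mu1, mu2).ravel()
    # Layer 3: normalization of firing strengths
    wn = w / w.sum()
    # Layer 4: first-order Sugeno consequents f_i = p_i*x1 + q_i*x2 + r_i
    f = consequents @ np.array([x1, x2, 1.0])
    # Layer 5: weighted sum of rule outputs
    return float(wn @ f)
```

Training an ANFIS then amounts to fitting the membership parameters and the linear consequents, typically with gradient descent combined with least squares.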
Horn, Sebastian S; Ruggeri, Azzurra; Pachur, Thorsten
2016-09-01
Judgments about objects in the world are often based on probabilistic information (or cues). A frugal judgment strategy that utilizes memory (i.e., the ability to discriminate between known and unknown objects) as a cue for inference is the recognition heuristic (RH). The usefulness of the RH depends on the structure of the environment, particularly the predictive power (validity) of recognition. Little is known about developmental differences in use of the RH. In this study, the authors examined (a) to what extent children and adolescents recruit the RH when making judgments, and (b) around what age adaptive use of the RH emerges. Primary schoolchildren (M = 9 years), younger adolescents (M = 12 years), and older adolescents (M = 17 years) made comparative judgments in task environments with either high or low recognition validity. Reliance on the RH was measured with a hierarchical multinomial model. Results indicated that primary schoolchildren already made systematic use of the RH. However, only older adolescents adaptively adjusted their strategy use between environments and were better able to discriminate between situations in which the RH led to correct versus incorrect inferences. These findings suggest that the use of simple heuristics does not progress unidirectionally across development but strongly depends on the task environment, in line with the perspective of ecological rationality. Moreover, adaptive heuristic inference seems to require experience and a developed base of domain knowledge. PMID:27505696
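The recognition heuristic itself is a very small decision rule and can be stated directly in code. This is a generic sketch of the RH as described in the judgment literature, not the study's hierarchical multinomial measurement model; the function name and the fallback behavior are illustrative assumptions:

```python
def recognition_heuristic(recognized_a, recognized_b, fallback=None):
    """RH for a comparative judgment between objects a and b:
    if exactly one object is recognized, infer that it has the higher
    criterion value; otherwise fall back to knowledge or guessing."""
    if recognized_a and not recognized_b:
        return "a"
    if recognized_b and not recognized_a:
        return "b"
    return fallback() if fallback else "guess"
```

The heuristic pays off only when recognition validity is high, which is exactly the environmental contrast the study manipulates.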
Inferring metabolic networks using the Bayesian adaptive graphical lasso with informative priors
PETERSON, CHRISTINE; VANNUCCI, MARINA; KARAKAS, CEMAL; CHOI, WILLIAM; MA, LIHUA; MALETIĆ-SAVATIĆ, MIRJANA
2014-01-01
Metabolic processes are essential for cellular function and survival. We are interested in inferring a metabolic network in activated microglia, a major neuroimmune cell in the brain responsible for the neuroinflammation associated with neurological diseases, based on a set of quantified metabolites. To achieve this, we apply the Bayesian adaptive graphical lasso with informative priors that incorporate known relationships between covariates. To encourage sparsity, the Bayesian graphical lasso places double exponential priors on the off-diagonal entries of the precision matrix. The Bayesian adaptive graphical lasso allows each double exponential prior to have a unique shrinkage parameter. These shrinkage parameters share a common gamma hyperprior. We extend this model to create an informative prior structure by formulating tailored hyperpriors on the shrinkage parameters. By choosing parameter values for each hyperprior that shift probability mass toward zero for nodes that are close together in a reference network, we encourage edges between covariates with known relationships. This approach can improve the reliability of network inference when the sample size is small relative to the number of parameters to be estimated. When applied to the data on activated microglia, the inferred network includes both known relationships and associations of potential interest for further investigation. PMID:24533172
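The full Bayesian sampler is beyond a short sketch, but the informative-prior idea above, shifting shrinkage so that edges present in a reference network are penalized less, can be illustrated with the adaptive L1 penalty on the precision matrix. Everything here (names, the two-level penalty, the example values) is a simplified assumption for illustration:

```python
import numpy as np

def informative_penalty(omega, lam_default, lam_prior, reference_edges):
    """Adaptive L1 penalty on off-diagonal precision entries: an edge (i, j)
    that appears in the reference network gets the smaller shrinkage
    parameter lam_prior < lam_default, encouraging its inclusion."""
    p = omega.shape[0]
    total = 0.0
    for i in range(p):
        for j in range(i + 1, p):
            lam = lam_prior if (i, j) in reference_edges else lam_default
            total += lam * abs(omega[i, j])
    return total
```

In the Bayesian formulation these per-edge shrinkage parameters are not fixed but drawn from tailored gamma hyperpriors, so the data can still override a misleading reference network.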
Perspectives of probabilistic inferences: Reinforcement learning and an adaptive network compared.
Rieskamp, Jörg
2006-11-01
The assumption that people possess a strategy repertoire for inferences has been raised repeatedly. The strategy selection learning theory specifies how people select strategies from this repertoire. The theory assumes that individuals select strategies proportional to their subjective expectations of how well the strategies solve particular problems; such expectations are assumed to be updated by reinforcement learning. The theory is compared with an adaptive network model that assumes people make inferences by integrating information according to a connectionist network. The network's weights are modified by error correction learning. The theories were tested against each other in 2 experimental studies. Study 1 showed that people substantially improved their inferences through feedback, which was appropriately predicted by the strategy selection learning theory. Study 2 examined a dynamic environment in which the strategies' performances changed. In this situation a quick adaptation to the new situation was not observed; rather, individuals got stuck on the strategy they had successfully applied previously. This "inertia effect" was most strongly predicted by the strategy selection learning theory.
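The two learning mechanisms compared above can be miniaturized in code. The sketch below shows only the strategy selection learning side: expectancies updated by reinforcement learning and strategies chosen proportional to those expectancies. Function names, the learning rate, and the shift used to keep proportions nonnegative are all illustrative assumptions:

```python
import numpy as np

def update_expectancies(q, chosen, reward, alpha=0.3):
    """Reinforcement-learning update of the chosen strategy's expectancy."""
    q = q.copy()
    q[chosen] += alpha * (reward - q[chosen])
    return q

def choice_probs(q):
    """Select strategies proportional to (shifted, nonnegative) expectancies."""
    shifted = q - q.min() + 1e-9
    return shifted / shifted.sum()
```

The competing adaptive network model instead adjusts cue weights by error correction on every trial, which predicts faster retuning when the environment changes, the "inertia effect" is evidence against that faster retuning.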
Adaptive neuro-fuzzy inference systems for automatic detection of breast cancer.
Ubeyli, Elif Derya
2009-10-01
This paper presents an integrated view of implementing an adaptive neuro-fuzzy inference system (ANFIS) for breast cancer detection. The Wisconsin breast cancer database contained records of patients with known diagnosis. The ANFIS classifiers learned how to differentiate a new case in the domain, given a training set of such records. The ANFIS classifier was used to detect breast cancer when nine features defining breast cancer indications were used as inputs. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the impacts of features on the detection of breast cancer were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performances and classification accuracies, and the results confirmed that the proposed ANFIS model has potential in detecting breast cancer. PMID:19827261
Classification of diabetes maculopathy images using data-adaptive neuro-fuzzy inference classifier.
Ibrahim, Sulaimon; Chowriappa, Pradeep; Dua, Sumeet; Acharya, U Rajendra; Noronha, Kevin; Bhandary, Sulatha; Mugasa, Hatwib
2015-12-01
Prolonged diabetes retinopathy leads to diabetes maculopathy, which causes gradual and irreversible loss of vision. It is important for physicians to have a decision system that detects the early symptoms of the disease. This can be achieved by building a classification model using machine learning algorithms. Fuzzy logic classifiers group data elements with a degree of membership in multiple classes by defining membership functions for each attribute. Various methods have been proposed to determine the partitioning of membership functions in a fuzzy logic inference system. A clustering method partitions the membership functions by grouping data that have high similarity into clusters, while an equalized universe method partitions data into predefined equal clusters. The distribution of each attribute determines its partitioning as fine or coarse. A simple grid partitioning partitions each attribute equally and is therefore not effective in handling varying distribution amongst the attributes. A data-adaptive method uses a data frequency-driven approach to partition each attribute based on the distribution of data in that attribute. A data-adaptive neuro-fuzzy inference system creates corresponding rules for both finely distributed and coarsely distributed attributes. This method produced more useful rules and a more effective classification system. We obtained an overall accuracy of 98.55%.
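The contrast between grid partitioning and data-adaptive partitioning of membership functions can be made concrete with membership-function centers. A simple frequency-driven stand-in for the data-adaptive method is to place centers at quantiles, so densely populated regions of an attribute are partitioned finely; this quantile choice is an illustrative assumption, not the paper's exact algorithm:

```python
import numpy as np

def equal_width_centers(data, k):
    """Grid partitioning: MF centers equally spaced over the data range."""
    return np.linspace(data.min(), data.max(), k)

def data_adaptive_centers(data, k):
    """Frequency-driven partitioning: centers at equally spaced quantiles,
    so dense regions of the attribute get finer coverage."""
    return np.quantile(data, np.linspace(0.0, 1.0, k))
```

On a skewed attribute the two schemes diverge sharply, which is why grid partitioning handles attributes with very different distributions poorly.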
Design and inference for the intent-to-treat principle using adaptive treatment.
Dawson, Ree; Lavori, Philip W
2015-04-30
Nonadherence to assigned treatment jeopardizes the power and interpretability of intent-to-treat comparisons from clinical trial data and continues to be an issue for effectiveness studies, despite their pragmatic emphasis. We posit that new approaches to design need to complement developments in methods for causal inference to address nonadherence, in both experimental and practice settings. This paper considers the conventional study design for psychiatric research and other medical contexts, in which subjects are randomized to treatments that are fixed throughout the trial and presents an alternative that converts the fixed treatments into an adaptive intervention that reflects best practice. The key element is the introduction of an adaptive decision point midway into the study to address a patient's reluctance to remain on treatment before completing a full-length trial of medication. The clinical uncertainty about the appropriate adaptation prompts a second randomization at the new decision point to evaluate relevant options. Additionally, the standard 'all-or-none' principal stratification (PS) framework is applied to the first stage of the design to address treatment discontinuation that occurs too early for a midtrial adaptation. Drawing upon the adaptive intervention features, we develop assumptions to identify the PS causal estimand and to introduce restrictions on outcome distributions to simplify expectation-maximization calculations. We evaluate the performance of the PS setup, with particular attention to the role played by a binary covariate. The results emphasize the importance of collecting covariate data for use in design and analysis. We consider the generality of our approach beyond the setting of psychiatric research. PMID:25581413
Respiratory motion prediction by using the adaptive neuro fuzzy inference system (ANFIS)
NASA Astrophysics Data System (ADS)
Kakar, Manish; Nyström, Håkan; Rye Aarup, Lasse; Jakobi Nøttrup, Trine; Rune Olsen, Dag
2005-10-01
The quality of radiation therapy delivered for treating cancer patients is related to set-up errors and organ motion. Due to the margins needed to ensure adequate target coverage, many breast cancer patients have been shown to develop late side effects such as pneumonitis and cardiac damage. Breathing-adapted radiation therapy offers the potential for precise radiation dose delivery to a moving target and thereby reduces the side effects substantially. However, the basic requirement for breathing-adapted radiation therapy is to track and predict the target as precisely as possible. Recent studies have addressed the problem of organ motion prediction by using different methods including artificial neural network and model based approaches. In this study, we propose to use a hybrid intelligent system called ANFIS (the adaptive neuro fuzzy inference system) for predicting respiratory motion in breast cancer patients. In ANFIS, we combine both the learning capabilities of a neural network and reasoning capabilities of fuzzy logic in order to give enhanced prediction capabilities, as compared to using a single methodology alone. After training ANFIS and checking for prediction accuracy on 11 breast cancer patients, it was found that the RMSE (root-mean-square error) can be reduced to sub-millimetre accuracy over a period of 20 s provided the patient is assisted with coaching. The average RMSE for the un-coached patients was 35% of the respiratory amplitude and for the coached patients 6% of the respiratory amplitude.
Adaptive Path Selection for Link Loss Inference in Network Tomography Applications
Qiao, Yan; Jiao, Jun; Rao, Yuan; Ma, Huimin
2016-01-01
In this study, we address the problem of selecting the optimal end-to-end paths for link loss inference in order to improve the performance of network tomography applications, which infer the link loss rates from the path loss rates. Measuring the path loss rates using end-to-end probing packets may incur additional traffic overheads for networks, so it is important to select the minimum path set carefully while maximizing their performance. The usual approach is to select the maximum independent paths from the candidates simultaneously, while the other paths can be replaced by linear combinations of them. However, this approach ignores the fact that many paths always exist that do not lose any packets, and thus it is easy to determine that all of the links of these paths also have 0 loss rates. Not considering these good paths will inevitably lead to inefficiency and high probing costs. Thus, we propose an adaptive path selection method that selects paths sequentially based on the loss rates of previously selected paths. We also propose a theorem as well as a graph construction and decomposition approach to efficiently find the most valuable path during each round of selection. Our new method significantly outperforms the classical path selection method based on simulations in terms of the probing cost, number of accurate links determined, and the running speed. PMID:27701447
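The classical baseline that the adaptive method improves on, picking a maximal set of linearly independent paths from the routing matrix, can be sketched with a greedy rank test. This illustrates only the baseline, not the proposed sequential selection; the function name and the rank-based test are illustrative:

```python
import numpy as np

def select_independent_paths(routing_matrix):
    """Greedily select a maximal linearly independent subset of path rows.
    Each row is a path's 0/1 incidence vector over links; paths whose rows
    are linear combinations of already-selected rows add no information."""
    selected, basis = [], []
    for i, row in enumerate(routing_matrix):
        candidate = basis + [row]
        if np.linalg.matrix_rank(np.array(candidate)) == len(candidate):
            basis = candidate
            selected.append(i)
    return selected
```

The adaptive method instead probes paths one at a time: once a selected path is observed to be loss-free, every link on it is known to be loss-free, so many remaining candidate paths need never be probed at all.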
Inference for optimal dynamic treatment regimes using an adaptive m-out-of-n bootstrap scheme.
Chakraborty, Bibhas; Laber, Eric B; Zhao, Yingqi
2013-09-01
A dynamic treatment regime consists of a set of decision rules that dictate how to individualize treatment to patients based on available treatment and covariate history. A common method for estimating an optimal dynamic treatment regime from data is Q-learning, which involves nonsmooth operations on the data. This nonsmoothness causes standard asymptotic approaches for inference like the bootstrap or Taylor series arguments to break down if applied without correction. Here, we consider the m-out-of-n bootstrap for constructing confidence intervals for the parameters indexing the optimal dynamic regime. We propose an adaptive choice of m and show that it produces asymptotically correct confidence sets under fixed alternatives. Furthermore, the proposed method has the advantage of being conceptually and computationally much simpler than competing methods possessing this same theoretical property. We provide an extensive simulation study to compare the proposed method with currently available inference procedures. The results suggest that the proposed method delivers nominal coverage while being less conservative than alternatives. The proposed methods are implemented in the qLearn R-package and have been made available on the Comprehensive R-Archive Network (http://cran.r-project.org/). Analysis of the Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study is used as an illustrative example.
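The basic m-out-of-n bootstrap mechanic, resampling m < n observations with replacement and reading off percentile intervals, can be sketched generically. This is not the paper's adaptive choice of m (which is its main contribution) and the names and defaults are illustrative assumptions:

```python
import numpy as np

def m_out_of_n_ci(data, stat, m, n_boot=2000, alpha=0.05, seed=0):
    """Percentile confidence interval from n_boot resamples of size m < n,
    drawn with replacement. Choosing m smaller than n restores valid
    asymptotics for nonsmooth statistics where the usual bootstrap fails."""
    rng = np.random.default_rng(seed)
    boots = [stat(rng.choice(data, size=m, replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

In the paper, m is tuned adaptively from the data so that the interval is valid for the nonsmooth Q-learning estimand without being unnecessarily wide.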
Adaptive neuro-fuzzy inference system for analysis of Doppler signals.
Ubeyli, Elif Derya
2006-01-01
In this study, a new approach based on the adaptive neuro-fuzzy inference system (ANFIS) was presented for detection of ophthalmic artery stenosis. Decision making was performed in two stages: feature extraction using the wavelet transform (WT) and the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. The ophthalmic arterial Doppler signals were recorded from 128 subjects, 62 of whom suffered from ophthalmic artery stenosis while the rest were healthy. Some conclusions concerning the impacts of features on the detection of ophthalmic artery stenosis were obtained through analysis of the ANFIS. The performance of the ANFIS classifier was evaluated in terms of training performance and classification accuracies (total classification accuracy was 97.59%), and the results confirmed that the proposed ANFIS classifier has potential in detecting ophthalmic artery stenosis. PMID:17945697
Woo, Hyung Jun; Reifman, Jaques
2014-01-01
We describe a stochastic virus evolution model representing genomic diversification and within-host selection during experimental serial passages under cell culture or live-host conditions. The model incorporates realistic descriptions of the virus genotypes in nucleotide and amino acid sequence spaces, as well as their diversification from error-prone replications. It quantitatively considers factors such as target cell number, bottleneck size, passage period, infection and cell death rates, and the replication rate of different genotypes, allowing for systematic examinations of how their changes affect the evolutionary dynamics of viruses during passages. The relative probability for a viral population to achieve adaptation under a new host environment, quantified by the rate with which a target sequence frequency rises above 50%, was found to be most sensitive to factors related to sequence structure (distance from the wild type to the target) and selection strength (host cell number and bottleneck size). For parameter values representative of RNA viruses, the likelihood of observing adaptations during passages became negligible as the required number of mutations rose above two amino acid sites. We modeled the specific adaptation process of influenza A H5N1 viruses in mammalian hosts by simulating the evolutionary dynamics of H5 strains under the fitness landscape inferred from multiple sequence alignments of H3 proteins. In light of comparisons with experimental findings, we observed that the evolutionary dynamics of adaptation is strongly affected not only by the tendency toward increasing fitness values but also by the accessibility of pathways between genotypes constrained by the genetic code.
Application of an adaptive neuro-fuzzy inference system to ground subsidence hazard mapping
NASA Astrophysics Data System (ADS)
Park, Inhye; Choi, Jaewon; Jin Lee, Moung; Lee, Saro
2012-11-01
We constructed hazard maps of ground subsidence around abandoned underground coal mines (AUCMs) in Samcheok City, Korea, using an adaptive neuro-fuzzy inference system (ANFIS) and a geographical information system (GIS). To evaluate the factors related to ground subsidence, a spatial database was constructed from topographic, geologic, mine tunnel, land use, and ground subsidence maps. An attribute database was also constructed from field investigations and reports on existing ground subsidence areas at the study site. Five major factors causing ground subsidence were extracted: (1) depth of drift; (2) distance from drift; (3) slope gradient; (4) geology; and (5) land use. The ANFIS model with different types of membership functions (MFs) was then applied for ground subsidence hazard mapping in the study area. Two ground subsidence hazard maps were prepared using the different MFs. Finally, the resulting ground subsidence hazard maps were validated using the ground subsidence test data that were not used for training the ANFIS. The validation results showed 95.12% accuracy using the generalized bell-shaped MF model and 94.94% accuracy using the Sigmoidal2 MF model. These accuracy results show that an ANFIS can be an effective tool in ground subsidence hazard mapping. Analysis of ground subsidence with the ANFIS model suggests that quantitative analysis of ground subsidence near AUCMs is possible.
Adaptive neuro-fuzzy inference system for classification of ECG signals using Lyapunov exponents.
Ubeyli, Elif Derya
2009-03-01
This paper describes the application of adaptive neuro-fuzzy inference system (ANFIS) model for classification of electrocardiogram (ECG) signals. Decision making was performed in two stages: feature extraction by computation of Lyapunov exponents and classification by the ANFIS trained with the backpropagation gradient descent method in combination with the least squares method. Four types of ECG beats (normal beat, congestive heart failure beat, ventricular tachyarrhythmia beat, and atrial fibrillation beat) obtained from the PhysioBank database were classified by four ANFIS classifiers. To improve diagnostic accuracy, the fifth ANFIS classifier (combining ANFIS) was trained using the outputs of the four ANFIS classifiers as input data. The proposed ANFIS model combined the neural network adaptive capabilities and the fuzzy logic qualitative approach. Some conclusions concerning the saliency of features on classification of the ECG signals were obtained through analysis of the ANFIS. The performance of the ANFIS model was evaluated in terms of training performance and classification accuracies and the results confirmed that the proposed ANFIS model has potential in classifying the ECG signals. PMID:19084286
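The feature-extraction stage here rests on Lyapunov exponents, which quantify how quickly nearby trajectories of a dynamical system diverge. Estimating them from measured ECG signals requires phase-space reconstruction, which is beyond a short sketch, but the underlying quantity can be illustrated on a system with a known map. The following computes the largest Lyapunov exponent of the logistic map as the orbit-averaged log absolute derivative (an illustration of the concept, not the paper's ECG procedure):

```python
import numpy as np

def largest_lyapunov_logistic(r=4.0, x0=0.2, n=100_000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the average of log|f'(x)| = log|r*(1-2x)| along the orbit.
    For r = 4 the exact value is ln 2, confirming chaotic dynamics."""
    x = x0
    for _ in range(burn):          # discard transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n
```

A positive exponent indicates chaos; in the paper, exponents estimated from each ECG beat serve as the input features to the ANFIS classifiers.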
Modelling Dissolved Pollutants in Krishna River Using Adaptive Neuro Fuzzy Inference Systems
NASA Astrophysics Data System (ADS)
Matli, C. S.; Umamahesh, N. V.
2014-01-01
Water quality models are used to describe the discharge concentration relationships in the river. A number of models exist to simulate the pollutant loads in a river, some of which are based on simple cause effect relationships and others on highly sophisticated physical and mathematical approaches that require extensive data inputs. Fuzzy rule based modeling, extensively used in other disciplines, is attempted in the present study for modeling water quality with respect to dissolved pollutants in the Krishna river flowing in the southern part of India. Adaptive Neuro Fuzzy Inference Systems (ANFIS), a recent development in the area of neuro-computing based on the concept of fuzzy sets, are used to model highly non-linear relationships and are capable of adaptive learning. This paper presents the results of the application of ANFIS for modeling dissolved pollutants in the Krishna River. The application and validation of the models is carried out using water quality and flow data obtained from the monitoring stations on the river. The results indicate that the models are quite successful in simulating the physical processes of the relationships between discharge and concentrations.
NASA Astrophysics Data System (ADS)
Ramesh, K.; Kesarkar, A. P.; Bhate, J.; Venkat Ratnam, M.; Jayaraman, A.
2015-01-01
The retrieval of accurate profiles of temperature and water vapour is important for the study of atmospheric convection. Recent developments in computational techniques motivated us to use adaptive techniques in the retrieval algorithms. In this work, we have used an adaptive neuro-fuzzy inference system (ANFIS) to retrieve profiles of temperature and humidity up to 10 km over the tropical station Gadanki (13.5° N, 79.2° E), India. ANFIS is trained by using observations of temperature and humidity measured by a co-located Meisei GPS radiosonde (henceforth referred to as radiosonde) and microwave brightness temperatures observed by a Radiometrics MP3000 multichannel microwave radiometer (MWR). ANFIS is trained by considering these observations during rainy and non-rainy days (ANFIS(RD + NRD)) and during non-rainy days only (ANFIS(NRD)). The comparison of ANFIS(RD + NRD) and ANFIS(NRD) profiles with independent radiosonde observations and profiles retrieved using multivariate linear regression (MVLR: RD + NRD and NRD) and an artificial neural network (ANN) indicated that the errors in ANFIS(RD + NRD) are smaller than in the other retrieval methods. The Pearson product-moment correlation coefficient (r) between retrieved and observed profiles is more than 92% for temperature profiles for all techniques and more than 99% for the ANFIS(RD + NRD) technique. Therefore, this new technique is relatively better for the retrieval of temperature profiles. The comparison of bias, mean absolute error (MAE), RMSE and symmetric mean absolute percentage error (SMAPE) of retrieved temperature and relative humidity (RH) profiles using ANN and ANFIS also indicated that profiles retrieved using ANFIS(RD + NRD) are significantly better compared to the ANN technique. The analysis of profiles concludes that retrieved profiles using ANFIS techniques have improved the temperature retrievals substantially; however, the retrieval of RH by all techniques considered in this paper (ANN, MVLR and
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
Adaptive neuro-fuzzy inference system for real-time monitoring of integrated-constructed wetlands.
Dzakpasu, Mawuli; Scholz, Miklas; McCarthy, Valerie; Jordan, Siobhán; Sani, Abdulkadir
2015-01-01
Monitoring large-scale treatment wetlands is costly and time-consuming, but required by regulators. Some analytical results are available only after 5 days or even longer. Thus, adaptive neuro-fuzzy inference system (ANFIS) models were developed to predict the effluent concentrations of 5-day biochemical oxygen demand (BOD5) and NH4-N from a full-scale integrated constructed wetland (ICW) treating domestic wastewater. The ANFIS models were developed and validated with a 4-year data set from the ICW system. Cost-effective, quicker and easier to measure variables were selected as the possible predictors based on their goodness of correlation with the outputs. A self-organizing neural network was applied to extract the most relevant input variables from all the possible input variables. Fuzzy subtractive clustering was used to identify the architecture of the ANFIS models and to optimize fuzzy rules, overall, improving the network performance. According to the findings, ANFIS could predict the effluent quality variation quite accurately. Effluent BOD5 and NH4-N concentrations were predicted relatively accurately by other effluent water quality parameters, which can be measured within a few hours. The simulated effluent BOD5 and NH4-N concentrations fitted the measured concentrations well, which was also supported by relatively low mean squared errors. Thus, ANFIS can be useful for real-time monitoring and control of ICW systems. PMID:25607665
Adaptive network based on fuzzy inference system for equilibrated urea concentration prediction.
Azar, Ahmad Taher
2013-09-01
Post-dialysis urea rebound (PDUR) has been attributed mostly to redistribution of urea from different compartments, which is determined by variations in regional blood flows and transcellular urea mass transfer coefficients. PDUR occurs after 30-90 min of short or standard hemodialysis (HD) sessions and after 60 min in long 8-h HD sessions, which is inconvenient. This paper presents an adaptive network-based fuzzy inference system (ANFIS) for predicting intradialytic (Cint) and post-dialysis (Cpost) urea concentrations in order to predict the equilibrated (Ceq) urea concentrations without any blood sampling from dialysis patients. The accuracy of the developed system was prospectively compared with other traditional methods for predicting equilibrated urea (Ceq), post-dialysis urea rebound (PDUR) and equilibrated dialysis dose (eKt/V). This comparison is based on the root mean square error (RMSE), the normalized root mean square error (NRMSE), and the mean absolute percentage error (MAPE). The ANFIS predictor for Ceq achieved mean RMSE values of 0.3654 and 0.4920 for training and testing, respectively. The statistical analysis demonstrated that there is no statistically significant difference between the predicted and the measured values. The MAPE and RMSE for the testing phase are 0.63% and 0.96%, respectively. PMID:23806679
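The error measures used to compare the predictors follow directly from their definitions; a minimal sketch, noting that the range normalisation in NRMSE is one common convention and an assumption here:

```python
import math

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def nrmse(y, yhat):
    """RMSE normalised by the observed range (one common convention)."""
    return rmse(y, yhat) / (max(y) - min(y))

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)
```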
Prediction of antimicrobial peptides based on the adaptive neuro-fuzzy inference system application.
Fernandes, Fabiano C; Rigden, Daniel J; Franco, Octavio L
2012-01-01
Antimicrobial peptides (AMPs) are widely distributed defense molecules and represent a promising alternative for solving the problem of antibiotic resistance. Nevertheless, the experimental time required to screen putative AMPs makes computational simulations based on peptide sequence analysis and/or molecular modeling extremely attractive. Artificial intelligence methods acting as simulation and prediction tools are of great importance in helping to efficiently discover and design novel AMPs. In the present study, state-of-the-art published outcomes using different prediction methods and databases were compared to an adaptive neuro-fuzzy inference system (ANFIS) model. Data from our study showed that ANFIS obtained an accuracy of 96.7% and a Matthews correlation coefficient (MCC) of 0.936, which proved it to be an efficient model for pattern recognition in antimicrobial peptide prediction. Furthermore, fewer input parameters were needed for the ANFIS model, improving the speed and ease of prediction. In summary, due to the fuzzy nature of AMP physicochemical properties, the ANFIS approach presented here can provide an efficient solution for screening putative AMP sequences and for exploration of properties characteristic of AMPs. PMID:23193592
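The reported Matthews correlation coefficient is computed straight from the confusion matrix; a minimal sketch (the example counts are illustrative, not the study's):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Returns 0.0 when any marginal is empty (the usual convention)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

MCC ranges from -1 (total disagreement) through 0 (chance-level) to +1 (perfect prediction), and unlike raw accuracy it stays informative on imbalanced classes.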
Classifying work rate from heart rate measurements using an adaptive neuro-fuzzy inference system.
Kolus, Ahmet; Imbeau, Daniel; Dubé, Philippe-Antoine; Dubeau, Denise
2016-05-01
In a new approach based on adaptive neuro-fuzzy inference systems (ANFIS), field heart rate (HR) measurements were used to classify work rate into four categories: very light, light, moderate, and heavy. Inter-participant variability (physiological and physical differences) was considered. Twenty-eight participants performed Meyer and Flenghi's step-test and a maximal treadmill test, during which heart rate and oxygen consumption (VO2) were measured. Results indicated that heart rate monitoring (HR, HRmax, and HRrest) and body weight are significant variables for classifying work rate. The ANFIS classifier showed superior sensitivity, specificity, and accuracy compared to current practice using established work rate categories based on percent heart rate reserve (%HRR). The ANFIS classifier showed an overall 29.6% difference in classification accuracy and a good balance between sensitivity (90.7%) and specificity (95.2%) on average. With its ease of implementation and variable measurement, the ANFIS classifier shows potential for widespread use by practitioners for work rate assessment. PMID:26851475
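The current practice the ANFIS classifier is benchmarked against, percent heart rate reserve, can be sketched as follows; the category cut-offs used here are illustrative assumptions, since published cut-offs vary across guidelines.

```python
def percent_hrr(hr, hr_rest, hr_max):
    """Percent heart rate reserve: where HR sits between rest and maximum."""
    return 100 * (hr - hr_rest) / (hr_max - hr_rest)

def work_rate_category(phrr):
    """Map %HRR to the four work rate categories (illustrative cut-offs)."""
    if phrr < 20:
        return "very light"
    if phrr < 40:
        return "light"
    if phrr < 60:
        return "moderate"
    return "heavy"
```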
Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J.
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively “hiding” its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research. PMID:25505378
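The "race" described above can be illustrated with a toy model: each neuron shifts incoming spikes by its learnt delays, and the neuron whose delays align the pattern into a coincidence crosses threshold first. This sketch is far simpler than SKAN's adapting synapto-dendritic kernels and is an illustration only; all values are assumptions.

```python
import math

def fire_time(pattern, delays, threshold, window=1.0):
    """Delay each input spike through the neuron's learnt kernel, then return
    the first moment `threshold` delayed spikes fall within `window`."""
    arrivals = sorted(t + d for t, d in zip(pattern, delays))
    for i in range(threshold - 1, len(arrivals)):
        if arrivals[i] - arrivals[i - threshold + 1] <= window:
            return arrivals[i]
    return math.inf  # never reaches threshold

pattern = [0.0, 3.0, 5.0]            # spike times on three input channels
neurons = {
    "matched":    [5.0, 2.0, 0.0],   # aligns all three spikes at t = 5
    "mismatched": [0.0, 0.0, 0.0],   # leaves the pattern spread out
}
times = {name: fire_time(pattern, d, threshold=3) for name, d in neurons.items()}
winner = min(times, key=times.get)   # the fastest neuron "wins the race"
```

The matched neuron fires as soon as its aligned arrivals coincide; the mismatched neuron never fires, so in a network it would be suppressed by the winner's spike.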
Lachmann, Alexander; Giorgi, Federico M.; Lopez, Gonzalo; Califano, Andrea
2016-01-01
Summary: The accurate reconstruction of gene regulatory networks from large scale molecular profile datasets represents one of the grand challenges of Systems Biology. The Algorithm for the Reconstruction of Accurate Cellular Networks (ARACNe) represents one of the most effective tools to accomplish this goal. However, the initial Fixed Bandwidth (FB) implementation is both inefficient and unable to deal with sample sets providing largely uneven coverage of the probability density space. Here, we present a completely new implementation of the algorithm, based on an Adaptive Partitioning strategy (AP) for estimating the Mutual Information. The new AP implementation (ARACNe-AP) achieves a dramatic improvement in computational performance (200× on average) over the previous methodology, while preserving the Mutual Information estimator and the Network inference accuracy of the original algorithm. Given that the previous version of ARACNe is extremely demanding, the new version of the algorithm will allow even researchers with modest computational resources to build complex regulatory networks from hundreds of gene expression profiles. Availability and Implementation: A JAVA cross-platform command line executable of ARACNe, together with all source code and a detailed usage guide are freely available on Sourceforge (http://sourceforge.net/projects/aracne-ap). JAVA version 8 or higher is required. Contact: califano@c2b2.columbia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153652
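The adaptive-partitioning idea can be sketched as a recursion over the rank space: a cell is split while its points deviate from uniformity and contributes to the mutual information estimate once they do not. The chi-square threshold and minimum cell size below are illustrative assumptions, not the ARACNe-AP defaults.

```python
import math

def mutual_info_ap(x, y, chi2_thresh=7.81, min_pts=8):
    """Estimate mutual information (nats) by recursive adaptive partitioning."""
    n = len(x)

    def ranks(v):
        order = sorted(range(n), key=lambda i: v[i])
        r = [0] * n
        for rk, i in enumerate(order):
            r[i] = rk
        return r

    # Rank-transform so both marginals are uniform on 0..n-1.
    pts = list(zip(ranks(x), ranks(y)))

    def cell(points, x0, x1, y0, y1):
        k = len(points)
        if k == 0:
            return 0.0
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [[], [], [], []]
        for p in points:
            quads[(p[0] >= xm) * 2 + (p[1] >= ym)].append(p)
        e = k / 4
        chi2 = sum((len(q) - e) ** 2 / e for q in quads)
        if k < min_pts or chi2 < chi2_thresh:
            # Terminal cell: ranks make the marginals uniform, so the marginal
            # cell probabilities are just the relative side lengths.
            pxy, px, py = k / n, (x1 - x0) / n, (y1 - y0) / n
            return pxy * math.log(pxy / (px * py))
        return (cell(quads[0], x0, xm, y0, ym) + cell(quads[1], x0, xm, ym, y1)
                + cell(quads[2], xm, x1, y0, ym) + cell(quads[3], xm, x1, ym, y1))

    return cell(pts, 0, n, 0, n)
```

Independent samples yield an estimate near zero, while strongly dependent samples drive the recursion deep along the dependence structure and yield a large estimate.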
Denef, Vincent; Verberkmoes, Nathan C; Shah, Manesh B; Abraham, Paul E; Lefsrud, Mark G; Hettich, Robert L; Banfield, Jillian F.
2009-01-01
Analyses of ecological and evolutionary processes that shape microbial consortia are facilitated by comprehensive studies of ecosystems with low species richness. In the current study we evaluated the role of recombination in altering the fitness of chemoautotrophic bacteria in their natural environment. Proteomics-inferred genome typing (PIGT) was used to determine the genomic make-up of Leptospirillum group II populations in 27 biofilms sampled from six locations in the Richmond Mine acid mine drainage system (Iron Mountain, CA) over a four-year period. We observed six distinct genotypes that are recombinants comprised of segments from two parental genotypes. Community genomic analyses revealed additional low abundance recombinant variants. The dominance of some genotypes despite a larger available genome pool, and patterns of spatiotemporal distribution within the ecosystem, indicate selection for distinct recombinants. Genes involved in motility, signal transduction and transport were overrepresented in the tens to hundreds of kilobase recombinant blocks, whereas core metabolic functions were significantly underrepresented. Our findings demonstrate the power of PIGT and reveal that recombination is a mechanism for fine-scale adaptation in this system.
NASA Astrophysics Data System (ADS)
Hayati, M.; Rashidi, A. M.; Rezaei, A.
2011-01-01
This paper presents application of adaptive neuro-fuzzy inference system (ANFIS) for prediction of the grain size of nanocrystalline nickel coatings as a function of current density, saccharin concentration and bath temperature. For developing ANFIS model, the current density, saccharin concentration and bath temperature are taken as input, and the resulting grain size of the nanocrystalline coating as the output of the model. In order to provide a consistent set of experimental data, the nanocrystalline nickel coatings have been deposited from Watts-type bath using direct current electroplating within a large range of process parameters i.e., current density, saccharin concentration and bath temperature. Variation of the grain size because of the electroplating parameters has been modeled using ANFIS, and the experimental results and theoretical approaches have been compared to each other as well. Also, we have compared the proposed ANFIS model with artificial neural network (ANN) approach. The results have shown that the ANFIS model is more accurate and reliable compared to the ANN approach.
Subhi Al-batah, Mohammad; Mat Isa, Nor Ashidi; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi
2014-01-01
To date, cancer of the uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models arranged in parallel to produce a model with a multi-input multi-output structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316
NASA Astrophysics Data System (ADS)
Gowtham, K. N.; Vasudevan, M.; Maduraimuthu, V.; Jayakumar, T.
2011-04-01
Modified 9Cr-1Mo ferritic steel is used as a structural material for steam generator components of power plants. Generally, tungsten inert gas (TIG) welding is preferred for welding these steels, but the depth of penetration achievable during autogenous welding is limited. Therefore, activated flux TIG (A-TIG) welding, a novel welding technique, has been developed in-house to increase the depth of penetration. In modified 9Cr-1Mo steel joints produced by the A-TIG welding process, weld bead width, depth of penetration, and heat-affected zone (HAZ) width play an important role in determining the mechanical properties as well as the performance of the weld joints during service. To obtain the desired weld bead geometry and HAZ width, it becomes important to set the welding process parameters. In this work, an adaptive neuro-fuzzy inference system is used to develop independent models correlating welding process parameters such as current, voltage, and torch speed with weld bead shape parameters such as depth of penetration, bead width, and HAZ width. Then a genetic algorithm is employed to determine the optimum A-TIG welding process parameters to obtain the desired weld bead shape parameters and HAZ width.
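The coupling of a forward model with a genetic algorithm can be sketched as below; the surrogate `depth_mm` function and every constant are illustrative stand-ins for the trained ANFIS models, not values from the study.

```python
import random

random.seed(0)

# Toy surrogate standing in for the trained ANFIS bead-geometry model
# (assumption: penetration depth grows with current and shrinks with torch speed).
def depth_mm(current_a, speed_mm_s):
    return 0.02 * current_a - 0.5 * speed_mm_s

TARGET = 3.0  # desired penetration depth in mm (illustrative)

def fitness(ind):
    current_a, speed_mm_s = ind
    return -abs(depth_mm(current_a, speed_mm_s) - TARGET)

# Evolve (current, torch speed) pairs toward the target bead geometry.
pop = [(random.uniform(50, 250), random.uniform(1, 5)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # elitist selection: keep the best 10
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)  # blend crossover + Gaussian mutation
        children.append(tuple(
            (u + v) / 2 + random.gauss(0, 2.0 if i == 0 else 0.1)
            for i, (u, v) in enumerate(zip(a, b))))
    pop = parents + children
best = max(pop, key=fitness)
```

Because the elite parents survive unchanged, the best candidate's error is non-increasing across generations.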
NASA Astrophysics Data System (ADS)
Lohani, A. K.; Kumar, Rakesh; Singh, R. D.
2012-06-01
Summary: Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs) and an adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the AR models. The ANFIS model trained with an input vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy over the ANFIS models trained with input vectors considering only previous inflows. In all cases, ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflows for the planning and operation of reservoirs.
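Encoding the monthly periodicity as cyclic input terms is straightforward: map the month onto the unit circle so that December and January are adjacent. The feature layout in `input_vector` is a hypothetical illustration, not the paper's exact input vector.

```python
import math

def cyclic_terms(month):
    """Sine/cosine encoding of the month (1..12) on the unit circle."""
    angle = 2 * math.pi * (month - 1) / 12
    return math.sin(angle), math.cos(angle)

def input_vector(q_lag1, q_lag2, month):
    """Hypothetical ANFIS input: two lagged inflows plus the cyclic terms."""
    s, c = cyclic_terms(month)
    return [q_lag1, q_lag2, s, c]
```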
Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos; Zhang, Hanqing
2014-05-01
Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water in river systems has been identified both as a key problem for preserving the ecological health of river systems and as a threat to our built environment and critical infrastructure worldwide. As an example, it has been estimated that a major reason for bridge failure is scour. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, a tool that can offer fast and reliable predictions is still lacking. Most of the existing formulas for the prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of various complexity are built sequentially in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables, namely the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50) and the skew to flow. Simulations are conducted with different data groups (bed material type, pier type and shape) and different numbers of input variables, to produce reduced-complexity and easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to the skew to flow) is among the most relevant input parameters for the estimation.
qPR: An adaptive partial-report procedure based on Bayesian inference
Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin
2016-01-01
Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6–8 cue delays or 600–800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations. PMID:27580045
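The Bayesian adaptive loop described above can be sketched with a coarse parameter grid: score each candidate cue delay by expected information gain, collect a response, and update the posterior. The decay parameterization, grids, and delays below are illustrative assumptions, not the qPR settings.

```python
import math, random

random.seed(0)

# Assumed decay form (illustration): P(correct | t) = a + (b - a) * exp(-t / tau)
def p_correct(a, b, tau, t):
    return a + (b - a) * math.exp(-t / tau)

# Coarse grid over (asymptote a, initial accuracy b, decay tau) with uniform prior.
grid = [(a, b, tau) for a in (0.3, 0.4, 0.5)
                    for b in (0.8, 0.9, 1.0)
                    for tau in (0.1, 0.3, 0.5, 1.0)]
post = {g: 1.0 / len(grid) for g in grid}
delays = (0.05, 0.2, 0.5, 1.0)  # candidate cue delays in seconds

def entropy(p):
    return 0.0 if p <= 0.0 or p >= 1.0 else -p * math.log(p) - (1 - p) * math.log(1 - p)

def best_delay(post):
    # expected information gain = H(predictive) - E[H(likelihood)]
    def gain(t):
        pred = sum(w * p_correct(*g, t) for g, w in post.items())
        return entropy(pred) - sum(w * entropy(p_correct(*g, t)) for g, w in post.items())
    return max(delays, key=gain)

def update(post, t, correct):
    new = {g: w * (p_correct(*g, t) if correct else 1.0 - p_correct(*g, t))
           for g, w in post.items()}
    z = sum(new.values())
    return {g: v / z for g, v in new.items()}

# Simulate an observer whose true parameters lie on the grid.
true = (0.4, 0.9, 0.3)
for _ in range(300):
    t = best_delay(post)
    post = update(post, t, random.random() < p_correct(*true, t))
```

After a few hundred adaptively placed trials the posterior mass concentrates on a small neighbourhood of the simulated observer's parameters, which is the mechanism behind qPR's reduced testing time.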
NASA Astrophysics Data System (ADS)
Accardi, Luigi; Khrennikov, Andrei; Ohya, Masanori; Tanaka, Yoshiharu; Yamato, Ichiro
2016-07-01
Recently a novel quantum information formalism — quantum adaptive dynamics — was developed and applied to the modelling of information processing by bio-systems, including cognitive phenomena: from molecular biology (glucose-lactose metabolism for E. coli bacteria, epigenetic evolution) to cognition and psychology. From the foundational point of view, quantum adaptive dynamics describes the mutual adapting of the information states of two interacting systems (physical or biological) as well as the adapting of co-observations performed by the systems. In this paper we apply this formalism to model unconscious inference: the process of transition from sensation to perception. The paper combines theory and experiment. Statistical data collected in an experimental study on recognition of a particular ambiguous figure, the Schröder stairs, support the viability of the quantum(-like) model of unconscious inference, including the modelling of biases generated by rotation contexts. From the probabilistic point of view, we study (for concrete experimental data) the problem of contextuality of probability, i.e., its dependence on experimental contexts. Mathematically, contextuality leads to non-Kolmogorovness: probability distributions generated by various rotation contexts cannot be treated in the Kolmogorovian framework. At the same time they can be embedded in a “big Kolmogorov space” as conditional probabilities. However, such a Kolmogorov space has too complex a structure, and the operational quantum formalism in the form of quantum adaptive dynamics simplifies the modelling essentially.
NASA Astrophysics Data System (ADS)
Mahandrio, Irsantyo; Budi, Andriantama; Liong, The Houw; Purqon, Acep
2015-09-01
The growing patterns in the agricultural and mining sectors are particularly interesting in a developing country such as Indonesia. Here, we investigate the local characteristics of stocks in the agriculture and mining sectors, represented by two leading companies and two common companies in each sector. We analyze the predictions using an Adaptive Neuro-Fuzzy Inference System (ANFIS). The type of Fuzzy Inference System (FIS) is the Sugeno type with a Generalized Bell (Gbell) membership function. Our results show that ANFIS is a proper method for predicting the stock market, with RMSE values of 0.14% for AALI and 0.093% for SGRO, representing the agriculture sector, and 0.073% for ANTM and 0.1107% for MDCO, representing the mining sector.
Perspectives of Probabilistic Inferences: Reinforcement Learning and an Adaptive Network Compared
ERIC Educational Resources Information Center
Rieskamp, Jorg
2006-01-01
The assumption that people possess a strategy repertoire for inferences has been raised repeatedly. The strategy selection learning theory specifies how people select strategies from this repertoire. The theory assumes that individuals select strategies proportional to their subjective expectations of how well the strategies solve particular…
Poisson-Based Inference for Perturbation Models in Adaptive Spelling Training
ERIC Educational Resources Information Center
Baschera, Gian-Marco; Gross, Markus
2010-01-01
We present an inference algorithm for perturbation models based on Poisson regression. The algorithm is designed to handle unclassified input with multiple errors described by independent mal-rules. This knowledge representation provides an intelligent tutoring system with local and global information about a student, such as error classification…
NASA Astrophysics Data System (ADS)
Knuth, K. H.
2001-05-01
We consider the application of Bayesian inference to the study of self-organized structures in complex adaptive systems. In particular, we examine the distribution of elements, agents, or processes in systems dominated by hierarchical structure. We demonstrate that results obtained by Caianiello [1] on Hierarchical Modular Systems (HMS) can be found by applying Jaynes' Principle of Group Invariance [2] to a few key assumptions about our knowledge of hierarchical organization. Subsequent application of the Principle of Maximum Entropy allows inferences to be made about specific systems. The utility of the Bayesian method is considered by examining both successes and failures of the hierarchical model. We discuss how Caianiello's original statements suffer from the Mind Projection Fallacy [3] and we restate his assumptions thus widening the applicability of the HMS model. The relationship between inference and statistical physics, described by Jaynes [4], is reiterated with the expectation that this realization will aid the field of complex systems research by moving away from often inappropriate direct application of statistical mechanics to a more encompassing inferential methodology.
Heddam, Salim
2014-01-01
This article presents a comparison of two adaptive neuro-fuzzy inference systems (ANFIS)-based neuro-fuzzy models applied for modeling dissolved oxygen (DO) concentration. The two models are developed using experimental data collected from the bottom (USGS station no: 420615121533601) and top (USGS station no: 420615121533600) stations at Klamath River at site KRS12a nr Rock Quarry, Oregon, USA. The input variables used for the ANFIS models are water pH, temperature, specific conductance, and sensor depth. Two ANFIS-based neuro-fuzzy systems are presented. The two neuro-fuzzy systems are: (1) grid partition-based fuzzy inference system, named ANFIS_GRID, and (2) subtractive-clustering-based fuzzy inference system, named ANFIS_SUB. In both models, 60 % of the data set was randomly assigned to the training set, 20 % to the validation set, and 20 % to the test set. The ANFIS results are compared with multiple linear regression models. The system proposed in this paper presents a novel approach to the use of ANFIS models for DO concentration modeling. PMID:24057665
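The random 60/20/20 partition of the data set can be sketched as follows (the seeded shuffle is an assumption to make the split reproducible):

```python
import random

def split_60_20_20(data, seed=0):
    """Randomly assign records to training/validation/test sets (60/20/20)."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)  # reproducible shuffle of the indices
    n_train, n_val = int(0.6 * len(data)), int(0.2 * len(data))
    train = [data[i] for i in idx[:n_train]]
    val = [data[i] for i in idx[n_train:n_train + n_val]]
    test = [data[i] for i in idx[n_train + n_val:]]
    return train, val, test
```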
NASA Astrophysics Data System (ADS)
Oğuz, Yüksel; Üstün, Seydi Vakkas; Yabanova, İsmail; Yumurtaci, Mehmet; Güney, İrfan
2012-01-01
This article presents the design of an adaptive neuro-fuzzy inference system (ANFIS) for turbine speed control, with the purpose of improving the power quality of the power production system of a split-shaft microturbine. To improve the operating performance of the microturbine power generation system (MTPGS) and to obtain electrical output magnitudes of the desired quality and value (terminal voltage, operating frequency, power drawn by the consumer, and production power), a controller based on an adaptive neuro-fuzzy inference system was designed. The MTPGS consists of the microturbine speed controller, a split-shaft microturbine, a cylindrical-pole synchronous generator, an excitation circuit and a voltage regulator. The dynamic behavior of the synchronous generator driven by the split-shaft turbine was modeled using Matlab/Simulink and its SimPowerSystems toolbox. It is observed from the simulation results that, with the microturbine speed control performed by the ANFIS, when the MTPGS is operated under various loading conditions, the terminal voltage and frequency of the system settle to the desired operating values in a very short time without significant oscillation, and electrical production power of the desired quality can be obtained.
Modeling and Simulation of An Adaptive Neuro-Fuzzy Inference System (ANFIS) for Mobile Learning
ERIC Educational Resources Information Center
Al-Hmouz, A.; Shen, Jun; Al-Hmouz, R.; Yan, Jun
2012-01-01
With recent advances in mobile learning (m-learning), it is becoming possible for learning activities to occur everywhere. The learner model presented in our earlier work was partitioned into smaller elements in the form of learner profiles, which collectively represent the entire learning process. This paper presents an Adaptive Neuro-Fuzzy…
Adaptive thresholding for reliable topological inference in single subject fMRI analysis.
Gorgolewski, Krzysztof J; Storkey, Amos J; Bastin, Mark E; Pernet, Cyril R
2012-01-01
Single subject fMRI has proved to be a useful tool for mapping functional areas in clinical procedures such as tumor resection. Using fMRI data, clinicians assess the risk, plan and execute such procedures based on thresholded statistical maps. However, because current thresholding methods were developed mainly in the context of cognitive neuroscience group studies, most single subject fMRI maps are thresholded manually to satisfy specific criteria related to single subject analyses. Here, we propose a new adaptive thresholding method which combines Gamma-Gaussian mixture modeling with topological thresholding to improve cluster delineation. In a series of simulations we show that by adapting to the signal and noise properties, the new method performs well in terms of total number of errors but also in terms of the trade-off between false negative and false positive cluster error rates. Similarly, simulations show that adaptive thresholding performs better than fixed thresholding in terms of over- and underestimation of the true activation border (i.e., higher spatial accuracy). Finally, through simulations and a motor test-retest study on 10 volunteer subjects, we show that adaptive thresholding improves reliability, mainly by accounting for the global signal variance. This in turn increases the likelihood that the true activation pattern can be determined, offering an automatic yet flexible way to threshold single subject fMRI maps. PMID:22936908
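A minimal sketch of the mixture-modeling idea behind the adaptive threshold. Hedged assumptions: the paper fits a Gamma-Gaussian mixture and adds topological (cluster-level) thresholding; this toy replaces the Gamma activation component with a second Gaussian fitted by plain EM, so every function and parameter here is illustrative, not the authors' implementation.

```python
import math
import random

def normal_pdf(v, m, s):
    return math.exp(-0.5 * ((v - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def em_two_gaussians(x, iters=200):
    """Fit a two-component 1D Gaussian mixture by expectation-maximization."""
    xs = sorted(x)
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]   # quartile initialization
    sd = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        r = []
        for v in x:
            p = [pi[k] * normal_pdf(v, mu[k], sd[k]) for k in (0, 1)]
            t = p[0] + p[1]
            r.append((p[0] / t, p[1] / t))
        # M-step: re-estimate weights, means, and standard deviations
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * v for ri, v in zip(r, x)) / nk
            var = sum(ri[k] * (v - mu[k]) ** 2 for ri, v in zip(r, x)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
            pi[k] = nk / len(x)
    return pi, mu, sd

def posterior_threshold(pi, mu, sd, steps=1000):
    """Smallest statistic value at which the high-mean ('signal') component
    has posterior probability >= 0.5, scanned on a grid between the means."""
    hi = 0 if mu[0] > mu[1] else 1
    lo = 1 - hi
    a, b = mu[lo], mu[hi]
    for i in range(steps + 1):
        v = a + (b - a) * i / steps
        num = pi[hi] * normal_pdf(v, mu[hi], sd[hi])
        den = num + pi[lo] * normal_pdf(v, mu[lo], sd[lo])
        if num / den >= 0.5:
            return v
    return b
```

Voxels above the returned value would then be passed to a cluster-level (topological) test rather than kept directly.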
ERIC Educational Resources Information Center
Deiglmayr, Anne; Spada, Hans
2010-01-01
Adaptive support for computer-mediated collaboration aims at supporting learners' collaboration in a way that is tailored to their actual needs and by fostering their self-regulation, leading to the acquisition of new collaboration skills. This review gives an example of developing support for a specific collaboration skill: the co-construction of…
Evidence for Adaptation to the Tibetan Plateau Inferred from Tibetan Loach Transcriptomes.
Wang, Ying; Yang, Liandong; Zhou, Kun; Zhang, Yanping; Song, Zhaobin; He, Shunping
2015-11-01
Triplophysa fishes are the primary component of the fish fauna on the Tibetan Plateau and are well adapted to the high-altitude environment. Despite the importance of Triplophysa fishes on the plateau, the genetic mechanisms of the adaptations of these fishes to this high-altitude environment remain poorly understood. In this study, we generated the transcriptome sequences for three Triplophysa fishes, that is, Triplophysa siluroides, Triplophysa scleroptera, and Triplophysa dalaica, and used these and the previously available transcriptome and genome sequences from fishes living at low altitudes to identify potential genetic mechanisms for the high-altitude adaptations in Triplophysa fishes. An analysis of 2,269 orthologous genes among cave fish (Astyanax mexicanus), zebrafish (Danio rerio), large-scale loach (Paramisgurnus dabryanus), and Triplophysa fishes revealed that each of the terminal branches of the Triplophysa fishes had a significantly higher ratio of nonsynonymous to synonymous substitutions than that of the branches of the fishes from low altitudes, which provided consistent evidence for genome-wide rapid evolution in the Triplophysa genus. Many of the GO (Gene Ontology) categories associated with energy metabolism and hypoxia response exhibited accelerated evolution in the Triplophysa fishes compared with the large-scale loach. The genes that exhibited signs of positive selection and rapid evolution in the Triplophysa fishes were also significantly enriched in energy metabolism and hypoxia response categories. Our analysis identified widespread Triplophysa-specific nonsynonymous mutations in the fast evolving genes and positively selected genes. Moreover, we detected significant evidence of positive selection in the HIF (hypoxia-inducible factor)-1A and HIF-2B genes in Triplophysa fishes and found that the Triplophysa-specific nonsynonymous mutations in the HIF-1A and HIF-2B genes were associated with functional changes. Overall, our study provides
Becerra, Miguel A; Orrego, Diana A; Delgado-Trejos, Edilson
2013-01-01
The heart's mechanical activity can be appraised by auscultation recordings, taken from the 4-Standard Auscultation Areas (4-SAA), one for each cardiac valve, as there are invisible murmurs when a single area is examined. This paper presents an effective approach for cardiac murmur detection based on adaptive neuro-fuzzy inference systems (ANFIS) over acoustic representations derived from Empirical Mode Decomposition (EMD) and Hilbert-Huang Transform (HHT) of 4-channel phonocardiograms (4-PCG). The 4-PCG database belongs to the National University of Colombia. Mel-Frequency Cepstral Coefficients (MFCC) and statistical moments of HHT were estimated on the combination of different intrinsic mode functions (IMFs). A fuzzy-rough feature selection (FRFS) was applied in order to reduce complexity. An ANFIS network was implemented on the feature space, randomly initialized, adjusted using heuristic rules and trained using a hybrid learning algorithm made up by least squares and gradient descent. Global classification for 4-SAA was around 98.9% with satisfactory sensitivity and specificity, using a 50-fold cross-validation procedure (70/30 split). The representation capability of the EMD technique applied to 4-PCG and the neuro-fuzzy inference of acoustic features offered a high performance to detect cardiac murmurs. PMID:24109851
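The layered computation an ANFIS encodes can be sketched as a first-order Sugeno system: fuzzify each input, take product firing strengths, normalize, and combine linear rule consequents. This is a generic illustration, not the trained network from the paper; the Gaussian membership parameters and rule consequents below are invented for the example.

```python
import math

def gauss_mf(x, c, s):
    """Gaussian membership function with center c and width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x1, x2, rules):
    """First-order Sugeno inference: each rule is ((c1,s1),(c2,s2),(p,q,r));
    firing strength is the product of memberships, consequent is p*x1+q*x2+r."""
    w = [gauss_mf(x1, c1, s1) * gauss_mf(x2, c2, s2)
         for (c1, s1), (c2, s2), _ in rules]
    total = sum(w)
    return sum((wi / total) * (p * x1 + q * x2 + r)
               for wi, (_, _, (p, q, r)) in zip(w, rules))

# Two illustrative rules: one centered near (0, 0), one near (1, 1)
rules = [
    ((0.0, 0.5), (0.0, 0.5), (0.0, 0.0, 0.0)),
    ((1.0, 0.5), (1.0, 0.5), (0.0, 0.0, 1.0)),
]
```

In the hybrid learning the abstract mentions, the linear consequents (p, q, r) are fitted by least squares with memberships fixed, and the membership parameters (c, s) are then adjusted by gradient descent.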
Karami, Ali; Keiter, Steffen; Hollert, Henner; Courtenay, Simon C
2013-03-01
This study represents a first attempt at applying a fuzzy inference system (FIS) and an adaptive neuro-fuzzy inference system (ANFIS) to the field of aquatic biomonitoring for classification of the dosage and time of benzo[a]pyrene (BaP) injection through selected biomarkers in African catfish (Clarias gariepinus). Fish were injected either intramuscularly (i.m.) or intraperitoneally (i.p.) with BaP. Hepatic glutathione S-transferase (GST) activities, relative visceral fat weights (LSI), and four biliary fluorescent aromatic compounds (FACs) concentrations were used as the inputs in the modeling study. Contradictory rules in FIS and ANFIS models appeared after conversion of bioassay results into human language (rule-based system). A "data trimming" approach was proposed to eliminate the conflicts prior to fuzzification. However, the model produced was relevant only to relatively low exposures to BaP, especially through the i.m. route of exposure. Furthermore, sensitivity analysis was unable to raise the classification rate to an acceptable level. In conclusion, FIS and ANFIS models have limited applications in the field of fish biomarker studies.
NASA Astrophysics Data System (ADS)
Trianto, Andriantama Budi; Hadi, I. M.; Liong, The Houw; Purqon, Acep
2015-09-01
Indonesia's economy is growing well, which has encouraged investment in banks and the stock market. In this study, we perform prediction for the three blue chips of Indonesian banking, i.e. BCA, BNI, and MANDIRI, by using the method of Adaptive Neuro-Fuzzy Inference System (ANFIS) with Takagi-Sugeno rules and the Generalized bell (Gbell) membership function. Our results show that ANFIS performs good prediction, with RMSEs of 27 for BCA, 5.29 for BNI, and 13.41 for MANDIRI, respectively. Furthermore, we develop an active strategy to gain more benefit and compare it against a passive strategy. Our results show that the passive strategy gains 13 million rupiah in one year, while the active strategy gains 47 million rupiah. The active investment strategy thus yields several times the benefit of the passive one.
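The RMSE figures quoted for the three stock forecasts follow the standard definition; a minimal helper:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between an observed and a forecast series."""
    assert len(actual) == len(predicted)
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))
```

Lower values mean the ANFIS forecast tracks the price series more closely; the scale is that of the underlying prices, which is why the three banks' RMSEs are not directly comparable to each other.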
Yang, Zhixian; Wang, Yinghua; Ouyang, Gaoxiang
2014-01-01
Background electroencephalography (EEG), recorded with scalp electrodes, in children with electrical status epilepticus during slow-wave sleep (ESES) syndrome and control subjects has been analyzed. We considered 10 ESES patients, all right-handed and aged 3-9 years. The 10 control individuals had the same characteristics of the ESES ones but presented a normal EEG. Recordings were undertaken in the awake and relaxed states with their eyes open. The complexity of background EEG was evaluated using the permutation entropy (PE) and sample entropy (SampEn) in combination with the ANOVA test. It can be seen that the entropy measures of EEG are significantly different between the ESES patients and normal control subjects. Then, a classification framework based on entropy measures and adaptive neuro-fuzzy inference system (ANFIS) classifier is proposed to distinguish ESES and normal EEG signals. The results are promising and a classification accuracy of about 89% is achieved. PMID:24790547
Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D
2012-09-01
Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional
Dietary adaptations of South African australopiths: inference from enamel prism attitude.
Macho, Gabriele A; Shimizu, Daisuke
2009-09-01
The angle at which enamel prisms approach the wear surface holds information with regard to the stiffness of the tissue, as well as its wear resistance. Hence, analyses of prism orientation may shed light on questions of whether the thick enamel in hominins has evolved to confer stiffness or wear resistance to the teeth and may thus inform about the diet and behavioural ecology of these species. This was explored for Paranthropus robustus and Australopithecus africanus, whereby a distinction was made between prisms at the Phase I and Phase II facets. The results were compared with those obtained for Theropithecus, Macaca, and Potamochoerus for whom behavioural and/or experimental data are available, and were interpreted against simple mechanical principles. The South African hominins differ significantly in their relationships between wear facets and prism angulations. Teeth of P. robustus are better adapted to more vertical loads during mastication (Phase I), whereas those of A. africanus are better adapted to cope with more laterally-directed loads (Phase II) commonly associated with roll-crush and mastication. Overall, teeth of P. robustus appear stiffer, while those of A. africanus seem more wear resistant. PMID:19660781
Benner, Philipp; Elze, Tobias
2012-01-01
We present a predictive account on adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when there is only weak structural knowledge available such that parametric modeling is no option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem. PMID:22822269
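The Pólya urn predictive scheme underlying the model can be illustrated with a single (non-hierarchical) urn: each new draw either reuses a previously seen value in proportion to its count, or samples a fresh value from a base measure. This sketch omits the paper's mixture and hierarchy, and `base_draw` is a placeholder base measure.

```python
import random
from collections import Counter

def polya_urn_sample(n_draws, alpha, base_draw, rng):
    """Sequential Polya urn / Chinese-restaurant sampling:
    with probability alpha/(alpha+n) draw a new value from the base measure,
    otherwise reuse a past value proportionally to its count."""
    counts = Counter()
    for _ in range(n_draws):
        n = sum(counts.values())
        if n == 0 or rng.random() < alpha / (alpha + n):
            v = base_draw(rng)                 # new value from the base measure
        else:
            r = rng.randrange(n)               # reuse: rich-get-richer choice
            for v, c in counts.items():
                r -= c
                if r < 0:
                    break
        counts[v] += 1
    return counts
```

The clustering this induces (far fewer distinct values than draws) is what lets the sampling scheme pool information across related stimulus levels.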
Bazin, Eric; Dawson, Kevin J; Beaumont, Mark A
2010-06-01
We address the problem of finding evidence of natural selection from genetic data, accounting for the confounding effects of demographic history. In the absence of natural selection, gene genealogies should all be sampled from the same underlying distribution, often approximated by a coalescent model. Selection at a particular locus will lead to a modified genealogy, and this motivates a number of recent approaches for detecting the effects of natural selection in the genome as "outliers" under some models. The demographic history of a population affects the sampling distribution of genealogies, and therefore the observed genotypes and the classification of outliers. Since we cannot see genealogies directly, we have to infer them from the observed data under some model of mutation and demography. Thus the accuracy of an outlier-based approach depends to a greater or a lesser extent on the uncertainty about the demographic and mutational model. A natural modeling framework for this type of problem is provided by Bayesian hierarchical models, in which parameters, such as mutation rates and selection coefficients, are allowed to vary across loci. It has proved quite difficult computationally to implement fully probabilistic genealogical models with complex demographies, and this has motivated the development of approximations such as approximate Bayesian computation (ABC). In ABC the data are compressed into summary statistics, and computation of the likelihood function is replaced by simulation of data under the model. In a hierarchical setting one may be interested both in hyperparameters and parameters, and there may be very many of the latter--for example, in a genetic model, these may be parameters describing each of many loci or populations. This poses a problem for ABC in that one then requires summary statistics for each locus, which, if used naively, leads to a consequent difficulty in conditional density estimation. We develop a general method for applying
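The ABC idea described here, replacing likelihood evaluation with simulation and summary-statistic matching, reduces in its simplest form to rejection sampling. A toy sketch inferring a normal mean (the prior, summary statistic, and tolerance are illustrative, not the paper's genealogical model):

```python
import random
import statistics

def abc_rejection(obs_summary, prior_draw, simulate, summary, eps, n_sims, rng):
    """Approximate Bayesian computation by rejection: keep parameter draws
    whose simulated summary statistic lands within eps of the observed one."""
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        if abs(summary(simulate(theta, rng)) - obs_summary) < eps:
            accepted.append(theta)
    return accepted

# Toy inference of a normal mean: uniform prior, sample mean as summary
rng = random.Random(42)
posterior = abc_rejection(
    obs_summary=2.0,
    prior_draw=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda theta, r: [r.gauss(theta, 1.0) for _ in range(50)],
    summary=statistics.mean,
    eps=0.3,
    n_sims=2000,
    rng=rng,
)
```

The accepted draws approximate the posterior; the hierarchical difficulty the abstract raises is that with many loci one needs summaries per locus, which inflates the dimension of this matching step.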
Gudhka, Reema K.; Neilan, Brett A.; Burns, Brendan P.
2015-01-01
Halococcus hamelinensis was the first archaeon isolated from stromatolites. These geomicrobial ecosystems are thought to be some of the earliest known on Earth, yet, despite their evolutionary significance, the role of Archaea in these systems is still not well understood. Detailed here is the genome sequencing and analysis of an archaeon isolated from stromatolites. The genome of H. hamelinensis consisted of 3,133,046 base pairs with an average G+C content of 60.08% and contained 3,150 predicted coding sequences or ORFs, 2,196 (68.67%) of which were protein-coding genes with functional assignments and 954 (29.83%) of which were of unknown function. Codon usage of the H. hamelinensis genome was consistent with a highly acidic proteome, a major adaptive mechanism towards high salinity. Amino acid transport and metabolism, inorganic ion transport and metabolism, energy production and conversion, ribosomal structure, and unknown function COG genes were overrepresented. The genome of H. hamelinensis also revealed characteristics reflecting its survival in its extreme environment, including putative genes/pathways involved in osmoprotection, oxidative stress response, and UV damage repair. Finally, genome analyses indicated the presence of putative transposases as well as positive matches of genes of H. hamelinensis against various genomes of Bacteria, Archaea, and viruses, suggesting the potential for horizontal gene transfer. PMID:25709556
Domestication history and geographical adaptation inferred from a SNP map of African rice.
Meyer, Rachel S; Choi, Jae Young; Sanches, Michelle; Plessis, Anne; Flowers, Jonathan M; Amas, Junrey; Dorph, Katherine; Barretto, Annie; Gross, Briana; Fuller, Dorian Q; Bimpong, Isaac Kofi; Ndjiondjop, Marie-Noelle; Hazzouri, Khaled M; Gregorio, Glenn B; Purugganan, Michael D
2016-09-01
African rice (Oryza glaberrima Steud.) is a cereal crop species closely related to Asian rice (Oryza sativa L.) but was independently domesticated in West Africa ∼3,000 years ago. African rice is rarely grown outside sub-Saharan Africa but is of global interest because of its tolerance to abiotic stresses. Here we describe a map of 2.32 million SNPs of African rice from whole-genome resequencing of 93 landraces. Population genomic analysis shows a population bottleneck in this species that began ∼13,000-15,000 years ago with effective population size reaching its minimum value ∼3,500 years ago, suggesting a protracted period of population size reduction likely commencing with predomestication management and/or cultivation. Genome-wide association studies (GWAS) for six salt tolerance traits identify 11 significant loci, 4 of which are within ∼300 kb of genomic regions that possess signatures of positive selection, suggesting adaptive geographical divergence for salt tolerance in this species. PMID:27500524
Kolus, Ahmet; Dubé, Philippe-Antoine; Imbeau, Daniel; Labib, Richard; Dubeau, Denise
2014-11-01
In new approaches based on adaptive neuro-fuzzy inference systems (ANFIS) and an analytical method, heart rate (HR) measurements were used to estimate oxygen consumption (VO2). Thirty-five participants performed Meyer and Flenghi's step-test (eight of which performed regeneration release work), during which heart rate and oxygen consumption were measured. Two individualized models and a General ANFIS model that does not require individual calibration were developed. Results indicated the superior precision achieved with individualized ANFIS modelling (RMSE = 1.0 and 2.8 ml/kg min in laboratory and field, respectively). The analytical model outperformed the traditional linear calibration and Flex-HR methods with field data. The General ANFIS model's estimates of VO2 were not significantly different from actual field VO2 measurements (RMSE = 3.5 ml/kg min). With its ease of use and low implementation cost, the General ANFIS model shows potential to replace any of the traditional individualized methods for VO2 estimation from HR data collected in the field. PMID:24793823
Motion Adaptive Vertical Handoff in Cellular/WLAN Heterogeneous Wireless Network
Ma, Lin; Xu, Yubin; Fu, Yunhai
2014-01-01
In heterogeneous wireless networks, vertical handoff plays an important role in guaranteeing quality of service and overall network performance. Conventional vertical handoff trigger schemes are mostly developed from horizontal handoff in homogeneous cellular networks. Basically, they can be summarized as hysteresis-based and dwelling-timer-based algorithms, which are reliable for avoiding unnecessary handoff caused by terminals dwelling at the edge of WLAN coverage. However, the coverage of a WLAN is much smaller than that of a cellular network, while the motion types of terminals can be various in a typical outdoor scenario. As a result, traditional algorithms are less effective in avoiding unnecessary handoff triggered by vehicle-borne terminals with various speeds. Besides that, hysteresis and dwelling-timer thresholds usually need to be modified to satisfy different channel environments. For solving this problem, a vertical handoff algorithm based on Q-learning is proposed in this paper. Q-learning can provide the decider with self-adaptive ability for handling the terminals' handoff requests with different motion types and channel conditions. Meanwhile, a Neural Fuzzy Inference System (NFIS) is embedded to retain a continuous perception of the state space. Simulation results verify that the proposed algorithm can achieve lower unnecessary handoff probability compared with the other two conventional algorithms. PMID:24741347
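The Q-learning update at the core of the proposed handoff decider can be sketched with a toy state space of terminal speeds. The states, actions, and reward values below are invented for illustration and are far simpler than the paper's NFIS-based state perception:

```python
import random

def q_update(Q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: move Q(s,a) toward r + gamma*max_a' Q(s',a')."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

def handoff_reward(speed, action):
    # Toy reward: a slow terminal in WLAN coverage gains from handing off;
    # a fast (vehicle-borne) terminal would exit coverage quickly, so
    # handing off is wasted effort.
    if speed == 'slow':
        return 1.0 if action == 'handoff' else 0.0
    return -1.0 if action == 'handoff' else 0.5

Q, rng = {}, random.Random(0)
states, actions = ('slow', 'fast'), ('handoff', 'stay')
for _ in range(3000):
    s, a = rng.choice(states), rng.choice(actions)   # pure exploration
    s_next = rng.choice(states)                      # terminal speed may change
    q_update(Q, s, a, handoff_reward(s, a), s_next, actions)
```

After training, the greedy policy hands off slow terminals and keeps fast ones on the cellular network, which is exactly the speed-sensitive behavior the fixed hysteresis and dwelling-timer thresholds lack.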
Klein, Nicole; Sander, P. Martin; Krahl, Anna; Scheyer, Torsten M.; Houssaye, Alexandra
2016-01-01
, and possibly sexual dimorphism. Humeral microanatomy documents the diversification of nothosaur species into different environments to avoid intraclade competition as well as competition with other marine reptiles. Nothosaur microanatomy indicates that knowledge of processes involved in secondary aquatic adaptation and their interaction are more complex than previously believed. PMID:27391607
Amiri, Mohammad J; Abedi-Koupai, Jahangir; Eslamian, Sayed S; Mousavi, Sayed F; Hasheminejad, Hasti
2013-01-01
To evaluate the performance of an Adaptive Neural-Based Fuzzy Inference System (ANFIS) model in estimating the efficiency of Pb (II) ion removal from aqueous solution by ostrich bone ash, a batch experiment was conducted. Five operational parameters including adsorbent dosage (C(s)), initial concentration of Pb (II) ions (C(o)), initial pH, temperature (T) and contact time (t) were taken as the input data and the adsorption efficiency (AE) of bone ash as the output. Based on 31 different structures, 5 ANFIS models were tested against the measured adsorption efficiency to assess the accuracy of each model. The results showed that ANFIS5, which used all input parameters, was the most accurate (RMSE = 2.65 and R(2) = 0.95) and ANFIS1, which used only the contact time input, was the worst (RMSE = 14.56 and R(2) = 0.46). In ranking the models, ANFIS4, ANFIS3 and ANFIS2 ranked second, third and fourth, respectively. The sensitivity analysis revealed that the estimated AE is most sensitive to the contact time, followed by pH, initial concentration of Pb (II) ions, adsorbent dosage, and temperature. The results showed that all ANFIS models overestimated the AE. In general, this study confirmed the capabilities of the ANFIS model as an effective tool for estimation of AE. PMID:23383640
Jhin, Changho; Hwang, Keum Taek
2014-01-01
Radical scavenging activity of anthocyanins is well known, but only a few studies have been conducted by quantum chemical approach. The adaptive neuro-fuzzy inference system (ANFIS) is an effective technique for solving problems with uncertainty. The purpose of this study was to construct and evaluate quantitative structure-activity relationship (QSAR) models for predicting radical scavenging activities of anthocyanins with good prediction efficiency. ANFIS-applied QSAR models were developed by using quantum chemical descriptors of anthocyanins calculated by semi-empirical PM6 and PM7 methods. Electron affinity (A) and electronegativity (χ) of flavylium cation, and ionization potential (I) of quinoidal base were significantly correlated with radical scavenging activities of anthocyanins. These descriptors were used as independent variables for QSAR models. ANFIS models with two triangular-shaped input fuzzy functions for each independent variable were constructed and optimized by 100 learning epochs. The constructed models using descriptors calculated by both PM6 and PM7 had good prediction efficiency with Q-square of 0.82 and 0.86, respectively. PMID:25153627
Jhin, Changho; Hwang, Keum Taek
2015-01-01
One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship models (QSAR) were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were done by MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions made for each of the independent variables and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR. The R-square values of the developed QSAR models with the variables calculated by PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrated reliabilities of the selected quantum chemical descriptors and the significance of QSAR models. PMID:26474167
NASA Astrophysics Data System (ADS)
Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro
2012-04-01
This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the seismic events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6 recorded at 13 seismic stations between 2004 and 2009. The database includes some 223 earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short period seismic discriminants, and features such as origin time of event, distance (source to station), latitude of epicenter, longitude of epicenter, magnitude, and spectral analysis (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for determining seismic events.
NASA Astrophysics Data System (ADS)
Teimouri, Reza; Sohrabpoor, Hamed
2013-12-01
The electrochemical machining (ECM) process is gaining importance due to specific advantages that can be exploited during machining, such as a high machining rate, good accuracy and control, and a wide range of machinable materials. Because so many parameters influence the process, predicting outcomes and selecting optimal values is complex, especially when the process is programmed for machining hard materials. In the present work, adaptive neuro-fuzzy inference systems (ANFIS) were used to build predictive models from experimental observations of the effects of electrolyte concentration, electrolyte flow rate, applied voltage and feed rate on material removal rate (MRR) and surface roughness (SR). ANFIS 3D surfaces were then plotted to analyze the effects of the process parameters on MRR and SR. Finally, the cuckoo optimization algorithm (COA) was used to select solutions in which the process simultaneously reaches maximum material removal rate and minimum surface roughness. Results indicated that the ANFIS technique models MRR and SR with high prediction accuracy. Results obtained from applying COA were also compared with those of confirmatory experiments, validating the applicability and suitability of the proposed techniques for enhancing the performance of the ECM process.
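The final step pairs the fitted surfaces with an optimizer. As a much simplified stand-in for COA, a plain grid search over a weighted, normalized score conveys the maximize-MRR/minimize-SR trade-off; the two predictor functions below are invented toy surfaces, not the paper's ANFIS models:

```python
import itertools

# Toy stand-ins for the trained ANFIS surfaces (hypothetical coefficients):
# MRR grows with voltage and feed rate; SR worsens (grows) with voltage.
def predict_mrr(voltage, feed):
    return 0.8 * voltage + 0.5 * feed

def predict_sr(voltage, feed):
    return 0.1 * voltage ** 2 + 0.2 * feed

def best_setting(voltages, feeds, w=0.5):
    """Grid search maximizing a weighted score of normalized MRR minus SR."""
    cands = list(itertools.product(voltages, feeds))
    mrrs = [predict_mrr(v, f) for v, f in cands]
    srs = [predict_sr(v, f) for v, f in cands]
    lo_m, hi_m, lo_s, hi_s = min(mrrs), max(mrrs), min(srs), max(srs)
    def score(i):
        m = (mrrs[i] - lo_m) / (hi_m - lo_m)   # normalized MRR (higher is better)
        s = (srs[i] - lo_s) / (hi_s - lo_s)    # normalized SR (lower is better)
        return w * m - (1 - w) * s
    return max(range(len(cands)), key=score), cands

idx, cands = best_setting([10, 12, 14], [0.2, 0.4, 0.6])
print(cands[idx])
```

A population-based optimizer like COA explores a continuous parameter space instead of a fixed grid, but the scoring idea is the same.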
Jhin, Changho; Hwang, Keum Taek
2015-01-01
One of the physiological characteristics of carotenoids is their radical scavenging activity. In this study, the relationship between radical scavenging activities and quantum chemical descriptors of carotenoids was determined. Adaptive neuro-fuzzy inference system (ANFIS) applied quantitative structure-activity relationship (QSAR) models were also developed for predicting and comparing radical scavenging activities of carotenoids. Semi-empirical PM6 and PM7 quantum chemical calculations were performed with MOPAC. Ionisation energies of neutral and monovalent cationic carotenoids and the product of chemical potentials of neutral and monovalent cationic carotenoids were significantly correlated with the radical scavenging activities, and consequently these descriptors were used as independent variables for the QSAR study. The ANFIS applied QSAR models were developed with two triangular-shaped input membership functions for each independent variable and optimised by a backpropagation method. High prediction efficiencies were achieved by the ANFIS applied QSAR: the R-square values of the models built with variables calculated by the PM6 and PM7 methods were 0.921 and 0.902, respectively. The results of this study demonstrate the reliability of the selected quantum chemical descriptors and the significance of the QSAR models. PMID:26474167
NASA Astrophysics Data System (ADS)
Salehi, Mohammad Reza; Noori, Leila; Abiri, Ebrahim
2016-11-01
In this paper, a subsystem consisting of a microstrip bandpass filter and a microstrip low-noise amplifier (LNA) is designed for WLAN applications. The proposed filter has a small implementation area (49 mm²), small insertion loss (0.08 dB) and wide fractional bandwidth (FBW) (61%). To design the proposed LNA, compact microstrip cells, a field-effect transistor, and only a single lumped capacitor are used. It has a low supply voltage and a low return loss (-40 dB) at the operating frequency. The matching condition of the proposed subsystem is predicted using subsystem analysis, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). To design the proposed filter, the transmission matrix of the proposed resonator is obtained and analysed. The performance of the proposed ANN and ANFIS models is tested against the numerical data by four performance measures, namely the correlation coefficient (CC), the mean absolute error (MAE), the average percentage error (APE) and the root mean square error (RMSE). The obtained results show that these models are in good agreement with the numerical data, with only a small error between the predicted values and the numerical solution.
Fu, Zening; Chan, Shing-Chow; Di, Xin; Biswal, Bharat; Zhang, Zhiguo
2014-04-01
Time-varying covariance is an important metric to measure the statistical dependence between non-stationary biological processes. Time-varying covariance is conventionally estimated from short-time data segments within a window having a certain bandwidth, but it is difficult to choose an appropriate bandwidth to estimate covariance with different degrees of non-stationarity. This paper introduces a local polynomial regression (LPR) method to estimate time-varying covariance and performs an asymptotic analysis of the LPR covariance estimator to show that both the estimation bias and variance are functions of the bandwidth and there exists an optimal bandwidth to minimize the mean square error (MSE) locally. A data-driven variable bandwidth selection method, namely the intersection of confidence intervals (ICI), is adopted in LPR for adaptively determining the local optimal bandwidth that minimizes the MSE. Experimental results on simulated signals show that the LPR-ICI method can achieve robust and reliable performance in estimating time-varying covariance with different degrees of variations and under different noise scenarios, making it a powerful tool to study the dynamic relationship between non-stationary biomedical signals. Further, we apply the LPR-ICI method to estimate time-varying covariance of functional magnetic resonance imaging (fMRI) signals in a visual task for the inference of dynamic functional brain connectivity. The results show that the LPR-ICI method can effectively capture the transient connectivity patterns from fMRI.
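The local estimation idea can be sketched with a degree-0 local polynomial (kernel-weighted mean) covariance estimator. The ICI bandwidth selection is omitted here in favor of a single fixed bandwidth, and the test signal (a covariance that flips sign mid-record) is invented for illustration:

```python
import numpy as np

def kernel_smooth(t, v, t0, h):
    """Degree-0 local polynomial regression: kernel-weighted mean at time t0."""
    w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel, bandwidth h
    return float(np.sum(w * v) / np.sum(w))

def local_cov(t, x, y, t0, h):
    """Time-varying covariance at t0 as E[xy] - E[x]E[y], each term smoothed locally."""
    return (kernel_smooth(t, x * y, t0, h)
            - kernel_smooth(t, x, t0, h) * kernel_smooth(t, y, t0, h))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
z = rng.standard_normal(t.size)
x = z + 0.1 * rng.standard_normal(t.size)
y = np.where(t < 0.5, 1.0, -1.0) * z + 0.1 * rng.standard_normal(t.size)
# The true covariance jumps from about +1 to about -1 at t = 0.5;
# local estimates on either side track the two regimes.
print(local_cov(t, x, y, 0.25, 0.05), local_cov(t, x, y, 0.75, 0.05))
```

The ICI rule would repeat this at several bandwidths and keep the largest one whose confidence interval still overlaps those of the smaller bandwidths, trading bias against variance pointwise.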
NASA Technical Reports Server (NTRS)
Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.
2012-01-01
We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution, noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms, best described as "multi-pixel" techniques, that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology adaptively exploits, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.
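The "known and controllable uncertainty" of a Monte Carlo forward model is its standard error, which shrinks like 1/√n in the number of sampled paths. A toy sketch with a simple integrand standing in for the radiative transfer model:

```python
import math
import random

def mc_mean(f, n, seed=0):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0,1), with its standard error."""
    rng = random.Random(seed)
    xs = [f(rng.random()) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs) / (n - 1)
    return mean, math.sqrt(var / n)   # standard error shrinks like 1/sqrt(n)

# Estimate the integral of x^2 over [0, 1] (true value 1/3) at two sample sizes;
# quadrupling n roughly halves the reported error bar.
m1, se1 = mc_mean(lambda x: x * x, 1000)
m2, se2 = mc_mean(lambda x: x * x, 4000)
print(m1, se1, m2, se2)
```

In the retrieval framework, this controllable error bar lets the inference spend few paths on unlikely parameter values and many paths only where accuracy matters.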
Djukanovic, M.B.; Calovic, M.S.; Vesovic, B.V.; Sobajic, D.J.
1997-12-01
This paper presents an approach to nonlinear, multivariable control of low-head hydropower plants using an adaptive-network-based fuzzy inference system (ANFIS). The new design technique enhances fuzzy controllers with a self-learning capability for achieving prescribed control objectives in a near-optimal manner. The controller has the flexibility to accept more sensory information, with the main goal of improving the generator unit transients by adjusting the exciter input, the wicket gate and the runner blade positions. The developed ANFIS controller, whose control signals are adjusted using incomplete on-line measurements, can offer better damping of generator oscillations over a wide range of operating conditions than conventional controllers. Digital simulations of a hydropower plant equipped with a low-head Kaplan turbine are performed, and comparisons of conventional excitation-governor control, state-feedback optimal control and ANFIS-based output feedback control are presented. To demonstrate the effectiveness of the proposed control scheme and the robustness of the acquired neuro-fuzzy controller, the controller has been implemented on a complex high-order nonlinear hydrogenerator model.
Azeez, Dhifaf; Ali, Mohd Alauddin Mohd; Gan, Kok Beng; Saiboon, Ismail
2013-01-01
Unexpected disease outbreaks and disasters are becoming primary issues facing our world. The first points of contact, whether at the disaster scene or in the emergency department, expose frontline workers and medical physicians to the risk of infection. There is therefore a persuasive demand for the integration and exploitation of heterogeneous biomedical information to improve clinical practice, medical research and point of care. In this paper, a primary triage model was designed using two different methods: an adaptive neuro-fuzzy inference system (ANFIS) and an artificial neural network (ANN). When the patient presents at the triage counter, the system captures their vital signs and chief complaints, along with the physiological status and general appearance of the patient. These data are managed and analyzed in the data server and the patient's emergency status is reported immediately. The proposed method will help to reduce the queue time at the triage counter and the emergency physician's burden, especially during disease outbreaks and serious disasters. The models were built with a data set of 2223 records extracted from the Emergency Department of the Universiti Kebangsaan Malaysia Medical Centre to predict the primary triage category. A multilayer feed-forward network with one hidden layer of 12 neurons was used for the ANN architecture. Fuzzy subtractive clustering was used to find the fuzzy rules for the ANFIS model. On the training data, the RMSE, %RMSE and accuracy (evaluated by measuring specificity and sensitivity for binary classification) were 0.14, 5.7 and 99% respectively for the ANN model, and 0.85, 32.00 and 96.00% respectively for the ANFIS model. On unseen data, the corresponding values were 0.18, 7.16 and 96.7% for the ANN model and 1.30, 49.84 and 94% for the ANFIS model. The ANN model thus performed better than the ANFIS model on both training and unseen data.
NASA Astrophysics Data System (ADS)
Mekanik, F.; Imteaz, M. A.; Talei, A.
2016-05-01
Accurate seasonal rainfall forecasting is an important step in the development of reliable runoff forecast models. The large-scale climate modes affecting rainfall in Australia have recently proven useful in rainfall prediction problems. In this study, adaptive network-based fuzzy inference system (ANFIS) models are developed for the first time for southeast Australia in order to forecast spring rainfall. The models are applied in east, central and west Victoria as case studies. Large-scale climate signals comprising the El Nino Southern Oscillation (ENSO), the Indian Ocean Dipole (IOD) and the Interdecadal Pacific Oscillation (IPO) are selected as rainfall predictors. Eight models are developed based on single climate modes (ENSO, IOD, and IPO) and combined climate modes (ENSO-IPO and ENSO-IOD). Root mean square error (RMSE), mean absolute error (MAE), the Pearson correlation coefficient (r) and the root mean square error in probability (RMSEP) skill score are used to evaluate the performance of the proposed models. The predictions demonstrate that ANFIS models based on the individual IOD index outperform the models based on individual ENSO indices in terms of RMSE, MAE and r. It is further found that IPO is not an effective predictor for the region and that the combined ENSO-IOD and ENSO-IPO predictors did not improve the predictions. To evaluate the effectiveness of the proposed models, a comparison is conducted between the ANFIS models and the conventional artificial neural network (ANN), the Predictive Ocean Atmosphere Model for Australia (POAMA) and climatology forecasts. POAMA is the official dynamic model used by the Australian Bureau of Meteorology. The ANFIS predictions show superior performance for most of the region compared to ANN and climatology forecasts. POAMA performs better with regard to RMSE and MAE in east and part of central Victoria; however, compared to ANFIS it shows weaker results in west Victoria in terms of prediction errors and the RMSEP skill score.
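Three of the four skill measures used above have simple closed forms; a minimal reference implementation on a made-up observation/forecast pair (the RMSEP skill score is omitted, since it additionally requires a climatological probability forecast):

```python
import math

def rmse(obs, pred):
    """Root mean square error between observations and predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def pearson_r(obs, pred):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Invented seasonal rainfall totals (mm) and forecasts, for illustration only.
obs = [100.0, 120.0, 80.0, 90.0]
pred = [105.0, 115.0, 85.0, 95.0]
print(rmse(obs, pred), mae(obs, pred), pearson_r(obs, pred))
```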
The new physician as unwitting quantum mechanic: is adapting Dirac's inference system best practice for personalized medicine, genomics, and proteomics?
Robson, Barry
2007-08-01
What is the best practice for automated inference in medical decision support for personalized medicine? A known system already exists: Dirac's inference system from quantum mechanics (QM), using bra-kets ⟨A|B⟩ and bras ⟨A|, where A and B are states, events, or measurements representing, say, clinical and biomedical rules. Dirac's system should theoretically be the universal best practice for all inference, though QM is notorious for sometimes leading to bizarre conclusions that appear not to be applicable to the macroscopic world of everyday human experience and medical practice. It is here argued that this apparent difficulty vanishes if QM is assigned one new multiplication function @, which conserves conditionality appropriately, making QM applicable to classical inference including a quantitative form of the predicate calculus. An alternative interpretation with the same consequences is that every i = √-1 in Dirac's QM is replaced by h, an entity distinct from 1 and i and arguably a hidden root of 1 such that h² = 1. With that exception, this paper is thus primarily a review of the application of Dirac's system, by application of linear algebra in the complex domain, to help manipulate information about associations and ontology in complicated data. Any combined bra-ket can be shown to be composed only of the sum of QM-like bra and ket weights c(·), times an exponential function of Fano's mutual information measure I(A; B) about the association between A and B, that is, an association rule from data mining. With the weights and Fano measure re-expressed as expectations on finite data using Riemann's incomplete (i.e., generalized) zeta functions, actual counts of observations for real-world sparse data can be readily utilized. Finally, the paper compares identical character, distinguishability of states, events or measurements, correlation, mutual information, and orthogonal character, important issues in data mining.
Realities of weather extremes on daily life in urban India - How quantified impacts infer sensible adaptation options
NASA Astrophysics Data System (ADS)
Reckien, D.
2012-12-01
Emerging and developing economies are currently undergoing one of the most profound socio-spatial transitions in their history, with strong urbanization and weather extremes bringing about changes in the economy, forms of living and living conditions, but also increasing risks and altered social divides. The impacts of heat waves and strong rain events are therefore perceived differently among urban residents. Addressing the social differences of climate change impacts [1] and expanding targeted adaptation options have emerged as urgent policy priorities, particularly for developing and emerging economies [2]. This paper discusses the perceived impacts of weather-related extreme events on different social groups in New Delhi and Hyderabad, India. Using network statistics and scenario analysis on Fuzzy Cognitive Maps (FCMs) as part of a vulnerability analysis, the investigation provides quantitative and qualitative measures to compare impacts and adaptation strategies for different social groups. Impacts of rain events are stronger than those of heat in both cities and affect the lower income classes in particular. Interestingly, the scenario analysis (comparing altered networks in which each alteration represents a possible adaptation measure) shows that investments in the water infrastructure would be most meaningful and more effective than investments in, e.g., the traffic infrastructure, despite the stronger burden from traffic disruptions and the resulting concentration of planning and policy on traffic ease and investments. The method of Fuzzy Cognitive Mapping offers a link between perception and modeling, and the possibility to aggregate and analyze the views of a large number of stakeholders. Our research has shown that planners and politicians often know about many of the problems, but are frequently overwhelmed by them in their respective cities and look for a prioritization of adaptation options. FCM meets this need and identifies priority adaptation options.
Pincheira-Donoso, Daniel
2011-01-01
Large-scale patterns of current species geographic range-size variation reflect historical dynamics of dispersal and provide insights into future consequences under changing environments. Evidence suggests that climate warming exerts major damage on high latitude and elevation organisms, where changes are more severe and available space to disperse tracking historical niches is more limited. Species with longer generations (slower adaptive responses), such as vertebrates, and with restricted distributions (lower genetic diversity, higher inbreeding) in these environments are expected to be particularly threatened by warming crises. However, a well-known macroecological generalization (Rapoport's rule) predicts that species range-sizes increase with increasing latitude-elevation, thus counterbalancing the impact of climate change. Here, I investigate geographic range-size variation across an extreme environmental gradient and as a function of body size, in the prominent Liolaemus lizard adaptive radiation. Conventional and phylogenetic analyses revealed that latitudinal (but not elevational) ranges significantly decrease with increasing latitude-elevation, while body size was unrelated to range-size. Evolutionarily, these results are insightful as they suggest a link between spatial environmental gradients and range-size evolution. However, ecologically, these results suggest that Liolaemus might be increasingly threatened if, as predicted by theory, ranges retract and contract continuously under persisting climate warming, potentially increasing extinction risks at high latitudes and elevations. PMID:22194953
Mathur, Neha; Glesk, Ivan; Buis, Arjan
2016-10-01
Monitoring of the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is due to the flexible nature of the interface liners used, which impedes the required consistent positioning of the temperature sensors during donning and doffing. Predicting the in-socket residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. In this work, we propose to implement an adaptive neuro-fuzzy inference strategy (ANFIS) to predict the in-socket residual limb temperature. ANFIS belongs to the family of fused neuro-fuzzy systems, in which the fuzzy system is incorporated in a framework that is adaptive in nature. The proposed method is compared to our earlier work using Gaussian processes for machine learning. Comparing the predicted and actual data indicates that both modeling techniques have comparable performance metrics and can be efficiently used for non-invasive temperature monitoring. PMID:27452775
Torshabi, Ahmad Esmaili
2014-12-01
In external radiotherapy of dynamic targets such as lung and breast cancers, accurate correlation models are utilized to extract the real-time tumor position from external surrogates correlated with the internal motion of tumors. In this study, a correlation method based on a neuro-fuzzy model is proposed to correlate external motion data with an internal tumor motion estimate in real-time mode, owing to its robustness in motion tracking. An initial test of the performance of this model was reported in our previous studies. In this work, after implementing some modifications, ANFIS remains robust and tracks tumor motion more reliably, reducing the motion estimation error remarkably. After configuring the new version of our ANFIS model, its performance was retrospectively tested on ten patients treated with the Synchrony CyberKnife system. To assess the performance of our model, the predicted tumor motion output by the model was compared with the state-of-the-art model. The final analyzed results show that our adaptive neuro-fuzzy model can reduce tumor tracking errors significantly compared with the ground-truth database and even the tumor tracking methods presented in our previous works. PMID:25412886
NASA Astrophysics Data System (ADS)
Ajay Kumar, M.; Srikanth, N. V.
2014-03-01
Converter control is a major field of current research on HVDC Light transmission systems. In this paper, a fuzzy logic controller is utilized for controlling both converters of a space vector pulse width modulation (SVPWM) based HVDC Light transmission system. Because forming the rule base is complex, an intelligent controller known as an adaptive neuro-fuzzy inference system (ANFIS) controller is also introduced. The proposed ANFIS controller changes the PI gains automatically for different operating conditions. A hybrid learning method, which combines and exploits the best features of the backpropagation algorithm and the least-squares estimation method, is used to train the 5-layer ANFIS controller. The performance of the proposed ANFIS controller is compared and validated against the fuzzy logic controller and also against the fixed-gain conventional PI controller. The simulations are carried out in the MATLAB/SIMULINK environment. The results reveal that the proposed ANFIS controller reduces power fluctuations at both converters. It also effectively improves the dynamic performance of the test power system when tested under various ac fault conditions.
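The least-squares half of that hybrid learning scheme can be shown in isolation: with the premise (membership) parameters frozen, a first-order Sugeno output is linear in the consequent parameters, so one linear solve fits all of them; backpropagation then only has to update the membership functions. All membership parameters and consequents below are invented for illustration:

```python
import numpy as np

def fit_consequents(x, y, wbar):
    """Least-squares step of ANFIS hybrid learning: the output
    sum_i wbar_i * (p_i*x + q_i) is linear in (p_i, q_i), so the rule
    consequents are recovered by one linear least-squares solve."""
    r = wbar.shape[1]
    Phi = np.hstack([wbar * x[:, None], wbar])   # columns: wbar_i*x, then wbar_i
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return theta[:r], theta[r:]                  # slopes p_i, intercepts q_i

# Two Gaussian membership functions (illustrative parameters).
x = np.linspace(0.0, 1.0, 200)
g = np.column_stack([np.exp(-((x - 0.2) / 0.3) ** 2),
                     np.exp(-((x - 0.8) / 0.3) ** 2)])
wbar = g / g.sum(axis=1, keepdims=True)          # normalized firing strengths
# Synthesize targets from known consequents, then recover them exactly.
y = wbar[:, 0] * (2 * x + 1) + wbar[:, 1] * (-x + 3)
p, q = fit_consequents(x, y, wbar)
print(np.round(p, 3), np.round(q, 3))
```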
Yolmeh, Mahmoud; Habibi Najafi, Mohammad B; Salehi, Fakhreddin
2014-01-01
Annatto is commonly used as a coloring agent in the food industry and has antimicrobial and antioxidant properties. In this study, genetic algorithm-artificial neural network (GA-ANN) and adaptive neuro-fuzzy inference system (ANFIS) models were used to predict the effect of annatto dye on Salmonella enteritidis in mayonnaise. The GA-ANN and ANFIS were fed with 3 inputs, annatto dye concentration (0, 0.1, 0.2 and 0.4%), storage temperature (4 and 25°C) and storage time (1-20 days), for prediction of the S. enteritidis population. Both models were trained with experimental data. The results showed that the annatto dye reduced the S. enteritidis population and that its effect was stronger at 25°C than at 4°C. The developed GA-ANN, which included 8 hidden neurons, could predict the S. enteritidis population with a correlation coefficient of 0.999. The overall agreement between ANFIS predictions and experimental data was also very good (r=0.998). Sensitivity analysis showed that storage temperature was the most sensitive factor for prediction of the S. enteritidis population. PMID:24566279
NASA Astrophysics Data System (ADS)
Woo, Youngkeun; Lee, Juwon; Hwang, Sujin; Hong, Cheol Pyo
2013-03-01
The purpose of this study was to investigate the associations between gait performance, postural stability, and depression in patients with Parkinson's disease (PD) by using an adaptive neuro-fuzzy inference system (ANFIS). Twenty-two idiopathic PD patients were assessed during outpatient physical therapy using three clinical tests: the Berg Balance Scale (BBS), the Dynamic Gait Index (DGI), and the Geriatric Depression Scale (GDS). Scores were determined from clinical observation and patient interviews, and associations among gait performance, postural stability, and depression in this PD population were evaluated. The DGI showed a significant positive correlation with the BBS scores and a negative correlation with the GDS score. We assessed the relationship between the BBS score and the DGI results using a multiple regression analysis, in which the GDS score was not significantly associated with the DGI but the BBS score was. Strikingly, the ANFIS-estimated value of the DGI, based on the BBS and GDS scores, significantly correlated with the walking ability determined using the DGI in patients with Parkinson's disease. These findings suggest that ANFIS techniques effectively reflect and explain the multidirectional phenomena or conditions of gait performance in patients with PD.
Kolus, Ahmet; Imbeau, Daniel; Dubé, Philippe-Antoine; Dubeau, Denise
2015-09-01
This paper presents a new model based on adaptive neuro-fuzzy inference systems (ANFIS) to predict oxygen consumption (V˙O2) from easily measured variables. The ANFIS prediction model consists of three ANFIS modules for estimating the Flex-HR parameters. Each module was developed based on clustering a training set of data samples relevant to that module and then the ANFIS prediction model was tested against a validation data set. Fifty-eight participants performed the Meyer and Flenghi step-test, during which heart rate (HR) and V˙O2 were measured. Results indicated no significant difference between observed and estimated Flex-HR parameters and between measured and estimated V˙O2 in the overall HR range, and separately in different HR ranges. The ANFIS prediction model (MAE = 3 ml kg(-1) min(-1)) demonstrated better performance than Rennie et al.'s (MAE = 7 ml kg(-1) min(-1)) and Keytel et al.'s (MAE = 6 ml kg(-1) min(-1)) models, and comparable performance with the standard Flex-HR method (MAE = 2.3 ml kg(-1) min(-1)) throughout the HR range. The ANFIS model thus provides practitioners with a practical, cost- and time-efficient method for V˙O2 estimation without the need for individual calibration. PMID:25959320
Zarei, Kobra; Atabati, Morteza; Kor, Kamalodin
2014-06-01
A quantitative structure-activity relationship (QSAR) model was developed to predict the toxicity of substituted benzenes to Tetrahymena pyriformis. A set of 1,497 zero- to three-dimensional descriptors was used for each molecule in the data set. A major problem in QSAR is the high dimensionality of the descriptor space; therefore, descriptor selection is one of the most important steps. In this paper, the bee algorithm was used to select the best descriptors. Three descriptors were selected and used as inputs for an adaptive neuro-fuzzy inference system (ANFIS). The model was then corrected for unstable compounds (compounds that can be ionized in aqueous solution or easily metabolized under some conditions). Finally, squared correlation coefficients of 0.8769, 0.8649 and 0.8301 were obtained for the training, test and validation sets, respectively. The results showed that bee-ANFIS can be used as a powerful model for predicting the toxicity of substituted benzenes to T. pyriformis. PMID:24638918
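The descriptor-selection step can be illustrated with an exhaustive search over three-descriptor subsets scored by ordinary least-squares R². This is a tractable stand-in for the bee algorithm (and for ANFIS as the scorer), run here on synthetic data whose true generating columns are known:

```python
import itertools

import numpy as np

def r2_of_subset(X, y, cols):
    """R-squared of an ordinary least-squares fit using only the chosen columns."""
    A = np.column_stack([X[:, list(cols)], np.ones(len(y))])  # add an intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

def best_descriptors(X, y, k=3):
    """Exhaustive search over k-subsets; a swarm method like the bee algorithm
    replaces this scan when the descriptor pool is too large to enumerate."""
    return max(itertools.combinations(range(X.shape[1]), k),
               key=lambda c: r2_of_subset(X, y, c))

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 8))   # 8 candidate descriptors, 60 molecules
# Activity depends on columns 1, 4 and 6 plus small noise; with this
# signal-to-noise ratio the search should recover exactly those columns.
y = 2 * X[:, 1] - X[:, 4] + 0.5 * X[:, 6] + 0.1 * rng.standard_normal(60)
print(best_descriptors(X, y))
```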
NASA Astrophysics Data System (ADS)
Islam, Tanvir; Srivastava, Prashant K.; Rico-Ramirez, Miguel A.; Dai, Qiang; Han, Dawei; Gupta, Manika
2014-08-01
The authors have investigated an adaptive neuro-fuzzy inference system (ANFIS) for the estimation of hydrometeors from the TRMM Microwave Imager (TMI). The proposed algorithm, named the Hydro-Rain algorithm, is developed in synergy with hydrometeor information observed by the TRMM precipitation radar (PR). The method retrieves rain rates by exploiting the synergistic relations between the TMI and PR observations in two steps. First, the fundamental hydrometeor parameters, liquid water path (LWP) and ice water path (IWP), are estimated from the TMI brightness temperatures. Next, the rain rates are estimated from the retrieved hydrometeor parameters (LWP and IWP). The hydrometeor retrievals of the Hydro-Rain algorithm are compared with the TRMM PR 2A25 and GPROF 2A12 algorithms. The results reveal that the Hydro-Rain algorithm estimates the hydrometeor paths LWP and IWP, as well as the surface rain rate, with good skill. The Hydro-Rain algorithm is also examined on a super typhoon case, in which it shows very good performance in reproducing the typhoon field. Nevertheless, the passive microwave-based estimate of hydrometeors appears to suffer in high rain rate regimes: as the rain rate increases, the discrepancies with the hydrometeor estimates tend to increase as well.
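The two-step retrieval (brightness temperatures → LWP/IWP → rain rate) can be sketched as a pair of chained regressions; here ordinary least squares stands in for the ANFIS modules, and all data are synthetic placeholders, not TRMM observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the TMI/PR data (illustration only)
tb = rng.uniform(150.0, 290.0, size=(200, 4))   # brightness temperatures
true_w = rng.normal(size=(4, 2))
hydro = tb @ true_w * 0.01                      # "LWP"/"IWP" proxies
rain = 0.5 * hydro[:, 0] + 1.5 * hydro[:, 1]    # surface rain proxy

# Step 1: brightness temperatures -> hydrometeor parameters (LWP, IWP)
w1, *_ = np.linalg.lstsq(tb, hydro, rcond=None)
# Step 2: retrieved hydrometeor parameters -> rain rate
w2, *_ = np.linalg.lstsq(tb @ w1, rain, rcond=None)

pred = (tb @ w1) @ w2
print(float(np.max(np.abs(pred - rain))))       # near zero for this linear toy
```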
NASA Astrophysics Data System (ADS)
Heidary, Saeed; Setayeshi, Saeed
2015-01-01
This work presents a Monte Carlo simulation study that uses two adaptive neuro-fuzzy inference systems (ANFIS) for cross-talk compensation of simultaneous 99mTc/201Tl dual-radioisotope SPECT imaging. We compared two neuro-fuzzy systems based on fuzzy c-means (FCM) and subtractive (SUB) clustering. Our approach incorporates image acquisition in eight energy windows from 28 keV to 156 keV, covering the two main photopeaks of 201Tl (77 keV ± 10%) and 99mTc (140 keV ± 10%). The Geant4 Application for Tomographic Emission (GATE) is used as the Monte Carlo simulator for three cylindrical phantoms and a NURBS-based cardiac torso (NCAT) phantom. Three separate acquisitions, two single-isotope and one dual-isotope, were performed in this study. Cross-talk- and scatter-corrected projections are reconstructed by an iterative ordered-subsets expectation maximization (OSEM) algorithm that models the non-uniform attenuation in the projection/back-projection. The ANFIS-FCM/SUB structures are tuned to create three to sixteen fuzzy rules for modeling the photon cross-talk of the two radioisotopes. Applying seven to nine fuzzy rules yields the greatest improvement in contrast and bias. The ANFIS-FCM structure outperforms ANFIS-SUB, being both faster and more accurate.
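A minimal sketch of the MLEM update underlying OSEM reconstruction (OSEM applies the same multiplicative update over ordered subsets of the projections); the tiny system matrix and activity values are invented for illustration:

```python
import numpy as np

def mlem_step(x, A, y, eps=1e-12):
    """One MLEM update for emission tomography:
    x <- x / (A^T 1) * (A^T (y / (A x))).
    OSEM applies this same multiplicative update over ordered
    subsets of the projection data."""
    ratio = y / np.maximum(A @ x, eps)
    return x / (A.T @ np.ones_like(y)) * (A.T @ ratio)

# Tiny toy system; matrix and true activity are invented for illustration
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                  # noiseless "projections"
x = np.ones(2)                  # uniform initial estimate
for _ in range(2000):
    x = mlem_step(x, A, y)
print(np.round(x, 2))
```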
NASA Astrophysics Data System (ADS)
Ghaedi, M.; Hosaininia, R.; Ghaedi, A. M.; Vafaei, A.; Taghizadeh, F.
2014-10-01
In this research, a novel adsorbent, gold nanoparticles loaded on activated carbon (Au-NP-AC), was synthesized using ultrasound energy as a simple, low-cost route. The material was subsequently characterized by techniques including scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) surface analysis and transmission electron microscopy (TEM). Unique properties such as a high BET surface area (>1229.55 m2/g), small pore size (<22.46 Å), average particle size below 48.8 Å, a high number of reactive atoms and the presence of various functional groups make efficient removal of 1,3,4-thiadiazole-2,5-dithiol (TDDT) possible. The influence of variables including the amount of adsorbent, initial pollutant concentration and contact time on the removal percentage was investigated and optimized. The optimum parameters for adsorption of TDDT onto Au-NP-AC were an adsorbent mass of 0.02 g, an initial TDDT concentration of 10 mg L-1, a contact time of 30 min and pH 7. Adaptive neuro-fuzzy inference system (ANFIS) and multiple linear regression (MLR) models were applied to predict the removal of TDDT by Au-NP-AC in a batch study, with adsorbent dosage (g), contact time (min) and pollutant concentration (mg/L) as inputs. The coefficient of determination (R2) and mean squared error (MSE) for the training data set of the optimal ANFIS model were 0.9951 and 0.00017, respectively. These results show that the ANFIS model can predict the adsorption of TDDT on Au-NP-AC with high accuracy in an easy, rapid and cost-effective way.
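For comparison with the ANFIS results quoted above, a multiple linear regression baseline with the same three inputs can be fitted and scored by R2 and MSE as below (all data points are invented, not the study's measurements):

```python
import numpy as np

# Hypothetical batch-adsorption records: [adsorbent dose (g), time (min),
# concentration (mg/L)] -> removal percentage. All values are invented.
X = np.array([[0.01, 10, 10], [0.02, 30, 10], [0.02, 30, 20],
              [0.03, 45, 20], [0.01, 45, 30], [0.03, 10, 30]], dtype=float)
y = np.array([55.0, 92.0, 80.0, 88.0, 60.0, 75.0])

# Multiple linear regression via least squares, with an intercept column
Xb = np.c_[np.ones(len(X)), X]
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
pred = Xb @ beta

mse = float(np.mean((y - pred) ** 2))
r2 = float(1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2))
print(round(r2, 4), round(mse, 4))
```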
Roberto, Baccoli; Ubaldo, Carlini; Stefano, Mariotti; Roberto, Innamorati; Elisa, Solinas; Paolo, Mura
2010-06-15
This paper deals with the development of methods for non-steady-state testing of solar thermal collectors. Our goal is to infer steady-state performance, in terms of the efficiency curve, when measurements in transient conditions are the only ones available. We consider system identification in dynamic conditions by applying a Gray-box identification model and a dynamic Adaptive Linear Neural Network (ALNN) model. The study targets solar collectors with evacuated pipes, such as Dewar pipes. The mathematical description that governs the functioning of the solar collector in transient conditions is developed from the energy balance equation, with the aim of determining the order and architecture of the two models. The input and output vectors of the two models are constructed from 4 days of measurements of solar radiation, mass flow, ambient temperature and heat-transfer fluid temperature at the inlet and outlet of the thermal solar collector. The efficiency curves derived from the two models are evaluated at the test and validation points. The two synthetic simulated efficiency curves are compared with the actual efficiency curve certified by the Swiss institute Solartechnik Prüfung Forschung (SPF), which tested the solar collector performance in steady-state conditions according to the UNI-EN 12975 standard. An acquisition set of only 4 days of measurements in transient conditions was enough for a Gray-box state-space model to trace the efficiency curve of the tested solar thermal collector, with a relative error of the synthetic values with respect to the SPF-certified efficiency lower than 0.5%; with the ALNN model the error is lower than 2.2% with respect to the certified one. (author)
Stinear, Timothy P; Holt, Kathryn E; Chua, Kyra; Stepnell, Justin; Tuck, Kellie L; Coombs, Geoffrey; Harrison, Paul Francis; Seemann, Torsten; Howden, Benjamin P
2014-02-01
Community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA) has emerged as a major public health problem around the world. In Australia, ST93-IV[2B] is the dominant CA-MRSA clone and displays significantly greater virulence than other S. aureus. Here, we have examined the evolution of ST93 via genomic analysis of 12 MSSA and 44 MRSA ST93 isolates, collected from around Australia over a 17-year period. Comparative analysis revealed a core genome of 2.6 Mb, sharing greater than 99.7% nucleotide identity. The accessory genome was 0.45 Mb and comprised additional mobile DNA elements, harboring resistance to erythromycin, trimethoprim, and tetracycline. Phylogenetic inference revealed a molecular clock and suggested that a single clone of methicillin susceptible, Panton-Valentine leukocidin (PVL) positive, ST93 S. aureus likely spread from North Western Australia in the early 1970s, acquiring methicillin resistance at least twice in the mid 1990s. We also explored associations between genotype and important MRSA phenotypes including oxacillin MIC and production of exotoxins (α-hemolysin [Hla], δ-hemolysin [Hld], PSMα3, and PVL). High-level expression of Hla is a signature feature of ST93 and reduced expression in eight isolates was readily explained by mutations in the agr locus. However, subtle but significant decreases in Hld were also noted over time that coincided with decreasing oxacillin resistance and were independent of agr mutations. The evolution of ST93 S. aureus is thus associated with a reduction in both exotoxin expression and oxacillin MIC, suggesting MRSA ST93 isolates are under pressure for adaptive change. PMID:24482534
NASA Astrophysics Data System (ADS)
Kentel, E.; Dogulu, N.
2015-12-01
In Turkey, the experience and data required for hydrological model setup are limited and very often not available. Moreover, there are many ungauged catchments with planned projects aimed at utilizing water resources, including development of the existing hydropower potential. This makes runoff prediction at poorly gauged and ungauged locations, where small hydropower plants, reservoirs, etc. are planned, an increasingly significant challenge in the country. Flow duration curves have many practical applications in hydrology and integrated water resources management. Estimation of the flow duration curve (FDC) at ungauged locations is essential, particularly for hydropower feasibility studies and selection of installed capacities. In this study, we test and compare the performance of two methods for estimating FDCs in the Western Black Sea catchment, Turkey: (i) FDCs based on Map Correlation Method (MCM) flow estimates. MCM is a recently proposed method (Archfield and Vogel, 2010) that uses geospatial information to estimate flow; flow measurements at stream gauging stations near the ungauged location are its only data requirement, which makes MCM very attractive for flow estimation in Turkey. (ii) The Adaptive Neuro-Fuzzy Inference System (ANFIS), a data-driven method used here to relate the FDC to a number of variables representing catchment and climate characteristics; its ease of implementation makes it very useful for practical purposes. Both methods use easily collectable data and are computationally efficient. Comparison of the results is based on two different measures: the root mean squared error (RMSE) and the Nash-Sutcliffe efficiency (NSE) value. Ref: Archfield, S. A., and R. M. Vogel (2010), Map correlation method: Selection of a reference streamgage to estimate daily streamflow at ungaged catchments, Water Resour. Res., 46, W10513, doi:10.1029/2009WR008481.
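The two comparison measures used in this study, RMSE and NSE, can be computed as follows (the flow values below are hypothetical placeholders, not measurements from the Western Black Sea catchment):

```python
import numpy as np

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    is no better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1.0 - np.sum((obs - sim) ** 2)
                 / np.sum((obs - obs.mean()) ** 2))

# Hypothetical daily flows (m^3/s) at an "ungauged" site, illustration only
observed = [12.0, 18.5, 9.3, 25.1, 14.2]
estimated = [11.1, 19.0, 10.2, 23.4, 15.0]
print(round(rmse(observed, estimated), 3),
      round(nse(observed, estimated), 3))   # prints: 1.039 0.965
```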
NASA Astrophysics Data System (ADS)
He, Zhibin; Wen, Xiaohu; Liu, Hu; Du, Jun
2014-02-01
Data-driven models are very useful for river flow forecasting when the underlying physical relationships are not fully understood, but it is not clear whether these models still perform well in the small river basins of semiarid mountain regions with complicated topography. In this study, the potential of three different data-driven methods, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and support vector machine (SVM), was assessed for forecasting river flow in the semiarid mountain region of northwestern China. The models analyzed different combinations of antecedent river flow values, and the appropriate input vector was selected based on the analysis of residuals. The performance of the ANN, ANFIS and SVM models on the training and validation sets was compared with the observed data. The model consisting of three antecedent flow values was selected as the best-fit model for river flow forecasting. For a more rigorous evaluation of the ANN, ANFIS and SVM results, four standard quantitative statistical performance measures, the coefficient of correlation (R), root mean squared error (RMSE), Nash-Sutcliffe efficiency coefficient (NS) and mean absolute relative error (MARE), were employed to evaluate the various models developed. The results indicate that the performance obtained by ANN, ANFIS and SVM in terms of the different evaluation criteria during the training and validation periods does not vary substantially; the performance of all three models in river flow forecasting was satisfactory. A detailed comparison of the overall performance indicated that the SVM model performed better than ANN and ANFIS on the validation data sets. The results also suggest that the ANN, ANFIS and SVM methods can be successfully applied to establish river flow forecasting models in semiarid mountain regions with complicated topography.
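Input vectors built from antecedent flow values, as in the best-fit model above, can be assembled like this (a minimal sketch; the lag count and flow series are illustrative):

```python
import numpy as np

def lagged_inputs(flow, n_lags=3):
    """Build inputs from antecedent flows: row t holds
    [Q(t-1), Q(t-2), ..., Q(t-n_lags)] with target Q(t)."""
    flow = np.asarray(flow, dtype=float)
    X = np.column_stack([flow[n_lags - j: len(flow) - j]
                         for j in range(1, n_lags + 1)])
    y = flow[n_lags:]
    return X, y

# Hypothetical daily flow series (illustration only)
X, y = lagged_inputs([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(X[0], y[0])   # first sample: antecedents [3. 2. 1.] -> target 4.0
```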
Jalali-Heravi, Mehdi; Kyani, Anahita
2008-06-01
The purpose of this study was to develop structure-toxicity relationships for a large group of 268 substituted benzenes toxic to the ciliate Tetrahymena pyriformis using mechanistically interpretable descriptors. The shuffling adaptive neuro-fuzzy inference system (Shuffling-ANFIS) was successfully applied to select the important factors affecting the toxicity of substituted benzenes to T. pyriformis. The results of the proposed model were compared with the linear free-energy response surface model and with a principal component analysis Bayesian-regularized neural network (PCA-BRANN) trained on the same data. The presented model shows better statistical parameters than the previous models, and its results are promising and descriptive. The five descriptors selected by Shuffling-ANFIS, the octanol-water partition coefficient (logP), bond information content (BIC0), number of R-CX-R fragments (C-026), eigenvalue sum from the Z-weighted distance matrix (SEigZ) and fragment-based polar surface area (PSA), reveal the role of hydrophobicity, electronic and steric interactions in the mechanism of toxic action. Sequential zeroing of weights (SZW), a sensitivity analysis method, revealed that hydrophobicity and electronic interactions play a major role in the toxicity of these compounds. Satisfactory results (q2 = 0.828 and RMSE = 0.348) in comparison with previous work indicate that the Shuffling-ANFIS-ANN technique is able to model a diverse chemical class with more than one mechanism of toxicity using simple and interpretable descriptors. Shuffling-ANFIS can be used as a powerful feature selection technique, because its application to toxicity prediction yields good statistics and interpretable physicochemical parameters. PMID:18499226
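The sequential zeroing of weights (SZW) idea can be sketched on a plain linear model: zero one input's weight at a time and record how much the error grows; the input with the largest growth matters most. All data and coefficients below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data with three "descriptors"; the first drives the response most
# strongly (all values invented for illustration).
X = rng.normal(size=(100, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] \
    + rng.normal(scale=0.05, size=100)

w, *_ = np.linalg.lstsq(X, y, rcond=None)
base_rmse = float(np.sqrt(np.mean((X @ w - y) ** 2)))

# Sequential zeroing of weights: disable one descriptor at a time and
# record how much the model error grows.
for j in range(3):
    wz = w.copy()
    wz[j] = 0.0
    grown = float(np.sqrt(np.mean((X @ wz - y) ** 2)))
    print(f"descriptor {j}: RMSE {base_rmse:.3f} -> {grown:.3f}")
```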
NASA Astrophysics Data System (ADS)
Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.
2015-12-01
This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. To predict the engine parameters, the experimental data were randomly divided into training and testing sets. For ANFIS modelling, a Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be the optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. The experimental results indicated that adding nanoparticles to diesel fuel increased the engine power and torque output. For nano-diesel, the brake specific fuel consumption (bsfc) was lower than for neat diesel fuel. The results showed that as the nanoparticle concentration in diesel fuel increased from 40 ppm to 120 ppm, CO2 emissions increased. CO emissions with nanoparticle-blended fuels were significantly lower than with pure diesel fuel. UHC emissions decreased with silver nano-diesel blends but increased with fuels containing CNT nanoparticles, while NOx emissions followed the opposite trend: adding nanoparticles to the blended fuels increased NOx compared to neat diesel fuel. The tests revealed that silver and CNT nanoparticles can be used as additives in diesel fuel to improve combustion and reduce exhaust emissions significantly.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted files.
Adaptive fuzzy system for 3-D vision
NASA Technical Reports Server (NTRS)
Mitra, Sunanda
1993-01-01
An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the fuzzy c-means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controllers.
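The FCM system equations referred to above, membership update followed by centroid relocation, can be sketched as a single iteration step (synthetic points, standard fuzziness m = 2; not the AFLC control structure itself):

```python
import numpy as np

def fcm_step(X, centers, m=2.0, eps=1e-12):
    """One fuzzy c-means iteration: recompute memberships u_ik from the
    current centers, then relocate centers as membership-weighted means."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)         # rows sum to 1
    um = u ** m
    new_centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return u, new_centers

# Two well-separated synthetic clusters (illustration only)
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
centers = np.array([[0.5, 0.5], [4.0, 4.0]])
for _ in range(20):
    u, centers = fcm_step(X, centers)
print(np.round(centers, 2))
```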
Labarbe, Rudi; Janssens, Guillaume; Sterpin, Edmond
2016-09-01
In proton therapy, quantification of the proton range uncertainty is important to achieve dose distribution compliance. The promising accuracy of prompt gamma imaging (PGI) suggests the development of a mathematical framework that uses the range measurements to convert population-based estimates of uncertainties into patient-specific estimates for the purpose of plan adaptation. We present here such a framework using Bayesian inference. The sources of uncertainty were modeled by three parameters: setup bias m, random setup precision r and water equivalent path length bias u. The evolution of the expectation values E(m), E(r) and E(u) during the treatment was simulated. The expectation values converged towards the true simulation parameters after 5 and 10 fractions, for E(m) and E(u), respectively. E(r) settled on a constant value slightly lower than the true value after 10 fractions. In conclusion, the simulation showed that there is enough information in the frequency distribution of the range errors measured by PGI to estimate the expectation values and the confidence interval of the model parameters by Bayesian inference. The updated model parameters were used to compute patient-specific lateral and local distal margins for adaptive re-planning. PMID:27494118
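A grid-based sketch of a Bayesian update for a single parameter such as the setup bias m, assuming Gaussian per-fraction range measurements; the measurement spread, prior width and true bias are invented, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid-based Bayesian update for a single setup-bias parameter m (mm),
# assuming per-fraction range-shift measurements r_i ~ Normal(m, sigma).
# sigma, the prior width and the true bias are illustrative assumptions.
sigma = 2.0
m_grid = np.linspace(-10.0, 10.0, 401)
prior = np.exp(-0.5 * (m_grid / 5.0) ** 2)   # population-based prior
prior /= prior.sum()

true_m = 3.0
posterior = prior.copy()
for _ in range(10):                          # one measurement per fraction
    r = rng.normal(true_m, sigma)
    posterior *= np.exp(-0.5 * ((r - m_grid) / sigma) ** 2)
    posterior /= posterior.sum()

E_m = float(np.sum(m_grid * posterior))      # E(m) after 10 fractions
print(round(E_m, 2))                         # drifts toward the true bias
```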
2012-01-01
Background: Understanding demographic histories, such as divergence times, patterns of gene flow, and population size changes, in ecologically diverging lineages provides insights into the process and maintenance of population differentiation by ecological adaptation. This study addressed the demographic histories of two independently derived lineages of flood-resistant riparian plants and their non-riparian relatives [Ainsliaea linearis (riparian) and A. apiculata (non-riparian); A. oblonga (riparian) and A. macroclinidioides (non-riparian); Asteraceae] using an isolation-with-migration (IM) model based on variation at 10 nuclear DNA loci. Results: The highest posterior probabilities of the divergence time parameters were estimated to be ca. 25,000 years ago for A. linearis and A. apiculata and ca. 9000 years ago for A. oblonga and A. macroclinidioides, although the confidence intervals of the parameters had broad ranges. The likelihood ratio tests detected evidence of historical gene flow between both riparian/non-riparian species pairs. The riparian populations showed lower levels of genetic diversity and a significant reduction in effective population sizes compared to the non-riparian populations and their ancestral populations. Conclusions: This study showed the recent origins of flood-resistant riparian plants, which are remarkable examples of plant ecological adaptation. The recent divergence and genetic signatures of historical gene flow among riparian/non-riparian species implied that they underwent morphological and ecological differentiation within short evolutionary timescales and have maintained their species boundaries in the face of gene flow. Comparative analyses of adaptive divergence in two sets of riparian/non-riparian lineages suggested that strong natural selection by flooding has frequently reduced the genetic diversity and size of riparian populations through genetic drift, possibly leading to fixation of adaptive traits in riparian populations.
Adaptive fuzzy leader clustering of complex data sets in pattern recognition
NASA Technical Reports Server (NTRS)
Newton, Scott C.; Pemmaraju, Surya; Mitra, Sunanda
1992-01-01
A modular, unsupervised neural network architecture for clustering and classification of complex data sets is presented. The adaptive fuzzy leader clustering (AFLC) architecture is a hybrid neural-fuzzy system that learns on-line in a stable and efficient manner. The initial classification is performed in two stages: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from fuzzy C-means system equations for the centroids and the membership values. The AFLC algorithm is applied to the Anderson Iris data and laser-luminescent fingerprint image data. It is concluded that the AFLC algorithm successfully classifies features extracted from real data, discrete or continuous.
Inferring horizontal gene transfer.
Ravenhall, Matt; Škunca, Nives; Lassalle, Florent; Dessimoz, Christophe
2015-05-01
Horizontal or Lateral Gene Transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate the investigations of evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages. Computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events. PMID:26020646
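A toy version of the sequence composition-based ("parametric") approach described above: compare each gene's GC content with the genomic average and flag strong deviations. The sequences and threshold below are invented for illustration:

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Toy "genes" (invented sequences): a parametric method flags genes whose
# composition deviates strongly from the genomic average.
genes = {
    "geneA": "ATGAATATTAAAGATATTTAA",
    "geneB": "ATGGCGGCCGGCGCTGGCTAA",   # GC-rich outlier: candidate HGT
    "geneC": "ATGACTATTGCAAATAGTTAA",
}
avg = sum(gc_content(s) for s in genes.values()) / len(genes)
for name, s in genes.items():
    flag = "possible HGT" if abs(gc_content(s) - avg) > 0.3 else "ok"
    print(f"{name}: GC={gc_content(s):.2f} ({flag})")
```

Real parametric methods use richer composition signals (codon usage, k-mer spectra) and proper statistical thresholds, but the deviation-from-average logic is the same.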
Inferring Horizontal Gene Transfer
Lassalle, Florent; Dessimoz, Christophe
2015-01-01
Horizontal or Lateral Gene Transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate the investigations of evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages [1]. Computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events. PMID:26020646
NASA Astrophysics Data System (ADS)
Khoshbin, Fatemeh; Bonakdari, Hossein; Hamed Ashraf Talesh, Seyed; Ebtehaj, Isa; Zaji, Amir Hossein; Azimi, Hamed
2016-06-01
In the present article, the adaptive neuro-fuzzy inference system (ANFIS) is employed to model the discharge coefficient in rectangular sharp-crested side weirs. The genetic algorithm (GA) is used for the optimum selection of membership functions, while the singular value decomposition (SVD) method helps in computing the linear parameters of the ANFIS consequent part (GA/SVD-ANFIS). The effect of each dimensionless parameter on discharge coefficient prediction is examined through a sensitivity analysis over five different models built from these parameters. Two different sets of experimental data are utilized to examine the models and obtain the best model. The study results indicate that the model designed through GA/SVD-ANFIS predicts the discharge coefficient with a good level of accuracy (mean absolute percentage error = 3.362 and root mean square error = 0.027). Moreover, comparing this method with existing equations and the multi-layer perceptron-artificial neural network (MLP-ANN) indicates that the GA/SVD-ANFIS method has superior performance in simulating the discharge coefficient of side weirs.
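The forward pass underlying models of this kind is a first-order Sugeno fuzzy system, the model family that ANFIS tunes. A minimal sketch follows; the two rules and every parameter value here are made up for illustration, not taken from the paper:

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_forward(x, centers, sigmas, lin):
    """Forward pass of a first-order Sugeno fuzzy system: Gaussian
    memberships give rule firing strengths; the output is the
    normalized firing-weighted combination of the rules' linear
    consequents a*x + b."""
    w = gauss(x, centers, sigmas)
    w = w / w.sum()                     # normalized firing strengths
    y_rule = lin[:, 0] * x + lin[:, 1]  # each rule's linear output
    return float(w @ y_rule)

# two illustrative rules; all parameter values are hypothetical
centers = np.array([0.0, 1.0])
sigmas = np.array([0.5, 0.5])
lin = np.array([[1.0, 0.0],    # rule 1: y = x
                [-1.0, 2.0]])  # rule 2: y = -x + 2
y_hat = sugeno_forward(0.5, centers, sigmas, lin)
```

ANFIS training (or the paper's GA/SVD hybrid) amounts to fitting the membership parameters and the linear consequents of exactly this structure.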
NASA Astrophysics Data System (ADS)
Heidary, Saeed; Setayeshi, Saeed; Ghannadi-Maragheh, Mohammad
2014-09-01
The aim of this study is to compare the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN) for estimating the cross-talk contamination of 99mTc/201Tl image acquisition in the 201Tl energy window (77 keV ± 15%). GATE (Geant4 Application for Tomographic Emission) is employed due to its ability to simulate multiple radioactive sources concurrently. Three phantoms of two kinds, two digital and one physical, are used. In the real and the simulation studies, data acquisition is carried out using eight energy windows. The ANN and the ANFIS are prepared in MATLAB, and the GATE results are used as a training data set. Three indices are evaluated and compared. The ANFIS method yields better outcomes for two of the indices (Spearman's rank correlation coefficient and contrast) and for both phantom results in each category. The maximum image biasing, the third index, is found to be 6% higher than that for the ANN.
Adaptive Importance Sampling for Control and Inference
NASA Astrophysics Data System (ADS)
Kappen, H. J.; Ruiz, H. C.
2016-03-01
Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
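The cross-entropy idea that PICE builds on can be shown in its simplest form: sample from a parametrised distribution, keep the elite fraction with the lowest cost, and refit the distribution to the elites. This is a generic one-dimensional sketch with made-up parameters, not the PICE controller parametrisation:

```python
import numpy as np

def cem_minimize(f, mu=5.0, sigma=2.0, n=100, n_elite=10, iters=40, seed=0):
    """Cross-entropy method in one dimension: sample from a Gaussian,
    keep the elite samples with the lowest cost, refit the Gaussian."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        x = rng.normal(mu, sigma, n)
        elite = x[np.argsort(f(x))[:n_elite]]   # lowest-cost samples
        mu, sigma = elite.mean(), elite.std() + 1e-8
    return mu

# minimize a toy quadratic cost centred at 3
x_star = cem_minimize(lambda x: (x - 3.0) ** 2)
```

In the control setting the Gaussian over a scalar is replaced by a distribution over controller parameters, and the cost by the path-integral control objective.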
NASA Astrophysics Data System (ADS)
Chaudhuri, S.; Das, D.; Goswami, S.; Das, S. K.
2016-02-01
All India summer monsoon rainfall (AISMR) characteristics play a vital role in the policy planning and national economy of the country. In view of the significant impact of the monsoon system on regional as well as global climate systems, accurate prediction of summer monsoon rainfall has become a challenge. The objective of this study is to develop an adaptive neuro-fuzzy inference system (ANFIS) for long range forecasting of AISMR. The NCEP/NCAR reanalysis data of temperature and zonal and meridional wind at different pressure levels have been taken to construct the input matrix of ANFIS. The membership of the input parameters for AISMR as high, medium or low is estimated with a trapezoidal membership function. The fuzzified standardized input parameters and the de-fuzzified target output are trained with artificial neural network models. The forecast of AISMR with ANFIS is compared with non-hybrid multi-layer perceptron (MLP), radial basis function network (RBFN) and multiple linear regression (MLR) models. The forecast error analyses of the models reveal that ANFIS provides the best forecast of AISMR, with a minimum prediction error of 0.076, whereas the errors with the MLP, RBFN and MLR models are 0.22, 0.18 and 0.73 respectively. During validation with observations, ANFIS outperforms the comparative models. Performance of the ANFIS model is verified through different statistical skill scores, which also confirm the aptitude of ANFIS in forecasting AISMR. The forecast skill of ANFIS is also observed to be better than that of Climate Forecast System version 2. The real-time forecast with ANFIS shows the possibility of deficit AISMR (65-75 cm) in the year 2015.
NASA Astrophysics Data System (ADS)
Akhoondzadeh, M.
2013-09-01
Anomaly detection is extremely important for forecasting the date, location and magnitude of an impending earthquake. In this paper, an Adaptive Network-based Fuzzy Inference System (ANFIS) is proposed to detect thermal and Total Electron Content (TEC) anomalies around the time of the Varzeghan, Iran, (Mw = 6.4) earthquake that struck NW Iran on 11 August 2012. ANFIS is a well-known hybrid neuro-fuzzy network for modeling non-linear complex systems. The anomalies detected with the proposed method are also compared to those obtained with classical and intelligent methods, including Interquartile, Auto-Regressive Integrated Moving Average (ARIMA), Artificial Neural Network (ANN) and Support Vector Machine (SVM) methods. The dataset, which comprises Aqua-MODIS Land Surface Temperature (LST) night-time snapshot images and Global Ionospheric Maps (GIM), spans 62 days. If the difference between the value predicted by the ANFIS method and the observed value exceeds a pre-defined threshold, the observed value, in the absence of non-seismic effective parameters, can be regarded as a precursory anomaly. For the two precursors, LST and TEC, the ANFIS method shows very good agreement with the other implemented classical and intelligent methods, indicating that ANFIS is capable of detecting earthquake anomalies. The applied methods detected anomalous occurrences 1 and 2 days before the earthquake. The detected thermal and TEC anomalies derive their credibility from the overall efficiencies and potentialities of the five integrated methods.
NASA Astrophysics Data System (ADS)
Subashini, L.; Vasudevan, M.
2012-02-01
Type 316 LN stainless steel is the major structural material used in the construction of nuclear reactors. Activated flux tungsten inert gas (A-TIG) welding has been developed to increase the depth of penetration because the depth of penetration achievable in single-pass TIG welding is limited. Real-time monitoring and control of weld processes is gaining importance because of the requirement for remote welding process technologies. Hence, it is essential to develop computational methodologies based on an adaptive neuro-fuzzy inference system (ANFIS) or artificial neural network (ANN) for predicting and controlling the depth of penetration and weld bead width during A-TIG welding of type 316 LN stainless steel. In the current work, A-TIG welding experiments have been carried out on 6-mm-thick plates of 316 LN stainless steel by varying the welding current. During welding, infrared (IR) thermal images of the weld pool have been acquired in real time, and features have been extracted from them. The welding current values, along with the extracted features such as length, width of the hot spot, thermal area determined from the Gaussian fit, and thermal bead width computed from the first derivative curve, were used as inputs, whereas the measured depth of penetration and weld bead width were used as outputs of the respective models. Accurate ANFIS models have been developed for predicting the depth of penetration and the weld bead width during TIG welding of 6-mm-thick 316 LN stainless steel plates. A good correlation between the measured and predicted values of weld bead width and depth of penetration was observed in the developed models. The performance of the ANFIS models is compared with that of the ANN models.
Petrov, S.
1996-10-01
Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a 'poor' language is stated independently of the syntax of the language or its rules. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
Exploring Beginning Inference with Novice Grade 7 Students
ERIC Educational Resources Information Center
Watson, Jane M.
2008-01-01
This study documented efforts to facilitate ideas of beginning inference in novice grade 7 students. A design experiment allowed modified teaching opportunities in light of observation of components of a framework adapted from that developed by Pfannkuch for teaching informal inference with box plots. Box plots were replaced by hat plots, a…
Experimental adaptive Bayesian tomography
NASA Astrophysics Data System (ADS)
Kravtsov, K. S.; Straupe, S. S.; Radchenko, I. V.; Houlsby, N. M. T.; Huszár, F.; Kulik, S. P.
2013-06-01
We report an experimental realization of an adaptive quantum state tomography protocol. Our method takes advantage of a Bayesian approach to statistical inference and is naturally tailored for adaptive strategies. For pure states, we observe close to N^(-1) scaling of infidelity with the overall number of registered events, while the best nonadaptive protocols allow for N^(-1/2) scaling only. Experiments are performed for polarization qubits, but the approach is readily adapted to any dimension.
Phylogeny and the inference of evolutionary trajectories.
Hancock, Lillian; Edwards, Erika J
2014-07-01
Most important organismal adaptations are not actually single traits, but complex trait syndromes that are evolutionarily integrated into a single emergent phenotype. Two alternative photosynthetic pathways, C4 photosynthesis and crassulacean acid metabolism (CAM), are primary plant adaptations of this sort, each requiring multiple biochemical and anatomical modifications. Phylogenetic methods are a promising approach for teasing apart the order of character acquisition during the evolution of complex traits, and the phylogenetic placement of intermediate phenotypes as sister taxa to fully optimized syndromes has been taken as good evidence of an 'ordered' evolutionary trajectory. But how much power does the phylogenetic approach have to detect ordered evolution? This study simulated ordered and unordered character evolution across a diverse set of phylogenetic trees to understand how tree size, models of evolution, and sampling efforts influence the ability to detect an evolutionary trajectory. The simulations show that small trees (15 taxa) do not contain enough information to correctly infer either an ordered or unordered trajectory, although inference improves as tree size and sampling increases. However, even when working with a 1000-taxon tree, the possibility of inferring the incorrect evolutionary model (type I/type II error) remains. Caution is needed when interpreting the phylogenetic placement of intermediate phenotypes, especially in small lineages. Such phylogenetic patterns can provide a line of evidence for the existence of a particular evolutionary trajectory, but they should be coupled with other types of data to infer the stepwise evolution of a complex character trait.
Maximum likelihood inference of reticulate evolutionary histories.
Yu, Yun; Dong, Jianrong; Liu, Kevin J; Nakhleh, Luay
2014-11-18
Hybridization plays an important role in the evolution of certain groups of organisms, adaptation to their environments, and diversification of their genomes. The evolutionary histories of such groups are reticulate, and methods for reconstructing them are still in their infancy and have limited applicability. We present a maximum likelihood method for inferring reticulate evolutionary histories while accounting simultaneously for incomplete lineage sorting. Additionally, we propose methods for assessing confidence in the amount of reticulation and the topology of the inferred evolutionary history. Our method obtains accurate estimates of reticulate evolutionary histories on simulated datasets. Furthermore, our method provides support for a hypothesis of a reticulate evolutionary history inferred from a set of house mouse (Mus musculus) genomes. As evidence of hybridization in eukaryotic groups accumulates, it is essential to have methods that infer reticulate evolutionary histories. The work we present here allows for such inference and provides a significant step toward putting phylogenetic networks on par with phylogenetic trees as a model of capturing evolutionary relationships. PMID:25368173
Hanson, K.M.; Cunningham, G.S.
1996-04-01
The authors are developing a computer application, called the Bayes Inference Engine, to provide the means to make inferences about models of physical reality within a Bayesian framework. The construction of complex nonlinear models is achieved by a fully object-oriented design. The models are represented by a data-flow diagram that may be manipulated by the analyst through a graphical programming environment. Maximum a posteriori solutions are achieved using a general, gradient-based optimization algorithm. The application incorporates a new technique of estimating and visualizing the uncertainties in specific aspects of the model.
Reinforcement learning or active inference?
Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J
2009-01-01
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
Inverse Ising inference with correlated samples
NASA Astrophysics Data System (ADS)
Obermayer, Benedikt; Levine, Erel
2014-12-01
Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem.
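The simplest instance of inverse Ising inference, the naive mean-field approximation, reads the couplings directly off the inverse of the sample covariance matrix. The sketch below uses made-up spin data; the paper's adaptive cluster expansion and phylogenetic correction are not shown:

```python
import numpy as np

def mean_field_couplings(samples):
    """Naive mean-field inverse Ising: couplings J are (minus) the
    off-diagonal entries of the inverse covariance matrix, which
    separates direct interactions from indirect correlations."""
    C = np.cov(samples.T)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)
    return J

# toy data: spins 0 and 1 strongly coupled, spin 2 independent
rng = np.random.default_rng(1)
s0 = rng.choice([-1, 1], size=5000)
s1 = np.where(rng.random(5000) < 0.9, s0, -s0)   # mostly copies s0
s2 = rng.choice([-1, 1], size=5000)
samples = np.column_stack([s0, s1, s2])
J = mean_field_couplings(samples)
```

The inferred J picks out the direct 0-1 coupling while the entries involving the independent spin stay near zero.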
Towards Context Sensitive Information Inference.
ERIC Educational Resources Information Center
Song, D.; Bruza, P. D.
2003-01-01
Discusses information inference from a psychologistic stance and proposes an information inference mechanism that makes inferences via computations of information flow through an approximation of a conceptual space. Highlights include cognitive economics of information processing; context sensitivity; and query models for information retrieval.…
Inference in high-dimensional parameter space.
O'Hare, Anthony
2015-11-01
Model parameter inference has become increasingly popular in recent years in the field of computational epidemiology, especially for models with a large number of parameters. Techniques such as Approximate Bayesian Computation (ABC) or maximum/partial likelihoods are commonly used to infer parameters in phenomenological models that best describe some set of data. These techniques rely on efficient exploration of the underlying parameter space, which is difficult in high dimensions, especially if there are correlations between the parameters in the model that may not be known a priori. The aim of this article is to demonstrate the use of the recently invented Adaptive Metropolis algorithm for exploring parameter space in a practical way through the use of a simple epidemiological model. PMID:26176624
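The Adaptive Metropolis idea the article demonstrates, tuning the Gaussian proposal covariance to the empirical covariance of the chain so far (Haario-style), can be sketched on a toy target; the correlated 2-D Gaussian below is illustrative, not the epidemiological model:

```python
import numpy as np

def adaptive_metropolis(logpost, x0, n=5000, adapt_start=500, seed=0):
    """Haario-style adaptive Metropolis: after a burn-in, tune the
    Gaussian proposal covariance to the empirical covariance of the
    chain so far, scaled by the classic 2.38^2/d factor."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    chain = [np.asarray(x0, dtype=float)]
    lp = logpost(chain[0])
    cov = np.eye(d)
    for i in range(1, n):
        if i > adapt_start:
            cov = (2.38 ** 2 / d) * np.cov(np.array(chain).T) \
                  + 1e-6 * np.eye(d)
        prop = rng.multivariate_normal(chain[-1], cov)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# toy target: strongly correlated 2-D Gaussian
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
P = np.linalg.inv(Sigma)
chain = adaptive_metropolis(lambda z: -0.5 * z @ P @ z, np.zeros(2))
```

The adapted proposal aligns with the target's correlation structure, which is exactly what helps when model parameters are correlated in unknown ways.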
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Graphical inference for Infovis.
Wickham, Hadley; Cook, Dianne; Hofmann, Heike; Buja, Andreas
2010-01-01
How do we know if what we see is really there? When visualizing data, how do we avoid falling into the trap of apophenia where we see patterns in random noise? Traditionally, infovis has been concerned with discovering new relationships, and statistics with preventing spurious relationships from being reported. We pull these opposing poles closer with two new techniques for rigorous statistical inference of visual discoveries. The "Rorschach" helps the analyst calibrate their understanding of uncertainty and "line-up" provides a protocol for assessing the significance of visual discoveries, protecting against the discovery of spurious structure.
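The "line-up" protocol can be mimicked numerically: compute a statistic on the real data and on a set of null (permuted) decoys, and see where the real data ranks. This is a schematic stand-in using a correlation statistic on synthetic data rather than actual plots:

```python
import numpy as np

def lineup_rank(x, y, stat, n_null=19, seed=0):
    """Line-up style test: rank the observed statistic among
    statistics computed from permuted (null) versions of the data."""
    rng = np.random.default_rng(seed)
    observed = stat(x, y)
    nulls = [stat(x, rng.permutation(y)) for _ in range(n_null)]
    # 0 means the real data beats every decoy (roughly a 1-in-20 test)
    return sum(n >= observed for n in nulls)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.8 * x + 0.3 * rng.normal(size=200)   # genuine relationship
rank = lineup_rank(x, y, lambda a, b: abs(np.corrcoef(a, b)[0, 1]))
```

In the visual version the analyst plays the role of `stat`, picking the real plot out of the decoys; the ranking logic is the same.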
Circular inferences in schizophrenia.
Jardri, Renaud; Denève, Sophie
2013-11-01
A considerable number of recent experimental and computational studies suggest that subtle impairments of excitatory to inhibitory balance or regulation are involved in many neurological and psychiatric conditions. The current paper aims to relate, specifically and quantitatively, excitatory to inhibitory imbalance with psychotic symptoms in schizophrenia. Considering that the brain constructs hierarchical causal models of the external world, we show that the failure to maintain the excitatory to inhibitory balance results in hallucinations as well as in the formation and subsequent consolidation of delusional beliefs. Indeed, the consequence of excitatory to inhibitory imbalance in a hierarchical neural network is equated to a pathological form of causal inference called 'circular belief propagation'. In circular belief propagation, bottom-up sensory information and top-down predictions are reverberated, i.e. prior beliefs are misinterpreted as sensory observations and vice versa. As a result, these predictions are counted multiple times. Circular inference explains the emergence of erroneous percepts, the patient's overconfidence when facing probabilistic choices, the learning of 'unshakable' causal relationships between unrelated events and a paradoxical immunity to perceptual illusions, which are all known to be associated with schizophrenia. PMID:24065721
Moment inference from tomograms
Day-Lewis, F. D.; Chen, Y.; Singha, K.
2007-01-01
Time-lapse geophysical tomography can provide valuable qualitative insights into hydrologic transport phenomena associated with aquifer dynamics, tracer experiments, and engineered remediation. Increasingly, tomograms are used to infer the spatial and/or temporal moments of solute plumes; these moments provide quantitative information about transport processes (e.g., advection, dispersion, and rate-limited mass transfer) and controlling parameters (e.g., permeability, dispersivity, and rate coefficients). The reliability of moments calculated from tomograms is, however, poorly understood because classic approaches to image appraisal (e.g., the model resolution matrix) are not directly applicable to moment inference. Here, we present a semi-analytical approach to construct a moment resolution matrix based on (1) the classic model resolution matrix and (2) image reconstruction from orthogonal moments. Numerical results for radar and electrical-resistivity imaging of solute plumes demonstrate that moment values calculated from tomograms depend strongly on plume location within the tomogram, survey geometry, regularization criteria, and measurement error. Copyright 2007 by the American Geophysical Union.
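The spatial moments used here reduce to weighted sums over the pixel grid: total mass, centroid, and spread of the imaged plume. A sketch on a synthetic Gaussian plume (grid and parameters are illustrative):

```python
import numpy as np

def plume_moments(img, x, y):
    """Zeroth, first (centroid) and second central (spread) spatial
    moments of a 2-D concentration image."""
    X, Y = np.meshgrid(x, y)
    m0 = img.sum()
    cx, cy = (X * img).sum() / m0, (Y * img).sum() / m0
    sxx = ((X - cx) ** 2 * img).sum() / m0
    syy = ((Y - cy) ** 2 * img).sum() / m0
    return m0, (cx, cy), (sxx, syy)

# synthetic Gaussian plume centred at (3, 1) with unit variances
x = np.linspace(-10, 10, 201)
y = np.linspace(-10, 10, 201)
X, Y = np.meshgrid(x, y)
img = np.exp(-0.5 * ((X - 3.0) ** 2 + (Y - 1.0) ** 2))
m0, (cx, cy), (sxx, syy) = plume_moments(img, x, y)
```

The paper's point is that when `img` is a tomogram rather than the true concentration field, these moment estimates inherit the tomogram's resolution and regularization artifacts.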
On the Inference of Functional Circadian Networks Using Granger Causality
Pourzanjani, Arya; Herzog, Erik D.; Petzold, Linda R.
2015-01-01
Being able to infer one-way direct connections in an oscillatory network such as the suprachiasmatic nucleus (SCN) of the mammalian brain using time series data is difficult but crucial to understanding network dynamics. Although techniques have been developed for inferring networks from time series data, there have been no attempts to adapt these techniques to infer directional connections in oscillatory time series while accurately distinguishing between direct and indirect connections. In this paper an adaptation of Granger Causality is proposed that allows for inference of circadian networks and oscillatory networks in general, called Adaptive Frequency Granger Causality (AFGC). Additionally, an extension of this method is proposed to infer networks with large numbers of cells, called LASSO AFGC. The method was validated using simulated data from several different networks. For the smaller networks the method was able to identify all one-way direct connections without identifying connections that were not present. For larger networks of up to twenty cells the method shows excellent performance in identifying true and false connections; this is quantified by an area-under-the-curve (AUC) of 96.88%. We note that this method, like other Granger Causality-based methods, is based on the detection of high frequency signals propagating between cell traces. Thus it requires a relatively high sampling rate and a network that can propagate high frequency signals. PMID:26413748
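The variance-ratio intuition behind Granger causality, whether the past of y reduces the prediction error for x, can be sketched with ordinary least squares. This is plain pairwise GC on synthetic data; the paper's AFGC and LASSO extensions are not shown:

```python
import numpy as np

def granger_ratio(x, y, p=2):
    """Ratio of AR(p) residual variances for predicting x: own lags
    only versus own lags plus lags of y. Ratios well above 1 suggest
    that y 'Granger-causes' x."""
    rows = range(p, len(x))
    X_own = np.array([x[t - p:t][::-1] for t in rows])
    X_full = np.array([np.r_[x[t - p:t][::-1], y[t - p:t][::-1]]
                       for t in rows])
    target = x[p:]

    def resid_var(A):
        A1 = np.column_stack([A, np.ones(len(A))])   # add intercept
        beta, *_ = np.linalg.lstsq(A1, target, rcond=None)
        return np.var(target - A1 @ beta)

    return resid_var(X_own) / resid_var(X_full)

# synthetic pair where y drives x with a one-step lag
rng = np.random.default_rng(0)
T = 2000
y = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * y[t - 1] + 0.1 * rng.normal()
ratio_yx = granger_ratio(x, y)   # past of y helps predict x
ratio_xy = granger_ratio(y, x)   # past of x should not help predict y
```

A directed connection is inferred only where the ratio is large; the asymmetry between the two ratios is what gives the one-way direction.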
Bayesian inference in geomagnetism
NASA Technical Reports Server (NTRS)
Backus, George E.
1988-01-01
The inverse problem in empirical geomagnetic modeling is investigated, with critical examination of recently published studies. Particular attention is given to the use of Bayesian inference (BI) to select the damping parameter lambda in the uniqueness portion of the inverse problem. The mathematical bases of BI and stochastic inversion are explored, with consideration of bound-softening problems and resolution in linear Gaussian BI. The problem of estimating the radial magnetic field B(r) at the earth core-mantle boundary from surface and satellite measurements is then analyzed in detail, with specific attention to the selection of lambda in the studies of Gubbins (1983) and Gubbins and Bloxham (1985). It is argued that the selection method is inappropriate and leads to lambda values much larger than those that would result if a reasonable bound on the heat flow at the CMB were assumed.
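The damping parameter λ at issue in these studies plays the role of the regularisation weight in a Tikhonov (damped least squares) inversion; a minimal sketch of how λ trades data fit against model norm, on a made-up linear inverse problem unrelated to the geomagnetic models:

```python
import numpy as np

def tikhonov(G, d, lam):
    """Damped least squares: minimize ||G m - d||^2 + lam ||m||^2.
    Larger lam shrinks the model toward zero; smaller lam fits the
    data more closely but amplifies noise."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

# synthetic, well-posed toy problem (values are illustrative)
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 10))
m_true = np.ones(10)
d = G @ m_true + 0.01 * rng.normal(size=50)
m_small = tikhonov(G, d, 1e-6)   # nearly unregularized
m_big = tikhonov(G, d, 1e3)      # heavily damped
```

The Bayesian question the abstract raises is precisely how to choose `lam` in a principled way rather than by inspection of this trade-off.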
BIE: Bayesian Inference Engine
NASA Astrophysics Data System (ADS)
Weinberg, Martin D.
2013-12-01
The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics (e.g. moments, confidence intervals) can be determined. All of these quantities are fundamentally integrals, and the Markov chain approach produces variates θ distributed according to P(θ|D), so moments are trivially obtained by summing over the ensemble of variates.
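That last step, turning Markov chain variates into moments and intervals, is a plain ensemble average; a sketch using stand-in Gaussian draws in place of actual Markov chain output:

```python
import numpy as np

# stand-in for MCMC output: variates theta ~ P(theta | D)
rng = np.random.default_rng(42)
theta = rng.normal(loc=2.0, scale=0.5, size=100_000)

post_mean = theta.mean()                 # E[theta | D]
post_var = theta.var()                   # Var[theta | D]
ci = np.percentile(theta, [2.5, 97.5])   # 95% credible interval
```

Any posterior expectation E[f(θ)|D] is estimated the same way, as the sample mean of f over the variates.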
Bayes factors and multimodel inference
Link, W.A.; Barker, R.J.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.
2009-01-01
Multimodel inference has two main themes: model selection and model averaging. Model averaging is a means of making inference conditional on a model set, rather than on a selected model, allowing formal recognition of the uncertainty associated with model choice. The Bayesian paradigm provides a natural framework for model averaging, and provides a context for evaluation of the commonly used AIC weights. We review Bayesian multimodel inference, noting the importance of Bayes factors. Given the sensitivity of Bayes factors to the choice of priors on parameters, we define and propose nonpreferential priors as offering a reasonable standard for objective multimodel inference.
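The AIC weights mentioned here follow from AIC differences within the model set and are then used to average per-model estimates; a minimal sketch with made-up AIC values and estimates:

```python
import numpy as np

def akaike_weights(aic):
    """AIC weights: relative support for each model in the set,
    computed from AIC differences to the best model."""
    d = np.asarray(aic, dtype=float) - np.min(aic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

aic = [100.0, 102.0, 110.0]          # hypothetical AIC values
w = akaike_weights(aic)
# model-averaged estimate of a parameter, given per-model estimates
est = np.array([1.0, 1.2, 2.0])
avg = float(w @ est)
```

The Bayes-factor view of the abstract replaces these weights with posterior model probabilities, which coincide with such weights only under particular prior choices.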
Evolutionary inferences from the analysis of exchangeability
Hendry, Andrew P.; Kaeuffer, Renaud; Crispo, Erika; Peichel, Catherine L.; Bolnick, Daniel I.
2013-01-01
Evolutionary inferences are usually based on statistical models that compare mean genotypes and phenotypes (or their frequencies) among populations. An alternative is to use the actual distribution of genotypes and phenotypes to infer the “exchangeability” of individuals among populations. We illustrate this approach by using discriminant functions on principal components to classify individuals among paired lake and stream populations of threespine stickleback in each of six independent watersheds. Classification based on neutral and non-neutral microsatellite markers was highest to the population of origin and next-highest to populations in the same watershed. These patterns are consistent with the influence of historical contingency (separate colonization of each watershed) and subsequent gene flow (within but not between watersheds). In comparison to this low genetic exchangeability, ecological (diet) and morphological (trophic and armor traits) exchangeability was relatively high – particularly among populations from similar habitats. These patterns reflect the role of natural selection in driving parallel adaptive changes when independent populations colonize similar habitats. Importantly, however, substantial non-parallelism was also evident. Our results show that analyses based on exchangeability can confirm inferences based on statistical analyses of means or frequencies, while also refining insights into the drivers of – and constraints on – evolutionary diversification. PMID:24299398
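As a crude stand-in for the paper's discriminant-function classification (the PCA step and the real microsatellite and trait data are omitted), nearest-centroid assignment on hypothetical trait vectors conveys the idea of measuring exchangeability via cross-classification rates:

```python
import math

def centroid(rows):
    """Mean vector of a list of equal-length trait vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Assign an individual to the population with the nearest centroid
    (Euclidean distance) -- a crude proxy for discriminant classification."""
    best, best_d = None, float("inf")
    for pop, c in centroids.items():
        d = math.dist(x, c)
        if d < best_d:
            best, best_d = pop, d
    return best

# Two hypothetical populations measured on two traits each
pops = {
    "lake":   [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]],
    "stream": [[3.0, 0.5], [2.8, 0.7], [3.2, 0.4]],
}
cents = {p: centroid(rows) for p, rows in pops.items()}
# Fraction of individuals classified back to their population of origin:
# high rates indicate low exchangeability between the populations
rate = sum(classify(x, cents) == p
           for p, rows in pops.items() for x in rows) / 6
```

A rate near 1 corresponds to the paper's "low exchangeability" outcome; a rate near chance would indicate that individuals are effectively interchangeable between habitats.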
Learning to Observe "and" Infer
ERIC Educational Resources Information Center
Hanuscin, Deborah L.; Park Rogers, Meredith A.
2008-01-01
Researchers describe the need for students to have multiple opportunities and social interaction to learn about the differences between observation and inference and their role in developing scientific explanations (Harlen 2001; Simpson 2000). Helping children develop their skills of observation and inference in science while emphasizing the…
Feature Inference Learning and Eyetracking
ERIC Educational Resources Information Center
Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.
2009-01-01
Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…
Improving Inferences from Multiple Methods.
ERIC Educational Resources Information Center
Shotland, R. Lance; Mark, Melvin M.
1987-01-01
Multiple evaluation methods (MEMs) can cause an inferential challenge, although there are strategies to strengthen inferences. Practical and theoretical issues involved in the use by social scientists of MEMs, three potential problems in drawing inferences from MEMs, and short- and long-term strategies for alleviating these problems are outlined.…
Causal Inference in Retrospective Studies.
ERIC Educational Resources Information Center
Holland, Paul W.; Rubin, Donald B.
1988-01-01
The problem of drawing causal inferences from retrospective case-controlled studies is considered. A model for causal inference in prospective studies is applied to retrospective studies. Limitations of case-controlled studies are formulated concerning relevant parameters that can be estimated in such studies. A coffee-drinking/myocardial…
Causal Inference and Developmental Psychology
ERIC Educational Resources Information Center
Foster, E. Michael
2010-01-01
Causal inference is of central importance to developmental psychology. Many key questions in the field revolve around improving the lives of children and their families. These include identifying risk factors that if manipulated in some way would foster child development. Such a task inherently involves causal inference: One wants to know whether…
ERIC Educational Resources Information Center
Diergarten, Anna Katharina; Nieding, Gerhild
2015-01-01
Two studies examined inferences drawn about the protagonist's emotional state in movies (Study 1) or audiobooks (Study 2). Children aged 5, 8, and 10 years old and adults took part. Participants saw or heard 20 movie scenes or sections of audiobooks taken or adapted from the TV show Lassie. An online measure of emotional inference was designed…
Graphical Models and Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Almond, Russell G.; Mislevy, Robert J.
1999-01-01
Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
Active inference, communication and hermeneutics.
Friston, Karl J; Frith, Christopher D
2015-07-01
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa.
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
cosmoabc: Likelihood-free inference for cosmology
NASA Astrophysics Data System (ADS)
Ishida, Emille E. O.; Vitenti, Sandro D. P.; Penna-Lima, Mariana; Trindade, Arlindo M.; Cisewski, Jessi; de Souza, Rafael; Cameron, Ewan; Busti, Vinicius C.
2015-05-01
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python ABC sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy cluster number counts without computing the likelihood function.
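The core ABC loop — simulate mock data, compare with the observed catalog, keep parameters whose distance is small — can be sketched in a few lines. This is plain rejection ABC, not cosmoabc's actual Population Monte Carlo API, and all names below are illustrative:

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_draw, distance, eps, n_keep=300):
    """Minimal rejection-ABC loop: draw a parameter from the prior,
    forward-simulate a mock summary, and keep the parameter when the
    synthetic summary lies within eps of the observed one.  (The PMC
    variant adapts eps and the proposal across iterations; this sketch
    keeps both fixed.)"""
    accepted = []
    while len(accepted) < n_keep:
        theta = prior_draw()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
true_mu = 3.0
# Observed "catalog" reduced to one summary statistic: the sample mean
obs = statistics.fmean(random.gauss(true_mu, 1.0) for _ in range(200))

post = abc_rejection(
    observed=obs,
    simulate=lambda mu: statistics.fmean(random.gauss(mu, 1.0) for _ in range(200)),
    prior_draw=lambda: random.uniform(0.0, 6.0),
    distance=lambda a, b: abs(a - b),
    eps=0.1,
)
est = statistics.fmean(post)   # posterior mean recovers true_mu approximately
```

No likelihood is ever evaluated: only the simulator and a distance function are needed, which is the whole appeal for intractable cosmological models.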
Design-based and model-based inference in surveys of freshwater mollusks
Dorazio, R.M.
1999-01-01
Well-known concepts in statistical inference and sampling theory are used to develop recommendations for planning and analyzing the results of quantitative surveys of freshwater mollusks. Two methods of inference commonly used in survey sampling (design-based and model-based) are described and illustrated using examples relevant in surveys of freshwater mollusks. The particular objectives of a survey and the type of information observed in each unit of sampling can be used to help select the sampling design and the method of inference. For example, the mean density of a sparsely distributed population of mollusks can be estimated with higher precision by using model-based inference or by using design-based inference with adaptive cluster sampling than by using design-based inference with conventional sampling. More experience with quantitative surveys of natural assemblages of freshwater mollusks is needed to determine the actual benefits of different sampling designs and inferential procedures.
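The design-based side of this comparison has a simple canonical estimator. Below is a sketch for mean density under simple random sampling of quadrats (adaptive cluster sampling, which the abstract recommends for sparse populations, is not shown); the quadrat size and counts are hypothetical:

```python
import random
import statistics

def srs_mean_density(sample_counts, quadrat_area, n_quadrats_total):
    """Design-based estimate of mean density under simple random
    sampling of quadrats, with the usual variance estimator including
    the finite-population correction."""
    n = len(sample_counts)
    densities = [c / quadrat_area for c in sample_counts]
    mean = statistics.fmean(densities)
    var_hat = statistics.variance(densities) / n * (1.0 - n / n_quadrats_total)
    return mean, var_hat

random.seed(2)
# Hypothetical sparse mollusk counts in 40 sampled 0.25 m^2 quadrats
counts = [random.choice([0, 0, 0, 0, 1, 2]) for _ in range(40)]
mean, var_hat = srs_mean_density(counts, quadrat_area=0.25, n_quadrats_total=1000)
```

For a sparsely distributed population the zeros dominate, inflating `var_hat`; that inflation is what motivates the model-based or adaptive-cluster alternatives the abstract discusses.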
Adaptation without Plasticity.
Del Mar Quiroga, Maria; Morris, Adam P; Krekelberg, Bart
2016-09-27
Sensory adaptation is a phenomenon in which neurons are affected not only by their immediate input but also by the sequence of preceding inputs. In visual cortex, for example, neurons shift their preferred orientation after exposure to an oriented stimulus. This adaptation is traditionally attributed to plasticity. We show that a recurrent network generates tuning curve shifts observed in cat and macaque visual cortex, even when all synaptic weights and intrinsic properties in the model are fixed. This demonstrates that, in a recurrent network, adaptation on timescales of hundreds of milliseconds does not require plasticity. Given the ubiquity of recurrent connections, this phenomenon likely contributes to responses observed across cortex and shows that plasticity cannot be inferred solely from changes in tuning on these timescales. More broadly, our findings show that recurrent connections can endow a network with a powerful mechanism to store and integrate recent contextual information. PMID:27681421
Children's Category-Based Inferences Affect Classification
ERIC Educational Resources Information Center
Ross, Brian H.; Gelman, Susan A.; Rosengren, Karl S.
2005-01-01
Children learn many new categories and make inferences about these categories. Much work has examined how children make inferences on the basis of category knowledge. However, inferences may also affect what is learned about a category. Four experiments examine whether category-based inferences during category learning influence category knowledge…
Causal inference from observational data.
Listl, Stefan; Jürges, Hendrik; Watt, Richard G
2016-10-01
Randomized controlled trials have long been considered the 'gold standard' for causal inference in clinical research. In the absence of randomized experiments, identification of reliable intervention points to improve oral health is often perceived as a challenge. But other fields of science, such as social science, have always been challenged by ethical constraints to conducting randomized controlled trials. Methods have been established to make causal inference using observational data, and these methods are becoming increasingly relevant in clinical medicine, health policy and public health research. This study provides an overview of state-of-the-art methods specifically designed for causal inference in observational data, including difference-in-differences (DiD) analyses, instrumental variables (IV), regression discontinuity designs (RDD) and fixed-effects panel data analysis. The described methods may be particularly useful in dental research, not least because of the increasing availability of routinely collected administrative data and electronic health records ('big data'). PMID:27111146
Schirillo, James A
2013-10-01
In studies of lightness and color constancy, the terms lightness and brightness refer to the qualia corresponding to perceived surface reflectance and perceived luminance, respectively. However, what has rarely been considered is the fact that the volume of space containing surfaces appears neither empty, void, nor black, but filled with light. Helmholtz (1866/1962) came closest to describing this phenomenon when discussing inferred illumination, but previous theoretical treatments have fallen short by restricting their considerations to the surfaces of objects. The present work is among the first to explore how we infer the light present in empty space. It concludes with several research examples supporting the theory that humans can infer the differential levels and chromaticities of illumination in three-dimensional space. PMID:23435628
Inferring Diversity: Life After Shannon
NASA Astrophysics Data System (ADS)
Giffin, Adom
The diversity of a community that cannot be fully counted must be inferred. The two preeminent inference methods are the MaxEnt method, which uses information in the form of constraints, and Bayes' rule, which uses information in the form of data. It has been shown that these two methods are special cases of the method of Maximum (relative) Entropy (ME). We demonstrate how this method can be used as a measure of diversity that not only reproduces the features of Shannon's index but exceeds them by allowing more types of information to be included in the inference. A specific example is solved in detail. Additionally, the entropy that is found has the same form as the thermodynamic entropy.
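Both ingredients — Shannon's index computed from counts, and a maximum-entropy distribution obtained from a constraint — can be sketched directly. The bisection solve for the Lagrange multiplier below is a generic technique, not taken from the paper:

```python
import math

def shannon_index(counts):
    """Shannon's H = -sum p ln p over observed proportions.
    exp(H) gives the effective number of equally common species."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in ps)

def maxent_given_mean(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution over `values` subject to a fixed
    mean: p_i proportional to exp(-lam * x_i), with the multiplier lam
    found by bisection.  This is the constraint-based (MaxEnt) side of
    the inference described above."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in values]
        t = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / t
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean_for(mid) > target_mean:   # mean_for is decreasing in lam
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(-lam * x) for x in values]
    t = sum(w)
    return [wi / t for wi in w]
```

With no constraint beyond normalization the MaxEnt solution is uniform, for which H reduces exactly to log of the number of types — Shannon's index as a special case.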
Perception, illusions and Bayesian inference.
Nour, Matthew M; Nour, Joseph M
2015-01-01
Descriptive psychopathology makes a distinction between veridical perception and illusory perception. In both cases a perception is tied to a sensory stimulus, but in illusions the perception is of a false object. This article re-examines this distinction in light of new work in theoretical and computational neurobiology, which views all perception as a form of Bayesian statistical inference that combines sensory signals with prior expectations. Bayesian perceptual inference can solve the 'inverse optics' problem of veridical perception and provides a biologically plausible account of a number of illusory phenomena, suggesting that veridical and illusory perceptions are generated by precisely the same inferential mechanisms.
Inferring biotic interactions from proxies.
Morales-Castilla, Ignacio; Matias, Miguel G; Gravel, Dominique; Araújo, Miguel B
2015-06-01
Inferring biotic interactions from functional, phylogenetic and geographical proxies remains one great challenge in ecology. We propose a conceptual framework to infer the backbone of biotic interaction networks within regional species pools. First, interacting groups are identified to order links and remove forbidden interactions between species. Second, additional links are removed by examination of the geographical context in which species co-occur. Third, hypotheses are proposed to establish interaction probabilities between species. We illustrate the framework using published food-webs in terrestrial and marine systems. We conclude that preliminary descriptions of the web of life can be made by careful integration of data with theory.
Noninvasive inference of the molecular chemotactic response using bacterial trajectories.
Masson, Jean-Baptiste; Voisinne, Guillaume; Wong-Ng, Jerome; Celani, Antonio; Vergassola, Massimo
2012-01-31
The quality of sensing and response to external stimuli constitutes a basic element in the selective performance of living organisms. Here we consider the response of Escherichia coli to chemical stimuli. For moderate amplitudes, the bacterial response to generic profiles of sensed chemicals is reconstructed from its response function to an impulse, which then controls the efficiency of bacterial motility. We introduce a method for measuring the impulse response function based on coupling microfluidic experiments and inference methods: The response function is inferred using Bayesian methods from the observed trajectories of bacteria swimming in microfluidically controlled chemical fields. The notable advantages are that the method is based on the bacterial swimming response, it is noninvasive, without any genetic and/or mechanical preparation, and assays the behavior of the whole flagella bundle. We exploit the inference method to measure responses to aspartate and α-methylaspartate--measured previously by other methods--as well as glucose, leucine, and serine. The response to the attractant glucose is shown to be biphasic and perfectly adapted, as for aspartate. The response to the attractant serine is shown to be biphasic yet imperfectly adapted, that is, the response function has a nonzero (positive) integral. The adaptation of the response to the repellent leucine is also imperfect, with the sign of the two phases inverted with respect to serine. The diversity in the bacterial population of the response function and its dependency upon the background concentration are quantified. PMID:22307649
Perceptual Inference and Autistic Traits
ERIC Educational Resources Information Center
Skewes, Joshua C; Jegindø, Else-Marie; Gebauer, Line
2015-01-01
Autistic people are better at perceiving details. Major theories explain this in terms of bottom-up sensory mechanisms or in terms of top-down cognitive biases. Recently, it has become possible to link these theories within a common framework. This framework assumes that perception is implicit neural inference, combining sensory evidence with…
Science Shorts: Observation versus Inference
ERIC Educational Resources Information Center
Leager, Craig R.
2008-01-01
When you observe something, how do you know for sure what you are seeing, feeling, smelling, or hearing? Asking students to think critically about their encounters with the natural world will help to strengthen their understanding and application of the science-process skills of observation and inference. In the following lesson, students make…
Sample Size and Correlational Inference
ERIC Educational Resources Information Center
Anderson, Richard B.; Doherty, Michael E.; Friedrich, Jeff C.
2008-01-01
In 4 studies, the authors examined the hypothesis that the structure of the informational environment makes small samples more informative than large ones for drawing inferences about population correlations. The specific purpose of the studies was to test predictions arising from the signal detection simulations of R. B. Anderson, M. E. Doherty,…
Improving Explanatory Inferences from Assessments
ERIC Educational Resources Information Center
Diakow, Ronli Phyllis
2013-01-01
This dissertation comprises three papers that propose, discuss, and illustrate models to make improved inferences about research questions regarding student achievement in education. Addressing the types of questions common in educational research today requires three different "extensions" to traditional educational assessment: (1)…
Reentry vehicle adaptive telemetry
Kidner, R.E.
1993-09-01
In RF telemetry (TM) the allowable RF bandwidth limits the amount of data in the telemetered data set. Typically the data set is less than ideal to accommodate all aspects of a test. In the case of diagnostic data, the compromise often leaves insufficient diagnostic data when problems occur. As a solution, intelligence was designed into a TM, allowing it to adapt to changing data requirements. To minimize the computational requirements for an intelligent TM, a fuzzy logic inference engine was developed. This inference engine was simulated on a PC and then loaded into a TM hardware package for final testing.
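A fuzzy inference engine of the kind described can be extremely small, which is why it suits a low-compute TM package. The sketch below is a generic Mamdani-style rule evaluation with an invented anomaly-to-bandwidth mapping, not the original engine:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_bandwidth(anomaly):
    """Toy Mamdani-style inference: map an anomaly score in [0, 1] to the
    fraction of TM bandwidth given to diagnostic data.  Each rule's firing
    strength weights its output level; a weighted average defuzzifies.
    The rules and membership shapes are illustrative only."""
    rules = [
        (tri(anomaly, -0.5, 0.0, 0.5), 0.1),   # anomaly LOW  -> little diagnostic data
        (tri(anomaly,  0.0, 0.5, 1.0), 0.5),   # anomaly MED  -> moderate share
        (tri(anomaly,  0.5, 1.0, 1.5), 0.9),   # anomaly HIGH -> mostly diagnostic data
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

The whole engine is a handful of multiplications and one division per update, with smooth interpolation between rules — the property that lets the telemetry adapt gradually rather than switching data sets abruptly.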
Network Plasticity as Bayesian Inference
Legenstein, Robert; Maass, Wolfgang
2015-01-01
General results from statistical learning theory suggest understanding not only brain computations but also brain plasticity as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling. PMID:26545099
Bayesian inference on proportional elections.
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
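The Monte Carlo step can be sketched with a highest-averages (D'Hondt) allocation as a simplified stand-in for the Brazilian seat-distribution rules; the poll shares, seat count, and electorate size below are hypothetical:

```python
import random
from collections import Counter

def dhondt(votes, seats):
    """Highest-averages (D'Hondt) allocation: repeatedly award a seat to
    the party with the largest votes / (seats_won + 1) quotient."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        best = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[best] += 1
    return alloc

def prob_representation(poll_shares, seats, n_voters=5000, n_sims=400, seed=3):
    """Monte Carlo probability that each party wins at least one seat,
    resampling individual ballots from the polled shares.  A simplified
    stand-in for the paper's methodology, not its implementation."""
    rng = random.Random(seed)
    parties = list(poll_shares)
    weights = [poll_shares[p] for p in parties]
    wins = Counter()
    for _ in range(n_sims):
        ballots = rng.choices(parties, weights=weights, k=n_voters)
        votes = Counter(ballots)
        alloc = dhondt({p: votes.get(p, 0) for p in parties}, seats)
        for p in parties:
            if alloc[p] >= 1:
                wins[p] += 1
    return {p: wins[p] / n_sims for p in parties}

# Hypothetical three-party poll contesting 10 seats
probs = prob_representation({"A": 0.48, "B": 0.42, "C": 0.10}, seats=10)
```

The interesting output is exactly the paper's target quantity: the probability of representation for a marginal party (here "C"), which vote-share point estimates alone cannot provide.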
Statistical learning and selective inference
Taylor, Jonathan; Tibshirani, Robert J.
2015-01-01
We describe the problem of “selective inference.” This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have “cherry-picked”—searched for the strongest associations—means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis. PMID:26100887
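The cherry-picking effect is easy to demonstrate by simulation: under a pure-noise model, the strongest of many candidate correlations looks far larger than any single pre-specified one. A sketch with illustrative sample sizes (this is the motivating phenomenon, not the paper's selective-inference machinery):

```python
import math
import random
import statistics

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs)
                    * sum((b - my) ** 2 for b in ys))
    return num / den

random.seed(4)
n, m, reps = 50, 20, 200
max_rs, single_rs = [], []
for _ in range(reps):
    # Pure noise: y is unrelated to every candidate predictor
    y = [random.gauss(0, 1) for _ in range(n)]
    xs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
    rs = [abs(pearson(x, y)) for x in xs]
    max_rs.append(max(rs))      # the "cherry-picked" strongest association
    single_rs.append(rs[0])     # one honest, pre-specified comparison

# Selection inflates the apparent effect size even though nothing is real,
# which is why selective inference must set a higher bar for significance.
gap = statistics.fmean(max_rs) - statistics.fmean(single_rs)
```

The selected maximum is systematically biased upward; valid post-selection p-values must condition on the selection event rather than treat the winner as if it had been chosen in advance.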
Causal inference based on counterfactuals
Höfler, M
2005-01-01
Background The counterfactual or potential outcome model has become increasingly standard for causal inference in epidemiological and medical studies. Discussion This paper provides an overview on the counterfactual and related approaches. A variety of conceptual as well as practical issues when estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in health sciences and relates to many statistical procedures. Summary Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept. PMID:16159397
Cortical circuits for perceptual inference.
Friston, Karl; Kiebel, Stefan
2009-10-01
This paper assumes that cortical circuits have evolved to enable inference about the causes of sensory input received by the brain. This provides a principled specification of what neural circuits have to achieve. Here, we attempt to address how the brain makes inferences by casting inference as an optimisation problem. We look at how the ensuing recognition dynamics could be supported by directed connections and message-passing among neuronal populations, given our knowledge of intrinsic and extrinsic neuronal connections. We assume that the brain models the world as a dynamic system, which imposes causal structure on the sensorium. Perception is equated with the optimisation or inversion of this internal model, to explain sensory input. Given a model of how sensory data are generated, we use a generic variational approach to model inversion to furnish equations that prescribe recognition; i.e., the dynamics of neuronal activity that represents the causes of sensory input. Here, we focus on a model whose hierarchical and dynamical structure enables simulated brains to recognise and predict sequences of sensory states. We first review these models and their inversion under a variational free-energy formulation. We then show that the brain has the necessary infrastructure to implement this inversion and present simulations using synthetic birds that generate and recognise birdsongs.
Cortical circuits for perceptual inference.
Friston, Karl; Kiebel, Stefan
2009-10-01
This paper assumes that cortical circuits have evolved to enable inference about the causes of sensory input received by the brain. This provides a principled specification of what neural circuits have to achieve. Here, we attempt to address how the brain makes inferences by casting inference as an optimisation problem. We look at how the ensuing recognition dynamics could be supported by directed connections and message-passing among neuronal populations, given our knowledge of intrinsic and extrinsic neuronal connections. We assume that the brain models the world as a dynamic system, which imposes causal structure on the sensorium. Perception is equated with the optimisation or inversion of this internal model, to explain sensory input. Given a model of how sensory data are generated, we use a generic variational approach to model inversion to furnish equations that prescribe recognition; i.e., the dynamics of neuronal activity that represents the causes of sensory input. Here, we focus on a model whose hierarchical and dynamical structure enables simulated brains to recognise and predict sequences of sensory states. We first review these models and their inversion under a variational free-energy formulation. We then show that the brain has the necessary infrastructure to implement this inversion and present simulations using synthetic birds that generate and recognise birdsongs. PMID:19635656
Daniels, Bryan C.; Nemenman, Ilya
2015-01-01
The nonlinearity of dynamics in systems biology makes it hard to infer them from experimental data. Simple linear models are computationally efficient, but cannot incorporate these important nonlinearities. An adaptive method based on the S-system formalism, which is a sensible representation of nonlinear mass-action kinetics typically found in cellular dynamics, maintains the efficiency of linear regression. We combine this approach with adaptive model selection to obtain efficient and parsimonious representations of cellular dynamics. The approach is tested by inferring the dynamics of yeast glycolysis from simulated data. With little computing time, it produces dynamical models with high predictive power and with structural complexity adapted to the difficulty of the inference problem. PMID:25806510
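For reference, an S-system writes each rate as a difference of two power laws, dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij, which is what makes fitting tractable in log space. The sketch below forward-simulates a toy two-species S-system; all parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def s_system_rhs(x, alpha, g, beta, h):
    # dx_i/dt = alpha_i * prod_j x_j^g_ij - beta_i * prod_j x_j^h_ij
    prod_g = np.prod(x[None, :] ** g, axis=1)
    prod_h = np.prod(x[None, :] ** h, axis=1)
    return alpha * prod_g - beta * prod_h

# Hypothetical two-species system (parameters chosen for illustration)
alpha = np.array([1.0, 0.5]); beta = np.array([0.5, 0.5])
g = np.array([[0.0, -0.5], [1.0, 0.0]])   # production exponents
h = np.array([[1.0, 0.0], [0.0, 1.0]])    # degradation exponents

x = np.array([1.0, 1.0])
dt = 0.01
for _ in range(5000):                     # simple forward-Euler integration
    x = x + dt * s_system_rhs(x, alpha, g, beta, h)
print(x)                                  # settles near its steady state
```

Setting both rates equal gives the steady state analytically (here x1 = x2 = 2^(2/3)), which is a useful sanity check on the simulation.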
Small, Dylan S; Joffe, Marshall M; Lynch, Kevin G; Roy, Jason A; Russell Localio, A
2014-09-10
Tom Ten Have made many contributions to causal inference and biostatistics before his untimely death. This paper reviews Tom's contributions and discusses potential related future research directions. We focus on Tom's contributions to longitudinal/repeated measures categorical data analysis and particularly his contributions to causal inference. Tom's work on causal inference was primarily in the areas of estimating the effect of receiving treatment in randomized trials with nonadherence and mediation analysis. An area related to mediation analysis that he was working on at the time of his death was posttreatment effect modification, with applications to designing adaptive treatment strategies.
Bayesian inference for Markov jump processes with informative observations.
Golightly, Andrew; Wilkinson, Darren J
2015-04-01
In this paper we consider the problem of parameter inference for Markov jump process (MJP) representations of stochastic kinetic models. Since transition probabilities are intractable for most processes of interest yet forward simulation is straightforward, Bayesian inference typically proceeds through computationally intensive methods such as (particle) MCMC. Such methods ostensibly require the ability to simulate trajectories from the conditioned jump process. When observations are highly informative, use of the forward simulator is likely to be inefficient and may even preclude an exact (simulation based) analysis. We therefore propose three methods for improving the efficiency of simulating conditioned jump processes. A conditioned hazard is derived based on an approximation to the jump process, and used to generate end-point conditioned trajectories for use inside an importance sampling algorithm. We also adapt a recently proposed sequential Monte Carlo scheme to our problem. Essentially, trajectories are reweighted at a set of intermediate time points, with more weight assigned to trajectories that are consistent with the next observation. We consider two implementations of this approach, based on two continuous approximations of the MJP. We compare these constructs for a simple tractable jump process before using them to perform inference for a Lotka-Volterra system. The best performing construct is used to infer the parameters governing a simple model of motility regulation in Bacillus subtilis. PMID:25720091
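The reweighting idea above can be caricatured in a few lines: forward-simulate many trajectories of a simple birth-death MJP with the Gillespie algorithm, then weight each trajectory by its agreement with an informative observation, the simplest importance-sampling alternative to simulating the conditioned process directly. The rates, observation model, and all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_birth_death(x0, lam, mu, t_end):
    """Forward-simulate a birth-death MJP (birth rate lam, death rate mu*x)."""
    x, t = x0, 0.0
    while True:
        rates = np.array([lam, mu * x])
        total = rates.sum()
        if total == 0.0:
            return x
        t += rng.exponential(1.0 / total)   # time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.uniform() < rates[0] / total else -1

# Simulate many trajectories, then weight by a Gaussian observation model
n_part, y_obs, sigma = 2000, 12, 2.0
ends = np.array([gillespie_birth_death(5, 4.0, 0.3, 2.0) for _ in range(n_part)])
logw = -0.5 * ((ends - y_obs) / sigma) ** 2
w = np.exp(logw - logw.max()); w /= w.sum()
posterior_mean = np.sum(w * ends)           # weighted estimate of X(T) given y
print(posterior_mean)
```

When the observation is highly informative (small sigma), most forward trajectories receive negligible weight, which is exactly the inefficiency the paper's conditioned-hazard and reweighted SMC constructs address.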
Category Representation for Classification and Feature Inference
ERIC Educational Resources Information Center
Johansen, Mark K.; Kruschke, John K.
2005-01-01
This research's purpose was to contrast the representations resulting from learning of the same categories by either classifying instances or inferring instance features. Prior inference learning research, particularly T. Yamauchi and A. B. Markman (1998), has suggested that feature inference learning fosters prototype representation, whereas…
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive managem...
Neuronal integration of dynamic sources: Bayesian learning and Bayesian inference
NASA Astrophysics Data System (ADS)
Siegelmann, Hava T.; Holzman, Lars E.
2010-09-01
One of the brain's most basic functions is integrating sensory data from diverse sources. This ability causes us to question whether the neural system is computationally capable of intelligently integrating data, not only when sources have known, fixed relative dependencies but also when it must determine such relative weightings based on dynamic conditions, and then use these learned weightings to accurately infer information about the world. We suggest that the brain is, in fact, fully capable of computing this parallel task in a single network and describe a neural inspired circuit with this property. Our implementation suggests the possibility that evidence learning requires a more complex organization of the network than was previously assumed, where neurons have different specialties, whose emergence brings the desired adaptivity seen in human online inference.
sick: The Spectroscopic Inference Crank
NASA Astrophysics Data System (ADS)
Casey, Andrew R.
2016-03-01
There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
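As a schematic of the underlying sampling step (not the sick package API), a random-walk Metropolis sampler fitting a single Gaussian absorption line to synthetic flux might look like the following; the line model, noise level, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectrum": one Gaussian absorption line on a flat continuum
wave = np.linspace(-5, 5, 200)
true_depth, true_center = 0.6, 0.8
def model(d, c):
    return 1.0 - d * np.exp(-0.5 * (wave - c) ** 2)
flux = model(true_depth, true_center) + rng.normal(0, 0.02, wave.size)

def log_post(theta):
    d, c = theta
    if not (0.0 < d < 1.0 and -5.0 < c < 5.0):   # flat priors on both parameters
        return -np.inf
    return -0.5 * np.sum((flux - model(d, c)) ** 2) / 0.02 ** 2

theta = np.array([0.5, 0.5])
lp = log_post(theta)
chain = []
for _ in range(20000):                            # random-walk Metropolis
    prop = theta + rng.normal(0, 0.01, 2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
depth_est, center_est = np.mean(chain[5000:], axis=0)
print(depth_est, center_est)
```

Nuisance parameters such as redshift or continuum level would enter `log_post` as extra dimensions and be marginalized simply by ignoring them when summarizing the chain.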
Universum Inference and Corpus Homogeneity
NASA Astrophysics Data System (ADS)
Vogel, Carl; Lynch, Gerard; Janssen, Jerom
Universum Inference is re-interpreted for assessment of corpus homogeneity in computational stylometry. Recent stylometric research quantifies strength of characterization within dramatic works by assessing the homogeneity of corpora associated with dramatic personas. A methodological advance is suggested to mitigate the potential for the assessment of homogeneity to be achieved by chance. Baseline comparison analysis is constructed for contributions to debates by nonfictional participants: the corpus analyzed consists of transcripts of US Presidential and Vice-Presidential debates from the 2000 election cycle. The corpus is also analyzed in translation to Italian, Spanish and Portuguese. Adding randomized categories makes assessments of homogeneity more conservative.
Impact of nonignorable coarsening on Bayesian inference.
Zhang, Jiameng; Heitjan, Daniel F
2007-10-01
The coarse data model of Heitjan and Rubin (1991) generalizes the missing data model of Rubin (1976) to cover other forms of incompleteness such as censoring and grouping. The model has 2 components: an ideal data model describing the distribution of the quantity of interest and a coarsening mechanism that describes a distribution over degrees of coarsening given the ideal data. The coarsening mechanism is said to be nonignorable when the degree of coarsening depends on an incompletely observed ideal outcome, in which case failure to properly account for it can spoil inferences. A theme in recent research is to measure sensitivity to nonignorability by evaluating the effect of a small departure from ignorability on the maximum likelihood estimate (MLE) of a parameter of the ideal data model. One such construct is the "index of local sensitivity to nonignorability" (ISNI) (Troxel and others, 2004), which is the derivative of the MLE with respect to a nonignorability parameter evaluated at the ignorable model. In this paper, we adapt ISNI to Bayesian modeling by instead defining it as the derivative of the posterior expectation. We propose the application of ISNI as a first step in judging the robustness of a Bayesian analysis to nonignorable coarsening. We derive formulas for a range of models and apply the method to evaluate sensitivity to nonignorable coarsening in 2 real data examples, one involving missing CD4 counts in an HIV trial and the other involving potentially informatively censored relapse times in a leukemia trial.
Hunt, R.L.
1983-12-27
An adapter is disclosed for use with a fireplace. The stove pipe of a stove standing in a room to be heated may be connected to the flue of the chimney so that products of combustion from the stove may be safely exhausted through the flue and outwardly of the chimney. The adapter may be easily installed within the fireplace by removing the damper plate and fitting the adapter to the damper frame. Each of a pair of bolts has a portion which hooks over a portion of the damper frame and a threaded end depending from the hook portion and extending through a hole in the adapter. Nuts are threaded on the bolts and are adapted to force the adapter into a tight fit with the damper frame.
Symbolic inference of xenobiotic metabolism
McShan, D.C.; Updadhayaya, M.; Shah, I.
2009-01-01
We present a new symbolic computational approach to elucidate the biochemical networks of living systems de novo and we apply it to an important biomedical problem: xenobiotic metabolism. A crucial issue in analyzing and modeling a living organism is understanding its biochemical network beyond what is already known. Our objective is to use the available metabolic information in a representational framework that enables the inference of novel biochemical knowledge and whose results can be validated experimentally. We describe a symbolic computational approach consisting of two parts. First, biotransformation rules are inferred from the molecular graphs of compounds in enzyme-catalyzed reactions. Second, these rules are recursively applied to different compounds to generate novel metabolic networks, containing new biotransformations and new metabolites. Using data for 456 generic reactions and 825 generic compounds from KEGG we were able to extract 110 biotransformation rules, which generalize a subset of known biocatalytic functions. We tested our approach by applying these rules to ethanol, a common substance of abuse and to furfuryl alcohol, a xenobiotic organic solvent, which is absent in metabolic databases. In both cases our predictions on the fate of ethanol and furfuryl alcohol are consistent with the literature on the metabolism of these compounds. PMID:14992532
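The recursive rule-application step can be caricatured with string stand-ins for molecular graphs. The two "rules" below are hypothetical simplifications (real systems operate on molecular graphs, for example via SMARTS-style patterns), but they show the expansion loop: apply every rule to every known compound, add novel products, and repeat.

```python
# Toy biotransformation rules: each maps a functional-group pattern to its
# product, using string stand-ins for molecular graphs (illustrative only).
RULES = {
    "alcohol->aldehyde": lambda m: m.replace("CH2OH", "CHO"),
    "aldehyde->acid":    lambda m: m.replace("CHO", "COOH"),
}

def expand_network(seeds, max_depth=3):
    """Recursively apply every rule to every known compound."""
    compounds, edges, frontier = set(seeds), [], list(seeds)
    for _ in range(max_depth):
        new = []
        for m in frontier:
            for name, rule in RULES.items():
                p = rule(m)
                if p != m and p not in compounds:   # novel metabolite
                    compounds.add(p)
                    edges.append((m, name, p))
                    new.append(p)
        frontier = new
    return compounds, edges

# Ethanol -> acetaldehyde -> acetic acid, mirroring the known fate of ethanol
comps, edges = expand_network(["CH3CH2OH"])
print(sorted(comps))
```

The generated `edges` list is the predicted metabolic network; in the paper's setting each edge is a candidate biotransformation to be validated experimentally.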
Bayesian inference for OPC modeling
NASA Astrophysics Data System (ADS)
Burbine, Andrew; Sturtevant, John; Fryer, David; Smith, Bruce W.
2016-03-01
The use of optical proximity correction (OPC) demands increasingly accurate models of the photolithographic process. Model building and inference techniques in the data science community have seen great strides in the past two decades which make better use of available information. This paper aims to demonstrate the predictive power of Bayesian inference as a method for parameter selection in lithographic models by quantifying the uncertainty associated with model inputs and wafer data. Specifically, the method combines the model builder's prior information about each modelling assumption with the maximization of each observation's likelihood as a Student's t-distributed random variable. Through the use of a Markov chain Monte Carlo (MCMC) algorithm, a model's parameter space is explored to find the most credible parameter values. During parameter exploration, the parameters' posterior distributions are generated by applying Bayes' rule, using a likelihood function and the a priori knowledge supplied. The MCMC algorithm used, an affine invariant ensemble sampler (AIES), is implemented by initializing many walkers which semi-independently explore the space. The convergence of these walkers to global maxima of the likelihood volume determines the parameter values' highest density intervals (HDIs), revealing champion models. We show that this method of parameter selection provides insights into the data that traditional methods do not and outline continued experiments to vet the method.
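The stretch move at the heart of an affine-invariant ensemble sampler is compact enough to sketch directly (following the Goodman and Weare construction). The target below is a toy correlated Gaussian standing in for a lithographic model's posterior, and all tuning constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_prob(theta):
    # Stand-in for a lithography model's log-posterior: a correlated
    # 2-D Gaussian (illustrative, not an OPC model).
    x, y = theta
    return -0.5 * (x ** 2 + (y - x) ** 2 / 0.25)

# Minimal affine-invariant ensemble sampler (stretch move)
n_walk, n_dim, a = 20, 2, 2.0
walkers = rng.normal(0, 1, (n_walk, n_dim))
lp = np.array([log_prob(w) for w in walkers])
samples = []
for step in range(3000):
    for k in range(n_walk):
        j = rng.integers(n_walk - 1)
        j += j >= k                                  # pick a different walker
        z = ((a - 1) * rng.uniform() + 1) ** 2 / a   # z ~ g(z) proportional to 1/sqrt(z)
        prop = walkers[j] + z * (walkers[k] - walkers[j])
        lp_prop = log_prob(prop)
        # Accept with probability min(1, z^(d-1) * p(prop)/p(current))
        if np.log(rng.uniform()) < (n_dim - 1) * np.log(z) + lp_prop - lp[k]:
            walkers[k], lp[k] = prop, lp_prop
    if step >= 1000:                                 # keep post-burn-in samples
        samples.append(walkers.copy())
samples = np.concatenate(samples)
print(samples.mean(axis=0), samples.std(axis=0))
```

Because the stretch move is invariant under affine transformations, it needs no hand-tuned proposal covariance, which is the practical appeal of AIES for correlated model parameters.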
Dopamine, affordance and active inference.
Friston, Karl J; Shiner, Tamara; FitzGerald, Thomas; Galea, Joseph M; Adams, Rick; Brown, Harriet; Dolan, Raymond J; Moran, Rosalyn; Stephan, Klaas Enno; Bestmann, Sven
2012-01-01
The role of dopamine in behaviour and decision-making is often cast in terms of reinforcement learning and optimal decision theory. Here, we present an alternative view that frames the physiology of dopamine in terms of Bayes-optimal behaviour. In this account, dopamine controls the precision or salience of (external or internal) cues that engender action. In other words, dopamine balances bottom-up sensory information and top-down prior beliefs when making hierarchical inferences (predictions) about cues that have affordance. In this paper, we focus on the consequences of changing tonic levels of dopamine firing using simulations of cued sequential movements. Crucially, the predictions driving movements are based upon a hierarchical generative model that infers the context in which movements are made. This means that we can confuse agents by changing the context (order) in which cues are presented. These simulations provide a (Bayes-optimal) model of contextual uncertainty and set switching that can be quantified in terms of behavioural and electrophysiological responses. Furthermore, one can simulate dopaminergic lesions (by changing the precision of prediction errors) to produce pathological behaviours that are reminiscent of those seen in neurological disorders such as Parkinson's disease. We use these simulations to demonstrate how a single functional role for dopamine at the synaptic level can manifest in different ways at the behavioural level.
Trandimensional Inference in the Geosciences
NASA Astrophysics Data System (ADS)
Bodin, Thomas
2016-04-01
An inverse problem is a task that occurs in many branches of the Earth sciences, where the values of model parameters describing the Earth must be obtained from noisy observations made at the surface. In all applications of inversion, assumptions are made about the nature of the model parametrisation and data noise characteristics, and results can depend significantly on those assumptions. These quantities are often manually `tuned' by means of subjective trial-and-error procedures, which prevents uncertainties in the solution from being accurately quantified. A Bayesian approach allows these assumptions to be relaxed by incorporating relevant parameters as unknowns in the inference problem. Rather than being forced to make decisions on parametrisation, the level of data noise and the weights between data types in advance, as is often the case in an optimization framework, the choice can be informed by the data themselves. Probabilistic sampling techniques such as transdimensional Markov chain Monte Carlo allow sampling over complex posterior probability density functions, thus providing information on constraint, trade-offs and uncertainty in the unknowns. This presentation reviews transdimensional inference and its application to different problems, ranging from geochemistry to solid Earth geophysics.
Quantum Inference on Bayesian Networks
NASA Astrophysics Data System (ADS)
Yoder, Theodore; Low, Guang Hao; Chuang, Isaac
2014-03-01
Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speedup sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time O(nm P(e)^(-1)), depending critically on P(e), the probability the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking O(n 2^m P(e)^(-1/2)) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
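The classical baseline, rejection sampling on a Bayesian network, is easy to sketch. Below is the textbook "sprinkler" network; samples inconsistent with the evidence WetGrass=1 are discarded, which is why the cost scales as 1/P(e). All conditional probabilities are the standard textbook values, used here for illustration.

```python
import random

random.seed(3)

# Sprinkler network: Cloudy -> {Sprinkler, Rain} -> WetGrass
def sample_joint():
    c = random.random() < 0.5
    s = random.random() < (0.1 if c else 0.5)
    r = random.random() < (0.8 if c else 0.2)
    p_wet = {(0, 0): 0.0, (0, 1): 0.9,
             (1, 0): 0.9, (1, 1): 0.99}[(int(s), int(r))]
    w = random.random() < p_wet
    return c, s, r, w

def rejection_sample(n=200000):
    """Estimate P(Rain=1 | WetGrass=1) by discarding samples that
    disagree with the evidence; cost grows as 1/P(evidence)."""
    kept = rain = 0
    for _ in range(n):
        _, _, r, w = sample_joint()
        if w:
            kept += 1
            rain += r
    return rain / kept

est = rejection_sample()
print(est)   # exact answer by enumeration is about 0.708
```

The quantum algorithm in the paper amplitude-amplifies the evidence-consistent branch, replacing the 1/P(e) factor with 1/sqrt(P(e)).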
Uncertainty and inference in deterministic and noisy chaos
NASA Astrophysics Data System (ADS)
Strelioff, Christopher Charles
We study dynamical systems which exhibit chaotic dynamics with a focus on sources of real and apparent randomness including sensitivity to perturbation, dynamical noise, measurement uncertainty and finite data samples for inference. This choice of topics is motivated by a desire to adapt established theoretical tools such as the Perron-Frobenius operator and symbolic dynamics to experimental data. First, we study prediction of chaotic time series when a perfect model is available but the initial condition is measured with uncertainty. A common approach for predicting future data given these circumstances is to apply the model despite the uncertainty. In systems with fold dynamics, we find prediction is improved over this strategy by recognizing this behavior. A systematic study of the logistic map demonstrates prediction can be extended three time steps using an approximation of the relevant Perron-Frobenius operator dynamics. Next, we show how to infer kth order Markov chains from finite data by applying Bayesian methods to both parameter estimation and model-order selection. In this pursuit, we connect inference to statistical mechanics through information-theoretic (type theory) techniques and establish a direct relationship between Bayesian evidence and the partition function. This allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Also, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes. Finally, we study a binary partition of time series data from the logistic map with additive noise, inferring optimal, effectively generating partitions and kth order Markov chain models. Here we adapt Bayesian inference, developed above, for applied symbolic dynamics. We show that reconciling Kolmogorov's maximum-entropy partition with the methods of Bayesian model selection requires the use of two separate
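The Bayesian inference of a k-th order Markov chain described above has a closed form: with a Dirichlet prior per context, the posterior-mean transition probabilities are just smoothed counts. The sketch below applies this to a binary partition of the logistic map at r = 4, where the symbol stream is statistically indistinguishable from fair coin flips; the starting point and sample size are illustrative.

```python
import numpy as np
from collections import defaultdict

def markov_posterior(seq, k=1, alpha=1.0):
    """Posterior-mean transition probabilities of a k-th order binary
    Markov chain under a Dirichlet(alpha, alpha) prior per context."""
    counts = defaultdict(lambda: np.full(2, alpha))   # prior pseudocounts
    for i in range(len(seq) - k):
        counts[tuple(seq[i:i + k])][seq[i + k]] += 1
    return {ctx: c / c.sum() for ctx, c in counts.items()}

# Binary symbolic series from the logistic map x -> 4x(1-x),
# partitioned at x = 0.5
x, seq = 0.3, []
for _ in range(5000):
    x = 4.0 * x * (1.0 - x)
    seq.append(int(x > 0.5))

post = markov_posterior(seq, k=1)
print(post)   # both contexts give transition probabilities near 1/2
```

Model-order selection then compares the Bayesian evidence of the k = 0, 1, 2, ... models rather than inspecting the fitted probabilities directly.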
Inference for survival prediction under the regularized Cox model.
Sinnott, Jennifer A; Cai, Tianxi
2016-10-01
When a moderate number of potential predictors are available and a survival model is fit with regularization to achieve variable selection, providing accurate inference on the predicted survival can be challenging. We investigate inference on the predicted survival estimated after fitting a Cox model under regularization guaranteeing the oracle property. We demonstrate that existing asymptotic formulas for the standard errors of the coefficients tend to underestimate the variability for some coefficients, while typical resampling such as the bootstrap tends to overestimate it; these approaches can both lead to inaccurate variance estimation for predicted survival functions. We propose a two-stage adaptation of a resampling approach that brings the estimated error in line with the truth. In stage 1, we estimate the coefficients in the observed data set and in [Formula: see text] resampled data sets, and allow the resampled coefficient estimates to vote on whether each coefficient should be 0. For those coefficients voted as zero, we set both the point and interval estimates to zero. In stage 2, to make inference about coefficients not voted as zero in stage 1, we refit the penalized model in the observed data and in the [Formula: see text] resampled data sets with only variables corresponding to those coefficients. We demonstrate that ensemble voting-based point and interval estimators of the coefficients perform well in finite samples, and prove that the point estimator maintains the oracle property. We extend this approach to derive inference procedures for survival functions and demonstrate that our proposed interval estimation procedures substantially outperform estimators based on asymptotic inference or standard bootstrap. We further illustrate our proposed procedures to predict breast cancer survival in a gene expression study. PMID:27107008
Inference is bliss: using evolutionary relationship to guide categorical inferences.
Novick, Laura R; Catley, Kefyn M; Funk, Daniel J
2011-01-01
Three experiments, adopting an evolutionary biology perspective, investigated subjects' inferences about living things. Subjects were told that different enzymes help regulate cell function in two taxa and asked which enzyme a third taxon most likely uses. Experiment 1 and its follow-up, with college students, used triads involving amphibians, reptiles, and mammals (reptiles and mammals are most closely related evolutionarily) and plants, fungi, and animals (fungi are more closely related to animals than to plants). Experiment 2, with 10th graders, also included triads involving mammals, birds, and snakes/crocodilians (birds and snakes/crocodilians are most closely related). Some subjects received cladograms (hierarchical diagrams) depicting the evolutionary relationships among the taxa. The effect of providing cladograms depended on students' background in biology. The results illuminate students' misconceptions concerning common taxa and constraints on their willingness to override faulty knowledge when given appropriate evolutionary evidence. Implications for introducing tree thinking into biology curricula are discussed. PMID:21463358
Larsson, Jonas; Solomon, Samuel G; Kohn, Adam
2016-07-01
Adaptation has been widely used in functional magnetic resonance imaging (fMRI) studies to infer neuronal response properties in human cortex. fMRI adaptation has been criticized because of the complex relationship between fMRI adaptation effects and the multiple neuronal effects that could underlie them. Many of the longstanding concerns about fMRI adaptation have received empirical support from neurophysiological studies over the last decade. We review these studies here, and also consider neuroimaging studies that have investigated how fMRI adaptation effects are influenced by high-level perceptual processes. The results of these studies further emphasize the need to interpret fMRI adaptation results with caution, but they also provide helpful guidance for more accurate interpretation and better experimental design. In addition, we argue that rather than being used as a proxy for measurements of neuronal stimulus selectivity, fMRI adaptation may be most useful for studying population-level adaptation effects across cortical processing hierarchies.
Barrett, Harrison H.; Furenlid, Lars R.; Freed, Melanie; Hesterman, Jacob Y.; Kupinski, Matthew A.; Clarkson, Eric; Whitaker, Meredith K.
2008-01-01
Adaptive imaging systems alter their data-acquisition configuration or protocol in response to the image information received. An adaptive pinhole single-photon emission computed tomography (SPECT) system might acquire an initial scout image to obtain preliminary information about the radiotracer distribution and then adjust the configuration or sizes of the pinholes, the magnifications, or the projection angles in order to improve performance. This paper briefly describes two small-animal SPECT systems that allow this flexibility and then presents a framework for evaluating adaptive systems in general, and adaptive SPECT systems in particular. The evaluation is in terms of the performance of linear observers on detection or estimation tasks. Expressions are derived for the ideal linear (Hotelling) observer and the ideal linear (Wiener) estimator with adaptive imaging. Detailed expressions for the performance figures of merit are given, and possible adaptation rules are discussed. PMID:18541485
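The ideal linear (Hotelling) observer mentioned above reduces to a template w = S^(-1) * dg, where dg is the mean difference between signal-present and signal-absent images and S the average covariance; its detection performance is the SNR sqrt(dg' S^(-1) dg). A toy numerical check with white Gaussian noise (dimensions, signal shape, and noise level all invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy detection task: signal-absent vs signal-present image vectors
n_pix, n_img = 16, 20000
signal = np.zeros(n_pix); signal[6:10] = 0.8            # known signal profile
g0 = rng.normal(10.0, 1.0, (n_img, n_pix))              # signal-absent images
g1 = rng.normal(10.0, 1.0, (n_img, n_pix)) + signal     # signal-present images

# Hotelling observer: template w = S^{-1} (mean difference)
dg = g1.mean(0) - g0.mean(0)
S = 0.5 * (np.cov(g0, rowvar=False) + np.cov(g1, rowvar=False))
w = np.linalg.solve(S, dg)
snr = np.sqrt(dg @ w)                                   # detection figure of merit
print(snr)   # for unit white noise this approaches |signal| = 1.6
```

In the adaptive-SPECT setting, the same figure of merit is recomputed for each candidate configuration (pinhole sizes, magnifications, angles), and the adaptation rule picks the configuration that maximizes it.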
Structural inference for uncertain networks
NASA Astrophysics Data System (ADS)
Martin, Travis; Ball, Brian; Newman, M. E. J.
2016-01-01
In the study of networked systems such as biological, technological, and social networks the available data are often uncertain. Rather than knowing the structure of a network exactly, we know the connections between nodes only with a certain probability. In this paper we develop methods for the analysis of such uncertain data, focusing particularly on the problem of community detection. We give a principled maximum-likelihood method for inferring community structure and demonstrate how the results can be used to make improved estimates of the true structure of the network. Using computer-generated benchmark networks we demonstrate that our methods are able to reconstruct known communities more accurately than previous approaches based on data thresholding. We also give an example application to the detection of communities in a protein-protein interaction network.
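The likelihood idea can be sketched for a tiny network where only edge probabilities Q_ij are known: under a two-block model, each node pair contributes its expected log-likelihood, so a partition can be scored without ever thresholding Q into binary edges. The matrix and block rates below are hypothetical.

```python
import numpy as np

# Illustrative 6-node uncertain network: entries are edge probabilities,
# not binary edges (two planted communities {0,1,2} and {3,4,5})
Q = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.2, 0.1],
    [0.9, 0.0, 0.9, 0.1, 0.1, 0.2],
    [0.8, 0.9, 0.0, 0.2, 0.1, 0.1],
    [0.1, 0.1, 0.2, 0.0, 0.9, 0.8],
    [0.2, 0.1, 0.1, 0.9, 0.0, 0.9],
    [0.1, 0.2, 0.1, 0.8, 0.9, 0.0],
])

def log_lik(groups, p_in=0.85, p_out=0.15):
    """Expected log-likelihood of a block model given uncertain edges:
    each pair contributes Q_ij*log(p) + (1 - Q_ij)*log(1 - p)."""
    n, ll = len(groups), 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p = p_in if groups[i] == groups[j] else p_out
            ll += Q[i, j] * np.log(p) + (1 - Q[i, j]) * np.log(1 - p)
    return ll

good = log_lik([0, 0, 0, 1, 1, 1])
bad = log_lik([0, 1, 0, 1, 0, 1])
print(good > bad)   # the planted partition scores higher
```

A full method would maximize this likelihood over partitions (and over p_in, p_out, typically via EM) rather than comparing two hand-picked assignments.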
Transdimensional inference in the geosciences.
Sambridge, M; Bodin, T; Gallagher, K; Tkalcic, H
2013-02-13
Seismologists construct images of the Earth's interior structure using observations, derived from seismograms, collected at the surface. A common approach to such inverse problems is to build a single 'best' Earth model, in some sense. This is despite the fact that the observations by themselves often do not require, or even allow, a single best-fit Earth model to exist. Interpretation of optimal models can be fraught with difficulties, particularly when formal uncertainty estimates become heavily dependent on the regularization imposed. Similar issues occur across the physical sciences with model construction in ill-posed problems. An alternative approach is to embrace the non-uniqueness directly and employ an inference process based on parameter space sampling. Instead of seeking a best model within an optimization framework, one seeks an ensemble of solutions and derives properties of that ensemble for inspection. While this idea has itself been employed for more than 30 years, it is now receiving increasing attention in the geosciences. Recently, it has been shown that transdimensional and hierarchical sampling methods have some considerable benefits for problems involving multiple parameter types, uncertain data errors and/or uncertain model parametrizations, as are common in seismology. Rather than being forced to make decisions on parametrization, the level of data noise and the weights between data types in advance, as is often the case in an optimization framework, the choice can be informed by the data themselves. Despite the relatively high computational burden involved, the number of areas where sampling methods are now feasible is growing rapidly. The intention of this article is to introduce concepts of transdimensional inference to a general readership and illustrate with particular seismological examples. A growing body of references provide necessary detail. PMID:23277604
ERIC Educational Resources Information Center
Harrell, William
1999-01-01
Provides information on various adaptive technology resources available to people with disabilities. (Contains 19 references, an annotated list of 129 websites, and 12 additional print resources.) (JOW)
Anstis, Stuart
2013-01-01
It is known that adaptation to a disk that flickers between black and white at 3-8 Hz on a gray surround renders invisible a congruent gray test disk viewed afterwards. This is contrast adaptation. We now report that adapting simply to the flickering circular outline of the disk can have the same effect. We call this "contour adaptation." This adaptation does not transfer interocularly, and apparently applies only to luminance, not color. One can adapt selectively to only some of the contours in a display, making only these contours temporarily invisible. For instance, a plaid comprises a vertical grating superimposed on a horizontal grating. If one first adapts to appropriate flickering vertical lines, the vertical component of the plaid disappears and it looks like a horizontal grating. Also, we simulated a Cornsweet (1970) edge, and we selectively adapted out the subjective and objective contours of a Kanizsa (1976) subjective square. By temporarily removing edges, contour adaptation offers a new technique to study the role of visual edges, and it demonstrates how brightness information is concentrated in edges and propagates from them as it fills in surfaces.
Bayesian Nonparametric Inference – Why and How
Müller, Peter; Mitra, Riten
2013-01-01
We review inference under models with nonparametric Bayesian (BNP) priors. The discussion follows a set of examples for some common inference problems. The examples are chosen to highlight problems that are challenging for standard parametric inference. We discuss inference for density estimation, clustering, regression and for mixed effects models with random effects distributions. While we focus on arguing for the need for the flexibility of BNP models, we also review some of the more commonly used BNP models, thus hopefully answering a bit of both questions, why and how to use BNP. PMID:24368932
Generic Comparison of Protein Inference Engines
Claassen, Manfred; Reiter, Lukas; Hengartner, Michael O.; Buhmann, Joachim M.; Aebersold, Ruedi
2012-01-01
Protein identifications, instead of peptide-spectrum matches, constitute the biologically relevant result of shotgun proteomics studies. How to appropriately infer and report protein identifications has triggered a still ongoing debate. This debate has so far suffered from the lack of appropriate performance measures that allow us to objectively assess protein inference approaches. This study describes an intuitive, generic and yet formal performance measure and demonstrates how it enables experimentalists to select an optimal protein inference strategy for a given collection of fragment ion spectra. We applied the performance measure to systematically explore the benefit of excluding possibly unreliable protein identifications, such as single-hit wonders. Therefore, we defined a family of protein inference engines by extending a simple inference engine by thousands of pruning variants, each excluding a different specified set of possibly unreliable identifications. We benchmarked these protein inference engines on several data sets representing different proteomes and mass spectrometry platforms. Optimally performing inference engines retained all high confidence spectral evidence, without posterior exclusion of any type of protein identifications. Despite the diversity of studied data sets consistently supporting this rule, other data sets might behave differently. In order to ensure maximal reliable proteome coverage for data sets arising in other studies we advocate abstaining from rigid protein inference rules, such as exclusion of single-hit wonders, and instead consider several protein inference approaches and assess these with respect to the presented performance measure in the specific application context. PMID:22057310
Learning Probabilistic Inference through Spike-Timing-Dependent Plasticity
Pecevski, Dejan
2016-01-01
Numerous experimental data show that the brain is able to extract information from complex, uncertain, and often ambiguous experiences. Furthermore, it can use such learnt information for decision making through probabilistic inference. Several models have been proposed that aim at explaining how probabilistic inference could be performed by networks of neurons in the brain. We propose here a model that can also explain how such a neural network could acquire the necessary information from examples. We show that spike-timing-dependent plasticity in combination with intrinsic plasticity generates, in ensembles of pyramidal cells with lateral inhibition, a fundamental building block: probabilistic associations between neurons that represent, through their firing, the current values of random variables. Furthermore, by combining such adaptive network motifs in a recursive manner the resulting network is enabled to extract statistical information from complex input streams, and to build an internal model for the distribution p* that generates the examples it receives. This holds even if p* contains higher-order moments. The analysis of this learning process is supported by a rigorous theoretical foundation. Furthermore, we show that the network can use the learnt internal model immediately for prediction, decision making, and other types of probabilistic inference. PMID:27419214
NASA Astrophysics Data System (ADS)
Kinzig, Ann P.
2015-03-01
This paper is intended as a brief introduction to climate adaptation in a conference devoted otherwise to the physics of sustainable energy. Whereas mitigation involves measures to reduce the probability of a potential event, such as climate change, adaptation refers to actions that lessen the impact of climate change. Mitigation and adaptation differ in other ways as well. Adaptation does not necessarily have to be implemented immediately to be effective; it only needs to be in place before the threat arrives. Also, adaptation does not necessarily require global, coordinated action; many effective adaptation actions can be local. Some urban communities, because of land-use change and the urban heat-island effect, currently face changes similar to some expected under climate change, such as changes in water availability, heat-related morbidity, or changes in disease patterns. Concern over those impacts might motivate the implementation of measures that would also help in climate adaptation, despite skepticism among some policy makers about anthropogenic global warming. Studies of ancient civilizations in the southwestern US lend some insight into factors that may or may not be important to successful adaptation.
Role of Utility and Inference in the Evolution of Functional Information
Sharov, Alexei A.
2009-01-01
Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are
Evaluating Content Alignment in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Wise, Steven L.; Kingsbury, G. Gage; Webb, Norman L.
2015-01-01
The alignment between a test and the content domain it measures represents key evidence for the validation of test score inferences. Although procedures have been developed for evaluating the content alignment of linear tests, these procedures are not readily applicable to computerized adaptive tests (CATs), which require large item pools and do…
Inferring the age of a fixed beneficial allele.
Ormond, Louise; Foll, Matthieu; Ewing, Gregory B; Pfeifer, Susanne P; Jensen, Jeffrey D
2016-01-01
Estimating the age and strength of beneficial alleles is central to understanding how adaptation proceeds in response to changing environmental conditions. Several haplotype-based estimators exist for inferring the age of segregating beneficial mutations. Here, we develop an approximate Bayesian-based approach that rather estimates these parameters for fixed beneficial mutations in single populations. We integrate a range of existing diversity, site frequency spectrum, haplotype- and linkage disequilibrium-based summary statistics. We show that for strong selective sweeps on de novo mutations the method can estimate allele age and selection strength even in nonequilibrium demographic scenarios. We extend our approach to models of selection on standing variation, and co-infer the frequency at which selection began to act upon the mutation. Finally, we apply our method to estimate the age and selection strength of a previously identified mutation underpinning cryptic colour adaptation in a wild deer mouse population, and compare our findings with previously published estimates as well as with geological data pertaining to the presumed shift in selective pressure. PMID:26576754
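The ABC rejection scheme underlying the abstract's approach can be sketched in a stripped-down form: simulate under parameters drawn from the prior, and keep draws whose summary statistic falls near the observed one. The deterministic sweep model, prior range, and tolerance below are invented for the illustration, not taken from the paper:

```python
import random

random.seed(1)

# Deterministic sweep model: allele frequency under selection s follows
# p' = p(1+s) / (1 + p*s); the summary statistic is the number of
# generations to near-fixation. All values are illustrative.
def gens_to_fixation(s, p0=0.01):
    p, t = p0, 0
    while p < 0.99:
        p = p * (1 + s) / (1 + p * s)
        t += 1
    return t

observed = gens_to_fixation(0.05)   # stand-in for a data-derived summary

# ABC rejection: sample s from a uniform prior, keep draws whose
# simulated summary lands within a tolerance of the observed one.
accepted = [s for s in (random.uniform(0.01, 0.2) for _ in range(20000))
            if abs(gens_to_fixation(s) - observed) <= 5]

post_mean = sum(accepted) / len(accepted)
```

The accepted draws approximate the posterior over the selection coefficient; the real method integrates several summary statistics (diversity, site frequency spectrum, haplotype and LD based) rather than this single one.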
The Impact of Disablers on Predictive Inference
ERIC Educational Resources Information Center
Cummins, Denise Dellarosa
2014-01-01
People consider alternative causes when deciding whether a cause is responsible for an effect (diagnostic inference) but appear to neglect them when deciding whether an effect will occur (predictive inference). Five experiments were conducted to test a 2-part explanation of this phenomenon: namely, (a) that people interpret standard predictive…
Local and Global Thinking in Statistical Inference
ERIC Educational Resources Information Center
Pratt, Dave; Johnston-Wilder, Peter; Ainley, Janet; Mason, John
2008-01-01
In this reflective paper, we explore students' local and global thinking about informal statistical inference through our observations of 10- to 11-year-olds, challenged to infer the unknown configuration of a virtual die, but able to use the die to generate as much data as they felt necessary. We report how they tended to focus on local changes…
Causal inference in economics and marketing.
Varian, Hal R
2016-07-01
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual: a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
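The counterfactual logic can be made concrete with a minimal sketch: fit a predictive model on untreated units only, predict what treated units would have done without treatment, and read the effect off the gap. The data here are synthetic with a known lift of 10, purely for illustration:

```python
import random

random.seed(0)

# Synthetic example: sales grow linearly with ad spend x; a promotion
# (the treatment) adds a true lift of 10 units. All values are made up.
control = [(x, 2.0 * x + 5 + random.gauss(0, 1)) for x in range(50)]
treated = [(x, 2.0 * x + 5 + 10 + random.gauss(0, 1)) for x in range(50)]

# Fit y = a*x + b on the untreated units only (ordinary least squares).
n = len(control)
sx = sum(x for x, _ in control)
sy = sum(y for _, y in control)
sxx = sum(x * x for x, _ in control)
sxy = sum(x * y for x, y in control)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Counterfactual: what each treated unit would have sold untreated;
# the average gap estimates the treatment effect.
lift = sum(y - (a * x + b) for x, y in treated) / len(treated)
```

In practice the counterfactual model would be a flexible machine learning predictor rather than a one-variable regression, but the estimand is the same.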
Genetic Network Inference Using Hierarchical Structure
Kimura, Shuhei; Tokuhisa, Masato; Okada-Hatakeyama, Mariko
2016-01-01
Many methods for inferring genetic networks have been proposed, but the regulations they infer often include false-positives. Several researchers have attempted to reduce these erroneous regulations by proposing the use of a priori knowledge about the properties of genetic networks such as their sparseness, scale-free structure, and so on. This study focuses on another piece of a priori knowledge, namely, that biochemical networks exhibit hierarchical structures. Based on this idea, we propose an inference approach that uses the hierarchical structure in a target genetic network. To obtain a reasonable hierarchical structure, the first step of the proposed approach is to infer multiple genetic networks from the observed gene expression data. We take this step using an existing method that combines a genetic network inference method with a bootstrap method. The next step is to extract a hierarchical structure from the inferred networks that is consistent with most of the networks. Third, we use the hierarchical structure obtained to assign confidence values to all candidate regulations. Numerical experiments are also performed to demonstrate the effectiveness of using the hierarchical structure in the genetic network inference. The improvement accomplished by the use of the hierarchical structure is small. However, the hierarchical structure could be used to improve the performances of many existing inference methods. PMID:26941653
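The bootstrap step the abstract builds on (infer many networks from resampled data, then score regulations by how often they recur) can be sketched as follows. A simple correlation threshold stands in for a real network inference method, and the toy data are invented:

```python
import random

random.seed(3)

# Toy expression data: gene 1 tracks gene 0; gene 2 is independent noise.
g0 = [random.gauss(0, 1) for _ in range(40)]
g1 = [x + random.gauss(0, 0.3) for x in g0]
g2 = [random.gauss(0, 1) for _ in range(40)]
genes = [g0, g1, g2]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def infer_edges(idx):
    # Stand-in inference: an edge wherever |correlation| > 0.6 on the
    # resampled columns idx.
    return {(i, j) for i in range(3) for j in range(i + 1, 3)
            if abs(corr([genes[i][k] for k in idx],
                        [genes[j][k] for k in idx])) > 0.6}

# Bootstrap: re-infer the network on resampled data; each edge's
# confidence is the fraction of bootstrap networks containing it.
counts = {}
B = 200
for _ in range(B):
    idx = [random.randrange(40) for _ in range(40)]
    for e in infer_edges(idx):
        counts[e] = counts.get(e, 0) + 1

confidence = {e: c / B for e, c in counts.items()}
```

The proposed method goes further, extracting a hierarchical structure consistent with most of the bootstrap networks before assigning confidence values, but the resample-and-count core is the same.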
The Reasoning behind Informal Statistical Inference
ERIC Educational Resources Information Center
Makar, Katie; Bakker, Arthur; Ben-Zvi, Dani
2011-01-01
Informal statistical inference (ISI) has been a frequent focus of recent research in statistics education. Considering the role that context plays in developing ISI calls into question the need to be more explicit about the reasoning that underpins ISI. This paper uses educational literature on informal statistical inference and philosophical…
Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms. PMID:25689102
Causal Inference in Public Health
Glass, Thomas A.; Goodman, Steven N.; Hernán, Miguel A.; Samet, Jonathan M.
2014-01-01
Causal inference has a central role in public health; the determination that an association is causal indicates the possibility for intervention. We review and comment on the long-used guidelines for interpreting evidence as supporting a causal association and contrast them with the potential outcomes framework that encourages thinking in terms of causes that are interventions. We argue that in public health this framework is more suitable, providing an estimate of an action’s consequences rather than the less precise notion of a risk factor’s causal effect. A variety of modern statistical methods adopt this approach. When an intervention cannot be specified, causal relations can still exist, but how to intervene to change the outcome will be unclear. In application, the often-complex structure of causal processes needs to be acknowledged and appropriate data collected to study them. These newer approaches need to be brought to bear on the increasingly complex public health challenges of our globalized world. PMID:23297653
Inference-based constraint satisfaction supports explanation
Sqalli, M.H.; Freuder, E.C.
1996-12-31
Constraint satisfaction problems are typically solved using search, augmented by general purpose consistency inference methods. This paper proposes a paradigm shift in which inference is used as the primary problem solving method, and attention is focused on special purpose, domain specific inference methods. While we expect this approach to have computational advantages, we emphasize here the advantages of a solution method that is more congenial to human thought processes. Specifically we use inference-based constraint satisfaction to support explanations of the problem solving behavior that are considerably more meaningful than a trace of a search process would be. Logic puzzles are used as a case study. Inference-based constraint satisfaction proves surprisingly powerful and easily extensible in this domain. Problems drawn from commercial logic puzzle booklets are used for evaluation. Explanations are produced that compare well with the explanations provided by these booklets.
Adaptations and Access to Assessment of Common Core Content
ERIC Educational Resources Information Center
Kettler, Ryan J.
2015-01-01
This chapter introduces theory that undergirds the role of testing adaptations in assessment, provides examples of item modifications and testing accommodations, reviews research relevant to each, and introduces a new paradigm that incorporates opportunity to learn (OTL), academic enablers, testing adaptations, and inferences that can be made from…
ERIC Educational Resources Information Center
Exceptional Parent, 1987
1987-01-01
Suggestions are presented for helping disabled individuals learn to use or adapt toothbrushes for proper dental care. A directory lists dental health instructional materials available from various organizations. (CB)
Inferring genetic networks from microarray data.
May, Elebeoba Eni; Davidson, George S.; Martin, Shawn Bryan; Werner-Washburne, Margaret C.; Faulon, Jean-Loup Michel
2004-06-01
In theory, it should be possible to infer realistic genetic networks from time series microarray data. In practice, however, network discovery has proved problematic. The three major challenges are: (1) inferring the network; (2) estimating the stability of the inferred network; and (3) making the network visually accessible to the user. Here we describe a method, tested on publicly available time series microarray data, which addresses these concerns. The inference of genetic networks from genome-wide experimental data is an important biological problem which has received much attention. Approaches to this problem have typically included application of clustering algorithms [6]; the use of Boolean networks [12, 1, 10]; the use of Bayesian networks [8, 11]; and the use of continuous models [21, 14, 19]. Overviews of the problem and general approaches to network inference can be found in [4, 3]. Our approach to network inference is similar to earlier methods in that we use both clustering and Boolean network inference. However, we have attempted to extend the process to better serve the end-user, the biologist. In particular, we have incorporated a system to assess the reliability of our network, and we have developed tools which allow interactive visualization of the proposed network.
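Boolean network inference, one of the components the abstract mentions, amounts to searching for update rules consistent with the observed state transitions. A deliberately tiny sketch, with made-up data and a rule space restricted to single-regulator copy/NOT rules:

```python
# Toy time series of three binary genes at consecutive time steps
# (invented so that the rules below are recoverable).
series = [(0, 0, 1), (0, 1, 1), (1, 1, 0), (1, 0, 0), (0, 0, 1), (0, 1, 1)]

# For each target gene, search the (tiny) space of single-regulator
# Boolean rules for those consistent with every observed transition.
def infer(target):
    consistent = []
    for g in range(3):
        for name, f in (("copy", lambda v: v), ("not", lambda v: 1 - v)):
            if all(nxt[target] == f(cur[g])
                   for cur, nxt in zip(series, series[1:])):
                consistent.append((name, g))
    return consistent
```

Note that `infer(1)` returns two equally consistent rules: even in this toy case the data underdetermine the network, which is exactly why the stability assessment the abstract describes matters.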
Statistical Physics of High Dimensional Inference
NASA Astrophysics Data System (ADS)
Advani, Madhu; Ganguli, Surya
To model modern large-scale datasets, we need efficient algorithms to infer a set of P unknown model parameters from N noisy measurements. What are fundamental limits on the accuracy of parameter inference, given limited measurements, signal-to-noise ratios, prior information, and computational tractability requirements? How can we combine prior information with measurements to achieve these limits? Classical statistics gives incisive answers to these questions as the measurement density α = N/P → ∞. However, modern high-dimensional inference problems, in fields ranging from bio-informatics to economics, occur at finite α. We formulate and analyze high-dimensional inference analytically by applying the replica and cavity methods of statistical physics where data serves as quenched disorder and inferred parameters play the role of thermal degrees of freedom. Our analysis reveals that widely cherished Bayesian inference algorithms such as maximum likelihood and maximum a posteriori are suboptimal in the modern setting, and yields new tractable, optimal algorithms to replace them as well as novel bounds on the achievable accuracy of a large class of high-dimensional inference algorithms. Thanks to Stanford Graduate Fellowship and Mind Brain Computation IGERT grant for support.
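The suboptimality of maximum likelihood at finite α can be seen already in the simplest case, α = N/P = 1 with a Gaussian prior, where the Bayes-optimal estimator is plain shrinkage. The numbers below are a synthetic illustration of that point, not the replica analysis itself:

```python
import random

random.seed(4)

# P parameters, one noisy measurement each: alpha = N/P = 1, far from
# the classical alpha -> infinity regime.
P, sigma = 500, 1.0
w_true = [random.gauss(0, 1) for _ in range(P)]        # prior N(0, 1)
y = [w + random.gauss(0, sigma) for w in w_true]

w_ml = y[:]                                  # maximum likelihood estimate
shrink = 1.0 / (1.0 + sigma ** 2)            # Bayes-optimal shrinkage
w_bayes = [shrink * yi for yi in y]          # posterior mean under prior

def mse(w):
    return sum((a - b) ** 2 for a, b in zip(w, w_true)) / P
```

Here `mse(w_ml)` is about σ² = 1 while `mse(w_bayes)` is about σ²/(1+σ²) = 0.5: at fixed measurement density, trusting the data alone leaves accuracy on the table.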
On Bayesian Inductive Inference & Predictive Estimation
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Smelyanskiy, Vadim
2004-01-01
We investigate Bayesian inference and the Principle of Maximum Entropy (PME) as methods for doing inference under uncertainty. This investigation is primarily through concrete examples that have been previously investigated in the literature. We find that it is possible to do Bayesian inference and PME inference using the same information, despite claims to the contrary, but that the results are not directly comparable. This is because Bayesian inference yields a probability density function (pdf) over the unknown model parameters, whereas PME yields point estimates. If mean estimates are extracted from the Bayesian pdfs, the resulting parameter estimates can differ radically from the PME values and also from the Maximum Likelihood values. We conclude that these differences are due to the Bayesian inference not assuming anything beyond the given prior probabilities and the data, whereas PME implicitly assumes that the given constraints are the only constraints that are operating. Since this assumption can be wrong, PME values may have to be revised when subsequent data shows evidence for more constraints. The entropy concentration previously "proved" by E. T. Jaynes is shown to be in error. Further, we show that PME is a generalized form of independence assumption, and so can be a very powerful method of inference when the variables being investigated are largely independent of each other.
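The point-estimate character of PME can be made concrete with the classic die example (observed mean 4.5 instead of the fair 3.5): PME returns the single exponential-family distribution satisfying the constraint, found here by bisection on the Lagrange multiplier. This is our sketch of the standard construction, not code from the paper:

```python
import math

# PME for a die whose only known property is a mean roll of 4.5.
# The maximum-entropy solution has p_i proportional to exp(lam * i);
# find lam by bisection so that the mean matches the constraint.
target = 4.5

def mean_for(lam):
    w = [math.exp(lam * i) for i in range(1, 7)]
    z = sum(w)
    return sum(i * wi for i, wi in zip(range(1, 7), w)) / z

lo, hi = -5.0, 5.0
for _ in range(60):
    mid = (lo + hi) / 2
    if mean_for(mid) < target:   # mean is increasing in lam
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(lam * i) for i in range(1, 7)]
z = sum(w)
p = [wi / z for wi in w]         # the PME point estimate
```

A Bayesian treatment of the same information would instead yield a posterior density over possible dice, whose mean estimates need not coincide with `p`, which is the discrepancy the abstract examines.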
Linguistic Markers of Inference Generation While Reading.
Clinton, Virginia; Carlson, Sarah E; Seipel, Ben
2016-06-01
Words can be informative linguistic markers of psychological constructs. The purpose of this study is to examine associations between word use and the process of making meaningful connections to a text while reading (i.e., inference generation). To achieve this purpose, think-aloud data from third-fifth grade students ([Formula: see text]) reading narrative texts were hand-coded for inferences. These data were also processed with a computer text analysis tool, Linguistic Inquiry and Word Count, for percentages of word use in the following categories: cognitive mechanism words, nonfluencies, and nine types of function words. Findings indicate that cognitive mechanisms were an independent, positive predictor of connections to background knowledge (i.e., elaborative inference generation) and nonfluencies were an independent, negative predictor of connections within the text (i.e., bridging inference generation). Function words did not provide unique variance towards predicting inference generation. These findings are discussed in the context of a cognitive reflection model and the differences between bridging and elaborative inference generation. In addition, potential practical implications for intelligent tutoring systems and computer-based methods of inference identification are presented.
Multitask Diffusion Adaptation Over Networks
NASA Astrophysics Data System (ADS)
Chen, Jie; Richard, Cedric; Sayed, Ali H.
2014-08-01
Adaptive networks are suitable for decentralized inference tasks, e.g., to monitor complex natural phenomena. Recent research works have intensively studied distributed optimization problems in the case where the nodes have to estimate a single optimum parameter vector collaboratively. However, there are many important applications that are multitask-oriented in the sense that there are multiple optimum parameter vectors to be inferred simultaneously, in a collaborative manner, over the area covered by the network. In this paper, we employ diffusion strategies to develop distributed algorithms that address multitask problems by minimizing an appropriate mean-square error criterion with $\ell_2$-regularization. The stability and convergence of the algorithm in the mean and in the mean-square sense are analyzed. Simulations are conducted to verify the theoretical findings, and to illustrate how the distributed strategy can be used in several useful applications related to spectral sensing, target localization, and hyperspectral data unmixing.
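The diffusion strategy underlying such algorithms can be sketched in its simplest adapt-then-combine (ATC) form. This sketch is single-task (all nodes share one optimum, with no $\ell_2$ multitask coupling), and the topology, step size, and signal statistics are invented:

```python
import random

random.seed(5)

# Three nodes estimate the same 2-tap parameter vector w_true from
# noisy local data streams, using adapt-then-combine diffusion LMS.
w_true = [1.0, -0.5]
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}  # line topology
w = {k: [0.0, 0.0] for k in range(3)}
mu = 0.05                                          # LMS step size

for _ in range(2000):
    psi = {}
    for k in range(3):                   # adapt: local LMS step
        u = [random.gauss(0, 1), random.gauss(0, 1)]
        d = sum(a * b for a, b in zip(u, w_true)) + random.gauss(0, 0.1)
        err = d - sum(a * b for a, b in zip(u, w[k]))
        psi[k] = [wi + mu * err * ui for wi, ui in zip(w[k], u)]
    for k in range(3):                   # combine: average over neighbors
        nb = neighbors[k]
        w[k] = [sum(psi[j][i] for j in nb) / len(nb) for i in range(2)]
```

In the multitask setting of the paper each node has its own optimum vector and the combine/regularization step instead softly couples neighboring estimates rather than averaging them outright.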
Inference and the introductory statistics course
NASA Astrophysics Data System (ADS)
Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross
2011-10-01
This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its hypothetical probabilistic reasoning process is examined in some depth. We argue that the revolution in the teaching of inference must begin. We also discuss some perplexing issues, problematic areas and some new insights into language conundrums associated with introducing the logic of inference through randomization methods.
Degradation monitoring using probabilistic inference
NASA Astrophysics Data System (ADS)
Alpay, Bulent
In order to increase safety and improve economy and performance in a nuclear power plant (NPP), the source and extent of component degradations should be identified before failures and breakdowns occur. It is also crucial for the next generation of NPPs, which are designed to have a long core life and high fuel burnup to have a degradation monitoring system in order to keep the reactor in a safe state, to meet the designed reactor core lifetime and to optimize the scheduled maintenance. Model-based methods are based on determining the inconsistencies between the actual and expected behavior of the plant, and use these inconsistencies for detection and diagnostics of degradations. By defining degradation as a random abrupt change from the nominal to a constant degraded state of a component, we employed nonlinear filtering techniques based on state/parameter estimation. We utilized a Bayesian recursive estimation formulation in the sequential probabilistic inference framework and constructed a hidden Markov model to represent a general physical system. By addressing the problem of a filter's inability to estimate an abrupt change, which is called the oblivious filter problem in nonlinear extensions of Kalman filtering, and the sample impoverishment problem in particle filtering, we developed techniques to modify filtering algorithms by utilizing additional data sources to improve the filter's response to this problem. We utilized a reliability degradation database that can be constructed from plant specific operational experience and test and maintenance reports to generate proposal densities for probable degradation modes. These are used in a multiple hypothesis testing algorithm. We then test samples drawn from these proposal densities with the particle filtering estimates based on the Bayesian recursive estimation formulation with the Metropolis Hastings algorithm, which is a well-known Markov chain Monte Carlo method (MCMC). This multiple hypothesis testing
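The modification described, helping a particle filter follow an abrupt nominal-to-degraded transition by drawing some proposals from a degradation prior, can be sketched as a bootstrap filter with a jump-mixture transition kernel. The degradation prior, noise levels, and jump probability below are invented toy values:

```python
import math
import random

random.seed(6)

# A component parameter sits at its nominal value 0.0 and degrades
# abruptly to 1.0 at t = 50; we observe it through measurement noise.
T, N = 100, 500
truth = [0.0 if t < 50 else 1.0 for t in range(T)]
obs = [x + random.gauss(0, 0.3) for x in truth]

# Bootstrap particle filter whose transition kernel mixes in a small
# probability of an abrupt jump drawn from a degradation prior, so the
# ensemble can follow the change (countering sample impoverishment).
parts = [0.0] * N
est = []
for z in obs:
    parts = [random.uniform(0.0, 1.5) if random.random() < 0.05
             else p + random.gauss(0, 0.02) for p in parts]
    wts = [math.exp(-(z - p) ** 2 / (2 * 0.3 ** 2)) for p in parts]
    total = sum(wts)
    wts = [wt / total for wt in wts]
    est.append(sum(wt * p for wt, p in zip(wts, parts)))
    parts = random.choices(parts, weights=wts, k=N)   # resample
```

Without the 5% jump proposals, all particles would stay clustered at the nominal value and the filter would be oblivious to the abrupt change; the described method goes further by drawing such proposals from a plant-specific reliability degradation database.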
Automated interpretation of LIBS spectra using a fuzzy logic inference engine.
Hatch, Jeremy J; McJunkin, Timothy R; Hanson, Cynthia; Scott, Jill R
2012-03-01
Automated interpretation of laser-induced breakdown spectroscopy (LIBS) data is necessary due to the plethora of spectra that can be acquired in a relatively short time. However, traditional chemometric and artificial neural network methods that have been employed are not always transparent to a skilled user. A fuzzy logic approach to data interpretation has now been adapted to LIBS spectral interpretation. Fuzzy logic inference rules were developed using methodology that includes data mining methods and operator expertise to differentiate between various copper-containing and stainless steel alloys as well as unknowns. Results using the fuzzy logic inference engine indicate a high degree of confidence in spectral assignment.
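The core of such a fuzzy inference engine is a set of membership functions over spectral features plus rules whose firing strengths score each class. A minimal sketch; the feature, breakpoints, and alloy classes are invented for illustration, not taken from the paper:

```python
# Triangular memberships over a normalized emission-line intensity,
# with two illustrative rules mapping them to alloy classes.
def tri(x, a, b, c):
    """Triangular membership: 0 outside (a, c), peaking at 1 at x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(cu_line):
    mu_weak = tri(cu_line, 0.0, 0.2, 0.5)    # "Cu line is weak"
    mu_strong = tri(cu_line, 0.3, 0.8, 1.0)  # "Cu line is strong"
    # Rule firing strengths double as per-class confidence scores,
    # giving the transparent degree-of-confidence output the paper
    # contrasts with opaque chemometric methods.
    scores = {"stainless steel": mu_weak, "copper alloy": mu_strong}
    return max(scores, key=scores.get), scores
```

Because the memberships and rules are stated explicitly, a skilled user can read off why a spectrum was assigned to a class, unlike with a trained neural network.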
NASA Technical Reports Server (NTRS)
2005-01-01
The goal of this research is to develop and demonstrate innovative adaptive seal technologies that can lead to dramatic improvements in engine performance, life, range, and emissions, and enhance operability for next generation gas turbine engines. This work is concentrated on the development of self-adaptive clearance control systems for gas turbine engines. Researchers have targeted the high-pressure turbine (HPT) blade tip seal location for following reasons: Current active clearance control (ACC) systems (e.g., thermal case-cooling schemes) cannot respond to blade tip clearance changes due to mechanical, thermal, and aerodynamic loads. As such they are prone to wear due to the required tight running clearances during operation. Blade tip seal wear (increased clearances) reduces engine efficiency, performance, and service life. Adaptive sealing technology research has inherent impact on all envisioned 21st century propulsion systems (e.g. distributed vectored, hybrid and electric drive propulsion concepts).
48 CFR 1631.205-81 - Inferred reasonableness.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 6 2011-10-01 2011-10-01 false Inferred reasonableness... PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 1631.205-81 Inferred reasonableness. If the... the subcontract's costs shall be inferred....
48 CFR 1631.205-81 - Inferred reasonableness.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Inferred reasonableness... PRINCIPLES AND PROCEDURES Contracts With Commercial Organizations 1631.205-81 Inferred reasonableness. If the... the subcontract's costs shall be inferred....
An inference engine for embedded diagnostic systems
NASA Technical Reports Server (NTRS)
Fox, Barry R.; Brewster, Larry T.
1987-01-01
The implementation of an inference engine for embedded diagnostic systems is described. The system consists of two distinct parts. The first is an off-line compiler which accepts a propositional logical statement of the relationship between facts and conclusions and produces data structures required by the on-line inference engine. The second part consists of the inference engine and interface routines which accept assertions of fact and return the conclusions which necessarily follow. Given a set of assertions, it will generate exactly the conclusions which logically follow. At the same time, it will detect any inconsistencies which may propagate from an inconsistent set of assertions or a poorly formulated set of rules. The memory requirements are fixed and the worst case execution times are bounded at compile time. The data structures and inference algorithms are very simple and well understood. The data structures and algorithms are described in detail. The system has been implemented on Lisp, Pascal, and Modula-2.
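The propagate-and-check behavior described, deriving every conclusion that follows from a set of assertions and flagging inconsistencies, can be sketched as forward chaining over propositional rules. This interpretive sketch omits the offline compilation stage; the rules are invented diagnostic examples, and a leading `-` marks negation:

```python
# A tiny forward-chaining engine over propositional rules, with the
# inconsistency detection the abstract mentions.
rules = [
    (("power_off",), "no_telemetry"),
    (("no_telemetry", "antenna_ok"), "transmitter_fault"),
    (("power_off",), "-transmitter_fault"),   # "-" marks negation
]

def infer(assertions):
    facts = set(assertions)
    changed = True
    while changed:                    # iterate to a fixed point
        changed = False
        for ante, cons in rules:
            if all(a in facts for a in ante) and cons not in facts:
                facts.add(cons)
                changed = True
    # Inconsistency: some literal and its negation were both derived,
    # e.g. from an inconsistent set of assertions or poorly formed rules.
    clash = any(f.startswith("-") and f[1:] in facts for f in facts)
    return facts, clash
```

The described system instead precompiles the rules into fixed data structures so that on-line inference has fixed memory and bounded worst-case execution time; the fixed-point logic, however, is the same.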
Metacognitive inferences from other people's memory performance.
Smith, Robert W; Schwarz, Norbert
2016-09-01
Three studies show that people draw metacognitive inferences about events from how well others remember the event. Given that memory fades over time, detailed accounts of distant events suggest that the event must have been particularly memorable, for example, because it was extreme. Accordingly, participants inferred that a physical assault (Study 1) or a poor restaurant experience (Studies 2-3) were more extreme when they were well remembered one year rather than one week later. These inferences influence behavioral intentions. For example, participants recommended a more severe punishment for a well-remembered distant rather than recent assault (Study 1). These metacognitive inferences are eliminated when people attribute the reporter's good memory to an irrelevant cause (e.g., photographic memory), thus undermining the informational value of memory performance (Study 3). These studies illuminate how people use lay theories of memory to learn from others' memory performance about characteristics of the world. (PsycINFO Database Record) PMID:27414693
Are Evaluations Inferred Directly From Overt Actions?
ERIC Educational Resources Information Center
Brown, Donald; And Others
1975-01-01
The operation of a covert information processing mechanism was investigated in two experiments on the self-persuasion phenomenon, i.e., making an inference about a stimulus on the basis of one's past behavior. (Editor)
Metamodel-Driven Evolution with Grammar Inference
NASA Astrophysics Data System (ADS)
Bryant, Barrett R.; Liu, Qichao; Mernik, Marjan
2010-10-01
Domain-specific modeling (DSM) has become one of the most popular techniques for incorporating model-driven engineering (MDE) into software engineering. In DSM, domain experts define metamodels to describe the essential problems in a domain. A model conforms to the schema definition represented by a metamodel much as a program conforms to a programming language's grammar. Metamodel-driven evolution occurs when a metamodel evolves to incorporate new concerns in the domain. However, this results in losing the ability to use existing model instances. Grammar inference is the problem of inferring a grammar from sample strings which the grammar should generate. This paper describes our work in solving the problem of metamodel-driven evolution with grammar inference, by inferring the metamodel from model instances.
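As a concrete (if simplified) illustration of grammar inference from sample strings, the sketch below builds a prefix-tree acceptor, a customary starting point for inferring a grammar from positive examples; it is not the authors' metamodel-inference algorithm, and the sample strings are invented.

```python
# Prefix-tree acceptor (PTA): a trie-shaped automaton accepting exactly the
# positive sample strings. Grammar-inference algorithms typically start from a
# PTA and then generalize it by merging states.

def prefix_tree_acceptor(samples):
    """Build the PTA as a nested dict; 'accept' marks accepting states."""
    trie = {'accept': False}
    for s in samples:
        node = trie
        for ch in s:
            node = node.setdefault(ch, {'accept': False})
        node['accept'] = True
    return trie

def accepts(trie, s):
    """Run the automaton on string s."""
    node = trie
    for ch in s:
        if ch not in node:
            return False
        node = node[ch]
    return node['accept']

pta = prefix_tree_acceptor(['ab', 'abb', 'b'])
# accepts(pta, 'ab') is True; accepts(pta, 'a') is False (prefix only)
```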
Causal inference in economics and marketing
Varian, Hal R.
2016-01-01
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual—a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference. PMID:27382144
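The counterfactual logic can be illustrated with a toy regression: fit an outcome model on untreated units only, predict what treated units would have looked like without treatment, and take the difference. The data below are synthetic and the effect size (1.5) is an assumption for illustration, not from the paper.

```python
# Counterfactual estimation sketch: the naive treated-vs-control comparison is
# biased by confounding, while predicting the untreated outcome for treated
# units from a model fit on controls recovers the true effect.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                         # confounder
treated = (x + rng.normal(size=n)) > 0         # treatment more likely when x high
y = 2.0 * x + 1.5 * treated + rng.normal(scale=0.1, size=n)

# Naive comparison is biased because treated units have systematically higher x.
naive = y[treated].mean() - y[~treated].mean()

# Counterfactual: fit y ~ x on controls, predict untreated outcome for treated.
Xc = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(Xc[~treated], y[~treated], rcond=None)
y0_hat = beta[0] + beta[1] * x[treated]
ate = (y[treated] - y0_hat).mean()             # close to the assumed 1.5
```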
Operation of the Bayes Inference Engine
Hanson, K.M.; Cunningham, G.S.
1998-07-27
The authors have developed a computer application, called the Bayes Inference Engine (BIE), to enable one to make inferences about models of a physical object from radiographs taken of it. In the BIE, calculational models are represented by a data-flow diagram that can be manipulated by the analyst in a graphical-programming environment. The authors demonstrate the operation of the BIE with examples of two-dimensional tomographic reconstruction, including uncertainty estimation.
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
On the criticality of inferred models
NASA Astrophysics Data System (ADS)
Mastromatteo, Iacopo; Marsili, Matteo
2011-10-01
Advanced inference techniques allow one to reconstruct a pattern of interaction from high-dimensional data sets by simultaneously probing thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization-invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
Adaptive evolution of molecular phenotypes
NASA Astrophysics Data System (ADS)
Held, Torsten; Nourmohammad, Armita; Lässig, Michael
2014-09-01
Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.
Bremer, P. -T.
2014-08-26
ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.
Inference of Isoforms from Short Sequence Reads
NASA Astrophysics Data System (ADS)
Feng, Jianxing; Li, Wei; Jiang, Tao
Due to alternative splicing events in eukaryotic species, the identification of mRNA isoforms (or splicing variants) is a difficult problem. Traditional experimental methods for this purpose are time-consuming and costly. The emerging RNA-Seq technology provides a potentially effective method to address this problem. Although the advantages of RNA-Seq over traditional methods in transcriptome analysis have been confirmed by many studies, the inference of isoforms from millions of short sequence reads (e.g., Illumina/Solexa reads) has remained computationally challenging. In this work, we propose a method to calculate the expression levels of isoforms and infer isoforms from short RNA-Seq reads using exon-intron boundary, transcription start site (TSS) and poly-A site (PAS) information. We first formulate the relationship among exons, isoforms, and single-end reads as a convex quadratic program, and then use an efficient algorithm (called IsoInfer) to search for isoforms. IsoInfer can calculate the expression levels of isoforms accurately if all the isoforms are known, and can infer novel isoforms from scratch. Our experimental tests on known mouse isoforms with both simulated expression levels and reads demonstrate that IsoInfer is able to calculate the expression levels of isoforms with accuracy comparable to the state-of-the-art statistical method at 60 times the speed. Moreover, our tests on both simulated and real reads show that it achieves good precision and sensitivity in inferring isoforms when given accurate exon-intron boundary, TSS and PAS information, especially for isoforms whose expression levels are significantly high.
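The expression-level step can be illustrated, in much simplified form, as a nonnegative least-squares problem: observed exon coverage is modeled as a nonnegative mixture of known isoforms. This toy sketch is not IsoInfer itself; the exon-isoform matrix and expression values are invented.

```python
# Toy version of the quadratic-program step: recover isoform expression levels
# from exon read densities by nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

# Rows = exons, columns = isoforms; A[i, j] = 1 if isoform j contains exon i.
A = np.array([[1, 1],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
true_expr = np.array([3.0, 1.0])   # hypothetical expression levels
reads = A @ true_expr              # idealized, noise-free exon coverage

expr, residual = nnls(A, reads)    # expr recovers [3.0, 1.0]
```

With noisy coverage the same call gives the nonnegative least-squares fit rather than an exact recovery, which is the behavior a convex quadratic program over expression levels exhibits.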
Sekar, Booma Devi; Dong, Mingchui
2014-01-01
An intelligent cardiovascular disease (CVD) diagnosis system using hemodynamic parameters (HDPs) derived from sphygmogram (SPG) signal is presented to support the emerging patient-centric healthcare models. To replicate clinical approach of diagnosis through a staged decision process, the Bayesian inference nets (BIN) are adapted. New approaches to construct a hierarchical multistage BIN using defined function formulas and a method employing fuzzy logic (FL) technology to quantify inference nodes with dynamic values of statistical parameters are proposed. The suggested methodology is validated by constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose various heart pathologies from the deduced HDPs. The preliminary diagnostic results show that the proposed methodology has salient validity and effectiveness in the diagnosis of cardiovascular disease.
Inference comprehension of adolescents with traumatic brain injury: a working memory hypothesis.
Moran, C; Gillon, G
2005-09-01
This study investigated inference comprehension performance in adolescents who had suffered a traumatic brain injury (TBI). Using stimuli adapted from Lehman-Blake and Tompkins, participants listened to short paragraphs that varied according to the working memory demands of the task and answered comprehension questions that required inferences to be generated. Six adolescents, aged 12-16 years, who had suffered a TBI prior to the age of 10 years, were assessed and their performance was compared to six individually age-matched peers with typical development. Analysis revealed that individuals with TBI did not differ from non-injured peers in their understanding of inferences when the storage demands of the task were minimized. However, when storage demands were high, adolescents with TBI performed poorly compared to their age-matched peers. Results are discussed relative to a working-memory hypothesis of impairment following TBI. PMID:16175835
HLA Type Inference via Haplotypes Identical by Descent
NASA Astrophysics Data System (ADS)
Setty, Manu N.; Gusev, Alexander; Pe'Er, Itsik
The Human Leukocyte Antigen (HLA) genes play a major role in adaptive immune response and are used to differentiate self antigens from non self ones. HLA genes are hyper variable with nearly every locus harboring over a dozen alleles. This variation plays an important role in susceptibility to multiple autoimmune diseases and needs to be matched on for organ transplantation. Unfortunately, HLA typing by serological methods is time consuming and expensive compared to high throughput Single Nucleotide Polymorphism (SNP) data. We present a new computational method to infer per-locus HLA types using shared segments Identical By Descent (IBD), inferred from SNP genotype data. IBD information is modeled as graph where shared haplotypes are explored among clusters of individuals with known and unknown HLA types to identify the latter. We analyze performance of the method in a previously typed subset of the HapMap population, achieving accuracy of 96% in HLA-A, 94% in HLA-B, 95% in HLA-C, 77% in HLA-DR1, 93% in HLA-DQA1 and 90% in HLA-DQB1 genes. We compare our method to a tag SNP based approach and demonstrate higher sensitivity and specificity. Our method demonstrates the power of using shared haplotype segments for large-scale imputation at the HLA locus.
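A much-reduced sketch of the cluster idea: individuals sharing an IBD haplotype segment over the HLA locus form a cluster, and an untyped individual receives the majority allele among typed members across its clusters. The function name, sample IDs, and alleles below are hypothetical; the paper's graph model is considerably richer.

```python
# Majority-vote imputation over IBD clusters (illustrative, not the paper's
# graph algorithm): each cluster of haplotype-sharing individuals contributes
# votes for an untyped member's HLA allele.
from collections import Counter

def impute_hla(clusters, known):
    """clusters: list of sets of individual IDs sharing an IBD segment;
    known: dict individual -> HLA allele. Returns inferred alleles."""
    votes = {}
    for cluster in clusters:
        typed = [known[i] for i in cluster if i in known]
        for i in cluster:
            if i not in known:
                for allele in typed:
                    votes.setdefault(i, Counter())[allele] += 1
    return {i: c.most_common(1)[0][0] for i, c in votes.items()}

clusters = [{'s1', 's2', 's3'}, {'s3', 's4'}]
known = {'s1': 'A*02:01', 's2': 'A*02:01', 's4': 'A*01:01'}
# s3 is untyped; two votes for A*02:01 outweigh one for A*01:01
inferred = impute_hla(clusters, known)  # {'s3': 'A*02:01'}
```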
Impaired inference in a case of developmental amnesia
D'Angelo, Maria C.; Rosenbaum, R. Shayna; Ryan, Jennifer D.
2016-01-01
Amnesia is associated with impairments in relational memory, which is critically supported by the hippocampus. By adapting the transitivity paradigm, we previously showed that age‐related impairments in inference were mitigated when judgments could be predicated on known pairwise relations; however, such advantages were not observed in the adult‐onset amnesic case D.A. Here, we replicate and extend this finding in a developmental amnesic case (N.C.), who also shows impaired relational learning and transitive expression. Unlike D.A., N.C.'s damage affected the extended hippocampal system and diencephalic structures, and does not extend to neocortical areas that are affected in D.A. Critically, despite their differences in etiology and affected structures, N.C. and D.A. perform similarly on the task. N.C. showed intact pairwise knowledge, suggesting that he is able to use existing semantic information, but this semantic knowledge was insufficient to support transitive expression. The present results suggest a critical role for regions connected to the hippocampus and/or medial prefrontal cortex in inference beyond learning of pairwise relations. © 2016 The Authors Hippocampus Published by Wiley Periodicals, Inc. PMID:27258733
Palaeotemperature trend for Precambrian life inferred from resurrected proteins.
Gaucher, Eric A; Govindarajan, Sridhar; Ganesh, Omjoy K
2008-02-01
Biosignatures and structures in the geological record indicate that microbial life has inhabited Earth for the past 3.5 billion years or so. Research in the physical sciences has been able to generate statements about the ancient environment that hosted this life. These include the chemical compositions and temperatures of the early ocean and atmosphere. Only recently have the natural sciences been able to provide experimental results describing the environments of ancient life. Our previous work with resurrected proteins indicated that ancient life lived in a hot environment. Here we expand the timescale of resurrected proteins to provide a palaeotemperature trend of the environments that hosted life from 3.5 to 0.5 billion years ago. The thermostability of more than 25 phylogenetically dispersed ancestral elongation factors suggests that the environment supporting ancient life cooled progressively by 30 degrees C during that period. Here we show that our results are robust to potential statistical bias associated with the posterior distribution of inferred character states, phylogenetic ambiguity, and uncertainties in the amino-acid equilibrium frequencies used by evolutionary models. Our results are further supported by a nearly identical cooling trend for the ancient ocean as inferred from the deposition of oxygen isotopes. The convergence of results from the natural and physical sciences suggests that ancient life has continually adapted to changes in environmental temperatures throughout its evolutionary history. PMID:18256669
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby the different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
Prediction, Bayesian inference and feedback in speech recognition
Norris, Dennis; McQueen, James M.; Cutler, Anne
2016-01-01
Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models. PMID:26740960
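The Bayesian view can be made concrete with a two-word toy recognizer: the posterior over words combines a lexical prior with the acoustic likelihood, with no feedback of activation required. The vocabulary and probabilities below are invented for illustration.

```python
# Toy Bayesian word recognition: posterior P(word | input) is proportional to
# P(input | word) * P(word), then normalized over the vocabulary.

def recognize(likelihood, prior):
    """likelihood: P(acoustic input | word); prior: P(word). Returns posterior."""
    unnorm = {w: likelihood[w] * prior[w] for w in prior}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

prior = {'speech': 0.6, 'beach': 0.4}        # lexical prior (assumed values)
likelihood = {'speech': 0.2, 'beach': 0.3}   # fit to the acoustic input (assumed)
post = recognize(likelihood, prior)
# both products equal 0.12, so the posterior is 0.5 / 0.5: the weaker acoustic
# fit of 'speech' is exactly offset by its higher prior
```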
Causal inference on quantiles with an obstetric application.
Zhang, Zhiwei; Chen, Zhen; Troendle, James F; Zhang, Jun
2012-09-01
The current statistical literature on causal inference is primarily concerned with population means of potential outcomes, while the current statistical practice also involves other meaningful quantities such as quantiles. Motivated by the Consortium on Safe Labor (CSL), a large observational study of obstetric labor progression, we propose and compare methods for estimating marginal quantiles of potential outcomes as well as quantiles among the treated. By adapting existing methods and techniques, we derive estimators based on outcome regression (OR), inverse probability weighting, and stratification, as well as a doubly robust (DR) estimator. By incorporating stratification into the DR estimator, we further develop a hybrid estimator with enhanced numerical stability at the expense of a slight bias under misspecification of the OR model. The proposed methods are illustrated with the CSL data and evaluated in simulation experiments mimicking the CSL.
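The inverse probability weighting idea for quantiles can be sketched on synthetic data: weighting each treated outcome by the reciprocal of its propensity makes the treated sample mimic the full population, so a weighted quantile approximates the marginal quantile of the potential outcome. This illustrates the general technique, not the paper's CSL analysis or its doubly robust estimator.

```python
# IPW for a marginal quantile of the treated potential outcome (synthetic data):
# the naive treated median is confounded upward; reweighting corrects it.
import numpy as np

def weighted_quantile(values, weights, q):
    """Quantile of a weighted sample via the weighted empirical CDF."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cum = np.cumsum(w) / w.sum()
    return v[np.searchsorted(cum, q)]

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
propensity = 1 / (1 + np.exp(-x))              # treatment more likely when x high
treated = rng.random(n) < propensity
y = x + rng.normal(scale=0.1, size=n)          # potential outcome under treatment

naive = np.median(y[treated])                  # biased upward by confounding
ipw = weighted_quantile(y[treated], 1 / propensity[treated], 0.5)  # near 0
```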
Viewpoints: feeding mechanics, diet, and dietary adaptations in early hominins.
Daegling, David J; Judex, Stefan; Ozcivici, Engin; Ravosa, Matthew J; Taylor, Andrea B; Grine, Frederick E; Teaford, Mark F; Ungar, Peter S
2013-07-01
Inference of feeding adaptation in extinct species is challenging, and reconstructions of the paleobiology of our ancestors have utilized an array of analytical approaches. Comparative anatomy and finite element analysis assist in bracketing the range of capabilities in taxa, while microwear and isotopic analyses give glimpses of individual behavior in the past. These myriad approaches have limitations, but each contributes incrementally toward the recognition of adaptation in the hominin fossil record. Microwear and stable isotope analysis together suggest that australopiths are not united by a single, increasingly specialized dietary adaptation. Their traditional (i.e., morphological) characterization as "nutcrackers" may only apply to a single taxon, Paranthropus robustus. These inferences can be rejected if interpretation of microwear and isotopic data can be shown to be misguided or altogether erroneous. Alternatively, if these sources of inference are valid, it merely indicates that there are phylogenetic and developmental constraints on morphology. Inherently, finite element analysis is limited in its ability to identify adaptation in paleobiological contexts. Its application to the hominin fossil record to date demonstrates only that under similar loading conditions, the form of the stress field in the australopith facial skeleton differs from that in living primates. This observation, by itself, does not reveal feeding adaptation. Ontogenetic studies indicate that functional and evolutionary adaptation need not be conceptually isolated phenomena. Such a perspective helps to inject consideration of mechanobiological principles of bone formation into paleontological inferences. Finite element analysis must employ such principles to become an effective research tool in this context. PMID:23794331
Estimating uncertainty of inference for validation
Booker, Jane M; Langenbrunner, James R; Hemez, Francois M; Ross, Timothy J
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, a machine function. Validation is defined as determining the degree to which a model and its code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
NASA Technical Reports Server (NTRS)
Hacker, Scott C. (Inventor); Dean, Richard J. (Inventor); Burge, Scott W. (Inventor); Dartez, Toby W. (Inventor)
2007-01-01
An adapter for installing a connector to a terminal post, wherein the connector is attached to a cable, is presented. In an embodiment, the adapter is comprised of an elongated collet member having a longitudinal axis comprised of a first collet member end, a second collet member end, an outer collet member surface, and an inner collet member surface. The inner collet member surface at the first collet member end is used to engage the connector. The outer collet member surface at the first collet member end is tapered for a predetermined first length at a predetermined taper angle. The collet includes a longitudinal slot that extends along the longitudinal axis initiating at the first collet member end for a predetermined second length. The first collet member end is formed of a predetermined number of sections segregated by a predetermined number of channels and the longitudinal slot.
NASA Astrophysics Data System (ADS)
Odriozola, Iñigo; Lazkano, Elena; Sierra, Basi
2011-10-01
This paper investigates the improvement of the Vector Field Histogram (VFH) local planning algorithm for mobile robot systems. The Adaptive Vector Field Histogram (AVFH) algorithm has been developed to improve the effectiveness of the traditional VFH path planning algorithm by overcoming the side effects of using static parameters. The new algorithm permits the adaptation of planning parameters for the different types of areas in an environment. Genetic Algorithms are used to fit the best VFH parameters to each sector type, and afterwards every section in the map is labelled with the sector type that best represents it. The Player/Stage simulation platform was chosen for running tests and demonstrating the new algorithm's adequacy. Although there is still much work to be carried out, the developed algorithm showed good navigation properties and proved smoother and more effective than the traditional VFH algorithm.
Computationally efficient Bayesian inference for inverse problems.
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
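For contrast with the accelerated methods described, the basic Bayesian inversion loop can be shown with a plain Metropolis sampler on a toy scalar inverse problem; the forward model, noise level, and flat positive prior below are invented assumptions, not the paper's stochastic spectral formulation.

```python
# Bare-bones Metropolis sampling of the posterior for a toy inverse problem:
# infer parameter k from noisy observations of the forward model k**2.
import math
import random

random.seed(1)
true_k = 2.0
forward = lambda k: k ** 2
data = [forward(true_k) + random.gauss(0, 0.1) for _ in range(20)]

def log_post(k):
    """Log posterior: Gaussian likelihood with sigma=0.1, flat prior on k > 0."""
    if k <= 0:
        return -math.inf
    return -sum((d - forward(k)) ** 2 for d in data) / (2 * 0.1 ** 2)

samples, k = [], 1.0
for _ in range(5000):
    prop = k + random.gauss(0, 0.05)           # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop
    samples.append(k)

post_mean = sum(samples[1000:]) / len(samples[1000:])  # near the true 2.0
```

The posterior spread of the retained samples is the uncertainty assessment the abstract refers to; the surrogate-posterior methods it describes replace the expensive `forward` evaluations inside this loop.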
Reliability of the Granger causality inference
NASA Astrophysics Data System (ADS)
Zhou, Douglas; Zhang, Yaoyu; Xiao, Yanyang; Cai, David
2014-04-01
How to characterize information flows in physical, biological, and social systems remains a major theoretical challenge. Granger causality (GC) analysis has been widely used to investigate information flow through causal interactions. We address one of the central questions in GC analysis, that is, the reliability of the GC evaluation and its implications for the causal structures extracted by this analysis. Our work reveals that the manner in which a continuous dynamical process is projected or coarse-grained to a discrete process has a profound impact on the reliability of the GC inference, and different sampling may potentially yield completely opposite inferences. This inference hazard is present for both linear and nonlinear processes. We emphasize that there is a hazard of reaching incorrect conclusions about network topologies, even including statistical (such as small-world or scale-free) properties of the networks, when GC analysis is blindly applied to infer the network topology. We demonstrate this using a small-world network for which a drastic loss of small-world attributes occurs in the reconstructed network using the standard GC approach. We further show how to resolve the paradox that the GC analysis seemingly becomes less reliable when more information is incorporated using finer and finer sampling. Finally, we present strategies to overcome these inference artifacts in order to obtain a reliable GC result.
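A minimal version of the GC computation being discussed: fit an autoregressive model of x with and without lagged y, and take the log ratio of residual variances. The coupled AR(1) pair below (y drives x, x does not drive y) and all coefficients are invented for illustration; the paper's point is that the result of exactly this computation can depend strongly on how the underlying continuous process is sampled.

```python
import numpy as np

def granger(x, y, p=2):
    """Granger causality from y to x: log ratio of residual variances of
    an AR(p) model of x alone vs. one that also includes lagged y."""
    n = len(x)
    target = x[p:]
    def lags(s):
        return np.column_stack([s[p - k:n - k] for k in range(1, p + 1)])
    Xr = lags(x)                       # restricted: own lags only
    Xf = np.hstack([Xr, lags(y)])      # full: own lags plus lags of y
    def rss(A):
        beta = np.linalg.lstsq(A, target, rcond=None)[0]
        return np.sum((target - A @ beta) ** 2)
    return np.log(rss(Xr) / rss(Xf))

# Simulate: y drives x with lag 1; x does not drive y.
rng = np.random.default_rng(0)
n = 2000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

gc_y_to_x, gc_x_to_y = granger(x, y), granger(y, x)
```

Here the directed coupling is recovered (gc_y_to_x is large, gc_x_to_y is near zero); the hazard described in the abstract arises when the same computation is applied to differently sampled or coarse-grained versions of a continuous process.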
Deep Learning for Population Genetic Inference
Sheehan, Sara; Song, Yun S.
2016-01-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908
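The core mechanism (a multilayer network learning a map from summary statistics to parameters) can be sketched with a toy regression. This is not the paper's architecture or data: the scalar parameter theta, the two synthetic "summary statistics," and the tiny one-hidden-layer network are invented; real inputs would be hundreds of population-genetic summaries and the outputs demographic and selection parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training pairs: a parameter theta and two noisy summary
# statistics that depend on it (stand-ins for popgen summaries).
theta = rng.uniform(0.5, 2.0, size=(1000, 1))
stats = np.hstack([theta + 0.05 * rng.normal(size=theta.shape),
                   theta ** 2 + 0.05 * rng.normal(size=theta.shape)])
stats = (stats - stats.mean(0)) / stats.std(0)   # standardize inputs

# One-hidden-layer network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(3000):
    h = np.tanh(stats @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2
    err = pred - theta                           # gradient of 0.5*MSE
    gW2 = h.T @ err / len(stats); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)           # backprop through tanh
    gW1 = stats.T @ dh / len(stats); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(stats @ W1 + b1) @ W2 + b2 - theta) ** 2))
```

The trained network recovers theta far better than predicting its mean, which is the essence of the likelihood-free strategy: no likelihood is ever evaluated, only simulated (statistics, parameter) pairs.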
Scene Construction, Visual Foraging, and Active Inference
Mirza, M. Berk; Adams, Rick A.; Mathys, Christoph D.; Friston, Karl J.
2016-01-01
This paper describes an active inference scheme for visual searches and the perceptual synthesis entailed by scene construction. Active inference assumes that perception and action minimize variational free energy, where actions are selected to minimize the free energy expected in the future. This assumption generalizes risk-sensitive control and expected utility theory to include epistemic value; namely, the value (or salience) of information inherent in resolving uncertainty about the causes of ambiguous cues or outcomes. Here, we apply active inference to saccadic searches of a visual scene. We consider the (difficult) problem of categorizing a scene, based on the spatial relationship among visual objects where, crucially, visual cues are sampled myopically through a sequence of saccadic eye movements. This means that evidence for competing hypotheses about the scene has to be accumulated sequentially, calling upon both prediction (planning) and postdiction (memory). Our aim is to highlight some simple but fundamental aspects of the requisite functional anatomy; namely, the link between approximate Bayesian inference under mean field assumptions and functional segregation in the visual cortex. This link rests upon the (neurobiologically plausible) process theory that accompanies the normative formulation of active inference for Markov decision processes. In future work, we hope to use this scheme to model empirical saccadic searches and identify the prior beliefs that underwrite intersubject variability in the way people forage for information in visual scenes (e.g., in schizophrenia). PMID:27378899
Hierarchical cosmic shear power spectrum inference
NASA Astrophysics Data System (ADS)
Alsing, Justin; Heavens, Alan; Jaffe, Andrew H.; Kiessling, Alina; Wandelt, Benjamin; Hoffmann, Till
2016-02-01
We develop a Bayesian hierarchical modelling approach for cosmic shear power spectrum inference, jointly sampling from the posterior distribution of the cosmic shear field and its (tomographic) power spectra. Inference of the shear power spectrum is a powerful intermediate product for a cosmic shear analysis, since it requires very few model assumptions and can be used to perform inference on a wide range of cosmological models a posteriori without loss of information. We show that the joint posterior for the shear map and power spectrum can be sampled effectively by Gibbs sampling, iteratively drawing samples from the map and power spectrum, each conditional on the other. This approach neatly circumvents difficulties associated with complicated survey geometry and masks that plague frequentist power spectrum estimators, since the power spectrum inference provides prior information about the field in masked regions at every sampling step. We demonstrate this approach for inference of tomographic shear E-mode, B-mode and EB-cross power spectra from a simulated galaxy shear catalogue with a number of important features: galaxies distributed on the sky and in redshift with photometric redshift uncertainties, realistic random ellipticity noise for every galaxy and a complicated survey mask. The obtained posterior distributions for the tomographic power spectrum coefficients recover the underlying simulated power spectra for both E- and B-modes.
Inferring learners' knowledge from their actions.
Rafferty, Anna N; LaMar, Michelle M; Griffiths, Thomas L
2015-04-01
Watching another person take actions to complete a goal and making inferences about that person's knowledge is a relatively natural task for people. This ability can be especially important in educational settings, where the inferences can be used for assessment, diagnosing misconceptions, and providing informative feedback. In this paper, we develop a general framework for automatically making such inferences based on observed actions; this framework is particularly relevant for inferring student knowledge in educational games and other interactive virtual environments. Our approach relies on modeling action planning: We formalize the problem as a Markov decision process in which one must choose what actions to take to complete a goal, where choices will be dependent on one's beliefs about how actions affect the environment. We use a variation of inverse reinforcement learning to infer these beliefs. Through two lab experiments, we show that this model can recover people's beliefs in a simple environment, with accuracy comparable to that of human observers. We then demonstrate that the model can be used to provide real-time feedback and to model data from an existing educational game. PMID:25155381
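The planning-based inference can be sketched on a tiny MDP. Everything concrete here is invented for illustration: a five-state chain with reward at one end, two candidate belief models (correct dynamics vs. a misconception that flips the effect of both actions), value-iteration planning under each belief, and a Boltzmann (softmax) action likelihood used to score the observed actions — a common simplification of the inverse-reinforcement-learning machinery the paper uses.

```python
import math

N, GOAL, GAMMA = 5, 4, 0.9        # chain states 0..4, reward at state 4
ACTIONS = (-1, +1)

def q_values(flipped):
    """Q-values from value iteration under a *believed* dynamics model.
    If flipped, the learner wrongly believes each action moves them the
    opposite way (an invented misconception)."""
    def believed_next(s, a):
        a = -a if flipped else a
        return min(max(s + a, 0), N - 1)
    V = [0.0] * N
    for _ in range(100):
        V = [0.0 if s == GOAL else
             max((1.0 if believed_next(s, a) == GOAL else 0.0)
                 + GAMMA * V[believed_next(s, a)] for a in ACTIONS)
             for s in range(N)]
    return {(s, a): (1.0 if believed_next(s, a) == GOAL else 0.0)
                    + GAMMA * V[believed_next(s, a)]
            for s in range(N) for a in ACTIONS}

def log_lik(trajectory, Q, beta=3.0):
    """Boltzmann action likelihood given the learner's planned Q-values."""
    total = 0.0
    for s, a in trajectory:
        z = sum(math.exp(beta * Q[(s, b)]) for b in ACTIONS)
        total += beta * Q[(s, a)] - math.log(z)
    return total

# The observed learner keeps moving away from the goal:
trajectory = [(2, -1), (1, -1), (0, -1), (0, -1)]
ll_correct = log_lik(trajectory, q_values(flipped=False))
ll_flipped = log_lik(trajectory, q_values(flipped=True))
```

Because the "flipped" belief assigns higher likelihood to the observed actions, the observer infers the misconception — the same diagnosis-from-actions logic the framework applies to educational game logs.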
Watson, B.L.; Aeby, I.
1980-08-26
An adaptive data compression device for compressing data having variable frequency content is described. It includes a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable-rate memory clock corresponding to the analyzed frequency content of the data in each frequency region and for clocking the data into the memory in response to the variable-rate memory clock.
Watson, Bobby L.; Aeby, Ian
1982-01-01
An adaptive data compression device for compressing data having variable frequency content, including a plurality of digital filters for analyzing the content of the data over a plurality of frequency regions, a memory, and a control logic circuit for generating a variable rate memory clock corresponding to the analyzed frequency content of the data in the frequency region and for clocking the data into the memory in response to the variable rate memory clock.
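The frequency-adaptive principle in this patent abstract can be sketched in software: analyze each block's spectrum, estimate the bandwidth holding most of its energy, and store the block decimated accordingly, so that the storage rate tracks the analyzed frequency content (the software analogue of the variable-rate memory clock). The block size, the 99% energy threshold, the Nyquist safety margin, and the test signal are all invented for illustration.

```python
import numpy as np

def adaptive_compress(signal, block=256, fs=1000.0):
    """Per-block decimation driven by estimated bandwidth."""
    stored = []
    for i in range(0, len(signal), block):
        seg = signal[i:i + block]
        power = np.abs(np.fft.rfft(seg)) ** 2
        freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
        cum = np.cumsum(power) / power.sum()
        f_max = freqs[np.searchsorted(cum, 0.99)]  # 99%-energy bandwidth
        f_max = max(f_max, freqs[1])               # guard against DC-only
        step = max(1, int(fs / (4.0 * f_max)))     # keep a 2x Nyquist margin
        stored.append((step, seg[::step]))         # record rate + samples
    return stored

fs = 1000.0
t = np.arange(2048) / fs
# Slow 5 Hz tone for the first half, fast 200 Hz tone for the second.
sig = np.where(t < 1.024, np.sin(2 * np.pi * 5 * t),
               np.sin(2 * np.pi * 200 * t))
blocks = adaptive_compress(sig, block=256, fs=fs)
kept = sum(len(seg) for _, seg in blocks)
```

Low-frequency blocks come out heavily decimated while high-frequency blocks are stored nearly intact; reconstruction would interpolate each block back to the original rate using its recorded step.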
NASA Astrophysics Data System (ADS)
Barton, P.
1987-04-01
The basic principles of adaptive antennas are outlined in terms of the Wiener-Hopf expression for maximizing signal-to-noise ratio in an arbitrary noise environment; the analogy with generalized matched filter theory provides a useful aid to understanding. For many applications, there is insufficient information to achieve the above solution and thus non-optimum constrained null steering algorithms are also described, together with a summary of methods for preventing wanted signals from being nulled by the adaptive system. The three generic approaches to adaptive weight control are discussed: correlation steepest descent, weight perturbation and direct solutions based on sample matrix inversion. The tradeoffs between hardware complexity and performance in terms of null depth and convergence rate are outlined. The sidelobe canceller technique is described. Performance variation with jammer power and angular distribution is summarized and the key performance limitations identified. The configuration and performance characteristics of both multiple beam and phased array antennas are covered, with a brief discussion of performance factors.
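A minimal numerical sketch of the sample-matrix-inversion approach mentioned above, under invented conditions (an 8-element half-wavelength line array, a wanted direction at 0 degrees, one strong jammer at 40 degrees): estimate the interference-plus-noise covariance R from signal-free snapshots, then apply the Wiener-Hopf-style weights w proportional to R^{-1}s.

```python
import numpy as np

rng = np.random.default_rng(7)
M, d = 8, 0.5                                  # elements, spacing in wavelengths

def steering(theta_deg):
    """Array response (steering vector) for a plane wave at theta."""
    phase = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * phase * np.arange(M))

s, j = steering(0.0), steering(40.0)           # wanted direction, jammer
K = 500                                        # training snapshots
noise = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
snapshots = j[:, None] * (10.0 * rng.normal(size=(1, K))) + noise  # 20 dB jammer
R = snapshots @ snapshots.conj().T / K         # sample covariance matrix
w = np.linalg.solve(R, s)                      # direct solution: w = R^{-1} s
w = w / (w.conj() @ s)                         # normalize to unit wanted response

gain_signal = abs(w.conj() @ s)                # stays at 1 by construction
gain_jammer = abs(w.conj() @ j)                # driven far below the
                                               # conventional-beam sidelobe level
```

The snapshots are jammer-plus-noise only (signal-free training, as in a sidelobe canceller); the resulting weights place a deep null on the jammer while the constraint keeps full gain toward the wanted direction.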
ERIC Educational Resources Information Center
Pillow, Bradford H.; Pearson, RaeAnne M.; Hecht, Mary; Bremer, Amanda
2010-01-01
Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults…
Using Alien Coins to Test Whether Simple Inference Is Bayesian
ERIC Educational Resources Information Center
Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.
2016-01-01
Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…
Dynamical inference of hidden biological populations
NASA Astrophysics Data System (ADS)
Luchinsky, D. G.; Smelyanskiy, V. N.; Millonas, M.; McClintock, P. V. E.
2008-10-01
Population fluctuations in a predator-prey system are analyzed for the case where the number of prey could be determined, subject to measurement noise, but the number of predators was unknown. The problem of how to infer the unmeasured predator dynamics, as well as the model parameters, is addressed. Two solutions are suggested. In the first of these, measurement noise and the dynamical noise in the equation for predator population are neglected; the problem is reduced to a one-dimensional case, and a Bayesian dynamical inference algorithm is employed to reconstruct the model parameters. In the second solution a full-scale Markov Chain Monte Carlo simulation is used to infer both the unknown predator trajectory, and also the model parameters, using the one-dimensional solution as an initial guess.
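The first, reduced approach (neglecting measurement and dynamical noise in one equation so that inference becomes one-dimensional) can be sketched on a toy prey-only model. The logistic growth rule, the noise level, and the grid search standing in for a full Bayesian treatment are all invented for illustration.

```python
import random

random.seed(5)
r_true, sigma, n = 0.8, 0.01, 200

# Simulate a prey series with logistic growth, then observe it noisily.
x = [0.1]
for _ in range(n - 1):
    x.append(x[-1] + r_true * x[-1] * (1.0 - x[-1]))
obs = [v + random.gauss(0.0, sigma) for v in x]

def log_lik(r):
    """Gaussian one-step-prediction log likelihood for growth rate r."""
    total = 0.0
    for t in range(n - 1):
        pred = obs[t] + r * obs[t] * (1.0 - obs[t])
        total += -0.5 * ((obs[t + 1] - pred) / sigma) ** 2
    return total

# Maximize over a grid of candidate growth rates (0.40 .. 1.20).
grid = [i / 100.0 for i in range(40, 121)]
r_hat = max(grid, key=log_lik)
```

Most of the information about r comes from the transient before the population saturates, which is why the full problem — with an entirely unobserved predator — calls for the MCMC machinery of the second approach.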
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and updating. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.
Inferences from counterfactual threats and promises.
Egan, Suzanne M; Byrne, Ruth M J
2012-01-01
We examine how people understand and reason from counterfactual threats, for example, "if you had hit your sister, I would have grounded you" and counterfactual promises, for example, "if you had tidied your room, I would have given you ice-cream." The first experiment shows that people consider counterfactual threats, but not counterfactual promises, to have the illocutionary force of an inducement. They also make the immediate inference that the action mentioned in the "if" part of the counterfactual threat and promise did not occur. The second experiment shows that people make more negative inferences (modus tollens and denial of the antecedent) than affirmative inferences (modus ponens and affirmation of the consequent) from counterfactual threats and promises, unlike indicative threats and promises. We discuss the implications of the results for theories of the mental representations and cognitive processes that underlie conditional inducements. PMID:22580411
Explanatory Preferences Shape Learning and Inference.
Lombrozo, Tania
2016-10-01
Explanations play an important role in learning and inference. People often learn by seeking explanations, and they assess the viability of hypotheses by considering how well they explain the data. An emerging body of work reveals that both children and adults have strong and systematic intuitions about what constitutes a good explanation, and that these explanatory preferences have a systematic impact on explanation-based processes. In particular, people favor explanations that are simple and broad, with the consequence that engaging in explanation can shape learning and inference by leading people to seek patterns and favor hypotheses that support broad and simple explanations. Given the prevalence of explanation in everyday cognition, understanding explanation is therefore crucial to understanding learning and inference. PMID:27567318
Nonintentional analogical inference in text comprehension.
Day, Samuel B; Gentner, Dedre
2007-01-01
We present findings suggesting that analogical inference processes can play a role in fluent comprehension and interpretation. Participants were found to use information from a prior relationally similar example in understanding the content of a later example, but they reported that they were not aware of having done so. These inference processes were sensitive to structural mappings between the two instances, ruling out explanations based solely on more general kinds of activation, such as priming. Reading speed measures were consistent with the possibility that these inferences had taken place during encoding of the target rather than during the later recognition test. These findings suggest that analogical mapping, though often viewed as an explicit deliberative process, can sometimes operate without intent or even awareness.
Consumer psychology: categorization, inferences, affect, and persuasion.
Loken, Barbara
2006-01-01
This chapter reviews research on consumer psychology with emphasis on the topics of categorization, inferences, affect, and persuasion. The chapter reviews theory-based empirical research during the period 1994-2004. Research on categorization includes empirical research on brand categories, goals as organizing frameworks and motivational bases for judgments, and self-based processing. Research on inferences includes numerous types of inferences that are cognitively and/or experienced based. Research on affect includes the effects of mood on processing and cognitive and noncognitive bases for attitudes and intentions. Research on persuasion focuses heavily on the moderating role of elaboration and dual-process models, and includes research on attitude strength responses, advertising responses, and negative versus positive evaluative dimensions.
A formal model of interpersonal inference
Moutoussis, Michael; Trujillo-Barreto, Nelson J.; El-Deredy, Wael; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Introduction: We propose that active Bayesian inference—a general framework for decision-making—can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regards to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: (1) Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to “mentalizing” in the psychological literature, is based upon the outcomes of interpersonal exchanges. (2) We show how some well-known social-psychological phenomena (e.g., self-serving biases) can be explained in terms of active interpersonal inference. (3) Mentalizing naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modeling intersubject variability in mentalizing during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalizing is distorted. PMID:24723872
Declarative memory, awareness, and transitive inference.
Smith, Christine; Squire, Larry R
2005-11-01
A characteristic usually attributed to declarative memory is that what is learned is accessible to awareness. Recently, the relationship between awareness and declarative (hippocampus-dependent) memory has been questioned on the basis of findings from transitive inference tasks. In transitive inference, participants are first trained on overlapping pairs of items (e.g., A+B-, B+C-, C+D-, and D+E-, where + and - indicate correct and incorrect choices). Later, participants who choose B over D when presented with the novel pair BD are said to demonstrate transitive inference. The ability to exhibit transitive inference is thought to depend on the fact that participants have represented the stimulus elements hierarchically (i.e., A>B>C>D>E). We found that performance on five-item and six-item transitive inference tasks was closely related to awareness of the hierarchical relationship among the elements of the training pairs. Participants who were aware of the hierarchy performed near 100% correct on all tests of transitivity, but participants who were unaware of the hierarchy performed poorly (e.g., on transitive pair BD in the five-item problem; on transitive pairs BD, BE, and CE in the six-item problem). When the five-item task was administered to memory-impaired patients with damage thought to be limited to the hippocampal region, the patients were impaired at learning the training pairs. All patients were unaware of the hierarchy and, like unaware controls, performed poorly on the BD pair. The findings indicate that awareness is critical for robust performance on tests of transitive inference and support the view that awareness of what is learned is a fundamental characteristic of declarative memory.
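The hierarchical-representation account described above can be made concrete in a few lines: the overlapping premise pairs (A+B-, B+C-, C+D-, D+E-) fix a linear order, and any novel pair — including the critical BD test — is then answered by rank comparison. The topological extraction loop below is an illustrative construction, not a model from the paper.

```python
# Premise pairs as (winner, loser): A beats B, B beats C, and so on.
premises = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]

# Build the hierarchy: repeatedly extract the item that never loses
# among the remaining premises (a topological sort of the chain).
rank = {}
remaining = {item for pair in premises for item in pair}
edges = list(premises)
position = 0
while remaining:
    top = next(item for item in sorted(remaining)
               if all(loser != item for _, loser in edges))
    rank[top] = position
    position += 1
    remaining.discard(top)
    edges = [(w, l) for w, l in edges if w != top]

def choose(pair):
    """Pick the item higher in the inferred hierarchy A>B>C>D>E."""
    return min(pair, key=lambda item: rank[item])

novel_choice = choose(("B", "D"))   # the critical transitive test pair
```

A participant who has formed this explicit ordered representation answers every novel pair correctly, matching the near-100% performance of the "aware" group; without it, there is no basis for choosing B over D.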
Advances and challenges in the attribution of climate impacts using statistical inference
NASA Astrophysics Data System (ADS)
Hsiang, S. M.
2015-12-01
We discuss recent advances, challenges, and debates in the use of statistical models to infer and attribute climate impacts, such as distinguishing effects of "climate" vs. "weather," accounting for simultaneous environmental changes along multiple dimensions, evaluating multiple sources of uncertainty, accounting for adaptation, and simulating counterfactual economic or social trajectories. We relate these ideas to recent findings linking temperature to economic productivity/violence and tropical cyclones to economic growth.
NASA Astrophysics Data System (ADS)
Deglint, Jason; Kazemzadeh, Farnoud; Wong, Alexander; Clausi, David A.
2015-09-01
One method to acquire multispectral images is to sequentially capture a series of images where each image contains information from a different bandwidth of light. Another method is to use a series of beamsplitters and dichroic filters to guide different bandwidths of light onto different cameras. However, these methods are very time consuming and expensive and perform poorly in dynamic scenes or when observing transient phenomena. An alternative strategy to capturing multispectral data is to infer this data using sparse spectral reflectance measurements captured using an imaging device with overlapping bandpass filters, such as a consumer digital camera using a Bayer filter pattern. Currently the only method of inferring dense reflectance spectra is the Wiener adaptive filter, which makes Gaussian assumptions about the data. However, these assumptions may not always hold true for all data. We propose a new technique to infer dense reflectance spectra from sparse spectral measurements through the use of a non-linear regression model. The non-linear regression model used in this technique is the random forest model, which is an ensemble of decision trees and trained via the spectral characterization of the optical imaging system and spectral data pair generation. This model is then evaluated by spectrally characterizing different patches on the Macbeth color chart, as well as by reconstructing inferred multispectral images. Results show that the proposed technique can produce inferred dense reflectance spectra that correlate well with the true dense reflectance spectra, which illustrates the merits of the technique.
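The inference step can be sketched with synthetic data. Everything here is invented for illustration: the Gaussian filter shapes standing in for overlapping camera passbands, the single-peak synthetic reflectance spectra, and a k-nearest-neighbour regressor used as a lightweight stand-in for the paper's random forest (the training setup is the same — pairs of sparse sensor responses and known dense spectra).

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                  # dense wavelength grid (nm)
# Three broad, overlapping passbands (invented stand-ins for Bayer filters).
filters = np.stack([np.exp(-0.5 * ((wl - c) / 60.0) ** 2)
                    for c in (450, 550, 650)])

def random_spectrum():
    """Synthetic smooth reflectance spectrum with one random peak."""
    c, width = rng.uniform(420, 680), rng.uniform(40, 120)
    return np.exp(-0.5 * ((wl - c) / width) ** 2)

# Training pairs: dense spectra and their three sparse sensor responses.
train_S = np.stack([random_spectrum() for _ in range(3000)])
train_X = train_S @ filters.T

def infer_dense(x, k=15):
    """Infer a dense spectrum from 3 responses via k-NN averaging."""
    idx = np.argsort(((train_X - x) ** 2).sum(1))[:k]
    return train_S[idx].mean(0)

# Evaluate on fresh spectra the model has never seen.
test_specs = [random_spectrum() for _ in range(20)]
err = float(np.mean([np.abs(infer_dense(sp @ filters.T) - sp).mean()
                     for sp in test_specs]))
```

Despite only three overlapping measurements per spectrum, the non-linear regressor recovers the 31-point dense spectra with small average error, which is the essence of the inference strategy; the paper's random forest plays the same role with a richer model.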
Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology
Poon, Art F.Y.
2015-01-01
The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this “kernel-ABC” method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. PMID:26006189
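The ABC machinery underlying the method can be sketched in its simplest rejection form. This is not the paper's setup: the "model" is a toy binomial sampler, the summaries are a mean and variance rather than tree shapes, and a plain Euclidean distance on summaries stands in for the subset-tree kernel distance; the parameter, prior, and tolerance are invented.

```python
import random
import statistics

rng = random.Random(11)

def simulate(theta, n=300):
    """Toy simulator: n Binomial(5, theta/5) draws, reduced to summaries."""
    draws = [sum(rng.random() < theta / 5.0 for _ in range(5))
             for _ in range(n)]
    return (statistics.fmean(draws), statistics.pvariance(draws))

obs = simulate(2.0)                    # pretend these are the real data

accepted = []
for _ in range(4000):
    theta = rng.uniform(0.5, 4.5)      # draw from the prior
    s = simulate(theta)                # simulate data under theta
    dist = ((s[0] - obs[0]) ** 2 + (s[1] - obs[1]) ** 2) ** 0.5
    if dist < 0.15:                    # accept if summaries are close
        accepted.append(theta)

theta_hat = statistics.fmean(accepted)  # approximate posterior mean
```

No likelihood is ever evaluated — only simulation and a distance — which is exactly why swapping in a better distance (the kernel on tree shapes) directly improves the phylodynamic estimates.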
Gene-network inference by message passing
NASA Astrophysics Data System (ADS)
Braunstein, A.; Pagnani, A.; Weigt, M.; Zecchina, R.
2008-01-01
The inference of gene-regulatory processes from gene-expression data belongs to the major challenges of computational systems biology. Here we address the problem from a statistical-physics perspective and develop a message-passing algorithm which is able to infer sparse, directed and combinatorial regulatory mechanisms. Using the replica technique, the algorithmic performance can be characterized analytically for artificially generated data. The algorithm is applied to genome-wide expression data of baker's yeast under various environmental conditions. We find clear cases of combinatorial control, and enrichment in common functional annotations of regulated genes and their regulators.
Evidence for archaic adaptive introgression in humans
Racimo, Fernando; Sankararaman, Sriram; Nielsen, Rasmus; Huerta-Sánchez, Emilia
2015-01-01
As modern and ancient DNA sequence data from diverse human populations accumulate [1–4], evidence is increasing in support of the existence of beneficial variants acquired from archaic humans that may have accelerated adaptation and improved survival in new environments, a process known as adaptive introgression (AI). Within the past couple of years, a series of studies [5–8] have identified genomic regions showing strong evidence for archaic adaptive introgression. In this Review, we provide an overview of the statistical methods developed to identify archaic introgressed fragments in the genome sequences of modern humans and to determine whether positive selection has acted on these fragments. We discuss recently reported examples of adaptive introgression and consider the level of supporting evidence for each, grouped by selection pressure. We discuss challenges and recommendations for inferring selection on introgressed regions. PMID:25963373
Statistical inference for serial dilution assay data.
Lee, M L; Whitmore, G A
1999-12-01
Serial dilution assays are widely employed for estimating substance concentrations and minimum inhibitory concentrations. The Poisson-Bernoulli model for such assays is appropriate for count data but not for continuous measurements that are encountered in applications involving substance concentrations. This paper presents practical inference methods based on a log-normal model and illustrates these methods using a case application involving bacterial toxins.
Pediatric Pain, Predictive Inference, and Sensitivity Analysis.
ERIC Educational Resources Information Center
Weiss, Robert
1994-01-01
Coping style and effects of counseling intervention on pain tolerance was studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
Investigating Mathematics Teachers' Thoughts of Statistical Inference
ERIC Educational Resources Information Center
Yang, Kai-Lin
2012-01-01
Research on statistical cognition and application suggests that statistical inference concepts are commonly misunderstood by students and even misinterpreted by researchers. Although some research has been done on students' misunderstanding or misconceptions of confidence intervals (CIs), few studies explore either students' or mathematics…
Causal Inferences in the Campbellian Validity System
ERIC Educational Resources Information Center
Lund, Thorleif
2010-01-01
The purpose of the present paper is to critically examine causal inferences and internal validity as defined by Campbell and co-workers. Several arguments are given against their counterfactual effect definition, and this effect definition should be considered inadequate for causal research in general. Moreover, their defined independence between…
Campbell's and Rubin's Perspectives on Causal Inference
ERIC Educational Resources Information Center
West, Stephen G.; Thoemmes, Felix
2010-01-01
Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on…
Double jeopardy in inferring cognitive processes.
Fific, Mario
2014-01-01
Inferences we make about underlying cognitive processes can be jeopardized in two ways due to problematic forms of aggregation. First, averaging across individuals is typically considered a very useful tool for removing random variability. The threat is that averaging across subjects leads to averaging across different cognitive strategies, thus harming our inferences. The second threat comes from the construction of inadequate research designs possessing a low diagnostic accuracy of cognitive processes. For that reason we introduced the systems factorial technology (SFT), which has primarily been designed to make inferences about underlying processing order (serial, parallel, coactive), stopping rule (terminating, exhaustive), and process dependency. SFT proposes that the minimal research design complexity to learn about n number of cognitive processes should be equal to 2^n. In addition, SFT proposes that (a) each cognitive process should be controlled by a separate experimental factor, and (b) the saliency levels of all factors should be combined in a full factorial design. In the current study, the author cross-combined the levels of jeopardies in a 2 × 2 analysis, leading to four different analysis conditions. The results indicate a decline in the diagnostic accuracy of inferences made about cognitive processes due to the presence of each jeopardy in isolation and when combined. The results warrant the development of more individual subject analyses and the utilization of full-factorial (SFT) experimental designs.
Preschoolers Infer Ownership from "Control of Permission"
ERIC Educational Resources Information Center
Neary, Karen R.; Friedman, Ori; Burnstein, Corinna L.
2009-01-01
Owners control permission--they forbid and permit others to use their property. So it is reasonable to assume that someone controlling permission over an object is its owner. The authors tested whether preschoolers infer ownership in this way. In the first experiment, 4- and 5-year-olds, but not 3-year-olds, chose as owner of an object a character…
Tactile length contraction as Bayesian inference.
Tong, Jonathan; Ngo, Vy; Goldreich, Daniel
2016-08-01
To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process.
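The low-velocity-prior account above has a simple closed form under Gaussian assumptions: the perceived length is the posterior mean of the true separation given a noisy measurement and a zero-mean prior whose width grows with the inter-tap time. The parameter values below are illustrative assumptions, not the study's fitted values.

```python
# Toy Bayesian observer for tactile length contraction (assumed parameters).

def perceived_length(measured, t, sigma_m, v_prior=10.0):
    """Posterior mean under measured ~ N(true, sigma_m^2) and a zero-mean
    Gaussian prior on length with SD v_prior * t (slow-motion expectation)."""
    prior_var = (v_prior * t) ** 2
    shrink = prior_var / (prior_var + sigma_m ** 2)   # shrinkage toward zero
    return shrink * measured

# Shorter inter-tap time -> tighter prior -> stronger contraction
long_gap = perceived_length(10.0, t=1.0, sigma_m=2.0)
short_gap = perceived_length(10.0, t=0.1, sigma_m=2.0)

# Weaker taps (noisier localization) -> stronger contraction, as the study found
weak_taps = perceived_length(10.0, t=0.1, sigma_m=4.0)
```

The two comparisons mirror the paper's findings: contraction intensifies both when the temporal separation shrinks and when the taps are harder to localize.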
John Updike and Norman Mailer: Sport Inferences.
ERIC Educational Resources Information Center
Upshaw, Kathryn Jane
The phenomenon of writer use of sport inferences in the literary genre of the novel is examined in the works of Updike and Mailer. Novels of both authors were reviewed in order to study the pattern of usage in each novel. From these patterns, concepts which illustrated the sport philosophies of each author were used for general comparisons of the…
Model averaging, optimal inference, and habit formation
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl J.
2014-01-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function—the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge—that of determining which model or models of their environment are the best for guiding behavior. Bayesian model averaging—which says that an agent should weight the predictions of different models according to their evidence—provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behavior should show an equivalent balance. We hypothesize that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realizable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behavior. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focusing particularly upon the relationship between goal-directed and habitual behavior. PMID:25018724
Quasi-Experimental Designs for Causal Inference
ERIC Educational Resources Information Center
Kim, Yongnam; Steiner, Peter
2016-01-01
When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…
What Children Infer from Social Categories
ERIC Educational Resources Information Center
Diesendruck, Gil; Eldror, Ehud
2011-01-01
Children hold the belief that social categories have essences. We investigated what kinds of properties children feel licensed to infer about a person based on social category membership. Seventy-two 4-6-year-olds were introduced to novel social categories defined as having one internal--psychological or biological--and one external--behavioral or…
Decision generation tools and Bayesian inference
NASA Astrophysics Data System (ADS)
Jannson, Tomasz; Wang, Wenjian; Forrester, Thomas; Kostrzewski, Andrew; Veeris, Christian; Nielsen, Thomas
2014-05-01
Digital Decision Generation (DDG) tools are important software subsystems of Command and Control (C2) systems and technologies. In this paper, we present a special type of DDG based on Bayesian inference, related to adverse (hostile) networks, with important applications including terrorism-related and organized-crime networks.
Interest, Inferences, and Learning from Texts
ERIC Educational Resources Information Center
Clinton, Virginia; van den Broek, Paul
2012-01-01
Topic interest and learning from texts have been found to be positively associated with each other. However, the reason for this positive association is not well understood. The purpose of this study is to examine a cognitive process, inference generation, that could explain the positive association between interest and learning from texts. In…
Linguistic Markers of Inference Generation While Reading
ERIC Educational Resources Information Center
Clinton, Virginia; Carlson, Sarah E.; Seipel, Ben
2016-01-01
Words can be informative linguistic markers of psychological constructs. The purpose of this study is to examine associations between word use and the process of making meaningful connections to a text while reading (i.e., inference generation). To achieve this purpose, think-aloud data from third-fifth grade students (N = 218) reading narrative…
Permutation inference for the general linear model
Winkler, Anderson M.; Ridgway, Gerard R.; Webster, Matthew A.; Smith, Stephen M.; Nichols, Thomas E.
2014-01-01
Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMs) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm – the “randomise” algorithm – for permutation inference with the GLM. PMID:24530839
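The core permutation principle in the entry above can be shown in its simplest special case, a two-sample comparison (which is one instance of the general-linear-model setting the paper covers): recompute the statistic under random relabelings to obtain a null distribution without Gaussian assumptions. The simulated data are illustrative.

```python
# Permutation test for a difference in group means (a special case of the GLM).
import numpy as np

rng = np.random.default_rng(2)
group_a = rng.normal(0.0, 1.0, 30)
group_b = rng.normal(1.0, 1.0, 30)       # true shift of 1.0
pooled = np.concatenate([group_a, group_b])

observed = group_b.mean() - group_a.mean()

n_perm = 5000
null = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(pooled)       # labels are exchangeable under the null
    null[i] = perm[30:].mean() - perm[:30].mean()

# one-sided p-value, with +1 in numerator and denominator so p is never zero
p = (1 + np.sum(null >= observed)) / (n_perm + 1)
```

The +1 correction counts the observed statistic as one of the permutations, which is what makes the resulting p-value valid (never anti-conservative) at any number of permutations.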
A framework for constructing adaptive and reconfigurable systems
Poirot, Pierre-Etienne; Nogiec, Jerzy; Ren, Shangping; /IIT, Chicago
2007-05-01
This paper presents a software approach to augmenting existing real-time systems with self-adaptation capabilities. In this approach, based on the control loop paradigm commonly used in industrial control, self-adaptation is decomposed into observing system events, inferring necessary changes based on a system's functional model, and activating appropriate adaptation procedures. The solution adopts an architectural decomposition that emphasizes independence and separation of concerns. It encapsulates observation, modeling and correction into separate modules to allow for easier customization of the adaptive behavior and flexibility in selecting implementation technologies.
Reliability of inferred age, and coincidence between inferred age and chronological age.
Kataoka, J; Ohara, S; Shibata, S; Maie, K
1996-06-01
Outdoor research is restricted by many factors, and age inference has been one of the biggest problems for outdoor researchers. We investigated the reliability of inferred age for Japanese people and derived an estimation formula for age, even when it was based on inferred age. Age classification is the most popular method for this purpose, and many classifications exist. We adopted a classification into young, middle-aged, and elderly groups, for which the SDs were rather small: 4, 5, and 7 years, respectively. PMID:9551138
Computational statistics using the Bayesian Inference Engine
NASA Astrophysics Data System (ADS)
Weinberg, Martin D.
2013-09-01
This paper introduces the Bayesian Inference Engine (BIE), a general parallel, optimized software package for parameter inference and model selection. This package is motivated by the analysis needs of modern astronomical surveys and the need to organize and reuse expensive derived data. The BIE is the first platform for computational statistics designed explicitly to enable Bayesian update and model comparison for astronomical problems. Bayesian update is based on the representation of high-dimensional posterior distributions using metric-ball-tree based kernel density estimation. Among its algorithmic offerings, the BIE emphasizes hybrid tempered Markov chain Monte Carlo schemes that robustly sample multimodal posterior distributions in high-dimensional parameter spaces. Moreover, the BIE implements a full persistence or serialization system that stores the full byte-level image of the running inference and previously characterized posterior distributions for later use. Two new algorithms to compute the marginal likelihood from the posterior distribution, developed for and implemented in the BIE, enable model comparison for complex models and data sets. Finally, the BIE was designed to be a collaborative platform for applying Bayesian methodology to astronomy. It includes an extensible, object-oriented framework that implements every aspect of Bayesian inference. By providing a variety of statistical algorithms for all phases of the inference problem, a scientist may explore a variety of approaches with a single model and data implementation. Additional technical and download details are available from http://www.astro.umass.edu/bie. The BIE is distributed under the GNU General Public License.
A neural-fuzzy model with confidence measure for controlled stressed-lap surface shape presentation
NASA Astrophysics Data System (ADS)
Chen, Minyou; Wan, Yongjian; Wu, Fan; Xie, Kaigui; Wang, Mingyu; Fan, Bin
2009-05-01
In the computer-controlled polishing of large aspheric mirrors, it is crucially important to build an accurate stressed-lap surface model for shape control, and it is desirable to provide a practical measure of prediction confidence to assess the reliability of the resulting models. To build a reliable prediction model representing the surface shape of the stressed lap in polishing large-aperture, highly aspheric optical surfaces, this paper proposes a predictive model with its own confidence interval estimate based on a fuzzy neural network. The calculation of the confidence interval accounts for the training data distribution and the accuracy of the trained model on the given input-output data. Simulation results show that the proposed confidence interval estimation reflects the data distribution and extrapolation correctly, and works well on the high-dimensional sparse data set of detected stressed-lap surface shape changes. The original data from the micro-displacement sensor matrix were used to train the neural network model. The experimental results showed that the proposed model can represent the surface shape of the stressed lap accurately and facilitate the computer-controlled optical polishing process.
MIDER: network inference with mutual information distance and entropy reduction.
Villaverde, Alejandro F; Ross, John; Morán, Federico; Banga, Julio R
2014-01-01
The prediction of links among variables from a given dataset is a task referred to as network inference or reverse engineering. It is an open problem in bioinformatics and systems biology, as well as in other areas of science. Information theory, which uses concepts such as mutual information, provides a rigorous framework for addressing it. While a number of information-theoretic methods are already available, most of them focus on a particular type of problem, introducing assumptions that limit their generality. Furthermore, many of these methods lack a publicly available implementation. Here we present MIDER, a method for inferring network structures with information theoretic concepts. It consists of two steps: first, it provides a representation of the network in which the distance among nodes indicates their statistical closeness. Second, it refines the prediction of the existing links to distinguish between direct and indirect interactions and to assign directionality. The method accepts as input time-series data related to some quantitative features of the network nodes (e.g. concentrations, if the nodes are chemical species). It takes into account time delays between variables, and allows choosing among several definitions and normalizations of mutual information. It is general purpose: it may be applied to any type of network, cellular or otherwise. A Matlab implementation including source code and data is freely available (http://www.iim.csic.es/~gingproc/mider.html). The performance of MIDER has been evaluated on seven different benchmark problems that cover the main types of cellular networks, including metabolic, gene regulatory, and signaling. Comparisons with state of the art information-theoretic methods have demonstrated the competitive performance of MIDER, as well as its versatility. Its use does not demand any a priori knowledge from the user; the default settings and the adaptive nature of the method provide good results for a wide range of problems.
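The basic building block of such information-theoretic inference, pairwise mutual information between time series, can be estimated with a simple histogram. The coupled toy series and the bin count below are assumptions for illustration, not MIDER's implementation.

```python
# Histogram estimate of mutual information between two series (toy data).
import numpy as np

def mutual_information(x, y, bins=8):
    """MI in nats from a 2-D histogram estimate of the joint distribution."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                     # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)           # marginal of x
    py = pxy.sum(axis=0, keepdims=True)           # marginal of y
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
coupled = x + 0.3 * rng.normal(size=2000)        # statistically linked to x
independent = rng.normal(size=2000)              # no link to x

mi_linked = mutual_information(x, coupled)
mi_indep = mutual_information(x, independent)
# a network-inference method keeps the high-MI link and discards the low one
```

Unlike correlation, this estimate also captures non-linear dependencies, which is the main reason information-theoretic methods are favored for reverse-engineering cellular networks.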
Dopamine, reward learning, and active inference
FitzGerald, Thomas H. B.; Dolan, Raymond J.; Friston, Karl
2015-01-01
Temporal difference learning models propose phasic dopamine signaling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on an hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behavior. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings. PMID:26581305
Constructing inferences during narrative text comprehension.
Graesser, A C; Singer, M; Trabasso, T
1994-07-01
The authors describe a constructionist theory that accounts for the knowledge-based inferences that are constructed when readers comprehend narrative text. Readers potentially generate a rich variety of inferences when they construct a referential situation model of what the text is about. The proposed constructionist theory specifies that some, but not all, of this information is constructed under most conditions of comprehension. The distinctive assumptions of the constructionist theory embrace a principle of search (or effort) after meaning. According to this principle, readers attempt to construct a meaning representation that addresses the reader's goals, that is coherent at both local and global levels, and that explains why actions, events, and states are mentioned in the text. This study reviews empirical evidence that addresses this theory and contrasts it with alternative theoretical frameworks. PMID:7938337
An emergent approach to analogical inference
NASA Astrophysics Data System (ADS)
Thibodeau, Paul H.; Flusberg, Stephen J.; Glick, Jeremy J.; Sternberg, Daniel A.
2013-03-01
In recent years, a growing number of researchers have proposed that analogy is a core component of human cognition. According to the dominant theoretical viewpoint, analogical reasoning requires a specific suite of cognitive machinery, including explicitly coded symbolic representations and a mapping or binding mechanism that operates over these representations. Here we offer an alternative approach: we find that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery. The results also parallel findings from the developmental literature on analogy, demonstrating a shift from an initial reliance on surface feature similarity to the use of relational similarity later in training. Variants of the model allow us to consider and rule out alternative accounts of its performance. We conclude by discussing how these findings can potentially refine our understanding of the processes that are required to perform analogical inference.
Inferring network topology via the propagation process
NASA Astrophysics Data System (ADS)
Zeng, An
2013-11-01
Inferring the network topology from the dynamics is a fundamental problem, with wide applications in geology, biology, and even counter-terrorism. Based on the propagation process, we present a simple method to uncover the network topology. A numerical simulation on artificial networks shows that our method enjoys high accuracy in inferring the network topology. We find that the infection rate in the propagation process significantly influences the accuracy, and that each network corresponds to an optimal infection rate. Moreover, the method generally works better in large networks. These findings are confirmed in both real social and nonsocial networks. Finally, the method is extended to directed networks, and a similarity measure specific to directed networks is designed.
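The pair-scoring idea can be sketched with a toy simulation; this is a generic illustration of propagation-based topology inference, not the authors' algorithm (the graph, cascade model, and scoring rule are all invented here):

```python
import itertools
import random

random.seed(0)

# Toy illustration (not the authors' algorithm): recover a ring graph from
# independent-cascade spreading by scoring node pairs on how often they are
# infected in consecutive time steps.
N = 10
edges = {frozenset((i, (i + 1) % N)) for i in range(N)}  # true ring topology
beta = 0.5                                               # infection rate

def cascade(source):
    """One independent cascade; returns each reached node's infection time."""
    t_inf = {source: 0}
    frontier, t = {source}, 0
    while frontier:
        t += 1
        new = set()
        for u in frontier:
            for v in range(N):
                if v not in t_inf and frozenset((u, v)) in edges \
                        and random.random() < beta:
                    t_inf[v] = t
                    new.add(v)
        frontier = new
    return t_inf

score = {frozenset(p): 0 for p in itertools.combinations(range(N), 2)}
for _ in range(2000):
    t_inf = cascade(random.randrange(N))
    for u, v in itertools.combinations(t_inf, 2):
        if abs(t_inf[u] - t_inf[v]) == 1:        # infected in adjacent steps
            score[frozenset((u, v))] += 1

# Predict the |edges| highest-scoring pairs as the inferred topology.
ranked = sorted(score, key=score.get, reverse=True)
inferred = set(ranked[:len(edges)])
accuracy = len(inferred & edges) / len(edges)
print(f"edge recovery accuracy: {accuracy:.2f}")
```

Varying `beta` in this sketch reproduces the paper's qualitative point that accuracy depends on the infection rate.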
An Intuitive Dashboard for Bayesian Network Inference
NASA Astrophysics Data System (ADS)
Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.
2014-03-01
Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on cause-and-effect relationships, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
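What "performing an inference" over such a network means can be shown with a three-node toy example solved by plain enumeration (all probabilities are invented; this does not use the Qt/SMILE tool described above):

```python
# Hypothetical three-node chain Cloudy -> Rain -> WetGrass; all numbers
# are invented for illustration.
p_cloudy = {True: 0.5, False: 0.5}
p_rain = {True: {True: 0.8, False: 0.2},   # P(rain | cloudy)
          False: {True: 0.1, False: 0.9}}
p_wet = {True: {True: 0.9, False: 0.1},    # P(wet grass | rain)
         False: {True: 0.2, False: 0.8}}

def joint(c, r, w):
    """Full joint probability of one assignment of the three variables."""
    return p_cloudy[c] * p_rain[c][r] * p_wet[r][w]

# Query P(Rain | WetGrass = True) by summing out the unobserved Cloudy node.
num = sum(joint(c, True, True) for c in (True, False))
den = sum(joint(c, r, True) for c in (True, False) for r in (True, False))
posterior = num / den
print(f"P(rain | wet grass) = {posterior:.3f}")   # 0.786
```

A dashboard of the kind the paper describes hides exactly this arithmetic behind cause-and-effect controls.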
The NIFTY way of Bayesian signal inference
Selig, Marco
2014-12-05
We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently of any underlying spatial grid and its resolution. A large number of Bayesian and maximum entropy methods for 1D signal reconstruction, 2D imaging, and 3D tomography appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms prototyped in 1D can be applied to real-world problems in higher-dimensional settings. As a versatile library, NIFTY is applicable to, and has already been applied in, 1D, 2D, 3D, and spherical settings. A recent application is the D³PO algorithm, which targets the non-trivial task of denoising, deconvolving, and decomposing photon observations in high-energy astronomy.
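The simplest estimator in the family NIFTY implements is a Wiener filter, m = (S⁻¹ + N⁻¹)⁻¹ N⁻¹ d; a minimal plain-numpy sketch with scalar covariances, not using NIFTY itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Wiener-filter sketch (the simplest estimator of the information
# field theory family): scalar signal/noise covariances, plain numpy.
n = 200
s_var, n_var = 4.0, 1.0                     # signal and noise variances
signal = rng.normal(0.0, np.sqrt(s_var), n)
data = signal + rng.normal(0.0, np.sqrt(n_var), n)

# Posterior mean m = (S^-1 + N^-1)^-1 N^-1 d; for scalar covariances this
# shrinks the data toward zero by the factor S / (S + N).
m = data * s_var / (s_var + n_var)

err_raw = np.mean((data - signal) ** 2)
err_wf = np.mean((m - signal) ** 2)
print(f"raw MSE {err_raw:.2f}  vs  Wiener-filtered MSE {err_wf:.2f}")
```

NIFTY's contribution is letting the same abstract expression run unchanged on 1D, 2D, 3D, or spherical grids instead of the scalar case hard-coded here.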
The empirical accuracy of uncertain inference models
NASA Technical Reports Server (NTRS)
Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.
1987-01-01
Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well-known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.
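The "simple independence probability model" used as a baseline is essentially naive-Bayes evidence combination; a sketch with invented numbers:

```python
# Minimal independence ("naive Bayes") probability model of the kind used as
# a baseline in the study; all numbers are invented for illustration.
prior_h = 0.3                   # P(H)
likelihood = {                  # evidence -> (P(e | H), P(e | not H))
    "e1": (0.9, 0.2),
    "e2": (0.7, 0.4),
    "e3": (0.6, 0.5),
}

def posterior(observed):
    """P(H | observed evidence), assuming conditionally independent evidence."""
    odds = prior_h / (1 - prior_h)
    for e in observed:
        p_h, p_not_h = likelihood[e]
        odds *= p_h / p_not_h   # multiply in each likelihood ratio
    return odds / (1 + odds)

p = posterior(["e1", "e2"])
print(f"P(H | e1, e2) = {p:.3f}")   # 0.771
```

Despite its simplicity, the odds-product update is probabilistically coherent, which is what made it a hard baseline for ad hoc certainty-factor schemes to beat.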
Inference for current leukemia free survival
Liu, Leiyan; Logan, Brent
2009-01-01
Donor lymphocyte infusion (DLI) for patients who relapse following an allogeneic stem cell transplant has proved remarkably durable. Because of the potential for second remissions with DLI, the current leukemia free survival (CLFS), which is the probability that a patient has not failed the entire course of the treatment, is becoming of interest to clinical investigators. Based on either a multistate Markov model or a linear combination of Kaplan–Meier estimators, we explore regression models for the CLFS. We focus on the two sample problem and we develop confidence bands for the CLFS or for differences in CLFS as well as a Kolmogorov type hypothesis test using a re-sampling technique. We also examine the use of pseudo-values to make inference on the direct effects of covariates on the CLFS function and we develop a score test for the equality of two CLFS. We illustrate these inference methods on a bone marrow transplant dataset. PMID:18663574
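The multistate view of CLFS can be illustrated by simulation: a patient is "currently leukemia free" at time t if in first or second remission. A Monte Carlo sketch with invented transition rates (not the paper's estimators):

```python
import random

random.seed(2)

# Simplified multistate view of current leukemia-free survival (CLFS): a
# patient counts as "currently leukemia free" at time t if in first or
# second remission. All transition rates below are invented.
def simulate_path(horizon=10.0):
    """Return (time, leukemia_free) change points for one simulated patient."""
    events = [(0.0, True)]                   # start in first remission
    t_relapse = random.expovariate(0.3)
    if t_relapse < horizon:
        events.append((t_relapse, False))    # relapse: no longer leukemia free
        t_second = t_relapse + random.expovariate(0.5)
        if t_second < horizon and random.random() < 0.4:
            events.append((t_second, True))  # second remission after DLI
    return events

def clfs(t, n=20000):
    """Monte Carlo estimate of P(currently leukemia free at time t)."""
    hits = 0
    for _ in range(n):
        state = False
        for when, free in simulate_path():
            if when <= t:
                state = free                 # last change point before t wins
        hits += state
    return hits / n

clfs_2 = clfs(2.0)
print(f"CLFS(2.0) ~= {clfs_2:.3f}")
```

The paper replaces this brute-force simulation with multistate Markov or Kaplan-Meier-combination estimators computed from censored data, plus confidence bands and tests.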
Scalable Probabilistic Inference for Global Seismic Monitoring
NASA Astrophysics Data System (ADS)
Arora, N. S.; Dear, T.; Russell, S.
2011-12-01
We describe a probabilistic generative model for seismic events, their transmission through the earth, and their detection (or mis-detection) at seismic stations. We also describe an inference algorithm that constructs the most probable event bulletin explaining the observed set of detections. The model and inference are called NET-VISA (network processing vertically integrated seismic analysis) and is designed to replace the current automated network processing at the IDC, the SEL3 bulletin. Our results (attached table) demonstrate that NET-VISA significantly outperforms SEL3 by reducing the missed events from 30.3% down to 12.5%. The difference is even more dramatic for smaller magnitude events. NET-VISA has no difficulty in locating nuclear explosions as well. The attached figure demonstrates the location predicted by NET-VISA versus other bulletins for the second DPRK event. Further evaluation on dense regional networks demonstrates that NET-VISA finds many events missed in the LEB bulletin, which is produced by the human analysts. Large aftershock sequences, as produced by the 2004 December Sumatra earthquake and the 2011 March Tohoku earthquake, can pose a significant load for automated processing, often delaying the IDC bulletins by weeks or months. Indeed these sequences can overload the serial NET-VISA inference as well. We describe an enhancement to NET-VISA to make it multi-threaded, and hence take full advantage of the processing power of multi-core and -cpu machines. Our experiments show that the new inference algorithm is able to achieve 80% efficiency in parallel speedup.
Impacts of Terraces on Phylogenetic Inference.
Sanderson, Michael J; McMahon, Michelle M; Stamatakis, Alexandros; Zwickl, Derrick J; Steel, Mike
2015-09-01
Terraces are sets of trees with precisely the same likelihood or parsimony score, which can be induced by missing sequences in partitioned multi-locus phylogenetic data matrices. The potentially large set of trees on a terrace can be characterized by enumeration algorithms or consensus methods that exploit the pattern of partial taxon coverage in the data, independent of the sequence data themselves. Terraces can add ambiguity and complexity to phylogenetic inference, particularly in settings where inference is already challenging: data sets with many taxa and relatively few loci. In this article we present five new findings about terraces and their impacts on phylogenetic inference. First, we clarify assumptions about partitioning scheme model parameters that are necessary for the existence of terraces. Second, we explore the dependence of terrace size on partitioning scheme and indicate how to find the partitioning scheme associated with the largest terrace containing a given tree. Third, we highlight the impact of terrace size on bootstrap estimates of confidence limits in clades, and characterize the surprising result that the bootstrap proportion for a clade, as it is usually calculated, can be entirely determined by the frequency of bipartitions on a terrace, with some bipartitions receiving high support even when incorrect. Fourth, we dissect some effects of prior distributions of edge lengths on the computed posterior probabilities of clades on terraces, to understand an example in which long edges "attract" each other in Bayesian inference. Fifth, we describe how assuming relationships between edge-lengths of different loci, as an attempt to avoid terraces, can also be problematic when taxon coverage is partial, specifically when heterotachy is present. Finally, we discuss strategies for remediation of some of these problems. One promising approach finds a minimal set of taxa which, when deleted from the data matrix, reduces the size of a terrace to a
Nonparametric causal inference for bivariate time series.
McCracken, James M; Weigel, Robert S
2016-02-01
We introduce new quantities for exploratory causal inference between bivariate time series. The quantities, called penchants and leanings, are computationally straightforward to apply, follow directly from assumptions of probabilistic causality, do not depend on any assumed models for the time series generating process, and do not rely on any embedding procedures; these features may provide a clearer interpretation of the results than those from existing time series causality tools. The penchant and leaning are computed based on a structured method for computing probabilities.
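A penchant of the form P(E|C) - P(E|not C) can be computed directly from counts; in this sketch the series are binarized with a one-step lag, which are simplifications chosen here rather than prescribed by the paper:

```python
import random

random.seed(3)

# Causal penchant rho = P(E|C) - P(E|not C), computed for binarized series
# with a one-step lag (the binarization and lag choice are simplifications
# made here, not prescribed by the paper).
n = 5000
x = [random.randint(0, 1) for _ in range(n)]
# y copies the previous x 80% of the time, so x should "cause" y.
y = [0] + [xi if random.random() < 0.8 else random.randint(0, 1)
           for xi in x[:-1]]

def penchant(cause, effect):
    pairs = list(zip(cause[:-1], effect[1:]))     # (cause_{t-1}, effect_t)
    c_total = sum(1 for c, _ in pairs if c)
    e_given_c = sum(1 for c, e in pairs if c and e) / c_total
    e_given_not_c = (sum(1 for c, e in pairs if not c and e)
                     / (len(pairs) - c_total))
    return e_given_c - e_given_not_c

p_xy = penchant(x, y)
p_yx = penchant(y, x)
print(f"penchant(x -> y) = {p_xy:+.2f}")   # strongly positive
print(f"penchant(y -> x) = {p_yx:+.2f}")   # near zero
```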
Inference---A Python Package for Astrostatistics
NASA Astrophysics Data System (ADS)
Loredo, T. J.; Connors, A.; Oliphant, T. E.
2004-08-01
Python is an object-oriented ``very high level language'' that is easy to learn, actively supported, and freely available for a large variety of computing platforms. It possesses sophisticated scientific computing capabilities thanks to ongoing work by a community of scientists and engineers who maintain a suite of open source scientific packages. Key contributions come from the STScI group maintaining PyRAF, a Python environment for running IRAF tasks. Python's main scientific computing packages are the Numeric and numarray packages implementing efficient array and image processing, and the SciPy package implementing a wide variety of general-use algorithms including optimization, root finding, special functions, numerical integration, and basic statistical tasks. We describe the Inference package, a collection of tools for carrying out advanced astrostatistical analyses that is about to be released as a supplement to SciPy. The Inference package has two main parts. First is a Parametric Inference Engine that offers a unified environment for analysis of parametric models with a variety of methods, including minimum χ2, maximum likelihood, and Bayesian methods. Several common analysis tasks are available with simple syntax (e.g., optimization, multidimensional exploration and integration, simulation); its parameter syntax is reminiscent of that of SHERPA. Second, the package includes a growing library of diverse, specialized astrostatistical methods in a variety of domains including time series, spectrum and survey analysis, and basic image analysis. Where possible, a variety of methods are available for a given problem, enabling users to explore alternative methods in a unified environment, with the guidance of significant documentation. The Inference project is supported by NASA AISRP grant NAG5-12082.
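The kind of task the Parametric Inference Engine automates (here, a minimum-χ2 straight-line fit with known error bars) reduces to weighted linear least squares; a plain-numpy sketch that does not use the Inference package's actual API:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimum-chi^2 fit of a straight line y = a + b*x with known error bars,
# done by hand in plain numpy (not the Inference package's API).
x = np.linspace(0.0, 10.0, 25)
sigma = 0.5                                   # known measurement error
y = 2.0 + 0.7 * x + rng.normal(0.0, sigma, x.size)

# chi^2 is quadratic in (a, b), so its minimum solves a weighted linear
# least-squares problem.
A = np.column_stack([np.ones_like(x), x]) / sigma
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y / sigma, rcond=None)

chi2 = np.sum(((y - (a_hat + b_hat * x)) / sigma) ** 2)
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, chi2/dof = {chi2 / (x.size - 2):.2f}")
```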
Thermodynamics of statistical inference by cells.
Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj
2014-10-01
The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.
Are adaptation costs necessary to build up a local adaptation pattern?
Magalhães, Sara; Blanchet, Elodie; Egas, Martijn; Olivieri, Isabelle
2009-01-01
Background Ecological specialization is pervasive in phytophagous arthropods. In this mode of specialization, limits to host range are imposed by trade-offs preventing adaptation to several hosts. The occurrence of such trade-offs is inferred from a pattern of local adaptation, i.e., a negative correlation between relative performance on different hosts. Results To establish a causal link between local adaptation and trade-offs, we performed experimental evolution of spider mites on cucumber, tomato, and pepper, starting from a population adapted to cucumber. Spider mites adapted to each novel host within 15 generations, and no further evolution was observed at generation 25. A pattern of local adaptation was found, as lines evolving on a novel host performed better on that host than lines evolving on other hosts. However, costs of adaptation were absent. Indeed, lines adapted to tomato had similar or higher performance on pepper than lines evolving on the ancestral host (which represent the initial performance of all lines), and the converse was also true; that is, negatively correlated responses were not observed on the alternative novel host. Moreover, adapting to novel hosts did not result in decreased performance on the ancestral host. Adaptation did not modify host ranking, as all lines performed best on the ancestral host. Furthermore, mites from all lines preferred the ancestral to the novel hosts. Mate choice experiments indicated that crosses between individuals from the same or from a different selection regime were equally likely; hence, the development of reproductive isolation among lines adapted to different hosts is unlikely. Conclusion Therefore, performance and preference are not expected to impose limits to host range in our study species. Our results show that the evolution of a local adaptation pattern is not necessarily associated with the evolution of an adaptation cost. PMID:19650899
Is There a Free Lunch in Inference?
Rouder, Jeffrey N; Morey, Richard D; Verhagen, Josine; Province, Jordan M; Wagenmakers, Eric-Jan
2016-07-01
The field of psychology, including cognitive science, is vexed by a crisis of confidence. Although the causes and solutions are varied, we focus here on a common logical problem in inference. The default mode of inference is significance testing, which has a free lunch property where researchers need not make detailed assumptions about the alternative to test the null hypothesis. We present the argument that there is no free lunch; that is, valid testing requires that researchers test the null against a well-specified alternative. We show how this requirement follows from the basic tenets of conventional and Bayesian probability. Moreover, we show in both the conventional and Bayesian framework that not specifying the alternative may lead to rejections of the null hypothesis with scant evidence. We review both frequentist and Bayesian approaches to specifying alternatives, and we show how such specifications improve inference. The field of cognitive science will benefit because consideration of reasonable alternatives will undoubtedly sharpen the intellectual underpinnings of research. PMID:27489199
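The point about specifying the alternative can be made concrete with a toy Bayes factor: testing a fair coin against a fully specified uniform alternative (data invented):

```python
from math import comb

# Toy version of testing a null against a *specified* alternative: is a coin
# fair (theta = 1/2), or is theta uniform on (0, 1)? The Bayes factor exists
# only because the alternative is spelled out. Data are invented.
n, k = 20, 14                        # 14 heads in 20 flips

p_null = comb(n, k) * 0.5 ** n       # P(data | theta = 1/2)
# Marginal likelihood under the uniform alternative:
# integral of C(n,k) * theta^k * (1-theta)^(n-k) d(theta) = 1 / (n + 1).
p_alt = 1.0 / (n + 1)

bf01 = p_null / p_alt
print(f"Bayes factor (null vs. alternative) = {bf01:.2f}")   # 0.78
```

Note that a two-sided significance test would reject the null here at p < .06 territory, while the Bayes factor, which must average the likelihood over the stated alternative, reports nearly even evidence, which is the "scant evidence" phenomenon the abstract describes.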
Inferred Lunar Boulder Distributions at Decimeter Scales
NASA Technical Reports Server (NTRS)
Baloga, S. M.; Glaze, L. S.; Spudis, P. D.
2012-01-01
Block size distributions of impact deposits on the Moon are diagnostic of the impact process and environmental effects, such as target lithology and weathering. Block size distributions are also important factors in trafficability, habitability, and possibly the identification of indigenous resources. Lunar block sizes have been investigated for many years for many purposes [e.g., 1-3]. An unresolved issue is the extent to which lunar block size distributions can be extrapolated to scales smaller than the limits of resolution of direct measurement. This would seem to be a straightforward statistical application, but it is complicated by two issues. First, the cumulative size frequency distribution of observable boulders rolls over due to resolution limitations at the small end. Second, statistical regression provides the best fit only around the centroid of the data [4]. Confidence and prediction limits splay away from the best fit at the endpoints, resulting in inferences of the boulder density at the CPR scale that can differ by many orders of magnitude [4]. These issues were originally investigated by Cintala and McBride [2] using Surveyor data. The objective of this study was to determine whether the measured block size distributions from Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC-NAC) images (m-scale resolution) can be used to infer the block size distribution at length scales comparable to Mini-RF Circular Polarization Ratio (CPR) scales, nominally taken as 10 cm. This would set the stage for assessing correlations of inferred block size distributions with CPR returns [6].
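The statistical issue (prediction limits splaying away from the data centroid) can be reproduced with synthetic data: fit log N = a + b log D at meter scales, then evaluate the 95% prediction interval at the 10 cm query point:

```python
import math
import random

random.seed(5)

# Synthetic demonstration of the statistical point above: regress
# log10(N) = a + b*log10(D) for boulders measured at meter scales, then
# extrapolate to 10 cm. The prediction interval widens with the distance
# of the query point from the centroid of the measured sizes.
xs = [math.log10(1.0 + 0.5 * i) for i in range(9)]          # D = 1.0 .. 5.0 m
ys = [2.0 - 2.5 * x + random.gauss(0.0, 0.05) for x in xs]  # true slope -2.5

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
a = ybar - b * xbar
s = math.sqrt(sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys)) / (n - 2))
t975 = 2.365                                   # Student-t quantile, df = 7

def pi_halfwidth(x0):
    """95% prediction-interval half-width at query point x0."""
    return t975 * s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)

x_cm = math.log10(0.1)                         # the 10 cm query point
print(f"slope = {b:.2f}")
print(f"PI half-width at centroid:      {pi_halfwidth(xbar):.2f} dex")
print(f"PI half-width at 10 cm (extr.): {pi_halfwidth(x_cm):.2f} dex")
```

A half-width in dex multiplies out to a factor in boulder density, which is how order-of-magnitude disagreements arise at the CPR scale.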
Variational Inference for Watson Mixture Model.
Taghia, Jalil; Leijon, Arne
2016-09-01
This paper addresses modeling data using the Watson distribution. The Watson distribution is one of the simplest distributions for analyzing axially symmetric data. This distribution has gained some attention in recent years due to its modeling capability. However, its Bayesian inference is fairly understudied due to the difficulty of handling the normalization factor. Recent developments in Markov chain Monte Carlo (MCMC) sampling methods can be applied for this purpose. However, these methods can be prohibitively slow for practical applications. A deterministic alternative is provided by variational methods that convert inference problems into optimization problems. In this paper, we present a variational inference scheme for Watson mixture models. First, the variational framework is used to side-step the intractability arising from the coupling of latent states and parameters. Second, the variational free energy is further lower bounded in order to avoid intractable moment computation. The proposed approach provides a lower bound on the log marginal likelihood and retains distributional information over all parameters. Moreover, we show that it can regulate its own complexity by pruning unnecessary mixture components while avoiding over-fitting. We discuss potential applications of modeling with Watson distributions in blind source separation and in clustering gene expression data sets. PMID:26571512
Combinatorics of distance-based tree inference.
Pardi, Fabio; Gascuel, Olivier
2012-10-01
Several popular methods for phylogenetic inference (or hierarchical clustering) are based on a matrix of pairwise distances between taxa (or any kind of objects): The objective is to construct a tree with branch lengths so that the distances between the leaves in that tree are as close as possible to the input distances. If we hold the structure (topology) of the tree fixed, in some relevant cases (e.g., ordinary least squares) the optimal values for the branch lengths can be expressed using simple combinatorial formulae. Here we define a general form for these formulae and show that they all have two desirable properties: First, the common tree reconstruction approaches (least squares, minimum evolution), when used in combination with these formulae, are guaranteed to infer the correct tree when given enough data (consistency); second, the branch lengths of all the simple (nearest neighbor interchange) rearrangements of a tree can be calculated, optimally, in quadratic time in the size of the tree, thus allowing the efficient application of hill climbing heuristics. The study presented here is a continuation of that by Mihaescu and Pachter on branch length estimation [Mihaescu R, Pachter L (2008) Proc Natl Acad Sci USA 105:13206-13211]. The focus here is on the inference of the tree itself and on providing a basis for novel algorithms to reconstruct trees from distances.
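For a fixed topology, the OLS branch lengths solve a small linear system; a quartet sketch (the paper's closed-form combinatorial formulae compute the same solution without forming the system):

```python
import numpy as np

# For the fixed quartet topology ((a,b),(c,d)) with external edges ea..ed
# and internal edge e, OLS branch lengths minimize the gap between tree
# path lengths and the input distances: a small least-squares problem.
true = np.array([0.1, 0.2, 0.3, 0.4, 0.05])   # ea, eb, ec, ed, e (invented)

# Rows: pairs ab, ac, ad, bc, bd, cd; columns: which edges lie on that path.
paths = np.array([
    [1, 1, 0, 0, 0],   # a-b
    [1, 0, 1, 0, 1],   # a-c
    [1, 0, 0, 1, 1],   # a-d
    [0, 1, 1, 0, 1],   # b-c
    [0, 1, 0, 1, 1],   # b-d
    [0, 0, 1, 1, 0],   # c-d
], dtype=float)

d = paths @ true                   # perfectly additive distances
lengths, *_ = np.linalg.lstsq(paths, d, rcond=None)
print("recovered branch lengths:", np.round(lengths, 3))
```

With noisy distances the same `lstsq` call returns the OLS estimates; the combinatorial formulae studied in the paper make this cheap enough to evaluate for every nearest-neighbor-interchange rearrangement.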
Inferring sparse networks for noisy transient processes.
Tran, Hoang M; Bukkapatnam, Satish T S
2016-01-01
Inferring causal structures of real world complex networks from measured time series signals remains an open issue. The current approaches are inadequate to discern between direct and indirect influences (i.e., the presence or absence of a directed arc connecting two nodes) in the presence of noise, sparse interactions, and the nonlinear and transient dynamics of real world processes. We report a sparse regression (referred to as the l1-min) approach with theoretical bounds on the constraints on the allowable perturbation to recover the network structure that guarantees sparsity and robustness to noise. We also introduce averaging and perturbation procedures to further enhance prediction scores (i.e., reduce inference errors) and the numerical stability of the l1-min approach. Extensive investigations have been conducted with multiple benchmark simulated genetic regulatory networks and Michaelis-Menten dynamics, as well as real world data sets from the DREAM5 challenge. These investigations suggest that our approach can yield significant improvements, oftentimes by 5 orders of magnitude, over previously reported methods for inferring the structure of dynamic networks, such as Bayesian network, network deconvolution, silencing, and modular response analysis methods, in the face of sparsity, transients, noise, and high-dimensionality issues. PMID:26916813
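The l1-regularized recovery step can be sketched generically with ISTA; this illustrates sparse network recovery in general, not the authors' exact l1-min formulation:

```python
import numpy as np

rng = np.random.default_rng(6)

# Generic l1-regularized sparse recovery via ISTA, illustrating the kind of
# "l1-min" step used for network inference (not the authors' exact
# formulation). Recover a 3-sparse row of a 30-node network from 20 noisy
# perturbation measurements.
n_nodes, n_obs = 30, 20
w_true = np.zeros(n_nodes)
w_true[[3, 11, 27]] = [1.5, -2.0, 0.8]       # sparse true connections

A = rng.normal(size=(n_obs, n_nodes))        # perturbation design matrix
b = A @ w_true + 0.01 * rng.normal(size=n_obs)

lam = 0.5                                    # l1 penalty weight
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
w = np.zeros(n_nodes)
for _ in range(2000):                        # ISTA: gradient step + soft threshold
    z = w - A.T @ (A @ w - b) / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

support = np.flatnonzero(np.abs(w) > 0.05)
print("recovered support:", support.tolist())
```

The soft-threshold step is what yields exact zeros, i.e., confident "no arc" calls rather than merely small weights.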
Inferring Epidemic Network Topology from Surveillance Data
Wan, Xiang; Liu, Jiming; Cheung, William K.; Tong, Tiejun
2014-01-01
The transmission of infectious diseases can be affected by many, even hidden, factors, making it difficult to accurately predict when and where outbreaks may emerge. One approach at the moment is to develop and deploy surveillance systems in an effort to detect outbreaks as promptly as possible. This enables policy makers to modify and implement strategies for the control of transmission. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful, as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model to the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays a significant effect on the propagation of infectious diseases. PMID:24979215
Functional neuroanatomy of intuitive physical inference.
Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy
2016-08-23
To engage with the world (to understand the scene in front of us, plan actions, and predict what will happen next) we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events: a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action. PMID:27503892
Transitive inference in two lemur species (Eulemur macaco and Eulemur fulvus).
Tromp, D; Meunier, H; Roeder, J J
2015-03-01
When confronted with tasks involving reasoning instead of simple learning through trial and error, lemurs have appeared to be less competent than simians. Our study aims to investigate lemurs' capability for transitive inference, a form of deductive reasoning in which the subject deduces logical conclusions from preliminary information. Transitive inference may have an adaptive function, especially in species living in large, complex social groups, and is proposed to play a major role in rank estimation and the establishment of dominance hierarchies. We tested the capacity for transitive inference in two species of lemurs, the brown lemur (Eulemur fulvus) and the black lemur (Eulemur macaco), both living in multimale-multifemale societies. For that purpose, we designed an original setup providing, for the first time in this kind of cognitive task, pictures of conspecifics' faces as stimuli. Subjects were trained to differentiate six photographs of unknown conspecifics, named randomly from A to F, to establish the order A > B > C > D > E > F and to consistently select the highest-ranking photograph in five adjacent pairs AB, BC, CD, DE, and EF. The lemurs were then presented with the same adjacent pairs and three new, non-adjacent pairs BD, BE, and CE. The results showed that all subjects correctly selected the highest-ranking photograph in every non-adjacent pair, reflecting the lemurs' capacity for transitive inference. Our results are discussed in the context of the still-debated current theories about the mechanisms underlying this specific capacity.
Data fusion and classification using a hybrid intrinsic cellular inference network
NASA Astrophysics Data System (ADS)
Woodley, Robert; Walenz, Brett; Seiffertt, John; Robinette, Paul; Wunsch, Donald
2010-04-01
The Hybrid Intrinsic Cellular Inference Network (HICIN) is designed for battlespace decision support applications. We developed an automatic method of generating hypotheses for an entity-attribute classifier. A domain-specific ontology was used to generate categories for data classification automatically. Heterogeneous data are clustered using an Adaptive Resonance Theory (ART) inference engine on a sample (unclassified) data set, the Lahman baseball database. The actual data are immaterial to the architecture; parallels in the data can easily be drawn (e.g., "Team" maps to an organization, "Runs scored/allowed" to a measure of organization performance (positive/negative), "Payroll" to organization resources, etc.). Results show that HICIN classifiers create known inferences from the heterogeneous data. These inferences are not explicitly stated in the ontological description of the domain and are strictly data driven. HICIN uses data uncertainty handling, based on subjective logic, to reduce errors in the classification. The belief mass allows evidence from multiple sources to be mathematically combined to strengthen or discount an assertion. In military operations the ability to reduce uncertainty will be vital to the data fusion operation.
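The subjective-logic uncertainty handling described above can be sketched as cumulative fusion of opinions. The operator below is the standard textbook form for two sources, not code from HICIN itself:

```python
def fuse(op_a, op_b):
    """Cumulative fusion of two subjective-logic opinions (belief, disbelief,
    uncertainty); assumes at least one source retains some uncertainty."""
    b_a, d_a, u_a = op_a
    b_b, d_b, u_b = op_b
    k = u_a + u_b - u_a * u_b  # normaliser
    b = (b_a * u_b + b_b * u_a) / k
    d = (d_a * u_b + d_b * u_a) / k
    u = (u_a * u_b) / k
    return (b, d, u)
```

Fusing two mildly supportive opinions yields a fused opinion with lower uncertainty, which is how evidence from multiple sources strengthens an assertion.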
An empirical Bayesian approach for model-based inference of cellular signaling networks
2009-01-01
Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of the system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements. PMID:19900289
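The Gelman-Rubin potential scale reduction factor used as the convergence criterion can be computed as follows; this is a minimal sketch for scalar chains, not the authors' implementation:

```python
from statistics import mean, variance

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for equal-length MCMC chains.
    Values near 1 suggest the chains have mixed; large values signal
    non-convergence."""
    m, n = len(chains), len(chains[0])
    chain_means = [mean(c) for c in chains]
    w = mean(variance(c) for c in chains)  # within-chain variance
    b = n * variance(chain_means)          # between-chain variance
    var_plus = (n - 1) / n * w + b / n     # pooled variance estimate
    return (var_plus / w) ** 0.5
```

Chains drawn from the same distribution give R-hat near 1, while chains stuck in different regions give a large value.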
Inferring Difficulty: Flexibility in the Real-time Processing of Disfluency.
Heller, Daphna; Arnold, Jennifer E; Klein, Natalie; Tanenhaus, Michael K
2015-06-01
Upon hearing a disfluent referring expression, listeners expect the speaker to refer to an object that is previously unmentioned, an object that does not have a straightforward label, or an object that requires a longer description. Two visual-world eye-tracking experiments examined whether listeners directly associate disfluency with these properties of objects, or whether disfluency attribution is more flexible and involves situation-specific inferences. Since in natural situations reference to objects that do not have a straightforward label or that require a longer description is correlated with both production difficulty and with disfluency, we used a mini-artificial lexicon to dissociate difficulty from these properties, building on the fact that recently learned names take longer to produce than existing words in one's mental lexicon. The results demonstrate that disfluency attribution involves situation-specific inferences; we propose that in new situations listeners spontaneously infer what may cause production difficulty. However, the results show that these situation-specific inferences are limited in scope: listeners assessed difficulty relative to their own experience with the artificial names, and did not adapt to the assumed knowledge of the speaker. PMID:26677642
Transitive inference in two lemur species (Eulemur macaco and Eulemur fulvus).
Tromp, D; Meunier, H; Roeder, J J
2015-03-01
When confronted with tasks involving reasoning instead of simple learning through trial and error, lemurs appeared to be less competent than simians. Our study aims to investigate lemurs' capability for transitive inference, a form of deductive reasoning in which the subject deduces logical conclusions from preliminary information. Transitive inference may have an adaptive function, especially in species living in large, complex social groups, and is proposed to play a major role in rank estimation and the establishment of dominance hierarchies. We tested the capacity for reasoning by transitive inference in two species of lemurs, the brown lemur (Eulemur fulvus) and the black lemur (Eulemur macaco), both living in multimale-multifemale societies. For that purpose, we designed an original setup providing, for the first time in this kind of cognitive task, pictures of conspecifics' faces as stimuli. Subjects were trained to differentiate six photographs of unknown conspecifics labeled randomly from A to F to establish the order A > B > C > D > E > F and to select consistently the highest-ranking photograph in five adjacent pairs AB, BC, CD, DE, and EF. Then lemurs were presented with the same adjacent pairs and three new, non-adjacent pairs BD, BE, CE. The results showed that all subjects correctly selected the highest-ranking photograph in every non-adjacent pair, reflecting lemurs' capacity for transitive inference. Our results are discussed in the context of current, still-debated theories of the mechanisms underlying this specific capacity. PMID:25328141
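The training and test design above is simple to state programmatically; the sketch below just encodes the trained A > B > ... > F order, the adjacent training pairs, and the prediction transitive inference makes for the novel pairs:

```python
order = "ABCDEF"  # A is the highest-ranking photograph

# five adjacent training pairs, plus the three novel non-adjacent test pairs
adjacent_pairs = [order[i] + order[i + 1] for i in range(len(order) - 1)]
nonadjacent_pairs = ["BD", "BE", "CE"]

def correct_choice(pair):
    """Transitive inference predicts choosing the item earlier in the order."""
    return min(pair, key=order.index)
```

For the non-adjacent pairs the predicted choices are B, B, and C, which is what all subjects selected.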
Statistical Inference at Work: Statistical Process Control as an Example
ERIC Educational Resources Information Center
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
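As a concrete contrast with hypothesis testing, SPC reasoning typically centres on Shewhart-style control limits. A minimal individuals-chart sketch, with sigma estimated from the mean moving range (the d2 = 1.128 constant for subgroups of two), follows; the data in the usage example are made up:

```python
from statistics import mean

def control_limits(samples):
    """Shewhart individuals-chart limits: centre line +/- 3 sigma, with sigma
    estimated as mean moving range / 1.128 (d2 for n=2)."""
    centre = mean(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma = mean(moving_ranges) / 1.128
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(samples):
    """Points beyond the 3-sigma limits, flagged for investigation."""
    lcl, _, ucl = control_limits(samples)
    return [x for x in samples if x < lcl or x > ucl]
```

The workplace inference is of a different kind than a hypothesis test: a flagged point prompts a search for an assignable cause rather than a verdict on a null hypothesis.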
Malle, Bertram F; Holbrook, Jess
2012-04-01
People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences (from intentionality and desire to belief to personality) that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research.
Popinga, Alex; Vaughan, Tim; Stadler, Tanja; Drummond, Alexei J
2015-02-01
Estimation of epidemiological and population parameters from molecular sequence data has become central to the understanding of infectious disease dynamics. Various models have been proposed to infer details of the dynamics that describe epidemic progression. These include inference approaches derived from Kingman's coalescent theory. Here, we use recently described coalescent theory for epidemic dynamics to develop stochastic and deterministic coalescent susceptible-infected-removed (SIR) tree priors. We implement these in a Bayesian phylogenetic inference framework to permit joint estimation of SIR epidemic parameters and the sample genealogy. We assess the performance of the two coalescent models and also juxtapose results obtained with a recently published birth-death-sampling model for epidemic inference. Comparisons are made by analyzing sets of genealogies simulated under precisely known epidemiological parameters. Additionally, we analyze influenza A (H1N1) sequence data sampled in the Canterbury region of New Zealand and HIV-1 sequence data obtained from known United Kingdom infection clusters. We show that both coalescent SIR models are effective at estimating epidemiological parameters from data with large fundamental reproductive number R0 and large population size S0. Furthermore, we find that the stochastic variant generally outperforms its deterministic counterpart in terms of error, bias, and highest posterior density coverage, particularly for smaller R0 and S0. However, each of these inference models is shown to have undesirable properties in certain circumstances, especially for epidemic outbreaks with R0 close to one or with small effective susceptible populations. PMID:25527289
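The deterministic SIR dynamics underlying one of the tree priors can be sketched with a simple Euler integration; the parameter values below are illustrative, not the paper's:

```python
def sir_step(s, i, r, beta, gamma, dt=0.01):
    """One Euler step of the deterministic SIR equations (proportions)."""
    new_inf = beta * s * i * dt   # S -> I flow
    new_rec = gamma * i * dt      # I -> R flow
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(beta=0.3, gamma=0.1, i0=1e-3, steps=100_000):
    """Run an epidemic with R0 = beta/gamma = 3 until it burns out."""
    s, i, r = 1 - i0, i0, 0.0
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return s, i, r
```

With R0 = 3 the epidemic takes off and most of the population is eventually infected, leaving only a small susceptible fraction.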
Nonholonomic mobile system control by combining EEG-based BCI with ANFIS.
Yu, Weiwei; Feng, Huashan; Feng, Yangyang; Madani, Kurosh; Sabourin, Christophe
2015-01-01
Motor imagery EEG-based BCI has advantages in the assistance of human control of peripheral devices, such as a mobile robot or wheelchair, because the subject is not exposed to any stimulation and suffers no risk of fatigue. However, the intensive training necessary to recognize the numerous classes of data makes it hard to control these nonholonomic mobile systems accurately and effectively. This paper proposes a new approach that combines motor imagery EEG with the Adaptive Neural Fuzzy Inference System (ANFIS). This approach fuses the intelligence of humans, captured via motor imagery EEG, with the precise control capabilities of a mobile system based on ANFIS. It realizes a multi-level control, which makes the nonholonomic mobile system highly controllable without stopping or relying on sensor information. Also, because the ANFIS controller can be trained while performing the control task, control accuracy and efficiency are increased for the user. Experimental results with the nonholonomic mobile robot verify the effectiveness of this approach.
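The core of any ANFIS-style controller is first-order Sugeno fuzzy inference: fuzzify the input, fire each rule, and take the firing-strength-weighted mean of linear consequents. A two-rule sketch follows; the membership shapes and consequent coefficients are made up for illustration (in ANFIS they would be tuned by training):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno(x):
    """Two-rule first-order Sugeno inference on an input in [0, 1].
    Returns None when no rule fires (both memberships are zero)."""
    w1 = tri(x, 0.0, 0.25, 0.5)   # rule 1: x is LOW  -> y = 2x + 1
    w2 = tri(x, 0.5, 0.75, 1.0)   # rule 2: x is HIGH -> y = -x + 3
    y1, y2 = 2 * x + 1, -x + 3
    total = w1 + w2
    return (w1 * y1 + w2 * y2) / total if total else None
```

At the peak of either membership only that rule fires, so the output equals that rule's linear consequent; in between, the output blends the two.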
Craniofacial biomechanics and functional and dietary inferences in hominin paleontology.
Grine, Frederick E; Judex, Stefan; Daegling, David J; Ozcivici, Engin; Ungar, Peter S; Teaford, Mark F; Sponheimer, Matt; Scott, Jessica; Scott, Robert S; Walker, Alan
2010-04-01
Finite element analysis (FEA) is a potentially powerful tool by which the mechanical behaviors of different skeletal and dental designs can be investigated, and, as such, has become increasingly popular for biomechanical modeling and inferring the behavior of extinct organisms. However, the use of FEA to extrapolate from characterization of the mechanical environment to questions of trophic or ecological adaptation in a fossil taxon is both challenging and perilous. Here, we consider the problems and prospects of FEA applications in paleoanthropology, and provide a critical examination of one such study of the trophic adaptations of Australopithecus africanus. This particular FEA is evaluated with regard to 1) the nature of the A. africanus cranial composite, 2) model validation, 3) decisions made with respect to model parameters, 4) adequacy of data presentation, and 5) interpretation of the results. Each suggests that the results reflect methodological decisions as much as any underlying biological significance. Notwithstanding these issues, this model yields predictions that follow from the posited emphasis on premolar use by A. africanus. These predictions are tested with data from the paleontological record, including a phylogenetically-informed consideration of relative premolar size, and postcanine microwear fabrics and antemortem enamel chipping. In each instance, the data fail to conform to predictions from the model. This model thus serves to emphasize the need for caution in the application of FEA in paleoanthropological enquiry. Theoretical models can be instrumental in the construction of testable hypotheses; but ultimately, the studies that serve to test these hypotheses - rather than data from the models - should remain the source of information pertaining to hominin paleobiology and evolution. PMID:20227747
ADAPTATION AND ADAPTABILITY, THE BELLEFAIRE FOLLOWUP STUDY.
ERIC Educational Resources Information Center
ALLERHAND, MELVIN E.; AND OTHERS
A research team studied influences, adaptation, and adaptability in 50 poorly adapting boys at Bellefaire, a regional child care center for emotionally disturbed children. The team attempted to gauge the success of the residential treatment center in terms of the psychological patterns and role performances of the boys during individual casework…
Bayesian Estimation and Inference Using Stochastic Electronics
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M.; Hamilton, Tara J.; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream. PMID:27047326
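The Bayesian recursive equation the BEAST tracker solves online has a compact discrete form: predict the belief forward with the transition model, then reweight by the observation likelihood and renormalise. A software sketch (the hardware version encodes these probabilities as stochastic bit streams):

```python
def bayes_filter(belief, transition, likelihood):
    """One step of the discrete Bayesian recursion.
    belief[j]        = current P(state j)
    transition[j][i] = P(next state i | current state j)
    likelihood[i]    = P(observation | state i)"""
    n = len(belief)
    predicted = [sum(transition[j][i] * belief[j] for j in range(n))
                 for i in range(n)]
    posterior = [likelihood[i] * predicted[i] for i in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]
```

Starting from a uniform belief over three positions, a sensor reading that favours the middle position concentrates the posterior there.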
On the scientific inference from clinical trials.
Holmberg, L; Baum, M; Adami, H O
1999-05-01
We have not been able to describe clearly how we generalize findings from a study to our own 'everyday patients'. This difficulty is not surprising, since generalization deals with how empirical observations are related to the growth of scientific knowledge, which is a major philosophical problem. An argument, sometimes used to discard evidence from a trial, is that the patient sample was too selected and therefore not 'representative' enough for the results to be meaningful for generalization. In this paper, we discuss issues of representativeness and generalizability. Other authors have shown that generalization cannot only depend on statistical inference. Then, how do randomized clinical trials contribute to the growth of knowledge? We discuss three aspects of the randomized clinical trial (Mant 1999), First, the trial is an empirical experiment set up to study the intervention on the question as specifically and as much in isolation from other -- biasing and confounding -- factors as possible (Rothman & Greenland 1998). Second, the trial is set up to challenge our prevailing hypotheses (or prejudices) and the trial is above all a help in error elimination (Popper 1992). Third, we need to learn to see new, unexpected and thought-provoking patterns in the data from a trial. Point one -- and partly point two -- refers to the paradigm of the controlled experiment in scientific method. How much a study contributes to our knowledge, with respect to points two and three, relates to its originality. In none of these respects is the representativeness of the patients, or the clinical situations, crucial for judging the study and its possible inferences. However, we also discuss that the biological domain of disease that was studied in a particular trial has to be taken into account. Thus, the inference drawn from a clinical study is not only a question of statistical generalization, but must include a jump from the world of experiences into the world of reason
Nonparametric inference of network structure and dynamics
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.
The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among
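A minimal example of the generative-model idea: score a node partition by the likelihood of a Bernoulli stochastic block model with maximum-likelihood block densities. The talk's actual machinery is far richer (nonparametric priors, hierarchical models), but this sketch shows why a planted partition outscores an arbitrary one:

```python
from math import log
from itertools import combinations

def sbm_log_likelihood(edges, nodes, partition):
    """Log-likelihood of an undirected graph under a Bernoulli SBM, with each
    block-pair edge probability set to its maximum-likelihood estimate."""
    edge_set = {frozenset(e) for e in edges}
    pair_counts, edge_counts = {}, {}
    for u, v in combinations(nodes, 2):
        blocks = frozenset((partition[u], partition[v]))
        pair_counts[blocks] = pair_counts.get(blocks, 0) + 1
        edge_counts[blocks] = edge_counts.get(blocks, 0) + (frozenset((u, v)) in edge_set)
    ll = 0.0
    for blocks, n_pairs in pair_counts.items():
        m = edge_counts[blocks]
        p = m / n_pairs
        if 0 < p < 1:  # p in {0, 1} contributes exactly 0 to the log-likelihood
            ll += m * log(p) + (n_pairs - m) * log(1 - p)
    return ll
```

On a graph of two disjoint cliques, the planted two-block partition attains the maximum possible log-likelihood of 0, while a partition mixing the cliques scores strictly lower.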
Inferring influenza dynamics and control in households
Lau, Max S.Y.; Cowling, Benjamin J.; Cook, Alex R.; Riley, Steven
2015-01-01
Household-based interventions are the mainstay of public health policy against epidemic respiratory pathogens when vaccination is not available. Although the efficacy of these interventions has traditionally been measured by their ability to reduce the proportion of household contacts who exhibit symptoms [household secondary attack rate (hSAR)], this metric is difficult to interpret and makes only partial use of data collected by modern field studies. Here, we use Bayesian transmission model inference to analyze jointly both symptom reporting and viral shedding data from a three-armed study of influenza interventions. The reduction in hazard of infection in the increased hand hygiene intervention arm was 37.0% [8.3%, 57.8%], whereas the equivalent reduction in the other intervention arm was 27.2% [−0.46%, 52.3%] (increased hand hygiene and face masks). By imputing the presence and timing of unobserved infection, we estimated that only 61.7% [43.1%, 76.9%] of infections met the case criteria and were thus detected by the study design. An assessment of interventions using inferred infections produced more intuitively consistent attack rates when households were stratified by the speed of intervention, compared with the crude hSAR. Compared with adults, children were 2.29 [1.66, 3.23] times as infectious and 3.36 [2.31, 4.82] times as susceptible. The mean generation time was 3.39 d [3.06, 3.70]. Laboratory confirmation of infections by RT-PCR was only able to detect 79.6% [76.5%, 83.0%] of symptomatic infections, even at the peak of shedding. Our results highlight the potential use of robust inference with well-designed mechanistic transmission models to improve the design of intervention studies. PMID:26150502
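The crude household secondary attack rate that the paper contrasts with model-based inference is straightforward to compute; the sketch below pools infected contacts over households, and the counts in the test are made-up toy data:

```python
def secondary_attack_rate(households):
    """Crude hSAR: infected household contacts / all household contacts,
    pooled over households. Each household is (n_contacts, n_infected)."""
    contacts = sum(n for n, _ in households)
    infected = sum(k for _, k in households)
    return infected / contacts

def percent_reduction(rate_control, rate_intervention):
    """Percent reduction relative to control: a crude stand-in for the
    model-based hazard reductions reported in the study."""
    return 100 * (1 - rate_intervention / rate_control)
```

The paper's point is that this crude metric conflates detection with transmission, which is why the model-based hazard estimates are more interpretable.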
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data are unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate data consistent with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
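The approximate Bayesian computation ingredient can be sketched as rejection sampling against the reported summary statistics. Everything here (Gaussian data model, flat prior, tolerance) is an illustrative assumption, not the authors' algorithm:

```python
import random

def abc_rejection(observed_mean, observed_sd, n=50, n_draws=20_000, tol=0.05):
    """Rejection ABC: draw a candidate parameter (here a Gaussian mean) from a
    flat prior, simulate a data set of size n, and keep the draw if its sample
    mean falls within tol of the reported one."""
    accepted = []
    for _ in range(n_draws):
        mu = random.uniform(-5, 5)                       # flat prior on the mean
        data = [random.gauss(mu, observed_sd) for _ in range(n)]
        if abs(sum(data) / n - observed_mean) < tol:
            accepted.append(mu)
    return accepted
```

The accepted draws approximate the posterior: they cluster around the reported mean, and their spread reflects both the tolerance and the sampling variability of the summary statistic.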
Bayesian Inference in Satellite Gravity Inversion
NASA Technical Reports Server (NTRS)
Kis, K. I.; Taylor, Patrick T.; Wittmann, G.; Kim, Hyung Rae; Torony, B.; Mayer-Guerr, T.
2005-01-01
To solve a geophysical inverse problem means applying measurements to determine the parameters of the selected model. The inverse problem is formulated as Bayesian inference, and Gaussian probability density functions are applied in Bayes's equation. The CHAMP satellite gravity data are determined at an altitude of 400 kilometers over the southern part of the Pannonian Basin. The model of interpretation is a right vertical cylinder. The parameters of the model are obtained from the minimization problem solved by the Simplex method.
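A toy version of the formulation: the on-axis gravity effect of a right vertical cylinder has a standard closed form, and with Gaussian probability densities the posterior over a single parameter (here the density contrast, on a grid with a flat prior) follows directly. The geometry and numbers below are illustrative, not the CHAMP study's:

```python
from math import pi, sqrt, exp

G = 6.674e-11  # gravitational constant, SI units

def cylinder_gz(rho, radius, z_top, z_bot):
    """On-axis vertical gravity effect of a right vertical cylinder with
    density contrast rho; depths are measured from the observation point."""
    return 2 * pi * G * rho * ((z_bot - z_top)
                               + sqrt(radius**2 + z_top**2)
                               - sqrt(radius**2 + z_bot**2))

def posterior_over_density(g_obs, sigma, radius, z_top, z_bot, rho_grid):
    """Gaussian-likelihood posterior on a density grid under a flat prior
    (a grid-based stand-in for the paper's Simplex-based estimate)."""
    like = [exp(-0.5 * ((cylinder_gz(r, radius, z_top, z_bot) - g_obs) / sigma) ** 2)
            for r in rho_grid]
    z = sum(like)
    return [l / z for l in like]
```

Synthesising an observation from a known density contrast and inverting it recovers that value as the posterior mode, since the forward model is linear in density.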
Inferences on the common coefficient of variation.
Tian, Lili
2005-07-30
The coefficient of variation is often used as a measure of precision and reproducibility of data in medical and biological science. This paper considers the problem of making inference about the common population coefficient of variation when it is a priori suspected that several independent samples are from populations with a common coefficient of variation. The procedures for confidence interval estimation and hypothesis testing are developed based on the concepts of generalized variables. The coverage properties of the proposed confidence intervals and the type-I errors of the proposed tests are evaluated by simulation. The proposed methods are illustrated by a real-life example.
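For concreteness, the quantity under study and a naive pooled estimate; the degrees-of-freedom-weighted mean of per-sample CVs below is a simple stand-in for the paper's generalized-variable procedures, not the proposed method itself:

```python
from statistics import mean, stdev

def cv(sample):
    """Sample coefficient of variation: standard deviation / mean."""
    return stdev(sample) / mean(sample)

def pooled_cv(samples):
    """Naive common-CV estimate across several independent samples,
    weighting each sample's CV by its degrees of freedom."""
    dfs = [len(s) - 1 for s in samples]
    return sum(d * cv(s) for d, s in zip(dfs, samples)) / sum(dfs)
```

When all samples truly share one CV, this pooled estimate combines them into a single, more stable value.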
Predictive Inference Using Latent Variables with Covariates*
Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.
2014-01-01
Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627
Identifying inference attacks against healthcare data repositories
Vaidya, Jaideep; Shafiq, Basit; Jiang, Xiaoqian; Ohno-Machado, Lucila
Health care data repositories play an important role in driving progress in medical research. Finding new pathways to discovery requires having adequate data and relevant analysis. However, it is critical to ensure the privacy and security of the stored data. In this paper, we identify a dangerous inference attack against naive suppression based approaches that are used to protect sensitive information. We base our attack on the querying system provided by the Healthcare Cost and Utilization Project, though it applies in general to any medical database providing a query capability. We also discuss potential solutions to this problem. PMID:24303279
Solar structure: Models and inferences from helioseismology
Guzik, J.A.
1998-12-31
In this review the author summarizes results published during approximately the last three years concerning the state of one-dimensional solar interior modeling. She discusses the effects of refinements to the input physics, motivated by improving the agreement between calculated and observed solar oscillation frequencies, or between calculated and inferred solar structure. She has omitted two- and three-dimensional aspects of the solar structure, such as the rotation profile, detailed modeling of turbulent convection, and magnetic fields, although further progress in refining solar interior models may require including such two- and three-dimensional dynamical effects.
Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.
2009-04-01
Long-term rainfall prediction is very important to countries thriving on agro-based economies. In general, climate and rainfall are highly non-linear natural phenomena, giving rise to what is known as the "butterfly effect". The parameters required to predict rainfall are enormous, even for a short period. Soft computing is an innovative approach to constructing computationally intelligent systems that are supposed to possess humanlike expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, and partial truth to achieve tractability, robustness, and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan state, the northeastern part of Iran, situated at latitude-longitude (31°-38°N, 74°-80°E), were analyzed. This research attempted to train Fuzzy Inference System (FIS)-based prediction models with the 33 years of rainfall data. For performance evaluation, the model's predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results of the FIS model showed an RMSE of 52 millimeters.
Infants use relative numerical group size to infer social dominance
Pun, Anthea; Birch, Susan A. J.; Baron, Andrew Scott
2016-01-01
Detecting dominance relationships, within and across species, provides a clear fitness advantage because this ability helps individuals assess their potential risk of injury before engaging in a competition. Previous research has demonstrated that 10- to 13-mo-old infants can represent the dominance relationship between two agents in terms of their physical size (larger agent = more dominant), whereas younger infants fail to do so. It is unclear whether infants younger than 10 mo fail to represent dominance relationships in general, or whether they lack sensitivity to physical size as a cue to dominance. Two studies explored whether infants, like many species across the animal kingdom, use numerical group size to assess dominance relationships and whether this capacity emerges before their sensitivity to physical size. A third study ruled out an alternative explanation for our findings. Across these studies, we report that infants 6–12 mo of age use numerical group size to infer dominance relationships. Specifically, preverbal infants expect an agent from a numerically larger group to win in a right-of-way competition against an agent from a numerically smaller group. In addition, this is, to our knowledge, the first study to demonstrate that infants 6–9 mo of age are capable of understanding social dominance relations. These results demonstrate that infants’ understanding of social dominance relations may be based on evolutionarily relevant cues and reveal infants’ early sensitivity to an important adaptive function of social groups. PMID:26884199
Bayesian nonparametric adaptive control using Gaussian processes.
Chowdhary, Girish; Kingravi, Hassan A; How, Jonathan P; Vela, Patricio A
2015-03-01
Most current model reference adaptive control (MRAC) methods rely on parametric adaptive elements, in which the number of parameters of the adaptive element is fixed a priori, often through expert judgment. An example of such an adaptive element is the radial basis function network (RBFN), with RBF centers preallocated based on the expected operating domain. If the system operates outside of the expected operating domain, this adaptive element can become ineffective in capturing and canceling the uncertainty, thus rendering the adaptive controller only semiglobal in nature. This paper investigates a Gaussian process-based Bayesian MRAC architecture (GP-MRAC), which leverages the power and flexibility of GP Bayesian nonparametric models of uncertainty. The GP-MRAC does not require the centers to be preallocated, can inherently handle measurement noise, and enables MRAC to handle a broader set of uncertainties, including those that are defined as distributions over functions. We use stochastic stability arguments to show that GP-MRAC guarantees good closed-loop performance with no prior domain knowledge of the uncertainty. Online implementable GP inference methods are compared in numerical simulations against RBFN-MRAC with preallocated centers and are shown to provide better tracking and improved long-term learning.
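As a minimal illustration of the nonparametric element that GP-MRAC builds on, here is dependency-free GP regression: the posterior mean of a zero-mean Gaussian process with a squared-exponential kernel, evaluated at a new input. The kernel hyperparameters and training data are assumptions for illustration; none of the adaptive-control machinery is shown.

```python
# Minimal GP-regression sketch: posterior mean of a zero-mean GP with a
# squared-exponential kernel and small measurement noise. Hyperparameters
# and data are illustrative assumptions.
import math

def sq_exp(x1, x2, ell=0.5):
    """Squared-exponential kernel with length scale ell."""
    return math.exp(-((x1 - x2) ** 2) / (2 * ell ** 2))

def gp_posterior_mean(xs, ys, x_star, noise=1e-2):
    """Posterior mean k_*^T (K + noise*I)^{-1} y, via plain Gaussian
    elimination so no external libraries are needed."""
    n = len(xs)
    K = [[sq_exp(xs[i], xs[j]) + (noise if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    a = list(ys)
    # forward elimination
    for i in range(n):
        for j in range(i + 1, n):
            f = K[j][i] / K[i][i]
            for k in range(i, n):
                K[j][k] -= f * K[i][k]
            a[j] -= f * a[i]
    # back substitution
    for i in reversed(range(n)):
        a[i] = (a[i] - sum(K[i][j] * a[j] for j in range(i + 1, n))) / K[i][i]
    return sum(a[i] * sq_exp(xs[i], x_star) for i in range(n))

# Fit noisy-free samples of sin(x) and query between the training points
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
print(round(gp_posterior_mean(xs, ys, 0.75), 2))
```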
Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference
NASA Astrophysics Data System (ADS)
Marzouk, Y.; Parno, M.
2014-12-01
We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
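A deliberately simplified 1-D sketch of map-preconditioned Metropolis-Hastings: a standard random walk in a reference space is pushed through a monotone map adapted from past states. Here the "map" is just a shift-and-scale (the simplest Knothe-Rosenblatt map) and the target is a hypothetical Gaussian; the paper's maps are richer lower triangular maps fit by convex optimization.

```python
# Much-simplified sketch of map-accelerated MCMC: an adaptive linear map
# (shift and scale, the simplest monotone Knothe-Rosenblatt map) turns a
# standard-normal random walk in reference space into a tuned proposal.
# Target density N(3, 0.5^2) is a hypothetical example.
import math
import random

def log_target(x):
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def map_mcmc(n_steps, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    mu, sigma = 0.0, 1.0          # parameters of the adapted linear map
    for t in range(n_steps):
        # pull current state back to reference space, propose, push forward
        r = (x - mu) / sigma
        x_prop = mu + sigma * (r + 0.5 * rng.gauss(0.0, 1.0))
        # a linear map has constant Jacobian, so the MH ratio is unchanged
        if math.log(rng.random()) < log_target(x_prop) - log_target(x):
            x = x_prop
        samples.append(x)
        # adapt the map from the sample history
        if t > 100:
            mean = sum(samples) / len(samples)
            var = sum((s - mean) ** 2 for s in samples) / len(samples)
            mu, sigma = mean, max(math.sqrt(var), 1e-3)
    return samples

samples = map_mcmc(5000)
tail = samples[2500:]            # discard burn-in
print(round(sum(tail) / len(tail), 1))
```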
Adaptive Image Denoising by Mixture Adaptation
NASA Astrophysics Data System (ADS)
Luo, Enming; Chan, Stanley H.; Nguyen, Truong Q.
2016-10-01
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the Expectation-Maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad-hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper: First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. Experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
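The EM-adaptation idea can be illustrated in one dimension: start from a "generic" Gaussian-mixture prior and run EM updates on new data so the prior specializes to it. The mixture, data, and plain EM loop below are illustrative assumptions; the paper works with multivariate patch priors and a Bayesian hyper-prior derivation.

```python
# Toy 1-D sketch of prior adaptation: a generic two-component Gaussian
# mixture is specialized to new data via EM. All numbers are illustrative.
import math
import random

def em_adapt(data, weights, means, variances, n_iter=20):
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                    for w, m, v in zip(weights, means, variances)]
            z = sum(dens)
            resp.append([d / z for d in dens])
        # M-step: re-estimate mixture parameters from responsibilities
        for k in range(len(means)):
            nk = sum(r[k] for r in resp)
            weights[k] = nk / len(data)
            means[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            variances[k] = max(
                sum(r[k] * (x - means[k]) ** 2 for r, x in zip(resp, data)) / nk,
                1e-6)
    return weights, means, variances

rng = random.Random(1)
# "New" data: two well-separated modes the generic prior must adapt to
data = ([rng.gauss(-2, 0.5) for _ in range(200)]
        + [rng.gauss(2, 0.5) for _ in range(200)])
w, m, v = em_adapt(data, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
print(sorted(round(x, 1) for x in m))
```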
Räsänen, Katja; Hendry, Andrew P
2008-06-01
Adaptive diversification is driven by selection in ecologically different environments. In the absence of geographical barriers to dispersal, this adaptive divergence (AD) may be constrained by gene flow (GF). And yet the reverse may also be true, with AD constraining GF (i.e. 'ecological speciation'). Both of these causal effects have frequently been inferred from the presence of negative correlations between AD and GF in nature - yet the bi-directional causality warrants caution in such inferences. We discuss how the ability of correlative studies to infer causation might be improved through the simultaneous measurement of multiple ecological and evolutionary variables. On the one hand, inferences about the causal role of GF can be made by examining correlations between AD and the potential for dispersal. On the other hand, inferences about the causal role of AD can be made by examining correlations between GF and environmental differences. Experimental manipulations of dispersal and environmental differences are a particularly promising approach for inferring causation. At present, the best studies find strong evidence that GF constrains AD and some studies also find the reverse. Improvements in empirical approaches promise to eventually allow general inferences about the relative strength of different causal interactions during adaptive diversification.
Natural frequencies facilitate diagnostic inferences of managers
Hoffrage, Ulrich; Hafenbrädl, Sebastian; Bouquet, Cyril
2015-01-01
In Bayesian inference tasks, information about base rates as well as hit rate and false-alarm rate needs to be integrated according to Bayes' rule once the result of a diagnostic test becomes known. Numerous studies have found that presenting information in a Bayesian inference task in terms of natural frequencies leads to better performance compared to variants with information presented in terms of probabilities or percentages. Natural frequencies are the tallies in a natural sample in which hit rate and false-alarm rate are not normalized with respect to base rates. The present research replicates the beneficial effect of natural frequencies with four tasks from the domain of management, and with management students as well as experienced executives as participants. The percentage of Bayesian responses was almost twice as high when information was presented in natural frequencies compared to a presentation in terms of percentages. In contrast to most tasks previously studied, the majority of numerical responses were lower than the Bayesian solutions. Having heard of Bayes' rule prior to the study did not affect Bayesian performance. An implication of our work is that textbooks explaining Bayes' rule should teach how to represent information in terms of natural frequencies instead of how to plug probabilities or percentages into a formula. PMID:26157397
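The natural-frequency representation is easy to make concrete. With hypothetical numbers, the same diagnostic posterior can be computed two ways: via Bayes' rule on probabilities, or as a simple tally in a notional natural sample of 1,000 cases.

```python
# The same inference computed with probabilities (Bayes' rule) and with
# natural frequencies (tallies in a notional sample). Numbers are
# hypothetical, chosen only to illustrate the equivalence.

def posterior_from_probabilities(base_rate, hit_rate, false_alarm_rate):
    """P(condition | positive test) via Bayes' rule on probabilities."""
    p_pos = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_pos

def posterior_from_natural_frequencies(n, base_rate, hit_rate, false_alarm_rate):
    """Same inference as a tally in a natural sample of n cases:
    hits / (hits + false alarms), with no normalization needed."""
    with_condition = n * base_rate             # e.g. 40 of 1,000 cases
    hits = with_condition * hit_rate           # true positives
    false_alarms = (n - with_condition) * false_alarm_rate
    return hits / (hits + false_alarms)

p1 = posterior_from_probabilities(0.04, 0.75, 0.10)
p2 = posterior_from_natural_frequencies(1000, 0.04, 0.75, 0.10)
print(round(p1, 3), round(p2, 3))
```

The frequency version reads directly off the tallies (30 hits against 96 false alarms), which is the representational advantage the study exploits.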
Phylogenetic Inference From Conserved sites Alignments
Grundy, W.N.; Naylor, G.J.P.
1999-08-15
Molecular sequences provide a rich source of data for inferring the phylogenetic relationships among species. However, recent work indicates that even an accurate multiple alignment of a large sequence set may yield an incorrect phylogeny and that the quality of the phylogenetic tree improves when the input consists only of the highly conserved, motif regions of the alignment. This work introduces two methods of producing multiple alignments that include only the conserved regions of the initial alignment. The first method retains conserved motifs, whereas the second retains individual conserved sites in the initial alignment. Using parsimony analysis on a mitochondrial data set containing 19 species among which the phylogenetic relationships are widely accepted, both conserved alignment methods produce better phylogenetic trees than the complete alignment. Unlike any of the 19 inference methods used before to analyze this data, both methods produce trees that are completely consistent with the known phylogeny. The motif-based method employs far fewer alignment sites for comparable error rates. For a larger data set containing mitochondrial sequences from 39 species, the site-based method produces a phylogenetic tree that is largely consistent with known phylogenetic relationships and suggests several novel placements.
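The site-based filtering idea can be sketched as follows: keep only alignment columns whose majority residue reaches a conservation threshold. The toy alignment and the 0.75 threshold are invented for illustration; the paper's motif-based variant retains whole conserved blocks instead of individual sites.

```python
# Sketch of site-based conserved-alignment filtering: drop columns whose
# most frequent residue (ignoring gaps) falls below a threshold.
# Alignment and threshold are illustrative assumptions.
from collections import Counter

def conserved_columns(alignment, threshold=0.75):
    """Restrict the alignment to columns where the majority residue
    reaches the given frequency across sequences."""
    n_seqs = len(alignment)
    keep = []
    for col in zip(*alignment):
        residues = [c for c in col if c != '-']
        if not residues:
            continue                      # all-gap column
        _, count = Counter(residues).most_common(1)[0]
        if count / n_seqs >= threshold:
            keep.append(col)
    return [''.join(chars) for chars in zip(*keep)]

aln = ["ACGTA-GT",
       "ACGTTAGT",
       "ACCTA-GT",
       "ACGTACGT"]
print(conserved_columns(aln))
```

Only the poorly conserved sixth column is dropped here; a downstream parsimony analysis would then run on the filtered alignment.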
Cooperative inference: Features, objects, and collections.
Searcy, Sophia Ray; Shafto, Patrick
2016-10-01
Cooperation plays a central role in theories of development, learning, cultural evolution, and education. We argue that existing models of learning from cooperative informants have fundamental limitations that prevent them from explaining how cooperation benefits learning. First, existing models are shown to be computationally intractable, suggesting that they cannot apply to realistic learning problems. Second, existing models assume a priori agreement about which concepts are favored in learning, which leads to a conundrum: Learning fails without precise agreement on bias yet there is no single rational choice. We introduce cooperative inference, a novel framework for cooperation in concept learning, which resolves these limitations. Cooperative inference generalizes the notion of cooperation used in previous models from the omission of labeled objects to the omission of values of features, labels for objects, and labels for collections of objects. The result is an approach that is computationally tractable, does not require a priori agreement about biases, applies to both Boolean and first-order concepts, and begins to approximate the richness of real-world concept learning problems. We conclude by discussing relations to and implications for existing theories of cognition, cognitive development, and cultural evolution. PMID:27379575
An Ada inference engine for expert systems
NASA Technical Reports Server (NTRS)
Lavallee, David B.
1986-01-01
The purpose is to investigate the feasibility of using Ada for rule-based expert systems with real-time performance requirements. This includes exploring the Ada features which give improved performance to expert systems as well as optimizing the tradeoffs or workarounds that the use of Ada may require. A prototype inference engine was built using Ada, and rule firing rates in excess of 500 per second were demonstrated on a single MC68000 processor. The knowledge base uses a directed acyclic graph to represent production rules. The graph allows the use of AND, OR, and NOT logical operators. The inference engine uses a combination of both forward and backward chaining in order to reach goals as quickly as possible. Future efforts will include additional investigation of multiprocessing to improve performance and creating a user interface allowing rule input in an Ada-like syntax. Investigation of multitasking and alternate knowledge base representations will help to analyze some of the performance issues as they relate to larger problems.
Spatial Inference for Distributed Remote Sensing Data
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Katzfuss, M.; Nguyen, H.
2014-12-01
Remote sensing data are inherently spatial, and a substantial portion of their value for scientific analyses derives from the information they can provide about spatially dependent processes. Geophysical variables such as atmospheric temperature, cloud properties, humidity, aerosols and carbon dioxide all exhibit spatial patterns, and satellite observations can help us learn about the physical mechanisms driving them. However, remote sensing observations are often noisy and incomplete, so inferring properties of true geophysical fields from them requires some care. These data can also be massive, which is both a blessing and a curse: using more data drives uncertainties down, but also drives costs up, particularly when data are stored on different computers or in different physical locations. In this talk I will discuss a methodology for spatial inference on massive, distributed data sets that does not require moving large volumes of data. The idea is based on a combination of ideas including modeling spatial covariance structures with low-rank covariance matrices, and distributed estimation in sensor or wireless networks.
Inferring tumor progression from genomic heterogeneity
Navin, Nicholas; Krasnitz, Alexander; Rodgers, Linda; Cook, Kerry; Meth, Jennifer; Kendall, Jude; Riggs, Michael; Eberling, Yvonne; Troge, Jennifer; Grubor, Vladimir; Levy, Dan; Lundin, Pär; Månér, Susanne; Zetterberg, Anders; Hicks, James; Wigler, Michael
2010-01-01
Cancer progression in humans is difficult to infer because we do not routinely sample patients at multiple stages of their disease. However, heterogeneous breast tumors provide a unique opportunity to study human tumor progression because they still contain evidence of early and intermediate subpopulations in the form of the phylogenetic relationships. We have developed a method we call Sector-Ploidy-Profiling (SPP) to study the clonal composition of breast tumors. SPP involves macro-dissecting tumors, flow-sorting genomic subpopulations by DNA content, and profiling genomes using comparative genomic hybridization (CGH). Breast carcinomas display two classes of genomic structural variation: (1) monogenomic and (2) polygenomic. Monogenomic tumors appear to contain a single major clonal subpopulation with a highly stable chromosome structure. Polygenomic tumors contain multiple clonal tumor subpopulations, which may occupy the same sectors, or separate anatomic locations. In polygenomic tumors, we show that heterogeneity can be ascribed to a few clonal subpopulations, rather than a series of gradual intermediates. By comparing multiple subpopulations from different anatomic locations, we have inferred pathways of cancer progression and the organization of tumor growth. PMID:19903760
On uncertain sightings and inference about extinction.
Solow, Andrew R; Beet, Andrew R
2014-08-01
The extinction of many species can only be inferred from the record of sightings of individuals. Solow et al. (2012, Uncertain sightings and the extinction of the Ivory-billed Woodpecker. Conservation Biology 26:180-184) describe a Bayesian approach to such inference and apply it to a sighting record of the Ivory-billed Woodpecker (Campephilus principalis). A feature of this sighting record is that all uncertain sightings occurred after the most recent certain sighting. However, this appears to be an artifact. We extended this earlier work in 2 ways. First, we allowed for overlap in time between certain and uncertain sightings. Second, we considered 2 plausible statistical models of a sighting record. In one of these models, certain and uncertain sightings that are valid arise from the same process whereas in the other they arise from independent processes. We applied both models to the case of the Ivory-billed Woodpecker. The result from the first model did not favor extinction, whereas the result for the second model did. This underscores the importance, in applying tests for extinction, of understanding what could be called the natural history of the sighting record.
Inferring social ties from geographic coincidences
Crandall, David J.; Backstrom, Lars; Cosley, Dan; Suri, Siddharth; Huttenlocher, Daniel; Kleinberg, Jon
2010-01-01
We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals’ physical locations over time. PMID:21148099
Spherical Strong-Shock Inferences on OMEGA
NASA Astrophysics Data System (ADS)
Nora, R.; Lafon, M.; Betti, R.; Theobald, W.; Seka, W.; Delettrez, J. A.
2014-10-01
A milestone for shock ignition is to experimentally verify the generation of several hundred Mbar shocks at shock-ignition-relevant laser intensities. This paper presents the first experimental evidence of strong shocks generated in a spherical geometry. Using the temporal delay between the launch of the strong shock at the outer surface of the spherical target and the time when the shock converges at the center, the shock properties can be inferred using radiation-hydrodynamic simulations. Peak ablation pressures exceeding 200 Mbar are inferred at laser intensities of ~ 3 ×1015 W/cm2. The shock strength is significantly enhanced by the coupling of copious amounts of hot electrons, up to 2 kJ with Thot ~ 50 to 100 keV. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944 and the Office of Fusion Energy Sciences Number DE-FG02-04ER54786.
Causal Inference for Spatial Constancy across Saccades.
Atsma, Jeroen; Maij, Femke; Koppen, Mathieu; Irwin, David E; Medendorp, W Pieter
2016-03-01
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors for the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. PMID:26967730
Toddlers infer unobserved causes for spontaneous events
Muentener, Paul; Schulz, Laura
2014-01-01
Previous research suggests that children infer the presence of unobserved causes when objects appear to move spontaneously. Are such inferences limited to motion events or do children assume that unexplained physical events have causes more generally? Here we introduce an apparently spontaneous event and ask whether, even in the absence of spatiotemporal and co-variation cues linking the events, toddlers treat a plausible variable as a cause of the event. Toddlers (24 months) saw a toy that appeared to light up either spontaneously or after an experimenter’s action. Toddlers were also introduced to a button but were not shown any predictive relation between the button and the light. Across three different dependent measures of exploration, predictive looking (Study 1), prompted intervention (Study 2), and spontaneous exploration (Study 3), toddlers were more likely to represent the button as a cause of the light when the event appeared to occur spontaneously. In Study 4, we found that even in the absence of a plausible candidate cause, toddlers engaged in selective exploration when the light appeared to activate spontaneously. These results suggest that toddlers’ exploration is guided by the causal explanatory power of events. PMID:25566161
Information Theory, Inference and Learning Algorithms
NASA Astrophysics Data System (ADS)
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, are developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes -- the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
Inferring differentiation pathways from gene expression
Costa, Ivan G.; Roepcke, Stefan; Hafemeister, Christoph; Schliep, Alexander
2008-01-01
Motivation: The regulation of proliferation and differentiation of embryonic and adult stem cells into mature cells is central to developmental biology. Gene expression measured in distinguishable developmental stages helps to elucidate underlying molecular processes. In previous work we showed that functional gene modules, which act distinctly in the course of development, can be represented by a mixture of trees. In general, the similarities in the gene expression programs of cell populations reflect the similarities in the differentiation path. Results: We propose a novel model for gene expression profiles and an unsupervised learning method to estimate developmental similarity and infer differentiation pathways. We assess the performance of our model on simulated data and compare it with favorable results to related methods. We also infer differentiation pathways and predict functional modules in gene expression data of lymphoid development. Conclusions: We demonstrate for the first time how, in principal, the incorporation of structural knowledge about the dependence structure helps to reveal differentiation pathways and potentially relevant functional gene modules from microarray datasets. Our method applies in any area of developmental biology where it is possible to obtain cells of distinguishable differentiation stages. Availability: The implementation of our method (GPL license), data and additional results are available at http://algorithmics.molgen.mpg.de/Supplements/InfDif/ Contact: filho@molgen.mpg.de, schliep@molgen.mpg.de Supplementary information: Supplementary data is available at Bioinformatics online. PMID:18586709
Models for inference in dynamic metacommunity systems
Dorazio, Robert M.; Kery, Marc; Royle, J. Andrew; Plattner, Matthias
2010-01-01
A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity.
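The detection-error problem the authors address can be illustrated with a toy simulation (a minimal sketch; the occupancy and detection values are assumed for illustration, not taken from the paper). When detection is imperfect, the naive fraction of sites where a species is ever recorded understates true occupancy:

```python
import numpy as np

# Assumed illustrative values: true occupancy psi = 0.6, per-visit
# detection probability p = 0.3, four survey visits per site.
rng = np.random.default_rng(42)
n_sites, n_visits, psi, p = 1000, 4, 0.6, 0.3

occupied = rng.random(n_sites) < psi
detections = occupied[:, None] & (rng.random((n_sites, n_visits)) < p)

naive = detections.any(axis=1).mean()       # biased low: misses occupied sites
expected = psi * (1 - (1 - p) ** n_visits)  # what the naive estimator targets

print(round(naive, 3), round(expected, 3))
```

Occupancy models of the kind described above estimate psi and p jointly from the repeat-visit detection histories, removing this bias.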
Inferring tumor progression from genomic heterogeneity.
Navin, Nicholas; Krasnitz, Alexander; Rodgers, Linda; Cook, Kerry; Meth, Jennifer; Kendall, Jude; Riggs, Michael; Eberling, Yvonne; Troge, Jennifer; Grubor, Vladimir; Levy, Dan; Lundin, Pär; Månér, Susanne; Zetterberg, Anders; Hicks, James; Wigler, Michael
2010-01-01
Cancer progression in humans is difficult to infer because we do not routinely sample patients at multiple stages of their disease. However, heterogeneous breast tumors provide a unique opportunity to study human tumor progression because they still contain evidence of early and intermediate subpopulations in the form of the phylogenetic relationships. We have developed a method we call Sector-Ploidy-Profiling (SPP) to study the clonal composition of breast tumors. SPP involves macro-dissecting tumors, flow-sorting genomic subpopulations by DNA content, and profiling genomes using comparative genomic hybridization (CGH). Breast carcinomas display two classes of genomic structural variation: (1) monogenomic and (2) polygenomic. Monogenomic tumors appear to contain a single major clonal subpopulation with a highly stable chromosome structure. Polygenomic tumors contain multiple clonal tumor subpopulations, which may occupy the same sectors, or separate anatomic locations. In polygenomic tumors, we show that heterogeneity can be ascribed to a few clonal subpopulations, rather than a series of gradual intermediates. By comparing multiple subpopulations from different anatomic locations, we have inferred pathways of cancer progression and the organization of tumor growth.
Natural frequencies facilitate diagnostic inferences of managers.
Hoffrage, Ulrich; Hafenbrädl, Sebastian; Bouquet, Cyril
2015-01-01
In Bayesian inference tasks, information about base rates as well as hit rate and false-alarm rate needs to be integrated according to Bayes' rule after the result of a diagnostic test becomes known. Numerous studies have found that presenting information in a Bayesian inference task in terms of natural frequencies leads to better performance compared to variants with information presented in terms of probabilities or percentages. Natural frequencies are the tallies in a natural sample in which hit rate and false-alarm rate are not normalized with respect to base rates. The present research replicates the beneficial effect of natural frequencies with four tasks from the domain of management, and with management students as well as experienced executives as participants. The percentage of Bayesian responses was almost twice as high when information was presented in natural frequencies compared to a presentation in terms of percentages. In contrast to most tasks previously studied, the majority of numerical responses were lower than the Bayesian solutions. Having heard of Bayes' rule prior to the study did not affect Bayesian performance. An implication of our work is that textbooks explaining Bayes' rule should teach how to represent information in terms of natural frequencies instead of how to plug probabilities or percentages into a formula. PMID:26157397
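The natural-frequency advantage can be sketched with hypothetical numbers (the firms, rates, and scenario below are illustrative, not one of the four management tasks used in the study). Both routes give the same posterior, but the frequency route is just a division of tallies:

```python
# Hypothetical numbers in natural-frequency format: of 1,000 firms, 100 are
# in financial distress (base rate 10%); a screening test flags 80 of the
# distressed firms (hit rate 80%) and 90 of the 900 healthy ones (false-alarm
# rate 10%).

def posterior_from_probabilities(base_rate, hit_rate, false_alarm_rate):
    """Bayes' rule with normalized probabilities plugged into the formula."""
    p_flag = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_flag

def posterior_from_frequencies(n_true_pos, n_false_pos):
    """Natural frequencies: just divide the tallies, no formula needed."""
    return n_true_pos / (n_true_pos + n_false_pos)

p1 = posterior_from_probabilities(0.10, 0.80, 0.10)
p2 = posterior_from_frequencies(80, 90)   # 80 of the 170 flagged firms
print(round(p1, 3))  # 0.471
```

The two computations agree, which is why representing the task as tallies rather than percentages changes the difficulty without changing the answer.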
Inferring social ties from geographic coincidences.
Crandall, David J; Backstrom, Lars; Cosley, Dan; Suri, Siddharth; Huttenlocher, Daniel; Kleinberg, Jon
2010-12-28
We investigate the extent to which social ties between people can be inferred from co-occurrence in time and space: Given that two people have been in approximately the same geographic locale at approximately the same time, on multiple occasions, how likely are they to know each other? Furthermore, how does this likelihood depend on the spatial and temporal proximity of the co-occurrences? Such issues arise in data originating in both online and offline domains as well as settings that capture interfaces between online and offline behavior. Here we develop a framework for quantifying the answers to such questions, and we apply this framework to publicly available data from a social media site, finding that even a very small number of co-occurrences can result in a high empirical likelihood of a social tie. We then present probabilistic models showing how such large probabilities can arise from a natural model of proximity and co-occurrence in the presence of social ties. In addition to providing a method for establishing some of the first quantifiable estimates of these measures, our findings have potential privacy implications, particularly for the ways in which social structures can be inferred from public online records that capture individuals' physical locations over time.
Haplotype inference constrained by plausible haplotype data.
Fellows, Michael R; Hartman, Tzvika; Hermelin, Danny; Landau, Gad M; Rosamond, Frances; Rozenberg, Liat
2011-01-01
The haplotype inference problem (HIP) asks to find a set of haplotypes which resolve a given set of genotypes. This problem is important in practical fields such as the investigation of diseases or other types of genetic mutations. In order to find the haplotypes which are as close as possible to the real set of haplotypes that comprise the genotypes, two models have been suggested which are by now well-studied: The perfect phylogeny model and the pure parsimony model. All haplotype inference algorithms known to date may find haplotypes that are not necessarily plausible, i.e., very rare haplotypes or haplotypes that were never observed in the population. In order to overcome this disadvantage, we study, in this paper, a new constrained version of HIP under the above-mentioned models. In this new version, a pool of plausible haplotypes H is given together with the set of genotypes G, and the goal is to find a subset H′ ⊆ H that resolves G. For constrained perfect phylogeny haplotyping (CPPH), we provide initial insights and polynomial-time algorithms for some restricted cases of the problem. For constrained parsimony haplotyping (CPH), we show that the problem is fixed parameter tractable when parameterized by the size of the solution set of haplotypes.
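The constrained resolution relation can be sketched in a few lines (a minimal illustration under an assumed 0/1/2 genotype encoding; the helper names and the example pool are hypothetical, not from the paper):

```python
from itertools import combinations_with_replacement

# Assumed encoding: at each SNP site, a genotype value of 0 or 1 means
# homozygous for that allele, and 2 means heterozygous. A pair of binary
# haplotypes (h1, h2) resolves a genotype g when both haplotypes carry the
# homozygous value at every 0/1 site and differ at every heterozygous site.

def resolves(h1, h2, g):
    for a, b, x in zip(h1, h2, g):
        if x == 2:
            if a == b:                     # heterozygous: alleles must differ
                return False
        elif not (a == x and b == x):      # homozygous: both must match
            return False
    return True

def constrained_resolutions(pool, g):
    """All pairs drawn from the plausible-haplotype pool that resolve g,
    mirroring the constrained setting where only pool members are allowed."""
    return [(h1, h2) for h1, h2 in combinations_with_replacement(pool, 2)
            if resolves(h1, h2, g)]

pool = [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(constrained_resolutions(pool, (2, 2, 1)))
# -> [((0, 1, 1), (1, 0, 1))]
```

The hardness results in the paper concern choosing such pairs consistently for a whole set of genotypes, not a single genotype as above.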
Issues with inferring Internet topological attributes
NASA Astrophysics Data System (ADS)
Amini, Lisa D.; Shaikh, Anees; Schulzrinne, Henning G.
2002-07-01
A number of recent studies are based on data collected from routing tables of inter-domain routers utilizing Border Gateway Protocol (BGP) and tools, such as traceroute, to probe end-to-end paths. The goal is to infer Internet topological properties. However, as more data is collected, it becomes obvious that data intended to represent the same properties, if gathered at different points within the network, can depict significantly different characteristics. While systematic data collection from a number of network vantage points can reduce certain ambiguities, thus far, no methods have been reported for fully resolving these issues. The goal of our study was to quantify the effect these anomalies have on key Internet structural attributes. We report on our analysis of over 290,000 measurements from globally distributed sites. We contrast results obtained from router-level measurements with those obtained from BGP routing tables, and offer insights as to why certain inferred properties differ. We demonstrate that the effect on some attributes, such as the average path length and the AS degree distribution can be minimized through careful data collection techniques. We also illustrate how using this same data to model other attributes, such as the actual forwarding path between a pair of nodes, or the level of AS path asymmetry, can produce substantially misleading results.
Inferring epigenetic dynamics from kin correlations.
Hormoz, Sahand; Desprat, Nicolas; Shraiman, Boris I
2015-05-01
Populations of isogenic embryonic stem cells or clonal bacteria often exhibit extensive phenotypic heterogeneity that arises from intrinsic stochastic dynamics of cells. The phenotypic state of a cell can be transmitted epigenetically in cell division, leading to correlations in the states of cells related by descent. The extent of these correlations is determined by the rates of transitions between the phenotypic states. Therefore, a snapshot of the phenotypes of a collection of cells with known genealogical structure contains information on phenotypic dynamics. Here, we use a model of phenotypic dynamics on a genealogical tree to define an inference method that allows extraction of an approximate probabilistic description of the dynamics from observed phenotype correlations as a function of the degree of kinship. The approach is tested and validated on the example of Pyoverdine dynamics in Pseudomonas aeruginosa colonies. Interestingly, we find that correlations among pairs and triples of distant relatives have a simple but nontrivial structure indicating that observed phenotypic dynamics on the genealogical tree is approximately conformal--a symmetry characteristic of critical behavior in physical systems. The proposed inference method is sufficiently general to be applied in any system where lineage information is available. PMID:25902540
A new method to infer higher-order spike correlations from membrane potentials.
Reimer, Imke C G; Staude, Benjamin; Boucsein, Clemens; Rotter, Stefan
2013-10-01
What is the role of higher-order spike correlations for neuronal information processing? Common data analysis methods to address this question are devised for the application to spike recordings from multiple single neurons. Here, we present a new method which evaluates the subthreshold membrane potential fluctuations of one neuron, and infers higher-order correlations among the neurons that constitute its presynaptic population. This has two important advantages: Very large populations of up to several thousands of neurons can be studied, and spike sorting becomes unnecessary. Moreover, this new approach truly emphasizes the functional aspects of higher-order statistics, since we infer exactly those correlations which are seen by a neuron. Our approach is to represent the subthreshold membrane potential fluctuations as presynaptic activity filtered with a fixed kernel, as would be the case for a leaky integrator neuron model. This allows us to adapt the recently proposed method CuBIC (cumulant based inference of higher-order correlations from the population spike count; Staude et al., J Comput Neurosci 29(1-2):327-350, 2010c) with which the maximal order of correlation can be inferred. By numerical simulation we show that our new method is reasonably sensitive to weak higher-order correlations, and that only short stretches of membrane potential are required for their reliable inference. Finally, we demonstrate its remarkable robustness against violations of the simplifying assumptions made for its construction, and discuss how it can be employed to analyze in vivo intracellular recordings of membrane potentials.
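The forward model underlying the method (membrane potential as presynaptic population activity filtered with a fixed exponential kernel, as in a leaky integrator) can be sketched as follows; all parameter values are illustrative assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau = 0.001, 0.010                     # 1 ms bins, 10 ms time constant
n_steps, n_presyn, rate = 2000, 500, 5.0   # 2 s trace, 500 inputs at 5 Hz

# Population spike count per bin. Here the inputs are independent (Poisson);
# higher-order correlations among inputs would show up in the higher
# cumulants of this count, which is the quantity CuBIC operates on.
counts = rng.poisson(n_presyn * rate * dt, size=n_steps)

# Fixed exponential kernel of the leaky integrator, truncated at 5 * tau.
t = np.arange(0, 5 * tau, dt)
kernel = np.exp(-t / tau)

# Subthreshold "membrane potential" = filtered presynaptic activity.
v = np.convolve(counts, kernel)[:n_steps]
print(v.shape)
```

Inference then runs this model in reverse: from cumulants of the observed trace back to constraints on the correlation order of the input population.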
Rahmati, Vahid; Kirmse, Knut; Marković, Dimitrije; Holthoff, Knut; Kiebel, Stefan J.
2016-01-01
Calcium imaging has been used as a promising technique to monitor the dynamic activity of neuronal populations. However, the calcium trace is temporally smeared, which restricts the extraction of quantities of interest such as spike trains of individual neurons. To address this issue, spike reconstruction algorithms have been introduced. One limitation of such reconstructions is that the underlying models are not informed about the biophysics of spike and burst generations. Such existing prior knowledge might be useful for constraining the possible solutions of spikes. Here we describe, in a novel Bayesian approach, how principled knowledge about neuronal dynamics can be employed to infer biophysical variables and parameters from fluorescence traces. By using both synthetic and in vitro recorded fluorescence traces, we demonstrate that the new approach is able to reconstruct different repetitive spiking and/or bursting patterns with accurate single spike resolution. Furthermore, we show that the high inference precision of the new approach is preserved even if the fluorescence trace is rather noisy or if the fluorescence transients show slow rise kinetics lasting several hundred milliseconds, and inhomogeneous rise and decay times. In addition, we discuss the use of the new approach for inferring parameter changes, e.g. due to a pharmacological intervention, as well as for inferring complex characteristics of immature neuronal circuits. PMID:26894748
Inference generation and story comprehension among children with ADHD.
Van Neste, Jessica; Hayden, Angela; Lorch, Elizabeth P; Milich, Richard
2015-02-01
Academic difficulties are well-documented among children with ADHD. Exploring these difficulties through story comprehension research has revealed deficits among children with ADHD in making causal connections between events and in using causal structure and thematic importance to guide recall of stories. Important to theories of story comprehension and implied in these deficits is the ability to make inferences. Often, characters' goals are implicit and explanations of events must be inferred. The purpose of the present study was to compare the inferences generated during story comprehension by 23 7- to 11-year-old children with ADHD (16 males) and 35 comparison peers (19 males). Children watched two televised stories, each paused at five points. In the experimental condition, at each pause children told what they were thinking about the story, whereas in the control condition no responses were made during pauses. After viewing, children recalled the story. Several types of inferences and inference plausibility were coded. Children with ADHD generated fewer of the most essential inferences, plausible explanatory inferences, than did comparison children, both during story processing and during story recall. The groups did not differ on production of other types of inferences. Group differences in generating inferences during the think-aloud task significantly mediated group differences in patterns of recall. Both groups recalled more of the most important story information after completing the think-aloud task. Generating fewer explanatory inferences has important implications for story comprehension deficits in children with ADHD.
Children's inference generation: The role of vocabulary and working memory.
Currie, Nicola Kate; Cain, Kate
2015-09-01
Inferences are crucial to successful discourse comprehension. We assessed the contributions of vocabulary and working memory to inference making in children aged 5 and 6 years (n=44), 7 and 8 years (n=43), and 9 and 10 years (n=43). Children listened to short narratives and answered questions to assess local and global coherence inferences after each one. Analysis of variance (ANOVA) confirmed developmental improvements on both types of inference. Although standardized measures of both vocabulary and working memory were correlated with inference making, multiple regression analyses determined that vocabulary was the key predictor. For local coherence inferences, only vocabulary predicted unique variance for the 6- and 8-year-olds; in contrast, none of the variables predicted performance for the 10-year-olds. For global coherence inferences, vocabulary was the only unique predictor for each age group. Mediation analysis confirmed that although working memory was associated with the ability to generate local and global coherence inferences in 6- to 10-year-olds, the effect was mediated by vocabulary. We conclude that vocabulary knowledge supports inference making in two ways: through knowledge of word meanings required to generate inferences and through its contribution to memory processes.
Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J
2014-01-01
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important
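The data-generating model described above, logistic population dynamics whose growth rate follows the Arrhenius equation, can be sketched as a simple Gillespie-style simulation (all parameter values, including the activation energy, are assumed for illustration and are not the study's estimates):

```python
import math
import random

K_B = 8.617e-5                       # Boltzmann constant, eV/K
E, R0, T_REF = 0.65, 1.0, 293.15     # assumed activation energy (eV), base rate

def arrhenius_rate(temperature_k):
    """Growth rate scaled by exp(-(E / k_B) * (1/T - 1/T_ref))."""
    return R0 * math.exp(-(E / K_B) * (1.0 / temperature_k - 1.0 / T_REF))

def simulate_logistic(temperature_k, carrying_capacity=100, n0=5,
                      t_max=20.0, seed=1):
    """Stochastic logistic birth-death process at a fixed temperature.
    Birth rate r*n and death rate r*n^2/K give dn/dt = r*n*(1 - n/K) on average."""
    rng = random.Random(seed)
    r = arrhenius_rate(temperature_k)
    n, t, trajectory = n0, 0.0, [(0.0, n0)]
    while t < t_max and n > 0:
        birth = r * n
        death = r * n * n / carrying_capacity
        total = birth + death
        t += rng.expovariate(total)              # time to next event
        n += 1 if rng.random() < birth / total else -1
        trajectory.append((t, n))
    return trajectory

final_sizes = {T: simulate_logistic(T)[-1][1] for T in (283.15, 293.15, 303.15)}
print(final_sizes)
```

Simulating such series across temperatures and sampling fractions, then fitting back the activation energy either per-temperature ("indirect") or jointly ("direct"), is the kind of comparison the study performs.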
Measure of librarian pressure using fuzzy inference system: A case study in Longyan University
NASA Astrophysics Data System (ADS)
Huang, Jian-Jing
2014-10-01
Librarians who serve as middle managers in college libraries may carry considerable work pressure. How they cope with psychological problems, control their emotions, and maintain good relationships in the workplace is an important issue, especially for those working in the mainland China environment. How to assess librarians' work pressure and improve the quality of service in college libraries are further serious questions. In this article, the authors discuss how fuzzy inference can be used to measure librarian work pressure.
Automated Interpretation of LIBS Spectra using a Fuzzy Logic Inference Engine
Jeremy J. Hatch; Timothy R. McJunkin; Cynthia Hanson; Jill R. Scott
2012-02-01
Automated interpretation of laser-induced breakdown spectroscopy (LIBS) data is necessary due to the plethora of spectra that can be acquired in a relatively short time. However, traditional chemometric and artificial neural network methods that have been employed are not always transparent to a skilled user. A fuzzy logic approach has now been adapted to LIBS spectral interpretation. A fuzzy logic inference engine (FLIE) was used to differentiate between various copper-containing and stainless steel alloys as well as unknowns. Results using FLIE indicate a high degree of confidence in spectral assignment.
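A toy Mamdani-style fuzzy inference step gives the flavor of such an engine (the spectral lines, thresholds, and single rule below are hypothetical illustrations, not FLIE's actual rule base): fuzzify two line intensities, fire a rule with min as the AND operator, and report a confidence for one alloy class.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def confidence_copper(cu_324nm, fe_358nm):
    # Hypothetical rule: IF Cu line is HIGH AND Fe line is LOW
    # THEN sample is a copper alloy. Intensities are assumed normalized.
    cu_high = triangular(cu_324nm, 0.3, 1.0, 1.7)
    fe_low = triangular(fe_358nm, -0.7, 0.0, 0.7)
    return min(cu_high, fe_low)            # degree of rule fulfillment

print(round(confidence_copper(0.9, 0.2), 3))  # -> 0.714
```

A real engine evaluates many such rules over many emission lines and aggregates their fulfillment degrees into a confidence per candidate material, which is what makes the assignment transparent to a skilled user.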
Human brain lesion-deficit inference remapped
Mah, Yee-Haur; Husain, Masud; Rees, Geraint; Nachev, Parashkev
2014-09-01
Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury—the commonest aetiology in lesion-deficit studies—where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant
Causal Network Inference Via Group Sparse Regularization.
Bolstad, Andrew; Van Veen, Barry D; Nowak, Robert
2011-06-11
This paper addresses the problem of inferring sparse causal networks modeled by multivariate autoregressive (MAR) processes. Conditions are derived under which the Group Lasso (gLasso) procedure consistently estimates sparse network structure. The key condition involves a "false connection score" ψ. In particular, we show that consistent recovery is possible even when the number of observations of the network is far less than the number of parameters describing the network, provided that ψ < 1. The false connection score is also demonstrated to be a useful metric of recovery in nonasymptotic regimes. The conditions suggest a modified gLasso procedure which tends to improve the false connection score and reduce the chances of reversing the direction of causal influence. Computational experiments and a real network based electrocorticogram (ECoG) simulation study demonstrate the effectiveness of the approach. PMID:21918591
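The basic setup can be sketched with a minimal ISTA-style estimator (the network, penalty weight `lam`, and step size are illustrative, not the paper's experiments or its modified procedure). For an MAR(1) model each group of coefficients from node j to node i contains a single entry, so the group soft-threshold reduces to elementwise soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_obs, lam, step = 3, 1000, 0.05, 0.05

# Ground-truth MAR(1) coefficient matrix: causal edges 1->2 and 2->3 only.
A_true = np.array([[0.5, 0.0, 0.0],
                   [0.4, 0.5, 0.0],
                   [0.0, 0.4, 0.5]])

x = np.zeros((n_obs, n_nodes))
for t in range(1, n_obs):
    x[t] = A_true @ x[t - 1] + rng.standard_normal(n_nodes)

X, Y = x[:-1], x[1:]                    # regress x[t] on x[t-1]
A = np.zeros((n_nodes, n_nodes))
for _ in range(500):
    grad = (A @ X.T - Y.T) @ X / len(X)            # gradient of squared error
    A = A - step * grad                            # gradient step
    A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)  # soft-threshold

print(np.round(A, 2))
```

Recovered entries on the true edges stay large while the penalty drives spurious entries toward zero; the paper's ψ condition characterizes when such recovery is consistent even with far fewer observations than parameters.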
[Rearrangement and inference of chromosome structures].
Gorbunov, K Yu; Gershgorin, R A; Lyubetsky, V A
2015-01-01
A chromosome structure is defined as a set of chromosomes that consist of genes assigned to one of the DNA strands and represented in a circular or linear arrangement. A widely investigated problem is to define the shortest algorithmic path of chromosome rearrangements that transforms one chromosome structure into another. When equal rearrangement costs and constant gene content are considered, the solution to the problem is known. In this work, a fundamentally novel approach was developed: an exact algorithm with linear time complexity for both equal and unequal costs, for chromosome structures defined on the same set of genes. In addition, to solve the problem of the inference of ancestral chromosome structures containing different sets of genes when the original structures are fixed in leaves, exact and heuristic algorithms were developed.
Migration of objects and inferences across episodes.
Hannigan, Sharon L; Reinitz, Mark Tippens
2003-04-01
Participants viewed episodes in the form of a series of photographs portraying ordinary routines (e.g., eating at a restaurant) and later received a recognition test. In Experiment 1, it was shown that objects (e.g., a vase of flowers, a pewter lantern) that appeared in a single episode during the study phase migrated between memories of episodes described by the same abstract schema (e.g., from Restaurant Episode A at study to Restaurant Episode B at test), and not between episodes anchored by different schemas. In Experiment 2, it was demonstrated that backward causal inferences from one study episode influenced memories of other episodes described by the same schema, and that high-schema-relevant items viewed in one episode were sometimes remembered as having occurred in another episode of the same schematic type. PMID:12795485
Causal Network Inference Via Group Sparse Regularization
Bolstad, Andrew; Van Veen, Barry D.; Nowak, Robert
2011-01-01
This paper addresses the problem of inferring sparse causal networks modeled by multivariate autoregressive (MAR) processes. Conditions are derived under which the Group Lasso (gLasso) procedure consistently estimates sparse network structure. The key condition involves a “false connection score” ψ. In particular, we show that consistent recovery is possible even when the number of observations of the network is far less than the number of parameters describing the network, provided that ψ < 1. The false connection score is also demonstrated to be a useful metric of recovery in nonasymptotic regimes. The conditions suggest a modified gLasso procedure which tends to improve the false connection score and reduce the chances of reversing the direction of causal influence. Computational experiments and a real network based electrocorticogram (ECoG) simulation study demonstrate the effectiveness of the approach. PMID:21918591
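The grouping idea in the abstract can be made concrete: for a VAR(p) model, the p lag coefficients from source series j to target series i form one group, and penalizing the sum of group norms zeroes whole source-to-target connections at once. The following proximal-gradient implementation is a generic Group Lasso sketch under that grouping, not the authors' exact gLasso procedure or its modified variant; the step size and iteration count are illustrative choices.

```python
import numpy as np

def group_lasso_var(X, p, lam, iters=500):
    """Group Lasso for a VAR(p) model via proximal gradient descent.

    X: (T, n) array of observations.  Each group collects the p lag
    coefficients from source series j to target series i, so the penalty
    lam * sum_ij ||B_ij||_2 removes entire causal connections.
    Returns B of shape (n, n, p): B[i, j, l] is the lag-(l+1) effect of j on i.
    """
    T, n = X.shape
    Y = X[p:]                                                   # (T-p, n) targets
    Z = np.hstack([X[p - l - 1:T - l - 1] for l in range(p)])   # lagged design
    m = len(Y)
    B = np.zeros((n, n * p))
    step = m / np.linalg.norm(Z.T @ Z, 2)   # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        G = (B @ Z.T - Y.T) @ Z / m         # gradient of 0.5 * mean squared error
        B = B - step * G
        # Proximal step: group soft-thresholding per (target i, source j) pair.
        for i in range(n):
            for j in range(n):
                idx = [l * n + j for l in range(p)]
                g = B[i, idx]
                nrm = np.linalg.norm(g)
                B[i, idx] = 0.0 if nrm <= step * lam else (1 - step * lam / nrm) * g
    return B.reshape(n, p, n).transpose(0, 2, 1)
```

On simulated data from a sparse VAR(1), true connections survive the shrinkage while absent ones are driven exactly to zero, which is the "false connection" behavior the score ψ quantifies.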
Migration of objects and inferences across episodes.
Hannigan, Sharon L; Reinitz, Mark Tippens
2003-04-01
Participants viewed episodes in the form of a series of photographs portraying ordinary routines (e.g., eating at a restaurant) and later received a recognition test. In Experiment 1, it was shown that objects (e.g., a vase of flowers, a pewter lantern) that appeared in a single episode during the study phase migrated between memories of episodes described by the same abstract schema (e.g., from Restaurant Episode A at study to Restaurant Episode B at test), and not between episodes anchored by different schemas. In Experiment 2, it was demonstrated that backward causal inferences from one study episode influenced memories of other episodes described by the same schema, and that high-schema-relevant items viewed in one episode were sometimes remembered as having occurred in another episode of the same schematic type. PMID:12795485
SERIES - Satellite Emission Range Inferred Earth Surveying
NASA Technical Reports Server (NTRS)
Macdoran, P. F.; Spitzmesser, D. J.; Buennagel, L. A.
1983-01-01
The Satellite Emission Range Inferred Earth Surveying (SERIES) concept is based on the utilization of NAVSTAR Global Positioning System (GPS) radio transmissions without any satellite modifications and in a totally passive mode. The SERIES stations are equipped with lightweight 1.5 m diameter dish antennas mounted on trailers. A SERIES baseline measurement accuracy demonstration is considered, taking into account a 100 meter baseline estimated from approximately one hour of differential Doppler data. It is planned to conduct the next phase of experiments on a 150 m baseline. Attention is given to details regarding future baseline measurement accuracy demonstrations, aspects of ionospheric calibration in connection with subdecimeter baseline accuracy requirements of geodesy, and advantages related to the use of the differential Doppler or pseudoranging mode.
Inferring evolutionary trees from ordinal data
Kearney, P.E.; Hayward, R.B.; Meijer, H.
1997-06-01
In this paper we present four results on the inference of evolutionary trees from ordinal information. An evolutionary tree T, or phylogeny, is an ordinal representation of a distance matrix M if the order of the path lengths in T agrees with the order of the entries of M, for all species a, b, c and d under consideration. In particular, we show that (1) ordinal representations of distance matrices can be found in O(n^2 log^2 n) time, where n is the number of species; ordinal representations are shown to be unique, when they exist. (2) Determining if there is an ordinal representation for an incomplete distance matrix, a situation which arises in evolutionary studies, is NP-complete. (3) Finding a phylogeny that best fits a distance matrix containing ordinal errors is NP-complete. (4) Under reasonable conditions, a weighted ordinal representation of a distance matrix can be obtained in polynomial time.
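The ordinal-representation property lends itself to a direct brute-force check: a candidate tree metric must rank every pair of species pairs in the same order as the distance matrix. The sketch below verifies this in O(n^4) time over all pairs of pairs; it is only a verifier, not the O(n^2 log^2 n) construction algorithm of the paper, and the distance matrices in the usage note are hypothetical.

```python
from itertools import combinations

def is_ordinal_representation(tree_dist, M):
    """Check that the tree's path-length metric preserves the order of the
    distance matrix: M[a][b] < M[c][d] must imply
    tree_dist[a][b] < tree_dist[c][d] for all species pairs."""
    n = len(M)
    pairs = list(combinations(range(n), 2))
    for (a, b) in pairs:
        for (c, d) in pairs:
            if M[a][b] < M[c][d] and not (tree_dist[a][b] < tree_dist[c][d]):
                return False
    return True
```

A tree metric that monotonically stretches all distances passes the check; one that swaps the order of any two pairs fails it.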
Mechanisms of phonological inference in speech perception.
Gaskell, M G; Marslen-Wilson, W D
1998-04-01
Cross-modal priming experiments have shown that surface variations in speech are perceptually tolerated as long as they occur in phonologically viable contexts. For example, [symbol: see text] (frayp) gains access to the mental representation of freight when in the context of [symbol: see text] (frayp bearer) because the change occurs in normal speech as a process of place assimilation. The locus of these effects in the perceptual system was examined. Sentences containing surface changes were created that either agreed with or violated assimilation rules. The lexical status of the assimilated word also was manipulated, contrasting lexical and nonlexical accounts. Two phoneme monitoring experiments showed strong effects of phonological viability for words, with weaker effects for nonwords. It is argued that the listener's percept of the form of speech is a product of a phonological inference process that recovers the underlying form of speech. This process can operate on both words and nonwords, although it interacts with the retrieval of lexical information.
Inferring Network Connectivity by Delayed Feedback Control
Yu, Dongchuan; Parlitz, Ulrich
2011-01-01
We suggest a control-based approach to topology estimation of networks of coupled dynamical elements. This method first drives the network to steady states by a delayed feedback control; then performs structural perturbations to shift the steady states several times; and finally infers the connection topology from the steady-state shifts, either by a matrix-inverse algorithm or by an l1-norm convex optimization strategy applicable to estimating the topology of sparse networks from perturbations. We discuss as well some aspects important for applications, such as the topology reconstruction quality and error sources, advantages and disadvantages of the suggested method, and the influence of (control) perturbations, inhomogeneity, sparsity, coupling functions, and measurement noise. Some examples of networks with Chua's oscillators are presented to illustrate the reliability of the suggested technique. PMID:21969856
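The matrix-inverse route can be sketched with a linearization: near a steady state, the Jacobian J (which encodes the coupling topology) relates a constant perturbation dp to the resulting state shift dx via J dx = -dp, so stacking several independent perturbation experiments lets J be recovered by a linear solve. This is a generic linear-algebra sketch of that idea, not the paper's full delayed-feedback procedure; the example Jacobian in the test is hypothetical.

```python
import numpy as np

def infer_jacobian(shifts, perturbations):
    """Recover the Jacobian (and hence the coupling topology) from
    steady-state responses to structural perturbations.

    shifts, perturbations: (n, N) arrays, one experiment per column.
    Near a steady state, J @ dx = -dp for each experiment, so stacking
    them gives J = -dP @ pinv(dX).  With N >= n independent perturbations
    the pseudo-inverse solve is exact; the paper's l1-based alternative
    handles the sparse, under-determined case."""
    return -np.asarray(perturbations) @ np.linalg.pinv(np.asarray(shifts))
```

With noise-free shifts and a full-rank perturbation set, the reconstruction is exact, so zero entries of the recovered matrix directly reveal absent links.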
Cancer Evolution: Mathematical Models and Computational Inference
Beerenwinkel, Niko; Schwarz, Roland F.; Gerstung, Moritz; Markowetz, Florian
2015-01-01
Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy. PMID:25293804
Chloroplast Phylogenomic Inference of Green Algae Relationships
Sun, Linhua; Fang, Ling; Zhang, Zhenhua; Chang, Xin; Penny, David; Zhong, Bojian
2016-01-01
The green algal phylum Chlorophyta has six diverse classes, but the phylogenetic relationship of the classes within Chlorophyta remains uncertain. In order to better understand the ancient Chlorophyta evolution, we have applied a site pattern sorting method to study compositional heterogeneity and the model fit in the green algal chloroplast genomic data. We show that the fastest-evolving sites are significantly correlated with among-site compositional heterogeneity, and these sites have a much poorer fit to the evolutionary model. Our phylogenomic analyses suggest that the class Chlorophyceae is a monophyletic group, and the classes Ulvophyceae, Trebouxiophyceae and Prasinophyceae are non-monophyletic groups. Our proposed phylogenetic tree of Chlorophyta will offer new insights to investigate ancient green algae evolution, and our analytical framework will provide a useful approach for evaluating and mitigating the potential errors of phylogenomic inferences. PMID:26846729
Nuclear Forensic Inferences Using Iterative Multidimensional Statistics
Robel, M; Kristo, M J; Heller, M A
2009-06-09
Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise materials characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods necessary to produce the necessary inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single pass classification approach. Performance of the iterative PLS-DA method
Bayesian inference tools for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2013-08-01
In this paper, the basics of Bayesian inference with a parametric model of the data are presented first. Then, the extensions needed when dealing with inverse problems are given, in particular for linear models such as deconvolution or image reconstruction in Computed Tomography (CT). The main point discussed next is the prior modeling of signals and images. A classification of these priors is presented: first into separable and Markovian models, and then into simple models or hierarchical models with hidden variables. For practical applications, we also need to consider the estimation of the hyperparameters. Finally, we see that we have to infer simultaneously the unknowns, the hidden variables and the hyperparameters. Very often, the expression of this joint posterior law is too complex to be handled directly; indeed, we can rarely obtain analytical solutions for point estimators such as the Maximum A Posteriori (MAP) or Posterior Mean (PM). Three main tools can then be used: Laplace approximation (LAP), Markov Chain Monte Carlo (MCMC) and Bayesian Variational Approximations (BVA). To illustrate all these aspects, we consider a deconvolution problem where we know that the input signal is sparse and propose a Student-t prior for it. To handle the Bayesian computations with this model, we use the property that the Student-t distribution can be modeled as an infinite mixture of Gaussians, thus introducing hidden variables, namely the variances. The expression of the joint posterior of the input signal samples, the hidden variables (here the inverse variances of those samples) and the hyperparameters of the problem (for example the variance of the noise) is then given. From this point, we present the joint maximization by alternate optimization and the three possible approximation methods. Finally, the proposed methodology is applied in different applications such as mass spectrometry, spectrum estimation of quasi-periodic biological signals and
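The infinite-mixture trick pays off computationally: conditioned on the hidden inverse variances, the model is Gaussian, so alternating optimization reduces to closed-form updates. The sketch below is only the inner loop of such a scheme for sparse deconvolution, with the noise variance and degrees of freedom fixed rather than jointly inferred as in the paper; the convolution kernel and spike positions in the test are hypothetical.

```python
import numpy as np

def studentt_deconv(y, H, sigma2=0.01, nu=1.0, iters=50):
    """MAP deconvolution under a Student-t sparsity prior, using its
    infinite-Gaussian-mixture representation.

    E-step: update the hidden inverse variances
        lam_i = (nu + 1) / (nu + f_i^2).
    M-step: with lam fixed the posterior is Gaussian, so solve the
    ridge-like system (H'H / sigma2 + diag(lam)) f = H'y / sigma2."""
    HtH, Hty = H.T @ H, H.T @ y
    f = np.zeros(H.shape[1])
    for _ in range(iters):
        lam = (nu + 1.0) / (nu + f ** 2)                                # E-step
        f = np.linalg.solve(HtH / sigma2 + np.diag(lam), Hty / sigma2)  # M-step
    return f
```

Each iteration re-weights every sample by its own inverse variance, so large coefficients are barely penalized while small ones are pushed toward zero, which is how the Student-t prior expresses sparsity.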
Pathway network inference from gene expression data
2014-01-01
Background: The development of high-throughput omics technologies enabled genome-wide measurements of the activity of cellular elements and provides the analytical resources for the progress of the Systems Biology discipline. Analysis and interpretation of gene expression data has evolved from the gene to the pathway and interaction level, i.e. from the detection of differentially expressed genes, to the establishment of gene interaction networks and the identification of enriched functional categories. Still, the understanding of biological systems requires a further level of analysis that addresses the characterization of the interaction between functional modules. Results: We present a novel computational methodology to study the functional interconnections among the molecular elements of a biological system. The PANA approach uses high-throughput genomics measurements and a functional annotation scheme to extract an activity profile from each functional block (or pathway), followed by machine-learning methods to infer the relationships between these functional profiles. The result is a global, interconnected network of pathways that represents the functional cross-talk within the molecular system. We have applied this approach to describe the functional transcriptional connections during the yeast cell cycle and to identify pathways that change their connectivity in a disease condition, using an Alzheimer's disease example. Conclusions: PANA is a useful tool to deepen our understanding of the functional interdependences that operate within complex biological systems. We show the approach is algorithmically consistent and the inferred network is well supported by the available functional data. The method allows the dissection of the molecular basis of the functional connections, and we describe the different regulatory mechanisms that explain the network's topology obtained for the yeast cell cycle data. PMID:25032889
Phylodynamic inference for structured epidemiological models.
Rasmussen, David A; Volz, Erik M; Koelle, Katia
2014-04-01
Coalescent theory is routinely used to estimate past population dynamics and demographic parameters from genealogies. While early work in coalescent theory only considered simple demographic models, advances in theory have allowed for increasingly complex demographic scenarios to be considered. The success of this approach has led to coalescent-based inference methods being applied to populations with rapidly changing population dynamics, including pathogens like RNA viruses. However, fitting epidemiological models to genealogies via coalescent models remains a challenging task, because pathogen populations often exhibit complex, nonlinear dynamics and are structured by multiple factors. Moreover, it often becomes necessary to consider stochastic variation in population dynamics when fitting such complex models to real data. Using recently developed structured coalescent models that accommodate complex population dynamics and population structure, we develop a statistical framework for fitting stochastic epidemiological models to genealogies. By combining particle filtering methods with Bayesian Markov chain Monte Carlo methods, we are able to fit a wide class of stochastic, nonlinear epidemiological models with different forms of population structure to genealogies. We demonstrate our framework using two structured epidemiological models: a model with disease progression between multiple stages of infection and a two-population model reflecting spatial structure. We apply the multi-stage model to HIV genealogies and show that the proposed method can be used to estimate the stage-specific transmission rates and prevalence of HIV. Finally, using the two-population model we explore how much information about population structure is contained in genealogies and what sample sizes are necessary to reliably infer parameters like migration rates. PMID:24743590
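The particle-filtering building block can be illustrated on its own: a bootstrap particle filter propagates an ensemble of stochastic epidemic trajectories, weights them by the observation density, resamples, and accumulates an estimate of the marginal log-likelihood, which is what a Metropolis-Hastings loop (particle MCMC) then targets. The sketch below uses a toy discrete-time SIS model observed through noisy prevalence counts; the paper instead couples this machinery to coalescent likelihoods of genealogies, and all parameter values here are illustrative.

```python
import numpy as np

def pf_loglik(obs, beta, gamma, n_part=500, pop=1000, obs_sd=10.0, seed=1):
    """Bootstrap particle filter log-likelihood (up to a constant) for a
    stochastic discrete-time SIS model observed with Gaussian noise.

    obs: sequence of noisy prevalence observations, one per time step.
    beta, gamma: transmission and recovery rates of the latent model."""
    rng = np.random.default_rng(seed)
    I = np.full(n_part, 10.0)          # particle ensemble of infected counts
    ll = 0.0
    for y in obs:
        # Propagate each particle one step of the stochastic dynamics.
        new_inf = rng.poisson(beta * I * (pop - I) / pop)
        rec = rng.poisson(gamma * I)
        I = np.clip(I + new_inf - rec, 0, pop)
        # Weight by the (unnormalized) Gaussian observation density.
        w = np.exp(-0.5 * ((y - I) / obs_sd) ** 2) + 1e-300
        ll += np.log(w.mean())
        # Multinomial resampling keeps the ensemble concentrated.
        I = rng.choice(I, size=n_part, p=w / w.sum())
    return ll
```

Because the log-likelihood estimate is higher at parameters that generated the data than at badly wrong ones, embedding this estimator in an MCMC sampler yields posterior inference for the stochastic model.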
Inferring Protein Associations Using Protein Pulldown Assays
Sharp, Julia L.; Anderson, Kevin K.; Daly, Don S.; Auberry, Deanna L.; Borkowski, John J.; Cannon, William R.
2007-02-01
Background: One method to infer protein-protein associations is through a “bait-prey pulldown” assay using a protein affinity agent and an LC-MS (liquid chromatography-mass spectrometry)-based protein identification method. False positive and negative protein identifications are not uncommon, however, leading to incorrect inferences. Methods: A pulldown experiment generates a protein association matrix wherein each column represents a sample from one bait protein, each row represents one prey protein and each cell contains a presence/absence association indicator. Our method evaluates the presence/absence pattern across a prey protein (row) with a Likelihood Ratio Test (LRT), computing its p-value with simulated LRT test statistic distributions, after a check with simulated binomial random variates disqualified the large-sample χ2 test. A pulldown experiment often involves hundreds of tests, so we apply the false discovery rate method to control the false positive rate. Based on the p-value, each prey protein is assigned a category (specific association, non-specific association, or not associated) and appraised with respect to the pulldown experiment’s goal and design. The method is illustrated using a pulldown experiment investigating the protein complexes of Shewanella oneidensis MR-1. Results: The Monte Carlo simulated LRT p-values objectively reveal specific and ubiquitous prey, as well as potential systematic errors. The example analysis shows the results to be biologically sensible and more realistic than the ad hoc screening methods previously utilized. Conclusions: The method presented appears to be informative for screening for protein-protein associations.
Network geometry inference using common neighbors
NASA Astrophysics Data System (ADS)
Papadopoulos, Fragkiskos; Aldecoa, Rodrigo; Krioukov, Dmitri
2015-08-01
We introduce and explore a method for inferring hidden geometric coordinates of nodes in complex networks based on the number of common neighbors between the nodes. We compare this approach to the HyperMap method, which is based only on the connections (and disconnections) between the nodes, i.e., on the links that the nodes have (or do not have). We find that for high-degree nodes, the common-neighbors approach yields a more accurate inference than the link-based method, unless heuristic periodic adjustments (or "correction steps") are used in the latter. The common-neighbors approach is computationally intensive, requiring O(t^4) running time to map a network of t nodes, versus O(t^3) in the link-based method. But we also develop a hybrid method with O(t^3) running time, which combines the common-neighbors and link-based approaches, and we explore a heuristic that reduces its running time further to O(t^2), without significant reduction in the mapping accuracy. We apply this method to the autonomous systems (ASs) Internet, and we reveal how soft communities of ASs evolve over time in the similarity space. We further demonstrate the method's predictive power by forecasting future links between ASs. Taken altogether, our results advance our understanding of how to efficiently and accurately map real networks to their latent geometric spaces, which is an important necessary step toward understanding the laws that govern the dynamics of nodes in these spaces, and the fine-grained dynamics of network connections.
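The basic statistic the method builds on is cheap to compute: for a simple undirected graph with adjacency matrix A, entry (i, j) of A @ A counts length-2 paths between i and j, which is exactly the number of common neighbors |N(i) ∩ N(j)|. The expensive part of the mapping is the likelihood maximization over hidden coordinates, which is where the O(t^4) cost arises; the sketch below covers only the common-neighbor counts.

```python
import numpy as np

def common_neighbors(A):
    """Common-neighbor counts for every node pair of a simple undirected
    graph.  (A @ A)[i, j] counts length-2 paths i -> k -> j, i.e. the
    number of shared neighbors; the diagonal (node degrees) is zeroed
    since a node's overlap with itself is not used."""
    A = np.asarray(A)
    CN = A @ A
    np.fill_diagonal(CN, 0)
    return CN
```

For a path graph 0-1-2-3, nodes 0 and 2 share the single neighbor 1, while the endpoints 0 and 3 share none.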
Length Scales in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gh.; Adam, S.
2016-02-01
Two conceptual developments in the Bayesian automatic adaptive quadrature approach to the numerical solution of one-dimensional Riemann integrals [Gh. Adam, S. Adam, Springer LNCS 7125, 1-16 (2012)] are reported. First, it is shown that a numerical quadrature which avoids overcomputing and minimizes the hidden floating-point loss of precision requires the consideration of three classes of integration-domain lengths, each endowed with specific quadrature sums: microscopic (trapezoidal rule), mesoscopic (Simpson rule), and macroscopic (quadrature sums of high algebraic degrees of precision). Second, sensitive diagnostic tools for the Bayesian inference on macroscopic ranges, coming from the use of Clenshaw-Curtis quadrature, are derived.
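The three length classes suggest a simple dispatch: a low-order rule on tiny intervals (where higher-order sums only accumulate rounding error) and a high-degree rule on macroscopic ones. The sketch below illustrates that dispatch with a 16-point Gauss-Legendre rule standing in for the high-degree quadrature; the thresholds and the choice of Gauss-Legendre (the paper uses Clenshaw-Curtis on macroscopic ranges) are illustrative stand-ins, not the paper's calibrated length scales.

```python
import numpy as np

def adaptive_rule(f, a, b, micro=1e-8, meso=1e-2):
    """Quadrature dispatch on integration-domain length, in the spirit of
    the three classes: trapezoid on microscopic ranges, Simpson on
    mesoscopic ones, and a high-degree rule on macroscopic ones."""
    h = b - a
    if h < micro:                        # microscopic: 2-point trapezoid
        return 0.5 * h * (f(a) + f(b))
    if h < meso:                         # mesoscopic: 3-point Simpson
        return h / 6.0 * (f(a) + 4.0 * f((a + b) / 2) + f(b))
    x, w = np.polynomial.legendre.leggauss(16)   # macroscopic: degree-31 rule
    return 0.5 * h * np.sum(w * f(0.5 * h * x + 0.5 * (a + b)))
```

On a macroscopic range the high-degree rule is essentially exact for smooth integrands, while on a mesoscopic range Simpson's rule already integrates low-degree polynomials exactly without wasting function evaluations.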
Adaptation and adaptation transfer characteristics of five different saccade types in the monkey.
Kojima, Yoshiko; Fuchs, Albert F; Soetedjo, Robijanto
2015-07-01
Shifts in the direction of gaze are accomplished by different kinds of saccades, which are elicited under different circumstances. Saccade types include targeting saccades to simple jumping targets, delayed saccades to visible targets after a waiting period, memory-guided (MG) saccades to remembered target locations, scanning saccades to stationary target arrays, and express saccades after very short latencies. Studies of human cases and neurophysiological experiments in monkeys suggest that separate pathways, which converge on a common locus that provides the motor command, generate these different types of saccade. When behavioral manipulations in humans cause targeting saccades to have persistent dysmetrias as might occur naturally from growth, aging, and injury, they gradually adapt to reduce the dysmetria. Although results differ slightly between laboratories, this adaptation generalizes or transfers to all the other saccade types mentioned above. Also, when one of the other types of saccade undergoes adaptation, it often transfers to another saccade type. Similar adaptation and transfer experiments, which allow inferences to be drawn about the site(s) of adaptation for different saccade types, have yet to be done in monkeys. Here we show that simian targeting and MG saccades adapt more than express, scanning, and delayed saccades. Adaptation of targeting saccades transfers to all the other saccade types. However, the adaptation of MG saccades transfers only to delayed saccades. These data suggest that adaptation of simian targeting saccades occurs on the pathway common to all saccade types. In contrast, only the delayed saccade command passes through the adaptation site of the MG saccade. PMID:25855693
Interplanetary magnetic sector polarity inferred from polar geomagnetic field observations
NASA Technical Reports Server (NTRS)
Friis-Christensen, E.; Lassen, K.; Wilcox, J. M.; Gonzalez, W.; Colburn, D. S.
1971-01-01
In order to infer the interplanetary sector polarity from polar geomagnetic field diurnal variations, measurements were carried out at Godhavn and Thule (Denmark) Geomagnetic Observatories. The inferred interplanetary sector polarity was compared with the polarity observed at the same time by Explorer 33 and 35 magnetometers. It is shown that the polarity (toward or away from the sun) of the interplanetary magnetic field can be reliably inferred from observations of the polar cap geomagnetic fields.
[Inferences and verbal comprehension in children with developmental language disorders].
Monfort, Isabelle; Monfort, Marc
2013-02-22
We review the concept of inference in language comprehension, both oral and written, recalling the different proposals for its classification. We analyze the types of difficulties that children might encounter in drawing inferences, depending on the type of language or developmental pathology. Finally, we describe the intervention proposals that have been made to enhance the ability to draw inferences in language comprehension. PMID:23446716
Dynamical Logic Driven by Classified Inferences Including Abduction
NASA Astrophysics Data System (ADS)
Sawa, Koji; Gunji, Yukio-Pegio
2010-11-01
We propose a dynamical model of formal logic which realizes a representation of the logical inferences of deduction and induction. In addition, it also represents abduction, which Peirce classified as the third inference following deduction and induction. The three types of inference are represented as transformations of a directed graph. The state of a relation between objects of the model fluctuates between the collective and the distinctive, and the location of the relation within the sequence of relations influences its state.
Inferring word meanings by assuming that speakers are informative.
Frank, Michael C; Goodman, Noah D
2014-12-01
Language comprehension is more than a process of decoding the literal meaning of a speaker's utterance. Instead, by making the assumption that speakers choose their words to be informative in context, listeners routinely make pragmatic inferences that go beyond the linguistic data. If language learners make these same assumptions, they should be able to infer word meanings in otherwise ambiguous situations. We use probabilistic tools to formalize these kinds of informativeness inferences (extending a model of pragmatic language comprehension to the acquisition setting) and present four experiments whose data suggest that preschool children can use informativeness to infer word meanings and that adult judgments track quantitatively with informativeness. PMID:25238461
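The informative-speaker assumption has a standard probabilistic formulation in the rational speech act (RSA) family of models: a literal listener renormalizes truth values over referents, a speaker softmax-prefers informative words, and a pragmatic listener inverts the speaker by Bayes' rule. The sketch below is that generic formulation with uniform priors, offered as a plausible rendering of the kind of model the abstract extends rather than the authors' exact one; the two-word lexicon in the test is a textbook example.

```python
import numpy as np

def pragmatic_listener(lexicon, alpha=1.0):
    """One round of rational-speech-act pragmatic inference.

    lexicon[w][r] = 1 if word w literally applies to referent r.
    L0 renormalizes rows (literal listener), S1 softmax-normalizes
    columns (informative speaker, rationality alpha), and L1 applies
    Bayes' rule with a uniform referent prior.
    Returns L1 with rows over words and columns over referents."""
    L = np.asarray(lexicon, float)
    L0 = L / L.sum(axis=1, keepdims=True)     # P_L0(r | w)
    S1 = L0 ** alpha
    S1 = S1 / S1.sum(axis=0, keepdims=True)   # P_S1(w | r)
    L1 = S1 / S1.sum(axis=1, keepdims=True)   # P_L1(r | w)
    return L1
```

With two referents, one wearing only glasses and one wearing glasses and a hat, hearing the ambiguous word "glasses" leads the pragmatic listener to favor the glasses-only referent (probability 0.75 rather than the literal 0.5), because an informative speaker would have said "hat" for the other one.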
Fuzzy inference game approach to uncertainty in business decisions and market competitions.
Oderanti, Festus Oluseyi
2013-01-01
The increasing challenges and complexity of business environments make it more difficult for entrepreneurs to predict the outcomes of business decisions and operations. Therefore, we developed a decision support scheme that can be used and adapted to various business decision processes. These involve decisions that are made under uncertain situations, such as business competition in the market or wage negotiation within a firm. The scheme uses game strategies and fuzzy inference concepts to effectively grasp the variables in these uncertain situations. The games are played between human and fuzzy players. The accuracy of the fuzzy rule base and the game strategies help to mitigate the adverse effects that a business may suffer from these uncertain factors. We also introduced learning, which enables the fuzzy player to adapt over time. We tested this scheme in different scenarios and found that it could be an invaluable tool in the hands of entrepreneurs operating under uncertain and competitive business environments. PMID:24109562
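A fuzzy inference step of the kind such a scheme relies on can be sketched in a few lines: input variables are mapped to membership degrees, each rule fires to the degree given by the minimum of its antecedent memberships, and the outputs are combined by a firing-strength-weighted average. The variables, membership shapes, and rule base below are illustrative stand-ins for a pricing decision, not the paper's actual rule base or learning mechanism.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, with flat shoulders
    when a == b or b == c."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

def price_change(rival_undercut, demand):
    """Tiny Mamdani-style fuzzy inference for a pricing decision:
    returns a recommended % price change given how deeply a rival
    undercuts our price (0..15) and current demand (0..100)."""
    # Antecedent memberships, on illustrative scales.
    cut_small = tri(rival_undercut, 0, 0, 10)
    cut_big = tri(rival_undercut, 5, 15, 15)
    dem_low = tri(demand, 0, 0, 60)
    dem_high = tri(demand, 40, 100, 100)
    # Rule base: (firing strength via min) -> consequent % price change.
    rules = [
        (min(cut_big, dem_low), -8.0),    # heavy undercut, weak demand: cut hard
        (min(cut_big, dem_high), -3.0),   # heavy undercut, strong demand: cut a little
        (min(cut_small, dem_low), -2.0),  # mild undercut, weak demand: trim
        (min(cut_small, dem_high), 2.0),  # mild undercut, strong demand: raise
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

A learning component, as in the paper, would adjust the membership parameters and rule consequents from the outcomes of repeated games rather than keeping them fixed.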
Batista, Philip D; Janes, Jasmine K; Boone, Celia K; Murray, Brent W; Sperling, Felix A H
2016-09-01
Assessments of population genetic structure and demographic history have traditionally been based on neutral markers while explicitly excluding adaptive markers. In this study, we compared the utility of putatively adaptive and neutral single-nucleotide polymorphisms (SNPs) for inferring mountain pine beetle population structure across its geographic range. Both adaptive and neutral SNPs, and their combination, allowed range-wide structure to be distinguished and delimited a population that has recently undergone range expansion across northern British Columbia and Alberta. Using an equal number of both adaptive and neutral SNPs revealed that adaptive SNPs resulted in a stronger correlation between sampled populations and inferred clustering. Our results suggest that adaptive SNPs should not be excluded prior to analysis from neutral SNPs as a combination of both marker sets resulted in better resolution of genetic differentiation between populations than either marker set alone. These results demonstrate the utility of adaptive loci for resolving population genetic structure in a nonmodel organism. PMID:27648243
Inferring topologies via driving-based generalized synchronization of two-layer networks
NASA Astrophysics Data System (ADS)
Wang, Yingfei; Wu, Xiaoqun; Feng, Hui; Lu, Jun-an; Xu, Yuhua
2016-05-01
The interaction topology among the constituents of a complex network plays a crucial role in the network’s evolutionary mechanisms and functional behaviors. However, some network topologies are usually unknown or uncertain. Meanwhile, coupling delays are ubiquitous in various man-made and natural networks. Hence, it is necessary to gain knowledge of the whole or partial topology of a complex dynamical network by taking into consideration communication delay. In this paper, topology identification of complex dynamical networks is investigated via generalized synchronization of a two-layer network. Particularly, based on the LaSalle-type invariance principle of stochastic differential delay equations, an adaptive control technique is proposed by constructing an auxiliary layer and designing proper control input and updating laws so that the unknown topology can be recovered upon successful generalized synchronization. Numerical simulations are provided to illustrate the effectiveness of the proposed method. The technique provides a certain theoretical basis for topology inference of complex networks. In particular, when the considered network is composed of systems with high-dimensional or complicated dynamics, a simpler response layer can be constructed, which is conducive to circuit design. Moreover, it is practical to take into consideration perturbations caused by control input. Finally, the method is applicable to infer topology of a subnetwork embedded within a complex system and locate hidden sources. We hope the results can provide basic insight into further research endeavors on understanding practical and economical topology inference of networks.
Expressing Adaptation Strategies Using Adaptation Patterns
ERIC Educational Resources Information Center
Zemirline, N.; Bourda, Y.; Reynaud, C.
2012-01-01
Today, there is a real challenge to enable personalized access to information. Several systems have been proposed to address this challenge including Adaptive Hypermedia Systems (AHSs). However, the specification of adaptation strategies remains a difficult task for creators of such systems. In this paper, we consider the problem of the definition…
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we substantially reduce both the runtime and the space as functions of the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
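The core idea of the abstract above can be sketched in a few lines: a probabilistic convolution tree combines the distributions of independent count variables pairwise, so the distribution of their sum is built up in logarithmic depth. This is a minimal illustrative sketch of the idea (the function name `convolve_tree` is invented here), not the paper's implementation, which additionally supports exact inference within junction trees.

```python
import numpy as np

def convolve_tree(dists):
    """Combine independent discrete distributions of count variables
    into the distribution of their sum by pairwise convolution,
    arranged as a balanced (log-depth) tree."""
    layer = [np.asarray(d, dtype=float) for d in dists]
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            # Convolution of two PMFs is the PMF of the sum
            nxt.append(np.convolve(layer[i], layer[i + 1]))
        if len(layer) % 2:  # odd distribution passes through unchanged
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Three binary indicator variables, each present with probability 0.5:
dist = convolve_tree([[0.5, 0.5]] * 3)
# dist[k] = P(sum = k), i.e. Binomial(3, 0.5): [0.125, 0.375, 0.375, 0.125]
```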
Streamflow Forecasting Using Neuro-Fuzzy Inference System
NASA Astrophysics Data System (ADS)
Nanduri, U. V.; Swain, P. C.
2005-12-01
The prediction of flow into a reservoir is fundamental in water resources planning and management. The need for timely and accurate streamflow forecasting is widely recognized and emphasized by many in the water resources fraternity. Real-time forecasts of natural inflows to reservoirs are of particular interest for operation and scheduling. The physical system of the river basin that takes the rainfall as an input and produces the runoff is highly nonlinear, complicated and very difficult to fully comprehend. The system is influenced by a large number of factors and variables. The large spatial extent of such systems introduces uncertainty into the hydrologic information. A variety of methods have been proposed for forecasting reservoir inflows, including conceptual (physical) and empirical (statistical) models (WMO 1994), but none of them can be considered a uniquely superior model (Shamseldin 1997). Owing to the difficulties of formulating reasonable non-linear watershed models, recent attempts have resorted to the Neural Network (NN) approach for complex hydrologic modeling. In recent years the use of soft computing in the field of hydrological forecasting has been gaining ground. The relatively new soft computing technique of Adaptive Neuro-Fuzzy Inference System (ANFIS), developed by Jang (1993), is able to take care of the non-linearity, uncertainty, and vagueness embedded in the system. It is a judicious combination of neural networks and fuzzy systems. It can learn and generalize highly nonlinear and uncertain phenomena due to the embedded neural network (NN). NN is efficient in learning and generalization, and the fuzzy system mimics the cognitive capability of the human brain. Hence, ANFIS can learn the complicated processes involved in the basin and correlate the precipitation to the corresponding discharge. In the present study, one step ahead forecasts are made for ten-daily flows, which are mostly required for short term operational planning of multipurpose reservoirs.
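The rule structure that ANFIS tunes can be illustrated with a first-order Takagi-Sugeno fuzzy system: Gaussian membership functions gate linear consequents, and the output is their firing-strength-weighted average. The sketch below is illustrative only; the rule count, centers, widths, and coefficients are invented for the example, not fitted values from the study.

```python
import math

def gaussmf(x, c, sigma):
    """Gaussian membership function centered at c with width sigma."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def ts_forecast(rain):
    """Minimal first-order Takagi-Sugeno inference of the kind ANFIS
    tunes: two rules ('low rainfall', 'high rainfall') with linear
    consequents. All parameters are illustrative assumptions."""
    w_low = gaussmf(rain, c=20.0, sigma=15.0)
    w_high = gaussmf(rain, c=80.0, sigma=15.0)
    # Linear consequents: flow = a * rain + b for each rule
    f_low = 0.3 * rain + 5.0
    f_high = 0.9 * rain - 10.0
    # Defuzzify: firing-strength-weighted average of the consequents
    return (w_low * f_low + w_high * f_high) / (w_low + w_high)
```

ANFIS learns the membership parameters by backpropagation and the consequent coefficients by least squares; this sketch only evaluates a fixed rule base.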
Inferring network dynamics and neuron properties from population recordings.
Linaro, Daniele; Storace, Marco; Mattia, Maurizio
2011-01-01
Understanding the computational capabilities of the nervous system means to "identify" its emergent multiscale dynamics. For this purpose, we propose a novel model-driven identification procedure and apply it to sparsely connected populations of excitatory integrate-and-fire neurons with spike frequency adaptation (SFA). Our method does not characterize the system from its microscopic elements in a bottom-up fashion, and does not resort to any linearization. We investigate networks as a whole, inferring their properties from the response dynamics of the instantaneous discharge rate to brief and aspecific supra-threshold stimulations. While several available methods assume generic expressions for the system as a black box, we adopt a mean-field theory for the evolution of the network transparently parameterized by identified elements (such as dynamic timescales), which are in turn non-trivially related to single-neuron properties. In particular, from the elicited transient responses, the input-output gain function of the neurons in the network is extracted and direct links to the microscopic level are made available: indeed, we show how to extract the decay time constant of the SFA, the absolute refractory period and the average synaptic efficacy. In addition and contrary to previous attempts, our method captures the system dynamics across bifurcations separating qualitatively different dynamical regimes. The robustness and the generality of the methodology are tested on controlled simulations, reporting a good agreement between theoretically expected and identified values. The assumptions behind the underlying theoretical framework make the method readily applicable to biological preparations like cultured neuron networks and in vitro brain slices. PMID:22016731
Contemporary Quantitative Methods and "Slow" Causal Inference: Response to Palinkas
ERIC Educational Resources Information Center
Stone, Susan
2014-01-01
This response jointly considers simultaneously occurring discussions about causal inference in social work and allied health and social science disciplines. It places emphasis on scholarship that integrates the potential outcomes model with directed acyclic graphing techniques to extract core steps in causal inference. Although this scholarship…
Reasoning about Causal Relationships: Inferences on Causal Networks
Rottman, Benjamin Margolin; Hastie, Reid
2013-01-01
Over the last decade, a normative framework for making causal inferences, Bayesian Probabilistic Causal Networks, has come to dominate psychological studies of inference based on causal relationships. The following causal networks—[X→Y→Z, X←Y→Z, X→Y←Z]—supply answers for questions like, “Suppose both X and Y occur, what is the probability Z occurs?” or “Suppose you intervene and make Y occur, what is the probability Z occurs?” In this review, we provide a tutorial for how normatively to calculate these inferences. Then, we systematically detail the results of behavioral studies comparing human qualitative and quantitative judgments to the normative calculations for many network structures and for several types of inferences on those networks. Overall, when the normative calculations imply that an inference should increase, judgments usually go up; when calculations imply a decrease, judgments usually go down. However, two systematic deviations appear. First, people’s inferences violate the Markov assumption. For example, when inferring Z from the structure X→Y→Z, people think that X is relevant even when Y completely mediates the relationship between X and Z. Second, even when people’s inferences are directionally consistent with the normative calculations, they are often not as sensitive to the parameters and the structure of the network as they should be. We conclude with a discussion of productive directions for future research. PMID:23544658
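The Markov assumption that people's judgments violate can be checked directly by enumeration on the chain X→Y→Z: once Y is known, X carries no further information about Z. A minimal sketch with assumed (illustrative) conditional probabilities:

```python
from itertools import product

def joint(x, y, z):
    """Joint probability for the chain X -> Y -> Z under assumed
    (illustrative) parameters: P(X=1)=0.5, P(Y=1|X=1)=0.8,
    P(Y=1|X=0)=0.2, P(Z=1|Y=1)=0.9, P(Z=1|Y=0)=0.1."""
    px1 = 0.5
    py1 = 0.8 if x else 0.2
    pz1 = 0.9 if y else 0.1
    return ((px1 if x else 1 - px1)
            * (py1 if y else 1 - py1)
            * (pz1 if z else 1 - pz1))

def p_z1_given(**evidence):
    """P(Z=1 | evidence) by brute-force enumeration over the joint."""
    num = den = 0.0
    for x, y, z in product((0, 1), repeat=3):
        vals = {"x": x, "y": y, "z": z}
        if any(vals[k] != v for k, v in evidence.items()):
            continue
        den += joint(x, y, z)
        if z == 1:
            num += joint(x, y, z)
    return num / den

# Markov property: conditioning on X adds nothing once Y is known, so
# p_z1_given(y=1), p_z1_given(x=1, y=1) and p_z1_given(x=0, y=1)
# all equal P(Z=1|Y=1) = 0.9.
```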
Effects of Inferred Motive on Evaluations of Nonaccommodative Communication
ERIC Educational Resources Information Center
Gasiorek, Jessica; Giles, Howard
2012-01-01
In two studies, we propose, refine, and test a new model of inferred motive predicting individuals' reactions to nonaccommodation, defined as communicative behavior that is inappropriately adjusted for participants in an interaction. Inferring a negative motive for others' problematic behavior resulted in significantly less positive evaluations…
Strategic Processing and Predictive Inference Generation in L2 Reading
ERIC Educational Resources Information Center
Nahatame, Shingo
2014-01-01
Predictive inference is the anticipation of the likely consequences of events described in a text. This study investigated predictive inference generation during second language (L2) reading, with a focus on the effects of strategy instructions. In this experiment, Japanese university students read several short narrative passages designed to…
Another Look At The Canon of Plausible Inference
NASA Astrophysics Data System (ADS)
Solana-Ortega, Alberto; Solana, Vicente
2005-11-01
Systematic study of plausible inference is very recent. Axiomatics have traditionally been limited to the development of uninterpreted pure calculi for comparing individual inferences, ignoring the need of formalisms to solve each of these inferences and leaving the interpretation and application of such calculi to ad hoc statistical criteria which are open to inconsistencies. Here we defend a different viewpoint, regarding plausible inference in a holistic manner. Specifically, we consider that all tasks involved in it, including the formalization of languages in which to pose problems, the definitions and axiomatics leading to calculation rules, and those for deriving inference procedures or assignment rules, ought to be based on common grounds. For this purpose, a set of elementary requirements, establishing desirable properties so fundamental that any theory of scientific inference should satisfy them, is proposed under the name of the plausible inference canon. Its logical status as an extramathematical foundation is investigated, together with the different roles it plays as constructive guideline, standard for contrasting frameworks or normative stipulation. We also highlight the novelties it introduces with respect to similar proposals by other authors. In particular we concentrate on those aspects of the canon related to the critical issue of adequately incorporating basic evidential knowledge to inference.
The Effect of Gender on the Construction of Backward Inferences
ERIC Educational Resources Information Center
Cakir, Ozler
2008-01-01
The main objective in the present study is to examine the effect of gender on primary school students' construction of elaborative backward inferences during text processing. A total of 333 children, aged 10-11 years (n = 158 girls and 175 boys) participated in the study. Each participant completed a backward inference test. The results indicate…
A Probability Index of the Robustness of a Causal Inference
ERIC Educational Resources Information Center
Pan, Wei; Frank, Kenneth A.
2003-01-01
Causal inference is an important, controversial topic in the social sciences, where it is difficult to conduct experiments or measure and control for all confounding variables. To address this concern, the present study presents a probability index to assess the robustness of a causal inference to the impact of a confounding variable. The…
Aging and Predicting Inferences: A Diffusion Model Analysis
ERIC Educational Resources Information Center
McKoon, Gail; Ratcliff, Roger
2013-01-01
In the domain of discourse processing, it has been claimed that older adults (60-90-year-olds) are less likely to encode and remember some kinds of information from texts than young adults. The experiment described here shows that they do make a particular kind of inference to the same extent that college-age adults do. The inferences examined were…
The Strong-Inference Protocol: Not Just for Grant Proposals
ERIC Educational Resources Information Center
Hiebert, Sara M.
2007-01-01
The strong-inference protocol puts into action the important concepts in Platt's often-assigned, classic paper on the strong-inference method (10). Yet, perhaps because students are frequently performing experiments with known outcomes, the protocols they write as undergraduates are usually little more than step-by-step instructions for performing…
Deontic Introduction: A Theory of Inference from Is to Ought
ERIC Educational Resources Information Center
Elqayam, Shira; Thompson, Valerie A.; Wilkinson, Meredith R.; Evans, Jonathan St. B. T.; Over, David E.
2015-01-01
Humans have a unique ability to generate novel norms. Faced with the knowledge that there are hungry children in Somalia, we easily and naturally infer that we ought to donate to famine relief charities. Although a contentious and lively issue in metaethics, such inference from "is" to "ought" has not been systematically…
Developing Young Students' Informal Inference Skills in Data Analysis
ERIC Educational Resources Information Center
Paparistodemou, Efi; Meletiou-Mavrotheris, Maria
2008-01-01
This paper focuses on developing students' informal inference skills, reporting on how a group of third grade students formulated and evaluated data-based inferences using the dynamic statistics data-visualization environment TinkerPlots[TM] (Konold & Miller, 2005), software specifically designed to meet the learning needs of students in the early…
Atomic Inference from Weak Gravitational Lensing Data
Marshall, Phil; /KIPAC, Menlo Park
2005-12-14
We present a novel approach to reconstructing the projected mass distribution from the sparse and noisy weak gravitational lensing shear data. The reconstructions are regularized via the knowledge gained from numerical simulations of clusters, with trial mass distributions constructed from n NFW profile ellipsoidal components. The parameters of these ''atoms'' are distributed a priori as in the simulated clusters. Sampling the mass distributions from the atom parameter probability density function allows estimates of the properties of the mass distribution to be generated, with error bars. The appropriate number of atoms is inferred from the data itself via the Bayesian evidence, and is typically found to be small, reflecting the quality of the data. Ensemble average mass maps are found to be robust to the details of the noise realization, and succeed in recovering the demonstration input mass distribution (from a realistic simulated cluster) over a wide range of scales. As an application of such a reliable mapping algorithm, we comment on the residuals of the reconstruction and the implications for predicting convergence and shear at specific points on the sky.
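Each "atom" in the reconstruction above is an NFW-profile component. The spherical NFW density profile is standard and can be written directly; the sketch below shows the radial profile only, whereas the paper's atoms are ellipsoidal generalizations with priors taken from simulated clusters.

```python
def nfw_density(r, rho0, rs):
    """Spherical NFW profile: rho(r) = rho0 / ((r/rs) * (1 + r/rs)**2),
    with characteristic density rho0 and scale radius rs."""
    x = r / rs
    return rho0 / (x * (1.0 + x) ** 2)

# At the scale radius (r = rs) the density is rho0 / 4, and the
# logarithmic slope steepens from -1 (inner) toward -3 (outer).
```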
Virtual reality and consciousness inference in dreaming.
Hobson, J Allan; Hong, Charles C-H; Friston, Karl J
2014-01-01
This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that - through experience-dependent plasticity - becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep - and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain's generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis - evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research.
Causal Inference for Vaccine Effects on Infectiousness
Halloran, M. Elizabeth; Hudgens, Michael G.
2012-01-01
If a vaccine does not protect individuals completely against infection, it could still reduce infectiousness of infected vaccinated individuals to others. Typically, vaccine efficacy for infectiousness is estimated based on contrasts between the transmission risk to susceptible individuals from infected vaccinated individuals compared with that from infected unvaccinated individuals. Such estimates are problematic, however, because they are subject to selection bias and do not have a causal interpretation. Here, we develop causal estimands for vaccine efficacy for infectiousness for four different scenarios of populations of transmission units of size two. These causal estimands incorporate both principal stratification, based on the joint potential infection outcomes under vaccine and control, and interference between individuals within transmission units. In the most general scenario, both individuals can be exposed to infection outside the transmission unit and both can be assigned either vaccine or control. The three other scenarios are special cases of the general scenario where only one individual is exposed outside the transmission unit or can be assigned vaccine. The causal estimands for vaccine efficacy for infectiousness are well defined only within certain principal strata and, in general, are identifiable only with strong unverifiable assumptions. Nonetheless, the observed data do provide some information, and we derive large sample bounds on the causal vaccine efficacy for infectiousness estimands. An example of the type of data observed in a study to estimate vaccine efficacy for infectiousness is analyzed in the causal inference framework we developed. PMID:22499732
How prescriptive norms influence causal inferences.
Samland, Jana; Waldmann, Michael R
2016-11-01
Recent experimental findings suggest that prescriptive norms influence causal inferences. The cognitive mechanism underlying this finding is still under debate. We compare three competing theories: The culpable control model of blame argues that reasoners tend to exaggerate the causal influence of norm-violating agents, which should lead to relatively higher causal strength estimates for these agents. By contrast, the counterfactual reasoning account of causal selection assumes that norms do not alter the representation of the causal model, but rather later causal selection stages. According to this view, reasoners tend to preferentially consider counterfactual states of abnormal rather than normal factors, which leads to the choice of the abnormal factor in a causal selection task. A third view, the accountability hypothesis, claims that the effects of prescriptive norms are generated by the ambiguity of the causal test question. Asking whether an agent is a cause can be understood as a request to assess her causal contribution but also her moral accountability. According to this theory norm effects on causal selection are mediated by accountability judgments that are not only sensitive to the abnormality of behavior but also to mitigating factors, such as intentionality and knowledge of norms. Five experiments are presented that favor the accountability account over the two alternative theories. PMID:27591550
Inferences of Ice Processes From Properties
NASA Astrophysics Data System (ADS)
Alley, R. B.; Wilen, L. A.; Spencer, M. K.; Hansen, D. P.; Fitzpatrick, J. J.
2001-12-01
Barclay Kamb's pioneering work on the physics and mineralogy of laboratory and natural ices has guided glaciological research spanning 40 years. Much of that research required extremely tedious use of optical universal stages to study thin sections of ice. Recent advances in digital systems have revolutionized data collection and offer great opportunities to use ice properties to infer processes that operate too slowly for proper laboratory investigation, leading toward a greatly improved understanding of the history of ice and its softness for further deformation (Wilen, 1999; Hansen and Wilen, in review; Wilen et al., this meeting). Patterns of nearest-neighbor c-axis orientations reveal the influence of nucleation-and-growth recrystallization (typically indicative of steady-state deformation) or polygonization. Combining these results with correlations between grain sizes and dust and chemical loadings reveals impurity effects on active processes. The relations between mean grain size and c-axis-fabric strength may show the importance of grain-boundary processes in deformation. Bubble sizes reveal climate conditions during firnification, and bubble shapes can provide information on in situ strain rates. These and many other possibilities should enhance our understanding of ice flow and of the paleoclimatic records archived in ice.
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry - and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
Inference by replication in densely connected systems
Neirotti, Juan P.; Saad, David
2007-10-15
An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric-like (RS-like) structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA; while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.
The renormalization group via statistical inference
NASA Astrophysics Data System (ADS)
Bény, Cédric; Osborne, Tobias J.
2015-08-01
In physics, one attempts to infer the rules governing a system given only the results of imperfect measurements. Hence, microscopic theories may be effectively indistinguishable experimentally. We develop an operationally motivated procedure to identify the corresponding equivalence classes of states, and argue that the renormalization group (RG) arises from the inherent ambiguities associated with the classes: one encounters flow parameters as, e.g., a regulator, a scale, or a measure of precision, which specify representatives in a given equivalence class. This provides a unifying framework and reveals the role played by information in renormalization. We validate this idea by showing that it justifies the use of low-momenta n-point functions as statistically relevant observables around a Gaussian hypothesis. These results enable the calculation of distinguishability in quantum field theory. Our methods also provide a way to extend renormalization techniques to effective models which are not based on the usual quantum-field formalism, and elucidate the relationships between various types of RG.
Inferring human mobility using communication patterns
NASA Astrophysics Data System (ADS)
Palchykov, Vasyl; Mitrović, Marija; Jo, Hang-Hyun; Saramäki, Jari; Pan, Raj Kumar
2014-08-01
Understanding the patterns of mobility of individuals is crucial for a number of reasons, from city planning to disaster management. There are two common ways of quantifying the amount of travel between locations: by direct observations that often involve privacy issues, e.g., tracking mobile phone locations, or by estimations from models. Typically, such models build on accurate knowledge of the population size at each location. However, when this information is not readily available, their applicability is rather limited. As mobile phones are ubiquitous, our aim is to investigate if mobility patterns can be inferred from aggregated mobile phone call data alone. Using data released by Orange for Ivory Coast, we show that human mobility is well predicted by a simple model based on the frequency of mobile phone calls between two locations and their geographical distance. We argue that the strength of the model comes from directly incorporating the social dimension of mobility. Furthermore, as only aggregated call data is required, the model helps to avoid potential privacy problems.
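The kind of model described, which predicts travel between two locations from the frequency of calls between them and their geographical distance, can be sketched as a gravity-style estimator. The functional form and every parameter value below are illustrative assumptions, not the fitted model from the study.

```python
def predicted_trips(calls, distance_km, k=1.0, alpha=1.0, beta=0.5):
    """Gravity-style estimate: travel between two locations grows with
    the number of mobile phone calls between them and decays with
    their distance. k, alpha and beta are illustrative assumptions
    that would normally be fitted to observed flows."""
    return k * (calls ** alpha) / (distance_km ** beta)

# With the default parameters, 100 calls over 25 km predict
# 100 / sqrt(25) = 20 trips (in arbitrary units).
```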
Inferring unstable equilibrium configurations from experimental data
NASA Astrophysics Data System (ADS)
Virgin, L. N.; Wiebe, R.; Spottswood, S. M.; Beberniss, T.
2016-09-01
This research considers the structural behavior of slender, mechanically buckled beams and panels of the type commonly found in aerospace structures. The specimens were deflected and then clamped in a rigid frame in order to exhibit snap-through. That is, the initial equilibrium and the buckled (snapped-through) equilibrium configurations both co-existed for the given clamped conditions. In order to transit between these two stable equilibrium configurations (for example, under the action of an externally applied load), it is necessary for the structural component to pass through an intermediate unstable equilibrium configuration. A sequence of sudden impacts of various strengths, at various locations, was imparted to the system. The goal of this impact force was to induce intermediate-sized transients that effectively slowed down in the vicinity of the unstable equilibrium configuration. Thus, monitoring the velocity of the motion, and specifically its slowing down, should give an indication of the presence of an equilibrium configuration, even though it is unstable and not amenable to direct experimental observation. A digital image correlation (DIC) system was used in conjunction with an instrumented impact hammer to track trajectories, and statistical methods were used to infer the presence of unstable equilibria in both a beam and a panel.
Global atmospheric black carbon inferred from AERONET
NASA Astrophysics Data System (ADS)
Sato, Makiko; Hansen, James; Koch, Dorothy; Lacis, Andrew; Ruedy, Reto; Dubovik, Oleg; Holben, Brent; Chin, Mian; Novakov, Tica
2003-05-01
AERONET, a network of well calibrated sunphotometers, provides data on aerosol optical depth and absorption optical depth at >250 sites around the world. The spectral range of AERONET allows discrimination between constituents that absorb most strongly in the UV region, such as soil dust and organic carbon, and the more ubiquitously absorbing black carbon (BC). AERONET locations, primarily continental, are not representative of the global mean, but they can be used to calibrate global aerosol climatologies produced by tracer transport models. We find that the amount of BC in current climatologies must be increased by a factor of 2-4 to yield best agreement with AERONET, in the approximation in which BC is externally mixed with other aerosols. The inferred climate forcing by BC, regardless of whether it is internally or externally mixed, is ≈1 W/m2, most of which is probably anthropogenic. This positive forcing (warming) by BC must substantially counterbalance cooling by anthropogenic reflective aerosols. Thus, especially if reflective aerosols such as sulfates are reduced, it is important to reduce BC to minimize global warming. PMID:12746494
Inferring human mobility using communication patterns.
Palchykov, Vasyl; Mitrović, Marija; Jo, Hang-Hyun; Saramäki, Jari; Pan, Raj Kumar
2014-08-22
Understanding the patterns of mobility of individuals is crucial for a number of reasons, from city planning to disaster management. There are two common ways of quantifying the amount of travel between locations: by direct observations that often involve privacy issues, e.g., tracking mobile phone locations, or by estimations from models. Typically, such models build on accurate knowledge of the population size at each location. However, when this information is not readily available, their applicability is rather limited. As mobile phones are ubiquitous, our aim is to investigate if mobility patterns can be inferred from aggregated mobile phone call data alone. Using data released by Orange for Ivory Coast, we show that human mobility is well predicted by a simple model based on the frequency of mobile phone calls between two locations and their geographical distance. We argue that the strength of the model comes from directly incorporating the social dimension of mobility. Furthermore, as only aggregated call data is required, the model helps to avoid potential privacy problems.
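A toy sketch of the kind of model the abstract describes: predicting relative travel volume between two locations from aggregated call counts and geographic distance. The gravity-style functional form and the exponent here are illustrative assumptions, not the fitted model from the paper.

```python
def predict_flux(calls, distance_km, beta=2.0):
    """Gravity-style estimate of travel volume between two locations,
    from the aggregated call count and their distance.
    The form calls / d**beta is an illustrative assumption."""
    return calls / (distance_km ** beta)

# Rank three hypothetical location pairs by predicted travel volume:
# (call count, distance in km) per pair.
pairs = {("A", "B"): (5000, 10.0),
         ("A", "C"): (5000, 50.0),
         ("B", "C"): (800, 10.0)}
ranked = sorted(pairs, key=lambda p: predict_flux(*pairs[p]), reverse=True)
```

With these made-up numbers the nearby, frequently calling pair ("A", "B") is predicted to exchange the most travelers, matching the intuition that call frequency and proximity jointly drive mobility.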
Virtual reality and consciousness inference in dreaming
Hobson, J. Allan; Hong, Charles C.-H.; Friston, Karl J.
2014-01-01
This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that – through experience-dependent plasticity – becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep – and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain’s generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis – evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research. PMID:25346710
Functional network inference of the suprachiasmatic nucleus.
Abel, John H; Meeker, Kirsten; Granados-Fuentes, Daniel; St John, Peter C; Wang, Thomas J; Bales, Benjamin B; Doyle, Francis J; Herzog, Erik D; Petzold, Linda R
2016-04-19
In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure.
Scaling Multidimensional Inference for Structured Gaussian Processes.
Gilboa, Elad; Saatçi, Yunus; Cunningham, John P
2013-09-30
Exact Gaussian process (GP) regression has O(N^3) runtime for data size N, making it intractable for large N. Many algorithms for improving GP scaling approximate the covariance with lower rank matrices. Other work has exploited structure inherent in particular covariance functions, including GPs with implied Markov structure, and inputs on a lattice (both enable O(N) or O(N log N) runtime). However, these GP advances have not been well extended to the multidimensional input setting, despite the preponderance of multidimensional applications. This paper introduces and tests three novel extensions of structured GPs to multidimensional inputs, for models with additive and multiplicative kernels. First we present a new method for inference in additive GPs, showing a novel connection between the classic backfitting method and the Bayesian framework. We extend this model using two advances: a variant of projection pursuit regression, and a Laplace approximation for non-Gaussian observations. Lastly, for multiplicative kernel structure, we present a novel method for GPs with inputs on a multidimensional grid. We illustrate the power of these three advances on several datasets, achieving performance equal to or very close to the naive GP at orders of magnitude less cost.
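A minimal sketch of the lattice structure such methods exploit, assuming a 2-D grid of inputs and a product (multiplicative) RBF kernel; this is an illustration of the Kronecker trick, not the paper's full algorithm. The Gram matrix over the grid factorizes as a Kronecker product, so a matrix-vector product with the full N x N matrix can be computed from the small per-dimension factors.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, ls=1.0):
    """Squared-exponential kernel matrix for 1-D inputs x."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Inputs on a 4 x 5 grid: the product kernel's Gram matrix is kron(K1, K2),
# so the full 20 x 20 matrix never needs to be formed.
x1, x2 = np.linspace(0, 1, 4), np.linspace(0, 1, 5)
K1, K2 = rbf(x1), rbf(x2)

v = rng.standard_normal(K1.shape[0] * K2.shape[0])

# Fast matrix-vector product: reshape v to a 4 x 5 matrix and
# multiply by the small factors on both sides.
fast = (K1 @ v.reshape(4, 5) @ K2.T).ravel()

# Naive product with the explicit Kronecker matrix, for comparison only.
slow = np.kron(K1, K2) @ v
```

The reshaped product costs O(N(n1 + n2)) flops instead of O(N^2), which is the kind of saving that makes grid-structured GP inference tractable at scale.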
Aesthetic quality inference for online fashion shopping
NASA Astrophysics Data System (ADS)
Chen, Ming; Allebach, Jan
2014-03-01
On-line fashion communities in which participants post photos of personal fashion items for viewing and possible purchase by others are becoming increasingly popular. Generally, these photos are taken by individuals who have no training in photography with low-cost mobile phone cameras. It is desired that photos of the products have high aesthetic quality to improve the users' online shopping experience. In this work, we design features for aesthetic quality inference in the context of online fashion shopping. Psychophysical experiments are conducted to construct a database of the photos' aesthetic evaluation, specifically for photos from an online fashion shopping website. We then extract both generic low-level features and high-level image attributes to represent the aesthetic quality. Using a support vector machine framework, we train a predictor of the aesthetic quality rating based on the feature vector. Experimental results validate the efficacy of our approach. Metadata such as the product type are also used to further improve the result.
Probabilistic phylogenetic inference with insertions and deletions.
Rivas, Elena; Eddy, Sean R
2008-01-01
A fundamental task in sequence analysis is to calculate the probability of a multiple alignment given a phylogenetic tree relating the sequences and an evolutionary model describing how sequences change over time. However, the most widely used phylogenetic models only account for residue substitution events. We describe a probabilistic model of a multiple sequence alignment that accounts for insertion and deletion events in addition to substitutions, given a phylogenetic tree, using a rate matrix augmented by the gap character. Starting from a continuous Markov process, we construct a non-reversible generative (birth-death) evolutionary model for insertions and deletions. The model assumes that insertion and deletion events occur one residue at a time. We apply this model to phylogenetic tree inference by extending the program dnaml in phylip. Using standard benchmarking methods on simulated data and a new "concordance test" benchmark on real ribosomal RNA alignments, we show that the extended program dnamlepsilon improves accuracy relative to the usual approach of ignoring gaps, while retaining the computational efficiency of the Felsenstein peeling algorithm. PMID:18787703
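The paper builds on the Felsenstein peeling (pruning) recursion, extending it with a gap-augmented rate matrix. The recursion itself can be sketched as follows; the Jukes-Cantor-style substitution model and the tiny three-leaf tree are illustrative assumptions, not the paper's gap-aware model.

```python
import numpy as np

def jc_transition(t, k=4):
    """Jukes-Cantor-style transition matrix over k states for branch length t."""
    p_same = 1.0 / k + (1 - 1.0 / k) * np.exp(-k / (k - 1) * t)
    P = np.full((k, k), (1 - p_same) / (k - 1))
    np.fill_diagonal(P, p_same)
    return P

def partial(node, k=4):
    """Felsenstein pruning: L[s] = P(observed leaves below node | node state s)."""
    if isinstance(node, int):                 # leaf: observed state index
        L = np.zeros(k)
        L[node] = 1.0
        return L
    L = np.ones(k)
    for child, t in node:                     # internal: list of (subtree, branch length)
        L *= jc_transition(t, k) @ partial(child, k)
    return L

# Tree ((A:0.1, B:0.1):0.2, C:0.3) with leaf states A=0, B=2, C=0 (coded 0..3),
# scored under a uniform root prior for a single alignment site.
tree = [([(0, 0.1), (2, 0.1)], 0.2), (0, 0.3)]
site_lik = np.full(4, 0.25) @ partial(tree)
```

The gap-augmented model in the paper would enlarge the state space (and use a non-reversible rate matrix) while keeping this same peeling structure, which is why it retains the algorithm's computational efficiency.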
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception. PMID:27391681
Anchoring and adjustment during social inferences.
Tamir, Diana I; Mitchell, Jason P
2013-02-01
Simulation theories of social cognition suggest that people use their own mental states to understand those of others, particularly similar others. However, perceivers cannot rely solely on self-knowledge to understand another person; they must also correct for differences between the self and others. Here we investigated serial adjustment as a mechanism for correction from self-knowledge anchors during social inferences. In 3 studies, participants judged the attitudes of a similar or dissimilar person and reported their own attitudes. For each item, we calculated the discrepancy between responses for the self and other. The adjustment process unfolds serially, so to the extent that individuals indeed anchor on self-knowledge and then adjust away, trials with a large amount of self-other discrepancy should be associated with longer response times, whereas small self-other discrepancy should correspond to shorter response times. Analyses consistently revealed this positive linear relationship between reaction time and self-other discrepancy, evidence of anchoring-and-adjustment, but only during judgments of similar targets. These results suggest that perceivers mentalize about similar others using the cognitive process of anchoring-and-adjustment. PMID:22506753
Relationship inference based on DNA mixtures.
Kaur, Navreet; Bouzga, Mariam M; Dørum, Guro; Egeland, Thore
2016-03-01
Today, there exists a number of tools for solving kinship cases. But what happens when information comes from a mixture? DNA mixtures are in general rarely seen in kinship cases, but in a case presented to the Norwegian Institute of Public Health, sample DNA was obtained after a rape case that resulted in an unwanted pregnancy and abortion. The only available DNA from the fetus came in form of a mixture with the mother, and it was of interest to find the father of the fetus. The mother (the victim), however, refused to give her reference data and so commonly used methods for paternity testing were no longer applicable. As this case illustrates, kinship cases involving mixtures and missing reference profiles do occur and make the use of existing methods rather inconvenient. We here present statistical methods that may handle general relationship inference based on DNA mixtures. The basic idea is that likelihood calculations for mixtures can be decomposed into a series of kinship problems. This formulation of the problem facilitates the use of kinship software. We present the freely available R package relMix which extends on the R version of Familias. Complicating factors like mutations, silent alleles, and θ-correction are then easily handled for quite general family relationships, and are included in the statistical methods we develop in this paper. The methods and their implementations are exemplified on the data from the rape case.
MISTIC: Mutual information server to infer coevolution.
Simonetti, Franco L; Teppa, Elin; Chernomoretz, Ariel; Nielsen, Morten; Marino Buslje, Cristina
2013-07-01
MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within a MSA (multiple sequence alignment) and a complete analysis tool for Mutual Information networks in protein families. The server outputs a graphical visualization of several information-related quantities using a circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface to explore and characterize the MI network is provided. Several tools are offered for selecting subsets of nodes from the network for visualization. Node coloring can be set to match different attributes, such as conservation, cumulative MI, proximity MI and secondary structure. Finally, a zip file containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner unique to any other publicly available web server. In particular, the use of circos representation of MI networks and the visualization of the cumulative MI and proximity MI concepts is novel.
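The basic quantity MISTIC visualizes, the mutual information between a pair of alignment columns, can be sketched directly from empirical column frequencies. This is only the raw MI; the server's cumulative and proximity MI scores, and any background corrections, are more involved. The toy alignment below is made up for illustration.

```python
import math
from collections import Counter

def column_mi(col_a, col_b):
    """Mutual information (in bits) between two alignment columns,
    estimated from empirical single- and pair-frequencies."""
    n = len(col_a)
    pa, pb = Counter(col_a), Counter(col_b)
    pab = Counter(zip(col_a, col_b))
    return sum((c / n) * math.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

# Toy 6-sequence, 3-column alignment: columns 0 and 1 covary perfectly,
# column 2 is fully conserved (all gaps).
msa = ["AC-", "AC-", "GT-", "GT-", "AC-", "GT-"]
cols = list(zip(*msa))
mi_01 = column_mi(cols[0], cols[1])   # 1 bit: perfect two-state covariation
mi_02 = column_mi(cols[0], cols[2])   # 0 bits: a conserved column carries no MI
```

Note that a fully conserved column scores zero MI with everything, which is why conservation and MI are shown as separate tracks in the circos view.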
Wilbertz, Gregor; van Slooten, Joanne; Sterzer, Philipp
2014-01-01
Perception is an inferential process, which becomes immediately evident when sensory information is conflicting or ambiguous and thus allows for more than one perceptual interpretation. Thinking the idea of perception as inference through to the end results in a blurring of boundaries between perception and action selection, as perceptual inference implies the construction of a percept as an active process. Here we therefore wondered whether perception shares a key characteristic of action selection, namely that it is shaped by reinforcement learning. In two behavioral experiments, we used binocular rivalry to examine whether perceptual inference can be influenced by the association of perceptual outcomes with reward or punishment, respectively, in analogy to instrumental conditioning. Binocular rivalry was evoked by two orthogonal grating stimuli presented to the two eyes, resulting in perceptual alternations between the two gratings. Perception was tracked indirectly and objectively through a target detection task, which allowed us to preclude potential reporting biases. Monetary reward or punishments were given repeatedly during perception of only one of the two rivaling stimuli. We found an increase in dominance durations for the percept associated with reward, relative to the non-rewarded percept. In contrast, punishment led to an increase of the non-punished compared to a relative decrease of the punished percept. Our results show that perception shares key characteristics with action selection, in that it is influenced by reward and punishment in opposite directions, thus narrowing the gap between the conceptually separated domains of perception and action selection. We conclude that perceptual inference is an adaptive process that is shaped by its consequences. PMID:25520687
Identifiability and inference of pathway motifs by epistasis analysis
NASA Astrophysics Data System (ADS)
Phenix, Hilary; Perkins, Theodore; Kærn, Mads
2013-06-01
The accuracy of genetic network inference is limited by the assumptions used to determine if one hypothetical model is better than another in explaining experimental observations. Most previous work on epistasis analysis—in which one attempts to infer pathway relationships by determining equivalences among traits following mutations—has been based on Boolean or linear models. Here, we delineate the ultimate limits of epistasis-based inference by systematically surveying all two-gene network motifs and use symbolic algebra with arbitrary regulation functions to examine trait equivalences. Our analysis divides the motifs into equivalence classes, where different genetic perturbations result in indistinguishable experimental outcomes. We demonstrate that this partitioning can reveal important information about network architecture, and show, using simulated data, that it greatly improves the accuracy of genetic network inference methods. Because of the minimal assumptions involved, equivalence partitioning has broad applicability for gene network inference.
Quantum-Like Representation of Non-Bayesian Inference
NASA Astrophysics Data System (ADS)
Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.
2013-01-01
This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are some experimental studies whose statistical data cannot be described by classical probability theory. The process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented the classical Bayesian inference in a natural way within the framework of quantum mechanics. Using this representation, in this paper we discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of usual Bayesian inference.
Inferring climate variability from skewed proxy records
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and
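The failure mode of approach (i) can be demonstrated with a simulation. Assuming an idealized thresholded proxy inspired by the runoff example (the exact proxy model in the paper may differ), two epochs with identical climate variance but a small mean shift appear to differ in variability when the raw proxy is read naively.

```python
import numpy as np

rng = np.random.default_rng(1)

def runoff_proxy(climate, threshold=0.0):
    """Idealized thresholded proxy: records nothing below the threshold."""
    return np.maximum(climate - threshold, 0.0)

# Two epochs with identical climate variability but a small mean shift.
epoch1 = rng.normal(0.0, 1.0, 50_000)
epoch2 = rng.normal(0.5, 1.0, 50_000)

# A naive reading of the raw proxy suggests a change in variability...
naive_ratio = runoff_proxy(epoch2).std() / runoff_proxy(epoch1).std()

# ...even though the underlying climate variance is unchanged.
true_ratio = epoch2.std() / epoch1.std()
```

Because the threshold censors part of the distribution, shifting the mean changes how much of the distribution is recorded, inflating the apparent variance ratio; a power transform or a Bayesian inversion of the proxy model, as in approaches (ii) and (iii), avoids this artifact.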
Vertically Integrated Seismological Analysis II : Inference
NASA Astrophysics Data System (ADS)
Arora, N. S.; Russell, S.; Sudderth, E.
2009-12-01
Methods for automatically associating detected waveform features with hypothesized seismic events, and localizing those events, are a critical component of efforts to verify the Comprehensive Test Ban Treaty (CTBT). As outlined in our companion abstract, we have developed a hierarchical model which views detection, association, and localization as an integrated probabilistic inference problem. In this abstract, we provide more details on the Markov chain Monte Carlo (MCMC) methods used to solve this inference task. MCMC generates samples from a posterior distribution π(x) over possible worlds x by defining a Markov chain whose states are the worlds x, and whose stationary distribution is π(x). In the Metropolis-Hastings (M-H) method, transitions in the Markov chain are constructed in two steps. First, given the current state x, a candidate next state x′ is generated from a proposal distribution q(x′ | x), which may be (more or less) arbitrary. Second, the transition to x′ is not automatic, but occurs with an acceptance probability α(x′ | x) = min(1, π(x′)q(x | x′)/π(x)q(x′ | x)). The seismic event model outlined in our companion abstract is quite similar to those used in multitarget tracking, for which MCMC has proved very effective. In this model, each world x is defined by a collection of events, a list of properties characterizing those events (times, locations, magnitudes, and types), and the association of each event to a set of observed detections. The target distribution is π(x) = P(x | y), the posterior distribution over worlds x given the observed waveform data y at all stations. Proposal distributions then implement several types of moves between worlds. For example, birth moves create new events; death moves delete existing events; split moves partition the detections for an event into two new events; merge moves combine event pairs; swap moves modify the properties and associations for pairs of events. Importantly, the rules for
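The generic M-H acceptance rule described above can be sketched in a few lines. This toy sampler targets a 1-D standard normal with a symmetric random-walk proposal (so the q terms cancel); the seismological model's state space of events and its birth/death/split/merge moves are of course far richer.

```python
import math
import random

random.seed(0)

def metropolis_hastings(log_pi, propose, log_q, x0, n_steps):
    """Generic Metropolis-Hastings: propose x' ~ q(.|x) and accept with
    probability min(1, pi(x') q(x|x') / (pi(x) q(x'|x)))."""
    x, chain = x0, []
    for _ in range(n_steps):
        xp = propose(x)
        log_alpha = (log_pi(xp) + log_q(x, xp)) - (log_pi(x) + log_q(xp, x))
        if random.random() < math.exp(min(0.0, log_alpha)):
            x = xp
        chain.append(x)
    return chain

# Toy target: standard normal. The Gaussian random-walk proposal is
# symmetric, so log_q can return a constant.
chain = metropolis_hastings(
    log_pi=lambda x: -0.5 * x * x,
    propose=lambda x: x + random.gauss(0.0, 1.0),
    log_q=lambda a, b: 0.0,
    x0=10.0,
    n_steps=20_000,
)
# Discard burn-in (the chain starts far from the mode) before summarizing.
post_burn = chain[5_000:]
mean = sum(post_burn) / len(post_burn)
```

Working in log space avoids underflow when π(x) is a product of many small likelihood terms, which is the typical situation with waveform data across many stations.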
Inferring correlations: from exemplars to categories.
Vogel, Tobias; Kutzner, Florian; Freytag, Peter; Fiedler, Klaus
2014-10-01
Research and theorizing suggest a processing advantage of category-level correlations over exemplar-level correlations. That research has also shown that category-level correlations serve as a proxy for inferring exemplar-level correlations. For example, an individual may learn that the demand for a product category, like cheese, in one store predicts the demand for this category in another. The individual could then draw the unwarranted conclusion that the demand for an exemplar, like cheddar, would also predict the demand for this exemplar in the other store. This notion is supported by previous experiments demonstrating that the subjective exemplar-level correlation follows the implication of the category-level correlation. However, in virtually all previous experiments suggesting a processing advantage for category-level over exemplar-level correlations, the stimulus correlation at the category level was substantial, whereas the correlation at the exemplar level was weak. Here, we tested the hypothesis that individuals process the level that is most informative, either the exemplar or the category level. We presented participants with a zero correlation at the category level, but varied the correlation at the exemplar level. Participants presented with a zero correlation across exemplar products correctly reproduced a zero correlation across product categories. When presented with a substantial correlation at the exemplar level, however, they erroneously reproduced a similar correlation at the category level. These findings therefore imply that there is no general processing advantage for correlations at higher aggregation levels. Instead, individuals seemingly attend to the level that holds the most regular information. Findings are discussed regarding the role of covariation strength in correlation detection and use. PMID:24493021
Inferring correlation networks from genomic survey data.
Friedman, Jonathan; Alm, Eric J
2012-01-01
High-throughput sequencing based techniques, such as 16S rRNA gene profiling, have the potential to elucidate the complex inner workings of natural microbial communities - be they from the world's oceans or the human gut. A key step in exploring such data is the identification of dependencies between members of these communities, which is commonly achieved by correlation analysis. However, it has been known since the days of Karl Pearson that the analysis of the type of data generated by such techniques (referred to as compositional data) can produce unreliable results since the observed data take the form of relative fractions of genes or species, rather than their absolute abundances. Using simulated and real data from the Human Microbiome Project, we show that such compositional effects can be widespread and severe: in some real data sets many of the correlations among taxa can be artifactual, and true correlations may even appear with opposite sign. Additionally, we show that community diversity is the key factor that modulates the acuteness of such compositional effects, and develop a new approach, called SparCC (available at https://bitbucket.org/yonatanf/sparcc), which is capable of estimating correlation values from compositional data. To illustrate a potential application of SparCC, we infer a rich ecological network connecting hundreds of interacting species across 18 sites on the human body. Using the SparCC network as a reference, we estimated that the standard approach yields 3 spurious species-species interactions for each true interaction and misses 60% of the true interactions in the human microbiome data, and, as predicted, most of the erroneous links are found in the samples with the lowest diversity. PMID:23028285
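The compositional (closure) effect the authors describe can be reproduced in a few lines. The sketch below is not SparCC itself, just a demonstration of the artifact: three taxa with independently generated absolute abundances (true correlation zero) acquire a spurious negative correlation once the data are normalized to relative fractions, because the fractions are constrained to sum to one.

```python
import random

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
n = 500
# Independent (true r = 0) absolute abundances for three taxa.
abund = [[random.lognormvariate(0, 0.5) for _ in range(n)] for _ in range(3)]

# Closure: sequencing reports relative fractions, not absolute counts.
frac = [[], [], []]
for i in range(n):
    total = sum(taxon[i] for taxon in abund)
    for k in range(3):
        frac[k].append(abund[k][i] / total)

r_absolute = pearson(abund[0], abund[1])  # near zero, by construction
r_relative = pearson(frac[0], frac[1])    # spuriously negative
```

With only three taxa of comparable abundance (low diversity), the sum-to-one constraint drags pairwise correlations strongly negative; with many taxa the constraint is spread thinner, which matches the paper's observation that diversity modulates the severity of the effect.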
Inferring mental states from neuroimaging data: From reverse inference to large-scale decoding
Poldrack, Russell A.
2011-01-01
A common goal of neuroimaging research is to use imaging data to identify the mental processes that are engaged when a subject performs a mental task. The use of reasoning from activation to mental functions, known as “reverse inference”, has been previously criticized on the basis that it does not take into account how selectively the area is activated by the mental process in question. In this Perspective, I outline the critique of informal reverse inference, and describe a number of new developments that provide the ability to more formally test the predictive power of neuroimaging data. PMID:22153367
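The selectivity critique can be stated with Bayes' rule: P(process | activation) depends not only on how often the region activates during the process, but on how often it activates without it. The numbers below are hypothetical, chosen only to illustrate the point.

```python
def reverse_inference_posterior(p_act_given_proc, p_act_given_not, prior):
    """P(process | activation) via Bayes' rule.

    Selectivity enters through p_act_given_not: if the region activates
    almost as often without the process, activation is uninformative.
    """
    num = p_act_given_proc * prior
    den = num + p_act_given_not * (1.0 - prior)
    return num / den

# Selective region: rarely active without the process.
selective = reverse_inference_posterior(0.8, 0.1, prior=0.5)    # ~0.889
# Unselective region: same hit rate, but active across many tasks.
unselective = reverse_inference_posterior(0.8, 0.7, prior=0.5)  # ~0.533
```

Identical activation evidence yields very different posteriors, which is why an informal reverse inference that ignores the base rate of activation across tasks can badly overstate its conclusion.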
RulNet: A Web-Oriented Platform for Regulatory Network Inference, Application to Wheat –Omics Data
Vincent, Jonathan; Martre, Pierre; Gouriou, Benjamin; Ravel, Catherine; Dai, Zhanwu; Petit, Jean-Marc; Pailloux, Marie
2015-01-01
With the increasing amount of –omics data available, a particular effort has to be made to provide suitable analysis tools. A major challenge is that of unraveling the molecular regulatory networks from massive and heterogeneous datasets. Here we describe RulNet, a web-oriented platform dedicated to the inference and analysis of regulatory networks from qualitative and quantitative –omics data by means of rule discovery. Queries for rule discovery can be written in an extended form of the RQL query language, which has a syntax similar to SQL. RulNet also offers users interactive features that progressively adjust and refine the inferred networks. In this paper, we present a functional characterization of RulNet and compare inferred networks with correlation-based approaches. The performance of RulNet has been evaluated using the three benchmark datasets used for the transcriptional network inference challenge DREAM5. Overall, RulNet performed as well as the best methods that participated in this challenge and it was shown to behave more consistently when compared across the three datasets. Finally, we assessed the suitability of RulNet to analyze experimental –omics data and to infer regulatory networks involved in the response to nitrogen and sulfur supply in wheat (Triticum aestivum L.) grains. The results highlight putative actors governing the response to nitrogen and sulfur supply in wheat grains. We evaluate the main characteristics and features of RulNet as an all-in-one solution for regulatory network (RN) inference, visualization and editing. Using simple yet powerful RulNet queries allowed RNs involved in the adaptation of wheat grain to N and S supply to be discovered. We demonstrate the effectiveness and suitability of RulNet as a platform for the analysis of RNs involving different types of –omics data. The results are promising since they are consistent with what was previously established by the scientific community. PMID:25993562